├── .gitignore ├── LICENSE ├── Makefile ├── README.md ├── RELEASES.md ├── doc ├── 2014-03-06-Erlang-Factory.pdf ├── 2016-09-09-EUC-Concurrency-Fount.pdf └── Epocxy_Explained.md ├── erlang.mk ├── example ├── cxy_synch_trace.erl ├── fount_worker.erl ├── hexdump_fount.erl └── list_feeder.erl ├── include ├── cxy_cache.hrl └── tracing_levels.hrl ├── rebar.config ├── src ├── batch_feeder.erl ├── cxy_cache.erl ├── cxy_cache_fsm.erl ├── cxy_cache_sup.erl ├── cxy_ctl.erl ├── cxy_fount.erl ├── cxy_fount_sup.erl ├── cxy_regulator.erl ├── cxy_synch.erl ├── epocxy.app.src └── ets_buffer.erl └── test ├── epocxy.coverspec ├── epocxy.spec └── epocxy ├── batch_feeder_SUITE.erl ├── cxy_cache_SUITE.erl ├── cxy_ctl_SUITE.erl ├── cxy_fount_SUITE.erl ├── cxy_fount_fail_behaviour.erl ├── cxy_fount_hello_behaviour.erl ├── cxy_regulator_SUITE.erl ├── epocxy_common_test.hrl ├── ets_buffer_SUITE.erl ├── fox_obj.erl ├── frog_obj.erl └── rabbit_obj.erl /.gitignore: -------------------------------------------------------------------------------- 1 | erl_crash.dump 2 | .eunit 3 | deps 4 | ebin 5 | logs 6 | *.DS_Store 7 | *.d 8 | *.o 9 | *.beam 10 | *.plt 11 | *~ 12 | .erlang.mk 13 | .erlang.mk.packages.* 14 | _build 15 | rebar.lock -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2013, DuoMark International, Inc. 2 | All rights reserved. 3 | 4 | Sponsored by TigerText. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | * Redistributions of source code must retain the above copyright 10 | notice, this list of conditions and the following disclaimer. 11 | 12 | * Redistributions in binary form must reproduce the above copyright 13 | notice, this list of conditions and the following disclaimer in the 14 | documentation and/or other materials provided with the distribution. 15 | 16 | * Neither the name of DuoMark International, Inc. nor the names of its 17 | contributors or sponsors may be used to endorse or promote products 18 | derived from this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 21 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 22 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED. IN NO EVENT SHALL DUOMARK INTERNATIONAL, INC. OR TIGERTEXT BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 25 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 26 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 27 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 29 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
30 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | PROJECT = epocxy 2 | V = 0 3 | 4 | DEPS = proper 5 | ## DEPS = proper eper # when debugging 6 | 7 | dep_proper = git https://github.com/manopapad/proper master 8 | 9 | ERLC_OPTS := +debug_info 10 | # +\"{cover_enabled, true}\" 11 | 12 | TEST_ERLC_OPTS := +debug_info -I include -I test/epocxy 13 | # +\"{cover_enabled, true}\" 14 | 15 | CT_OPTS := -cover test/epocxy.coverspec 16 | CT_SUITES = cxy_regulator cxy_fount batch_feeder ets_buffer cxy_ctl cxy_cache 17 | 18 | DIALYZER_OPTS := -I include -Werror_handling -Wrace_conditions -Wunmatched_returns 19 | 20 | include erlang.mk 21 | 22 | run: 23 | erl -pa ebin -pa deps/*/ebin -smp enable -name epocxy -boot start_sasl 24 | 25 | my_dialyzer: .epocxy.plt 26 | dialyzer --plt .epocxy.plt --no_native --src -r src test/epocxy -I include -Werror_handling -Wrace_conditions -Wunmatched_returns 27 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Erlang Patterns of Concurrency (epocxy) 2 | ======================================= 3 | 4 | The version number format epocxy uses is Major.Minor.Rev (e.g., 1.1.0). If the Rev is an even number, this is a release version and will be tagged for builds. If it is an odd number, it is a modification of Master that is under development but available for testing, and will only be marked in the epocxy.app file, not as a tagged version. Only use Master in your dependency if you are testing partial fixes; for a stable release dependency, always use an even-numbered tag, starting with 1.1.0. 5 | 6 | NOTE: 7 | - 'make tests' is significantly slower in 1.1.0 because of cxy_fount PropEr tests. Enhancements are planned to speed testing in 1.1.1. Production performance is not impacted. Update: cxy_fount_SUITE now takes 30 seconds on my laptop; it won't get any faster. 8 | - cxy_regulator gives intermittent failures when running 'make tests'. This is a timing issue in the test itself. Run again and/or stop any CPU-intensive operations before running. A fix is planned for 1.1.1. 9 | 10 | Erlang/OTP offers many components for distributed, concurrent, fault-tolerant, non-stop services. Many concurrent systems need common constructs which are not provided in the basic OTP system. This library is an Open Source set of concurrency tools which have been proven in a production environment and are being released publicly in anticipation that community interest and contributions will lead to the most useful tools being submitted for inclusion in Erlang/OTP. 11 | 12 | This library is released under a "Modified BSD License". The library was sponsored by [TigerText](http://tigertext.com/) during 2013, and was validated on their HIPAA-compliant secure text XMPP servers in a production environment handling more than 1M messages per day. 13 | 14 | A talk from Erlang Factory San Francisco 2014 on this library is in the 'doc' directory. 15 | 16 | Concurrency Fount was added in 2016, and the 'doc' directory includes the slides from a talk at EUC 2016, as well as a quick summary of the epocxy patterns. 17 | 18 | 19 | ETS (Erlang Term Storage) 20 | ------------------------- 21 | 22 | ETS tables offer fast access to very large amounts of mutable data, and unlike normal processes, have the added benefit of not being garbage collected.
Under normal production situations, access to data stored in ets tables can be twice as fast as a synchronous request for the same information from another process. 23 | 24 | ETS tables also offer a limited concurrency advantage. While they are not currently suited for scaling with manycore architectures, they do provide concurrent access because they employ internal record-level locking, whereas process message queues are inherently serial and therefore cannot service requests from two separate processes simultaneously. 25 | 26 | These benefits make ETS tables one of the most useful OTP constructs in building common concurrency idioms for multi-core processing systems. 27 | 28 | 29 | Buffers 30 | ------- 31 | 32 | FIFO, LIFO and Ring buffers are very useful in absorbing data from a source stream when data is supplied faster than it can be processed. If the difference in time and volume is not excessive, or the loss of data is inconsequential, an in-memory buffer can provide a smooth intermediate location for data in transit. If the volume becomes excessive, an alternative persistent data store may be needed. 33 | 34 | OTP provides no explicit buffer tools, although it does provide queues and lists. These algorithms only allow serial access when they are embedded inside a process. Implementing buffers with an ets table allows multiple processes to access the buffer directly (although internal locks and the sequential nature of buffers prevent actual concurrent access) and operate in a more efficient and approximately concurrent manner. 35 | 36 | The message queues of processes are naturally FIFO queues, with the additional feature of being able to scan through them using selective receive to find the first occurrence of a particular message pattern. LIFO queues cannot be efficiently modeled using just the built-in message queue of an Erlang process. 37 | 38 | ETS buffers allow independent readers and writers to access data simultaneously through distinct read and write index counters which are atomically updated. A ring buffer is extremely useful under heavy load situations for recording a data window of the most recent activity. It is lightweight in its processing needs, limits the amount of memory used at the cost of overwriting older data, and allows read access completely independent of any writers. It is often used to record information in real time for later offline analysis. 39 | 40 | 41 | Concurrency Control 42 | ------------------- 43 | 44 | Most distributed systems need dynamic adaptation to traffic demand; however, there are limits to how much concurrency they can handle without overloading and causing failure. Most teams employ pooling as a solution, treating processes as a limited resource to be allocated. This approach is overly complicated, turns out not to be very concurrent because of the central management of resource allocation, can introduce latent errors by recycling previously used processes (beware of process dictionary garbage!), and leads to message queue overload and cascading timeout failures under heavy pressure. 45 | 46 | A more lightweight approach is to limit concurrency via a governor on the number of active processes. When the limit is exceeded, functions are executed inline rather than spawned, thus providing much-needed backpressure on the source of the concurrency requests.
The governor here is implemented as a counter in an ETS table, with optional execution performance times recorded in a ring buffer residing in the same ETS table. 47 | 48 | 49 | Concurrency Fount 50 | ----------------- 51 | 52 | Concurrency fount (cxy_fount) offers an alternative control to avoid concurrency overload. It is a reservoir of one-shot pre-spawned processes which refresh at a controlled rate. The reservoir represents a pre-allocation of the total potential computational power for a single category of tasks on a node. Once it is consumed, no additional work should be started (or even accepted) until the compute potential is partially replenished. It is assumed that allocated workers make actual progress, so that they complete before too many new processes become available. 53 | 54 | This pattern uses a supervisor and two gen_fsms, and does not rely on any ets tables. The cxy_fount API is a serial mailbox on the cxy_fount process, so it may become a limiting factor in some situations, but the overall structure is designed to avoid catastrophic failure in favor of surviving individual failures (even if occurring rapidly and repeatedly in the worker tasks). 55 | 56 | The cxy_fount pattern is new as of Version 1.1.0, and more improvements will arrive before 1.2.0. 57 | 58 | Caches 59 | ------ 60 | 61 | Most distributed systems consult information that is slow to generate or obtain. This is often because it is supplied by an external application such as a database or key/value store, or it comes via a TCP connection to an external system. In these situations, the built-in facilities of ETS key/value RAM storage can be used to retain previously retrieved information in anticipation of its reuse by other processes. Many development teams build their own application-specific caching on top of ETS because there is no basic caching capability provided in OTP. 62 | 63 | The cxy_cache provided here is a generational cache with two generations. New objects are inserted in the newest generation. When a generation change occurs, a new empty generation is created and the oldest generation is deleted. The previously active generation becomes the old generation. The pattern of access is: new generation -> old generation -> external data source. When an item is found in the old generation, it is copied to the new generation so that it will survive the next generation change. Everything residing solely in the old generation will be automatically eliminated in the single action of deleting that generation when it has aged. 64 | 65 | Generation cycling can be performed based on elapsed time, number of accesses, an arbitrary function, or, when the entire dataset can comfortably fit in memory, never. 66 | 67 | 68 | Batch Feeder 69 | ------------ 70 | 71 | Batch feeder is used for the lazy generation of large sets, context-aware iteration over sets, or paced iteration. It provides a behaviour framework for implementing a control structure driven by the generation of a continuation function at each step of the iteration. It may be combined with concurrency constructs to chop a large structure into segments which can be processed as batches in the background.
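A minimal callback sketch (modeled on example/list_feeder.erl in this repo; the squares_feeder name and batch_size key are illustrative, not part of the library):

```erlang
-module(squares_feeder).
-behaviour(batch_feeder).
-export([run/0, first_batch/1, prep_batch/3, exec_batch/3]).

%% Drive the whole iteration; the context is {CallbackModule, Env}.
run() -> batch_feeder:process_data({?MODULE, [{batch_size, 3}]}).

first_batch({_Module, Env} = Context) ->
    Size = proplists:get_value(batch_size, Env),
    {Batch, Rest} = batch_feeder:get_next_batch_list(Size, lists:seq(1, 10)),
    {{Batch, Context}, continuation(Rest)}.

%% Reformat each raw chunk before execution.
prep_batch(_Iteration, Batch, Context) -> {[N * N || N <- Batch], Context}.

%% Perform the downstream work on the prepared chunk.
exec_batch(Iteration, Batch, Context) ->
    io:format("Batch ~p: ~p~n", [Iteration, Batch]),
    {ok, Context}.

%% Each step returns the next chunk plus a fresh continuation, or 'done'.
continuation([])   -> fun(_Iteration, _Context) -> done end;
continuation(Rest) ->
    fun(_Iteration, {_Module, Env} = Context) ->
            Size = proplists:get_value(batch_size, Env),
            {Batch, More} = batch_feeder:get_next_batch_list(Size, Rest),
            {{Batch, Context}, continuation(More)}
    end.
```

Calling squares_feeder:run() prints the squares of 1..10 in batches of three, with each batch handed out by a fresh continuation, which is how the pacing hook works.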
72 | -------------------------------------------------------------------------------- /RELEASES.md: -------------------------------------------------------------------------------- 1 | Release History 2 | =============== 3 | 4 | The following releases of epocxy are planned: 5 | 6 | 1.5.0 () 7 | -------- 8 | 9 | Requires OTP 19 or later to run. 10 | 11 | 1.2.0 () 12 | -------- 13 | 14 | New read-only ETS ring buffer 15 | cxy_fount uses proc_lib thus allowing worker OTP tracing/debugging 16 | 17 | 1.1.1 () 18 | -------- 19 | cxy_fount tests run much faster due to clock skewing 20 | 21 | ===== 22 | 23 | The following releases of epocxy have been made: 24 | 25 | 26 | 1.1.0 (15 Sep 2016) 27 | ------------------- 28 | 29 | WARNING: The addition of cxy_fount slowed down 'make tests' 30 | significantly because it uses PropEr and attempts many variations 31 | of waiting on the cxy_regulator to slowly refill the reservoir. 32 | 33 | Add RELEASES.md for overview of historical changes 34 | Allow use on OTP 19.0 35 | Update erlang.mk to latest version 36 | Complete concurrency fount functionality and hexdump example 37 | Add epocxy to Hex (courtesy of Fernando Bienevides) 38 | Fix restart and refresh item issues with concurrency cache 39 | 40 | 1.0.0 () 41 | -------- 42 | 43 | Add high water mark to cxy_ctl (courtesy of David Hull) 44 | Initial prototype of concurrency fount 45 | Initial property-based testing of concurrency fount 46 | Fix some warnings with batch feeder 47 | 48 | 0.9.9 () 49 | -------- 50 | 51 | 0.9.8e 52 | ------ 53 | -------------------------------------------------------------------------------- /doc/2014-03-06-Erlang-Factory.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/duomark/epocxy/affd1c41aeae256050e2b2f11f2feb3532df8ebd/doc/2014-03-06-Erlang-Factory.pdf -------------------------------------------------------------------------------- /doc/2016-09-09-EUC-Concurrency-Fount.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/duomark/epocxy/affd1c41aeae256050e2b2f11f2feb3532df8ebd/doc/2016-09-09-EUC-Concurrency-Fount.pdf -------------------------------------------------------------------------------- /doc/Epocxy_Explained.md: -------------------------------------------------------------------------------- 1 | Epocxy Explained 2 | ================ 3 | 4 | The epocxy library is a set of concurrency patterns that are useful for high-performance systems, but they also can be used on smaller systems to achieve a more disciplined architecture that is easier to maintain, understand, control, and monitor. 5 | 6 | The following patterns are available: 7 | 8 | 1. cxy_ctl: limits the number of spawned processes by category 9 | 1. cxy_fount: limits the number of spawned processes with a paced replacement policy 10 | 1. cxy_cache: generational caching for large numbers of objects 11 | 1. cxy_synch: awaits for M of N replies from processes before continuing 12 | 1. batch_feeder: provides a generalized pattern for continuation-based iteration 13 | 1. ets_buffers: concurrently accessible LIFO, FIFO and Ring buffers 14 | 15 | cxy_ctl 16 | ------- 17 | 18 | A single ets table is used to hold metadata about all the concurrency categories that are limited. This table is indexed by category and contains the following fields for each category of limited concurrency: 19 | 20 | 1. Max procs: the maximum number of simultaneous processes allowed of this type 21 | 1. 
Active procs: the current number of simultaneous processes 22 | 1. Max history: size of a circular buffer of historical spawns for debugging 23 | 1. High water max procs: the highest number of processes since last reset 24 | 25 | When a new process is spawned by category using the cxy_ctl module, it will only get created if the number of active processes is less than the maximum allowed. 26 | 27 | cxy_fount 28 | --------- 29 | 30 | Concurrency fount uses a reservoir and a regulator to maintain a potential computation capacity. The reservoir consists of a stack of slab-allocated processes. The regulator supplies each slab of processes, but no faster than one slab every 1/100th of a second. Processes in the reservoir are spawned before they are needed, and are linked to the cxy_fount so that it can be stopped and all unallocated processes are terminated. 31 | 32 | While cxy_fount reduces latency by pre-spawning, the most important purpose is to limit the maximum computational resource consumed during overload situations. It is designed to accept but limit the impact of sudden spikes, so that a node doesn't crash under unexpected pressure. When the reservoir is empty, no new tasks can be started. The processes in a reservoir are expected to perform a task and then terminate within a roughly predictable amount of time which doesn't dramatically exceed the reservoir refill time. If a tasked worker does not terminate, but the reservoir keeps refilling at a rate of one slab per 1/100th of a second, the reservoir will not be a capacity limit on processing potential. 33 | 34 | There is an example that demonstrates using cxy_fount to perform a parallel hexdump on a blob of binary data. 35 | 36 | cxy_cache 37 | --------- 38 | 39 | Concurrency cache uses two ets tables (and a metatable) to track cached objects. There is a new generation and an old generation table. When an object is accessed, it is migrated forward from an older generation or retrieved for the first time and placed in the current generation. When a generation expires, the oldest generation will only contain those objects that have not been accessed even once in the last two generations; therefore the table can be deleted and all resident objects removed in one action. This approach has far less overhead than tracking thousands of objects with independent timers for each one. 40 | 41 | cxy_synch 42 | --------- 43 | 44 | Synchronization barriers are under development. There is an example of collecting results from the first M of N processes, but this code will likely change soon now that cxy_fount is available. The existing code also demonstrates the use of Event Tracing to obtain message sequence diagrams for debugging concurrency. 45 | 46 | batch_feeder 47 | ------------ 48 | 49 | Another experiment, this time using a behaviour to implement function continuations to chew through a batch of transactions sequentially. Still under development. 50 | 51 | ets_buffers 52 | ----------- 53 | 54 | As of 1.1.0 and earlier versions, the ets_buffers are not reliable with a high number of readers and writers. Improvements are not planned until 1.2.0. It is best to avoid using them until a replacement implementation is provided. 55 | -------------------------------------------------------------------------------- /example/cxy_synch_trace.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2014-2015, DuoMark International, Inc.
3 | %%% @author Jay Nelson [http://duomark.com/] 4 | %%% @reference 2014-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Synchronization barriers force multiple processes to pause until all 9 | %%% participants reach the same point. They then may proceed independently. 10 | %%% 11 | %%% @since 0.9.8 12 | %%% @end 13 | %%%------------------------------------------------------------------------------ 14 | -module(cxy_synch_trace). 15 | -author('Jay Nelson '). 16 | 17 | -include("tracing_levels.hrl"). 18 | 19 | %% External API 20 | -export([ 21 | example/0, 22 | start_debug/0, 23 | start/0, 24 | start/2 25 | ]). 26 | 27 | example() -> 28 | start(), 29 | F = fun() -> 3+4 end, 30 | cxy_synch:before_task(3, F). 31 | 32 | start() -> start("cxy_synch:before_task", ?TRACE_TIMINGS). 33 | start_debug() -> start("cxy_synch:before_task", ?TRACE_DEBUG). 34 | 35 | start(Title, Trace_Level) -> 36 | et_viewer:start([ 37 | {title, Title}, 38 | {trace_global, true}, 39 | {trace_pattern, {et, Trace_Level}}, 40 | {max_actors, 10} 41 | ]), 42 | dbg:p(all,call), 43 | dbg:tpl(et, trace_me, 5, []), 44 | ok. 45 | -------------------------------------------------------------------------------- /example/fount_worker.erl: -------------------------------------------------------------------------------- 1 | -module(fount_worker). 2 | 3 | -behaviour(cxy_fount). 4 | 5 | -export([start_pid/1, send_msg/3]). 6 | 7 | -spec start_pid(cxy_fount:fount_ref()) -> pid(). 8 | start_pid(Fount_Ref) -> 9 | Pid = spawn(fun() -> receive {From, hello} -> From ! goodbye 10 | end 11 | end), 12 | link(Fount_Ref), 13 | Pid. 14 | 15 | -spec send_msg(cxy_fount:fount_ref(), module, tuple()) -> pid(). 16 | send_msg(_Fount_Ref, _Module, _Tuple) -> 17 | spawn_link(fun() -> receive {From, hello} -> From ! goodbye 18 | after 3000 -> timeout 19 | end 20 | end). 21 | -------------------------------------------------------------------------------- /example/hexdump_fount.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2016, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference The license is based on the template for Modified BSD from 5 | %%% OSI 6 | %%% @doc 7 | %%% An example cxy_fount behaviour which splits data into lines, 8 | %%% then uses separate cxy_fount allocated processes to format 9 | %%% a line-oriented dump of the data. 10 | %%% 11 | %%% The following steps are used to format the data: 12 | %%% 13 | %%% 1) Grab a process for the source data 14 | %%% 2) Read and split the source data to lines 15 | %%% 3) Grab a process for each line plus a collector 16 | %%% 4) Route the data to the line processes 17 | %%% 5) Send expected summary data to collector 18 | %%% 6) Collect formatted results in an array 19 | %%% 7) Collector prints results out 20 | %%% 21 | %%% @since v1.1.0 22 | %%% @end 23 | %%%------------------------------------------------------------------------------ 24 | -module(hexdump_fount). 25 | -author('Jay Nelson '). 26 | 27 | -behaviour(cxy_fount). 28 | 29 | %%% External API 30 | -export([format_data/3, report_data/0]). 31 | 32 | %%% Behaviour API 33 | -export([init/1, start_pid/2, send_msg/2]). 34 | 35 | %%% Exported for spawn access 36 | -export([formatter/2]). 37 | -export([hex/1, split_lines/3, addr/1]). 
38 | 39 | 40 | %%%=================================================================== 41 | %%% External API 42 | %%%=================================================================== 43 | -spec format_data(cxy_fount:fount_ref(), binary(), pid()) -> [pid()]. 44 | -spec report_data() -> string() | no_results. 45 | 46 | format_data(Fount, Data, Caller) -> 47 | cxy_fount:task_pid(Fount, {load, Data, Caller}). 48 | 49 | report_data() -> 50 | receive {hexdump, Lines} -> 51 | lists:flatten( 52 | [io_lib:format(" ~p. ~s ~s |~-16.16s| ~p~n", 53 | [Index, Address, Hexpairs, Window, Pid]) 54 | || {Index, Address, Hexpairs, Window, Pid} <- Lines]) 55 | after 1000 -> no_results 56 | end. 57 | 58 | 59 | %%%=================================================================== 60 | %%% Behaviour callback datatypes and functions 61 | %%%=================================================================== 62 | -type loader() :: pid(). 63 | -type collector() :: pid(). 64 | -type formatter() :: pid(). 65 | 66 | -type data() :: binary(). 67 | -type line() :: binary(). 68 | -type hexchars() :: binary(). 69 | -type window() :: binary(). 70 | 71 | -type num_workers() :: pos_integer(). 72 | -type position() :: pos_integer(). 73 | -type address() :: pos_integer(). 74 | 75 | -type load_cmd() :: {load, data()}. 76 | -type format_cmd() :: {format, position(), address(), line(), collector()}. 77 | -type collect_cmd() :: {collect, position(), address(), hexchars(), window(), worker()} 78 | | {collect, num_workers()}. 79 | 80 | -type worker() :: loader() | formatter() | collector(). 81 | -type hexdump_cmd() :: load_cmd() | format_cmd() | collect_cmd(). 82 | 83 | -spec init({}) -> {}. 84 | -spec start_pid(cxy_fount:fount_ref(), {}) -> pid(). 85 | -spec send_msg(Worker, hexdump_cmd()) -> Worker when Worker :: worker(). 86 | 87 | init({}) -> {}. 88 | 89 | start_pid(Fount, State) -> 90 | cxy_fount:spawn_worker(Fount, ?MODULE, formatter, [Fount, State]). 91 | 92 | send_msg(Worker, Msg) -> 93 | cxy_fount:send_msg(Worker, Msg). 94 | 95 | 96 | %%%=================================================================== 97 | %%% Customized behaviour message handling 98 | %%% - all idle, untasked workers are waiting on this receive 99 | %%% - a single message arrives after unlinking from fount 100 | %%% - the freed worker runs to completion 101 | %%%=================================================================== 102 | formatter(Fount, {}) -> 103 | 104 | %% Workers can be 1 loader, N formatters, or 1 collector 105 | %% arranged in a fan out -> fan in structure. 106 | %% Each responds to one particular message only. 107 | receive 108 | 109 | %% First stage data loader 110 | {load, Data, Caller} when is_binary(Data), is_pid(Caller) -> 111 | Lines = split_lines(Data, [], 0), 112 | Num_Workers = length(Lines), 113 | [Collector | Workers] = cxy_fount:get_pids(Fount, Num_Workers+1), 114 | Collector ! {collect, Num_Workers, Caller}, 115 | done = send_format_msgs(Workers, Lines, Collector); 116 | 117 | %% Data formatting worker 118 | {format, Position, Address, Line, Collector} -> 119 | Collector ! {collect, Position, addr(Address), hex(Line), Line, self()}; 120 | 121 | %% Collector 122 | {collect, Num_Workers, Requester} -> 123 | collect_hexdump_lines(array:new(), Num_Workers, Requester) 124 | end. 125 | 126 | %%% Collect and display the hexdump. 127 | line_fmt(Index, {Address, Hexchars, Window, Pid}, Lines) -> 128 | Hexpairs = string:join(Hexchars, " "), 129 | [{Index, Address, Hexpairs, Window, Pid} | Lines]. 
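%%% collect_hexdump_lines/3 below counts down one {collect, ...} response per
%%% formatter, storing each formatted line at its original position, then folds
%%% the array in order and replies to the requester with {hexdump, Lines}.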
130 | 131 | collect_hexdump_lines(Array, 0, Requester) -> 132 | Requester ! {hexdump, lists:reverse(array:foldl(fun line_fmt/3, [], Array))}; 133 | collect_hexdump_lines(Array, Remaining_Responses, Requester) -> 134 | receive 135 | {collect, Position, Address, Hexchars, Window, Pid} 136 | when is_integer(Position), Position >= 0, 137 | is_binary(Address), is_list(Hexchars), is_binary(Window) -> 138 | Array_Value = {Address, Hexchars, Window, Pid}, 139 | New_Array = array:set(Position, Array_Value, Array), 140 | collect_hexdump_lines(New_Array, Remaining_Responses-1, Requester) 141 | end. 142 | 143 | 144 | %%%=================================================================== 145 | %%% Worker reformatting 146 | %%%=================================================================== 147 | 148 | %%% Order doesn't matter since we are formatting each line 149 | %%% concurrently in a separate process. The returned list 150 | %%% tags each line with its original position. 151 | split_lines(Data, Lines, Pos) -> 152 | case Data of 153 | <<>> -> 154 | lists:reverse(Lines); 155 | <> -> 156 | split_lines(Rest, [{Pos, Line} | Lines], Pos+1); 157 | Last_Line -> 158 | [{Pos, Last_Line} | Lines] 159 | end. 160 | 161 | send_format_msgs([], [], _Collector) -> 162 | done; 163 | send_format_msgs([Worker | More], [{Pos, Line} | Lines], Collector) -> 164 | Worker ! {format, Pos, Pos*16, Line, Collector}, 165 | send_format_msgs(More, Lines, Collector). 166 | 167 | addr(Address) -> 168 | list_to_binary(lists:flatten(io_lib:format("~8.16.0b", [Address]))). 169 | 170 | hex(Line) -> 171 | Hex_Line = [hexval(Char) || <> <= Line], 172 | case length(Hex_Line) of 173 | 16 -> Hex_Line; 174 | N -> Pad_Size = 16 - N, 175 | Pad_Chars = lists:duplicate(Pad_Size, " "), 176 | Hex_Line ++ Pad_Chars 177 | end. 178 | 179 | hexval(Char) -> 180 | Dig1 = hexdigit(Char div 16), 181 | Dig2 = hexdigit(Char rem 16), 182 | [Dig1, Dig2]. 183 | 184 | hexdigit(0) -> $0; 185 | hexdigit(1) -> $1; 186 | hexdigit(2) -> $2; 187 | hexdigit(3) -> $3; 188 | hexdigit(4) -> $4; 189 | hexdigit(5) -> $5; 190 | hexdigit(6) -> $6; 191 | hexdigit(7) -> $7; 192 | hexdigit(8) -> $8; 193 | hexdigit(9) -> $9; 194 | hexdigit(10) -> $a; 195 | hexdigit(11) -> $b; 196 | hexdigit(12) -> $c; 197 | hexdigit(13) -> $d; 198 | hexdigit(14) -> $e; 199 | hexdigit(15) -> $f. 200 | -------------------------------------------------------------------------------- /example/list_feeder.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% An example of a batch_feeder behaviour implementation to process a 9 | %%% list of integers. 10 | %%% @since 0.9.9 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(list_feeder). 14 | -author('Jay Nelson '). 15 | 16 | -behaviour(batch_feeder). 17 | 18 | -export([ 19 | first_batch/1, 20 | prep_batch/3, 21 | exec_batch/3 22 | ]). 23 | 24 | %%% Localized narrower batch_feeder dialyzer datatypes. 25 | -type context() :: {module(), batch_feeder:context(proplists:proplist())}. 26 | -type batch_type() :: [pos_integer()]. 27 | -type batch_chunk() :: batch_feeder:batch_chunk(batch_type()). 
28 | -type batch_continue() :: batch_feeder:batch_continue(batch_type(), context()). 29 | -type batch_done() :: batch_feeder:batch_done(). 30 | -type thunk_return() :: batch_feeder:thunk_return(batch_type(), context()). 31 | 32 | -spec first_batch(context()) -> thunk_return(). 33 | -spec prep_batch(pos_integer(), batch_chunk(), context()) -> batch_continue(). 34 | -spec exec_batch(pos_integer(), batch_chunk(), context()) -> {ok, context()} | {error, any()}. 35 | 36 | 37 | %%% Model implementation details... 38 | all_ids() -> [1,2,3,4,5,6,7,8,9,10]. 39 | batch_size_prop() -> batch_size. 40 | 41 | 42 | %%% batch_feeder behaviour implementation. 43 | first_batch({_Module, Env} = Context) -> 44 | Num_Items = proplists:get_value(batch_size_prop(), Env), 45 | {Batch, Rest} = batch_feeder:get_next_batch_list(Num_Items, all_ids()), 46 | {{Batch, Context}, make_continuation_fn(Rest)}. 47 | 48 | prep_batch(Iteration, Batch, Context) -> 49 | {[{{pass, Iteration}, Elem} || Elem <- Batch], Context}. 50 | 51 | exec_batch(Iteration, Batch, Context) -> 52 | _ = [io:format("Iteration ~p: ~p~n", [Iteration, {processed, Elem}]) || Elem <- Batch], 53 | {ok, Context}. 54 | 55 | 56 | %%%------------------------------------------------------------------------------ 57 | %%% Support functions 58 | %%%------------------------------------------------------------------------------ 59 | 60 | make_continuation_fn([]) -> 61 | fun(_Iteration, _Context) -> done end; 62 | make_continuation_fn(Batch_Remaining) -> 63 | fun(_Iteration, {_Module, Env} = Context) -> 64 | Num_Items = proplists:get_value(batch_size_prop(), Env), 65 | {Next_Batch, More} = batch_feeder:get_next_batch_list(Num_Items, Batch_Remaining), 66 | {{Next_Batch, Context}, make_continuation_fn(More)} 67 | end. 68 | 69 | get_next_batch(Num_Items_Per_Batch, Items) -> 70 | case length(Items) of 71 | N when N =< Num_Items_Per_Batch -> 72 | {Items, []}; 73 | Len -> 74 | lists:split(lists:min([Num_Items_Per_Batch, Len]), Items) 75 | end. 76 | -------------------------------------------------------------------------------- /include/cxy_cache.hrl: -------------------------------------------------------------------------------- 1 | 2 | -record(cxy_cache_meta, 3 | { 4 | cache_name :: cxy_cache:cache_name(), 5 | started = os:timestamp() :: erlang:timestamp(), 6 | gen1_hit_count = 0 :: cxy_cache:gen1_hit_count(), 7 | gen2_hit_count = 0 :: cxy_cache:gen2_hit_count(), 8 | refresh_count = 0 :: cxy_cache:refresh_count(), 9 | delete_count = 0 :: cxy_cache:delete_count(), 10 | fetch_count = 0 :: cxy_cache:fetch_count(), 11 | error_count = 0 :: cxy_cache:error_count(), 12 | miss_count = 0 :: cxy_cache:miss_count(), 13 | new_gen_time = undefined :: erlang:timestamp() | undefined, 14 | old_gen_time = undefined :: erlang:timestamp() | undefined, 15 | new_gen = undefined :: ets:tid() | undefined, 16 | old_gen = undefined :: ets:tid() | undefined, 17 | cache_module :: module(), 18 | new_generation_function = none :: cxy_cache:check_gen_fun(), 19 | new_generation_thresh = 0 :: non_neg_integer() 20 | }). 21 | 22 | -record(cxy_cache_value, 23 | { 24 | key :: cxy_cache:cached_key(), 25 | value :: cxy_cache:cached_value(), 26 | version :: cxy_cache:cached_value_vsn() 27 | }). 28 | -------------------------------------------------------------------------------- /include/tracing_levels.hrl: -------------------------------------------------------------------------------- 1 | -define(TRACE_ALL, 99). 2 | -define(TRACE_DEBUG, 30). 3 | -define(TRACE_TIMINGS, 20). 4 | -define(TRACE_CXY, 10).
5 | -define(TRACE_ERRORS, 1). 6 | -------------------------------------------------------------------------------- /rebar.config: -------------------------------------------------------------------------------- 1 | {deps, [ 2 | {proper,".*",{git,"https://github.com/manopapad/proper",{branch, "master"}}} 3 | ]}. 4 | {erl_opts, [debug_info]}. 5 | -------------------------------------------------------------------------------- /src/batch_feeder.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Batches are often used as a way to increase throughput or to handle a 9 | %%% large amount of data in bite-size chunks. The key features of this 10 | %%% module are the ability to control the size and pace of the batches 11 | %%% and to adapatively modify the amount of processing in each step. A 12 | %%% common pattern is to pair a batch_feeder with a cxy_fount or cxy_ctl 13 | %%% to incrementally process a large set of data using concurrency in a 14 | %%% controlled way to avoid overloading the CPU and memory. 15 | %%% @since 0.9.9 16 | %%% @end 17 | %%%------------------------------------------------------------------------------ 18 | -module(batch_feeder). 19 | -author('Jay Nelson '). 20 | 21 | -export([process_data/1, get_next_batch_list/2]). 22 | 23 | 24 | %% Source of work is consumed using continuation thunks. 25 | %% Each unit of work is handled independently, but the 26 | %% pace from one unit to the next is controlled. 27 | 28 | -type iteration() :: pos_integer(). 29 | -type context(Ctxt) :: {module, Ctxt}. 30 | -type batch_chunk(Type) :: Type. 31 | -type batch_done() :: done. 32 | -type batch_continue(Type, Context) :: {batch_chunk(Type), Context} | batch_done(). 33 | 34 | -type thunk(Type, Context) :: fun((pos_integer(), Context) -> batch_continue(Type, Context)). 35 | -type thunk_return(Type, Context) :: {batch_continue(Type, Context), thunk(Type, Context)}. 36 | 37 | -export_type([batch_chunk/1, context/1, batch_continue/2, batch_done/0, 38 | thunk/2, thunk_return/2]). 39 | 40 | 41 | %%% The first batch is generated from only a contextual configuration. 42 | %%% It returns a batch plus a thunk for the next batch. 43 | -callback first_batch(context(Ctxt1)) -> 44 | thunk_return(Type, context(Ctxt2)) 45 | when Type::any(), Ctxt1 :: any(), Ctxt2 :: any(). 46 | 47 | %%% Reformat the raw batch in preparation for processing (can be just a noop()). 48 | -callback prep_batch(iteration(), batch_chunk(Type), context(Ctxt1)) -> 49 | batch_continue(Type, context(Ctxt2)) 50 | when Type :: any(), Ctxt1 :: any(), Ctxt2 :: any(). 51 | 52 | %%% Perform the downstream task on the batch chunk. 53 | -callback exec_batch(iteration(), batch_chunk(Type), context(Ctxt1)) -> 54 | {ok, context(Ctxt2)} | {error, Reason} 55 | when Type :: any(), Ctxt1 :: any(), Ctxt2 :: any(), Reason :: any(). 56 | 57 | %%% Helper function with simple list splitting for batches. 58 | -spec get_next_batch_list(pos_integer(), list()) -> {list(), list()}. 
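%%% For example, get_next_batch_list(3, [a,b,c,d,e]) returns {[a,b,c], [d,e]},
%%% while a batch size of 5 or more returns {[a,b,c,d,e], []}.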
59 | get_next_batch_list(Num_Items_Per_Batch, Items) -> 60 | case length(Items) of 61 | N when N =< Num_Items_Per_Batch -> 62 | {Items, []}; 63 | Len -> 64 | lists:split(lists:min([Num_Items_Per_Batch, Len]), Items) 65 | end. 66 | 67 | -spec process_data(context(Ctxt)) -> done | {error, tuple()} when Ctxt :: any(). 68 | process_data({Module, _Env} = Context) -> 69 | try Module:first_batch(Context) of 70 | done -> done; 71 | {{First_Batch, Context2}, Continuation_Fn} -> 72 | process_batch(1, First_Batch, Context2, Continuation_Fn) 73 | catch Error:Type:STrace -> Args = [?MODULE, Error, Type, STrace], 74 | error_logger:error_msg("~p:process_data error {~p,~p}~n~99999p", Args), 75 | {error, {first_batch, {1, Error, Type}}} 76 | end. 77 | 78 | %% todo: report progress, elapsed time. 79 | process_batch(Iteration, This_Batch, {Module, _Env} = Context, Continuation_Fn) -> 80 | try 81 | {Prepped_Batch, Context2} = Module:prep_batch(Iteration, This_Batch, Context), 82 | {_Reply, Context3} = case Module:exec_batch(Iteration, Prepped_Batch, Context2) of 83 | {error, _Reason} = Err -> {Err, Context2}; 84 | {ok, Context2a} -> {ok, Context2a} 85 | end, 86 | Next_Iteration = Iteration+1, 87 | case Continuation_Fn(Next_Iteration, Context3) of 88 | done -> done; 89 | {{Next_Batch, Context4}, Next_Continuation_Fn} -> 90 | process_batch(Iteration+1, Next_Batch, Context4, Next_Continuation_Fn) 91 | end 92 | 93 | catch Error:Type:STrace -> Args = [?MODULE, Error, Type, STrace], 94 | error_logger:error_msg("~p:process_batch error {~p,~p}~n~99999p", Args), 95 | {error, {prepped_batch, {Iteration, Error, Type}}} 96 | end. 97 | -------------------------------------------------------------------------------- /src/cxy_cache_fsm.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2013-2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2013-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Generational caching ets owner process. Implemented as a 9 | %%% supervised FSM to separate management logic from caching logic. 10 | %%% 11 | %%% As of v0.9.8c the poll frequency for cxy_cache_fsm cannot be less 12 | %%% than 10 milliseconds. 13 | %%% 14 | %%% @since v0.9.6 15 | %%% @end 16 | %%%------------------------------------------------------------------------------ 17 | -module(cxy_cache_fsm). 18 | -author('Jay Nelson '). 19 | 20 | -behaviour(gen_statem). 21 | 22 | %% API 23 | -export([start_link/2, start_link/3, start_link/4, start_link/5]). 24 | 25 | %% gen_statem state functions 26 | -export(['POLL'/3]). 27 | 28 | %% gen_statem callbacks 29 | -export([init/1, callback_mode/0, terminate/3, code_change/4]). 30 | 31 | -define(SERVER, ?MODULE). 32 | 33 | %% Default polling frequency in millis to check for new generation. 34 | -define(POLL_FREQ, 60000). 35 | 36 | -record(ecf_state, { 37 | cache_name :: cxy_cache:cache_name(), 38 | cache_meta_ets = cxy_cache :: cxy_cache, 39 | poll_frequency = ?POLL_FREQ :: pos_integer() 40 | }). 41 | 42 | 43 | %%%=================================================================== 44 | %%% External API 45 | %%%=================================================================== 46 | 47 | -spec start_link(cxy_cache:cache_name(), module()) -> {ok, pid()}. 
48 | -spec start_link(cxy_cache:cache_name(), module(), pos_integer()) -> {ok, pid()}; 49 | (cxy_cache:cache_name(), module(), cxy_cache:gen_fun()) -> {ok, pid()}. 50 | -spec start_link(cxy_cache:cache_name(), module(), cxy_cache:thresh_type(), 51 | pos_integer()) -> {ok, pid()}. 52 | -spec start_link(cxy_cache:cache_name(), module(), cxy_cache:thresh_type(), 53 | pos_integer(), pos_integer()) -> {ok, pid()}. 54 | 55 | %% start_link is called to reserve a cxy_cache name and specification. 56 | %% The metadata ets table for all caches is created if not already 57 | %% present, so the caller should be a single supervisor to ensure that 58 | %% there is one source for the cache metadata ets ownership. The cache 59 | %% itself is initialized inside the init function so that the new 60 | %% FSM instance is the owner of the internal generation ets tables. 61 | 62 | %% Supervisor restarts of cxy_cache_fsm need to ensure the old 63 | %% cache meta data is not still around, so any lingering old 64 | %% caches have to be deleted before instantiating them again. 65 | 66 | %% No generation creation, so no poll time. 67 | start_link(Cache_Name, Cache_Mod) 68 | when is_atom(Cache_Name), is_atom(Cache_Mod) -> 69 | _ = cxy_cache:delete(Cache_Name), 70 | Cache_Name = cxy_cache:reserve(Cache_Name, Cache_Mod), 71 | gen_statem:start_link(?MODULE, {Cache_Name}, []). 72 | 73 | start_link(Cache_Name, Cache_Mod, Gen_Fun) 74 | when is_atom(Cache_Name), is_atom(Cache_Mod), is_function(Gen_Fun, 3) -> 75 | Cache_Name = cxy_cache:reserve(Cache_Name, Cache_Mod, Gen_Fun), 76 | gen_statem:start_link(?MODULE, {Cache_Name}, []). 77 | 78 | %% Change frequency that generation function runs... 79 | start_link(Cache_Name, Cache_Mod, Gen_Fun, Poll_Time) 80 | when is_atom(Cache_Name), is_atom(Cache_Mod), is_function(Gen_Fun, 3), 81 | is_integer(Poll_Time), Poll_Time >= 10 -> 82 | _ = cxy_cache:delete(Cache_Name), 83 | Cache_Name = cxy_cache:reserve(Cache_Name, Cache_Mod, Gen_Fun), 84 | gen_statem:start_link(?MODULE, {Cache_Name, Poll_Time}, []); 85 | %% Use strictly time-based microsecond generational change 86 | %% (but millisecond granularity on FSM polling)... 87 | start_link(Cache_Name, Cache_Mod, time, Gen_Frequency) 88 | when is_atom(Cache_Name), is_atom(Cache_Mod), 89 | is_integer(Gen_Frequency), Gen_Frequency >= 10000 -> 90 | _ = cxy_cache:delete(Cache_Name), 91 | Cache_Name = cxy_cache:reserve(Cache_Name, Cache_Mod, time, Gen_Frequency), 92 | Poll_Time = round(Gen_Frequency / 1000) + 1, 93 | gen_statem:start_link(?MODULE, {Cache_Name, Poll_Time}, []); 94 | %% Generational change occurs based on access frequency (using default polling time to check). 95 | start_link(Cache_Name, Cache_Mod, count, Threshold) 96 | when is_atom(Cache_Name), is_atom(Cache_Mod), is_integer(Threshold), Threshold > 0 -> 97 | _ = cxy_cache:delete(Cache_Name), 98 | Cache_Name = cxy_cache:reserve(Cache_Name, Cache_Mod, count, Threshold), 99 | gen_statem:start_link(?MODULE, {Cache_Name}, []); 100 | %% Override default polling with a non-generational cache. 101 | start_link(Cache_Name, Cache_Mod, none, Poll_Time) 102 | when is_atom(Cache_Name), is_atom(Cache_Mod), is_integer(Poll_Time), Poll_Time >= 10 -> 103 | _ = cxy_cache:delete(Cache_Name), 104 | Cache_Name = cxy_cache:reserve(Cache_Name, Cache_Mod, none), 105 | gen_statem:start_link(?MODULE, {Cache_Name, Poll_Time}, []). 106 | 107 | %% Generational change occurs based on access frequency (using override polling time to check). 
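%% For example (cache and module names are hypothetical):
%% start_link(user_cache, user_obj, count, 10000, 50) reserves a count-based
%% cache that rolls generations after roughly 10000 accesses, polling every
%% 50 milliseconds.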
108 | start_link(Cache_Name, Cache_Mod, count, Threshold, Poll_Time) 109 | when is_atom(Cache_Name), is_atom(Cache_Mod), is_integer(Threshold), Threshold > 0 -> 110 | _ = cxy_cache:delete(Cache_Name), 111 | Cache_Name = cxy_cache:reserve(Cache_Name, Cache_Mod, count, Threshold), 112 | gen_statem:start_link({local, ?SERVER}, ?MODULE, {Cache_Name, Poll_Time}, []). 113 | 114 | 115 | %%%=================================================================== 116 | %%% gen_statem callbacks 117 | %%%=================================================================== 118 | 119 | -type state_name() :: 'POLL'. 120 | 121 | -spec init({cxy_cache:cache_name()} | {cxy_cache:cache_name(), pos_integer()}) -> {ok, 'POLL', #ecf_state{}}. 122 | -spec terminate (any(), state_name(), #ecf_state{}) -> ok. 123 | -spec code_change (any(), state_name(), #ecf_state{}, any()) -> {ok, state_name(), #ecf_state{}}. 124 | 125 | %% Internal ets table instances containing the generations of cached 126 | %% data will be owned by a corresponding FSM process so that 127 | %% cache instances cannot outlive the parent metadata ets table. 128 | %% The loss of an ets owner automatically means the loss of any owned 129 | %% ets table instances, so all generation creation and rollover needs 130 | %% to be managed and called directly within the FSM process. 131 | 132 | init({Cache_Name}) -> 133 | true = cxy_cache:create(Cache_Name), 134 | Init_State = #ecf_state{cache_name=Cache_Name}, 135 | init_finish(Init_State); 136 | init({Cache_Name, Poll_Millis}) -> 137 | true = cxy_cache:create(Cache_Name), 138 | Init_State = #ecf_state{cache_name=Cache_Name, poll_frequency=Poll_Millis}, 139 | init_finish(Init_State). 140 | 141 | init_finish(#ecf_state{poll_frequency=Poll_Millis} = Init_State) -> 142 | erlang:send_after(Poll_Millis, self(), timeout), 143 | {ok, 'POLL', Init_State}. 144 | 145 | code_change (_OldVsn, State_Name, #ecf_state{} = State, _Extra) -> {ok, State_Name, State}. 146 | terminate (_Reason, _State_Name, #ecf_state{cache_name=Cache_Name}) -> 147 | _ = cxy_cache:delete(Cache_Name), 148 | ok. 149 | 150 | %% The FSM has only the 'POLL' state. 151 | -spec 'POLL'(cast, stop, #ecf_state{}) -> {stop, normal}. 152 | 'POLL'(cast, stop, #ecf_state{}) -> {stop, normal}; 153 | 154 | 'POLL'(info, timeout, #ecf_state{cache_name=Cache_Name, poll_frequency=Poll_Millis} = State) -> 155 | _ = cxy_cache:maybe_make_new_generation(Cache_Name), 156 | erlang:send_after(Poll_Millis, self(), timeout), 157 | {keep_state, State}; 158 | 159 | 'POLL'(_, _, State) -> 160 | {keep_state, State}. 161 | 162 | -spec callback_mode() -> atom(). 163 | callback_mode() -> 164 | state_functions. 165 | -------------------------------------------------------------------------------- /src/cxy_cache_sup.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2013-2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2013-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Generational caching supervisor. Manages fsm children which are 9 | %%% owners of the underlying ets tables implementing the cache. The 10 | %%% ets tables need to be owned by long-lived processes which do 11 | %%% little computation to avoid crashing and losing the cached data. 
12 | %%% 13 | %%% As of v0.9.8c the poll frequency for cxy_cache_fsm cannot be less 14 | %%% than 10 milliseconds. 15 | %%% 16 | %%% @since v0.9.6 17 | %%% @end 18 | %%%------------------------------------------------------------------------------ 19 | -module(cxy_cache_sup). 20 | -author('Jay Nelson '). 21 | 22 | -behaviour(supervisor). 23 | 24 | %% API 25 | -export([start_link/0, start_cache/2, start_cache/3, start_cache/4]). 26 | 27 | %% Supervisor callbacks 28 | -export([init/1]). 29 | 30 | -define(SERVER, ?MODULE). 31 | 32 | 33 | %% =================================================================== 34 | %% API functions 35 | %% =================================================================== 36 | 37 | -spec start_link() -> {ok, pid()}. 38 | -spec start_cache(cxy_cache:cache_name(), module()) -> {ok, pid()}. 39 | -spec start_cache(cxy_cache:cache_name(), module(), pos_integer()) -> {ok, pid()}; 40 | (cxy_cache:cache_name(), module(), cxy_cache:gen_fun()) -> {ok, pid()}. 41 | 42 | -spec start_cache(cxy_cache:cache_name(), module(), cxy_cache:thresh_type(), pos_integer()) -> {ok, pid()}. 43 | 44 | %% start_link creates the single metadata supervisor for all caches. 45 | %% start_cache creates an instance of a cache as a child of this supervisor. 46 | 47 | start_link() -> supervisor:start_link({local, ?SERVER}, ?MODULE, {}). 48 | 49 | %% Start a non-generational cache (all data fits in memory). 50 | start_cache(Cache_Name, Cache_Mod) 51 | when is_atom(Cache_Name), is_atom(Cache_Mod) -> 52 | Args = [Cache_Name, Cache_Mod], 53 | supervisor:start_child(?SERVER, Args). 54 | 55 | %% Start a generational cache with a specific poll time. 56 | start_cache(Cache_Name, Cache_Mod, Poll_Time) 57 | when is_atom(Cache_Name), is_atom(Cache_Mod), 58 | is_integer(Poll_Time), Poll_Time >= 10 -> 59 | Args = [Cache_Name, Cache_Mod, none, Poll_Time], 60 | supervisor:start_child(?SERVER, Args); 61 | 62 | %% Start a generational cache which ages using a fun call. 63 | start_cache(Cache_Name, Cache_Mod, Gen_Fun) 64 | when is_atom(Cache_Name), is_atom(Cache_Mod), is_function(Gen_Fun, 3) -> 65 | Args = [Cache_Name, Cache_Mod, Gen_Fun], 66 | supervisor:start_child(?SERVER, Args). 67 | 68 | %% Start other types of caches. 69 | start_cache(Cache_Name, Cache_Mod, Type, Threshold) 70 | when is_atom(Cache_Name), is_atom(Cache_Mod), 71 | (Type =:= count orelse Type =:= time), 72 | is_integer(Threshold), Threshold > 0 -> 73 | Args = [Cache_Name, Cache_Mod, Type, Threshold], 74 | supervisor:start_child(?SERVER, Args). 75 | 76 | 77 | %% =================================================================== 78 | %% Supervisor callbacks 79 | %% =================================================================== 80 | 81 | -type restart() :: {supervisor:strategy(), non_neg_integer(), pos_integer()}. 82 | -type sup_init_return() :: {ok, {restart(), [supervisor:child_spec()]}}. 83 | 84 | -spec init({}) -> sup_init_return(). 85 | 86 | -define(CHILD(__Mod, __Args), {__Mod, {__Mod, start_link, __Args}, transient, 2000, worker, [__Mod]}). 87 | 88 | init({}) -> 89 | Cache_Fsm = ?CHILD(cxy_cache_fsm, []), 90 | {ok, { {simple_one_for_one, 5, 10}, [Cache_Fsm]} }. 91 | -------------------------------------------------------------------------------- /src/cxy_ctl.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2013-2015, DuoMark International, Inc. 
3 | %%% @author Jay Nelson [http://duomark.com/] 4 | %%% @reference 2013-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Concurrency limiter, caps number of processes based on user config, 9 | %%% reverts to inline execution or refuses to execute when demand is too high. 10 | %%% @since 0.9.0 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(cxy_ctl). 14 | -author('Jay Nelson '). 15 | 16 | %% External interface 17 | -export([ 18 | init/1, 19 | add_task_types/1, remove_task_types/1, adjust_task_limits/1, 20 | execute_task/4, maybe_execute_task/4, 21 | execute_task/5, maybe_execute_task/5, 22 | execute_pid_link/4, execute_pid_monitor/4, 23 | execute_pid_link/5, execute_pid_monitor/5, 24 | maybe_execute_pid_link/4, maybe_execute_pid_monitor/4, 25 | maybe_execute_pid_link/5, maybe_execute_pid_monitor/5, 26 | concurrency_types/0, history/1, history/3, 27 | slow_calls/1, slow_calls/2, 28 | high_water/1, high_water/2 29 | ]). 30 | 31 | -export([make_process_dictionary_default_value/2]). 32 | 33 | %% Spawn interface 34 | -export([execute_wrapper/8]). 35 | 36 | %% Internal functions to test 37 | -export([update_inline_times/5, update_spawn_times/5]). 38 | 39 | -define(VALID_DICT_VALUE_MARKER, '$$dict_prop'). 40 | 41 | -type task_type() :: atom(). 42 | -type cxy_limit() :: pos_integer() | unlimited | inline_only. 43 | -type cxy_clear() :: clear | no_clear. 44 | 45 | -type dict_key() :: any(). 46 | -type dict_value() :: any(). 47 | -type dict_entry() :: {dict_key(), dict_value()}. 48 | -type dict_prop() :: dict_key() | dict_entry(). 49 | -type dict_props() :: [dict_prop()] | none | all_keys. 50 | -type dict_prop_vals() :: [{?VALID_DICT_VALUE_MARKER, dict_entry()}]. 51 | 52 | -spec make_process_dictionary_default_value(Key, Value) 53 | -> {?VALID_DICT_VALUE_MARKER, {Key, Value}} 54 | when Key :: dict_key(), 55 | Value :: dict_value(). 56 | 57 | make_process_dictionary_default_value(Key, Value) -> 58 | {?VALID_DICT_VALUE_MARKER, {Key, Value}}. 59 | 60 | 61 | %%%------------------------------------------------------------------------------ 62 | %%% Internal interface for maintaining process limits / history counters 63 | %%%------------------------------------------------------------------------------ 64 | 65 | %%% Used raw tuples for ets counters because two record structs with the 66 | %%% same key would be needed, or redundant record name in key position. 67 | %%% These tuples are also exposed when the history mechanism is used 68 | %%% and are made readable by not including redundant record names. 69 | 70 | %%% Cxy process values and cumulative moving avgs are kept in the cxy_ctl ets table as a tuple. 71 | make_proc_values(Task_Type, Max_Procs_Allowed, Max_History, Slow_Factor_As_Percentage) -> 72 | Active_Procs = 0, 73 | Stored_Max_Procs_Value = max_procs_to_int(Max_Procs_Allowed), 74 | High_Water_Procs = 0, 75 | 76 | %% Cxy counters init tuple... 77 | {{Task_Type, Stored_Max_Procs_Value, Active_Procs, Max_History, Slow_Factor_As_Percentage, High_Water_Procs}, 78 | 79 | %% Cumulative performance moving average init tuple... 80 | {make_cma_key(Task_Type), 0, 0, Max_History, Slow_Factor_As_Percentage}}. 81 | 82 | 83 | max_procs_to_int(unlimited) -> -1; 84 | max_procs_to_int(inline_only) -> 0; 85 | max_procs_to_int(Max_Procs) -> Max_Procs. 
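%% The symbolic limits 'unlimited' and 'inline_only' are stored in the ets
%% counter tuple as the integer sentinels -1 and 0; int_to_max_procs/1 maps
%% those sentinels back to their symbolic form.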
86 | 87 | int_to_max_procs(-1) -> unlimited; 88 | int_to_max_procs( 0) -> inline_only; 89 | int_to_max_procs(Max) -> Max. 90 | 91 | is_valid_limit(Max_Procs) 92 | when is_integer(Max_Procs), 93 | Max_Procs > 0 -> true; 94 | is_valid_limit(unlimited) -> true; 95 | is_valid_limit(inline_only) -> true; 96 | is_valid_limit(_) -> false. 97 | 98 | %% Locations of cxy counter tuple positions used for ets:update_counter 99 | %% {Task_Type, Stored_Max_Procs_Value, Active_Procs, Max_History, Slow_Factor_As_Percentage, High_Water_Procs} 100 | -define(MAX_PROCS_POS, 2). 101 | -define(ACTIVE_PROCS_POS, 3). 102 | -define(MAX_HISTORY_POS, 4). 103 | -define(HIGH_WATER_PROCS_POS, 6). 104 | 105 | incr_active_procs(Task_Type) -> 106 | [ Active_Procs, High_Water_Procs ] = 107 | ets:update_counter(?MODULE, Task_Type, [{?ACTIVE_PROCS_POS, 1}, {?HIGH_WATER_PROCS_POS, 0}]), 108 | %% Use ets:update_counter to implement max(High_Water_Procs, 109 | %% Active_Procs). We use the -Active_Procs because that allows us to use 110 | %% a quirk of the ets:update_counter threshold interface to atomically 111 | %% maintain the max. 112 | Active_Procs > -High_Water_Procs 113 | andalso ets:update_counter(?MODULE, Task_Type, {?HIGH_WATER_PROCS_POS, 0, -Active_Procs, -Active_Procs}), 114 | Active_Procs. 115 | 116 | decr_active_procs(Task_Type) -> 117 | ets:update_counter(?MODULE, Task_Type, {?ACTIVE_PROCS_POS, -1}). 118 | 119 | reset_proc_counts(Task_Type) -> 120 | ets:update_counter(?MODULE, Task_Type, [{?MAX_PROCS_POS, 0}, {?MAX_HISTORY_POS, 0}]). 121 | 122 | change_max_proc_limit(Task_Type, New_Limit) -> 123 | ets:update_element(?MODULE, Task_Type, {?MAX_PROCS_POS, max_procs_to_int(New_Limit)}). 124 | 125 | 126 | %%%------------------------------------------------------------------------------ 127 | %%% Internal interface for maintaining moving avgs for slow execution detection 128 | %%%------------------------------------------------------------------------------ 129 | 130 | make_cma_key(Task_Type) -> 131 | {cma, Task_Type}. 132 | 133 | %% Cumulative performance moving average tuple... 134 | %% {make_cma_key(Task_Type), Spawn_Time_Cma, Execution_Time_Cma, Max_History, Slow_Factor_As_Percentage} 135 | -define(CMA_SPAWN_CMA_POS, 2). 136 | -define(CMA_EXEC_CMA_POS, 3). 137 | -define(CMA_RING_SIZE_POS, 4). 138 | -define(CMA_SLOW_FACTOR_POS, 5). 139 | 140 | %% Read all values using update_counter to avoid releasing write_lock for a read_lock. 141 | -define(CMA_READ_CMD, 142 | [{?CMA_SPAWN_CMA_POS, 0}, 143 | {?CMA_EXEC_CMA_POS, 0}, 144 | {?CMA_RING_SIZE_POS, 0}, 145 | {?CMA_SLOW_FACTOR_POS, 0} 146 | ]). 147 | 148 | get_cma_factors(Cma_Key) -> 149 | ets:update_counter(?MODULE, Cma_Key, ?CMA_READ_CMD). 150 | 151 | %% Update moving averages using update_element to avoid read_lock. 152 | -define(CMA_WRITE_CMD(__Spawn, __Exec), 153 | [{?CMA_SPAWN_CMA_POS, __Spawn}, 154 | {?CMA_EXEC_CMA_POS, __Exec} 155 | ]). 156 | 157 | update_cma_avgs(Cma_Key, New_Spawn_Cma, New_Exec_Cma) -> 158 | ets:update_element(?MODULE, Cma_Key, ?CMA_WRITE_CMD(New_Spawn_Cma, New_Exec_Cma)). 159 | 160 | 161 | %%%------------------------------------------------------------------------------ 162 | %%% Internal interface for updating execution times / detecting slow execution 163 | %%%------------------------------------------------------------------------------ 164 | 165 | -type snap() :: erlang:timestamp(). 166 | -type exec_triple() :: {module(), atom(), list()}. 
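%% Each completed task records a {Task_Fun, Start, Spawn_Elapsed, Exec_Elapsed}
%% sample in its spawn or inline ring buffer; when the combined spawn + exec
%% time reaches the configured slow factor percentage of the moving averages,
%% the sample is also written to the corresponding slow ring buffer.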
167 | 168 | -spec update_spawn_times (task_type(), exec_triple(), snap(), snap(), snap()) -> is_slow | not_slow. 169 | -spec update_inline_times (task_type(), exec_triple(), snap(), snap(), snap()) -> is_slow | not_slow. 170 | 171 | update_spawn_times(Task_Type, Task_Fun, Start, Spawn, Done) -> 172 | case update_times(make_buffer_spawn(Task_Type), Task_Type, Task_Fun, Start, Spawn, Done, true) of 173 | is_slow -> update_slow_times(Task_Type, Task_Fun, Start, Spawn, Done); 174 | _Not_Slow -> not_slow 175 | end. 176 | 177 | update_inline_times(Task_Type, Task_Fun, Start, Spawn, Done) -> 178 | case update_times(make_buffer_inline(Task_Type), Task_Type, Task_Fun, Start, Spawn, Done, true) of 179 | is_slow -> update_slow_times(Task_Type, Task_Fun, Start, Spawn, Done); 180 | _Not_Slow -> not_slow 181 | end. 182 | 183 | update_slow_times(Task_Type, Task_Fun, Start, Spawn, Done) -> 184 | not_checked = update_times(make_buffer_slow(Task_Type), Task_Type, Task_Fun, Start, Spawn, Done, false), 185 | slow. 186 | 187 | update_times(Task_Table, Task_Type, Task_Fun, Start, Spawn, Done, Check_Slowness) -> 188 | Exec_Elapsed = timer:now_diff(Done, Spawn), 189 | Spawn_Elapsed = timer:now_diff(Spawn, Start), 190 | Elapsed = {Task_Fun, Start, Spawn_Elapsed, Exec_Elapsed}, 191 | case ets_buffer:write(Task_Table, Elapsed) of 192 | {missing_ets_buffer, _} = Error -> 193 | Error; 194 | _Num_Entries -> 195 | case Check_Slowness of 196 | true -> check_if_slow(Task_Type, Spawn_Elapsed, Exec_Elapsed); 197 | false -> not_checked 198 | end 199 | end. 200 | 201 | check_if_slow(Task_Type, Spawn_Elapsed, Exec_Elapsed) -> 202 | {Spawn_Cma, Exec_Cma, Slow_Factor_As_Percentage} = update_cmas(Task_Type, Spawn_Elapsed, Exec_Elapsed), 203 | is_slow(Spawn_Elapsed, Exec_Elapsed, Spawn_Cma, Exec_Cma, Slow_Factor_As_Percentage). 204 | 205 | %% Slow_Factor is a percentage, so 300 would be 3x the moving average. 206 | %% The slow test combines the spawn time and execution time and compares 207 | %% that to the sum of the two moving averages. 208 | is_slow(_This_Spawn, _This_Exec, _Spawn_Cma, 0 = _Exec_Cma, _Slow_Factor_As_Percentage) -> not_slow; 209 | is_slow( Spawn_Time, Exec_Time, Spawn_Cma, Exec_Cma, Slow_Factor_As_Percentage) -> 210 | Cma_Time = Spawn_Cma + Exec_Cma, 211 | Full_Time = Spawn_Time + Exec_Time, 212 | case round((Full_Time / Cma_Time) * 100) >= Slow_Factor_As_Percentage of 213 | false -> not_slow; 214 | true -> is_slow 215 | end. 216 | 217 | update_cmas(Task_Type, Spawn_Elapsed, Exec_Elapsed) -> 218 | 219 | %% Fetch, compute and update the moving averages... 220 | Cma_Key = make_cma_key(Task_Type), 221 | [Old_Spawn_Cma, Old_Exec_Cma, Num_Samples, Slow_Factor_As_Percentage] = get_cma_factors(Cma_Key), 222 | New_Exec_Cma = cumulative_moving_avg(Old_Exec_Cma, Exec_Elapsed, Num_Samples), 223 | New_Spawn_Cma = cumulative_moving_avg(Old_Spawn_Cma, Spawn_Elapsed, Num_Samples), 224 | true = update_cma_avgs(Cma_Key, New_Spawn_Cma, New_Exec_Cma), 225 | 226 | %% The previous moving averages are returned for comparison. 227 | {Old_Spawn_Cma, Old_Exec_Cma, Slow_Factor_As_Percentage}. 228 | 229 | cumulative_moving_avg( 0, New_Case, _Num_Samples) -> New_Case; 230 | cumulative_moving_avg(Old_Avg, New_Case, Num_Samples) -> 231 | round((Num_Samples * Old_Avg + New_Case) / (Num_Samples + 1)). 
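%%% Worked example of the slow-call test above (all numbers are hypothetical):
%%% with a slow factor of 300 (i.e. 3x the moving average), prior averages of
%%% Spawn_Cma = 20 and Exec_Cma = 100 give Cma_Time = 120.  A call that measures
%%% Spawn_Time = 50 and Exec_Time = 350 has Full_Time = 400, and
%%% round((400 / 120) * 100) = 333 >= 300, so it is also written to the slow ring
%%% buffer.  The averages themselves adapt gradually, e.g.
%%%
%%%   116 = cumulative_moving_avg(100, 180, 4).   %% round((4*100 + 180) / 5)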
232 | 233 | 234 | %%%------------------------------------------------------------------------------ 235 | %%% Mechanism for creating ets table, and executing tasks 236 | %%%------------------------------------------------------------------------------ 237 | 238 | %% @doc 239 | %% Initialize a named ETS table to hold concurrency limits which is checked 240 | %% before spawning new processes to ensure limits are not exceeded. The 241 | %% Limits argument is a list of task types, the corresponding maximum 242 | %% number of simultaneous processes to allow, and a maximum number of 243 | %% timestamps to record in a circular buffer for later analysis. 244 | %% 245 | %% Note: this function must be called by a long-lived process, probably a 246 | %% supervisor, because it will be the owner of the cxy_ctl ets table. If 247 | %% the owning process (i.e., the caller of cxy_ctl:init/1) ever terminates, 248 | %% all subsequent attempts to use cxy_ctl to spawn tasks will crash with a 249 | %% badarg because the ets table holding the limits will be gone. 250 | %% @end 251 | 252 | -spec init([{Task_Type, Type_Max, Timer_History_Count, Slow_Factor_As_Percentage}]) 253 | -> boolean() | {error, init_already_executed} 254 | | {error, {invalid_init_args, list()}} when 255 | Task_Type :: task_type(), 256 | Type_Max :: cxy_limit(), 257 | Timer_History_Count :: non_neg_integer(), 258 | Slow_Factor_As_Percentage :: 101 .. 100000. 259 | 260 | init(Limits) -> 261 | case ets:info(?MODULE, name) of 262 | ?MODULE -> {error, init_already_executed}; 263 | undefined -> 264 | %% Validate Limits and construct ring buffer params for each concurrency type... 265 | case lists:foldl(fun(Args, Acc) -> valid_limits(Args, Acc) end, {[], [], []}, Limits) of 266 | { Buffer_Params, Cxy_Params, []} -> do_init(Buffer_Params, Cxy_Params); 267 | {_Buffer_Params, _Cxy_Params, Errors} -> {error, {invalid_init_args, lists:reverse(Errors)}} 268 | end 269 | end. 270 | 271 | valid_limits({Type, Max_Procs, History_Count, Slow_Factor_As_Percentage} = Limit, 272 | {Buffer_Params, Cxy_Params, Errors} = Results) 273 | when is_atom(Type), 274 | is_integer(History_Count), History_Count >= 0, 275 | is_integer(Slow_Factor_As_Percentage), 276 | Slow_Factor_As_Percentage >= 101, 277 | Slow_Factor_As_Percentage =< 100000 -> 278 | case is_valid_limit(Max_Procs) of 279 | true -> make_limits(Limit, Results); 280 | false -> {Buffer_Params, Cxy_Params, [Limit | Errors]} 281 | end; 282 | valid_limits(Invalid, {Buffer_Params, Cxy_Params, Errors}) -> 283 | {Buffer_Params, Cxy_Params, [Invalid | Errors]}. 284 | 285 | make_limits({Type, Max_Procs, History_Count, Slow_Factor_As_Percentage}, {Buffer_Params, Cxy_Params, Errors}) -> 286 | {make_buffer_params(Buffer_Params, Type, History_Count), 287 | make_proc_params(Cxy_Params, Type, Max_Procs, History_Count, Slow_Factor_As_Percentage), Errors}. 288 | 289 | make_buffer_slow (Type) -> list_to_atom("slow_" ++ atom_to_list(Type)). 290 | make_buffer_spawn (Type) -> list_to_atom("spawn_" ++ atom_to_list(Type)). 291 | make_buffer_inline(Type) -> list_to_atom("inline_" ++ atom_to_list(Type)). 292 | make_buffer_names (Type) -> {make_buffer_spawn(Type), make_buffer_inline(Type), make_buffer_slow(Type)}. 293 | 294 | make_buffer_params(Acc, _Type, 0) -> Acc; 295 | make_buffer_params(Acc, Type, Max_History) -> 296 | {Spawn_Type, Inline_Type, Slow_Type} = make_buffer_names(Type), 297 | [{Spawn_Type, ring, Max_History}, 298 | {Inline_Type, ring, Max_History}, 299 | {Slow_Type, ring, Max_History} 300 | | Acc]. 
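%%% Usage sketch (the task types, module and arguments are hypothetical; the
%%% caller of init/1 should be a long-lived process such as a supervisor, per
%%% the note above):
%%%
%%%   true = cxy_ctl:init([{db_query,  10,        20, 300},    %% 10 procs max, 20-entry history, 3x slow factor
%%%                        {audit_log, unlimited,  0, 200}]),  %% no history recorded
%%%
%%%   %% Later, from any process:
%%%   ok   = cxy_ctl:execute_task(db_query, db_worker, run, [<<"select 1">>]),
%%%   Prop = cxy_ctl:make_process_dictionary_default_value(trace_level, 0),
%%%   ok   = cxy_ctl:execute_task(db_query, db_worker, run, [<<"select 1">>], [request_id, Prop]).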
301 | 302 | make_proc_params(Acc, Type, Max_Procs, Max_History, Slow_Factor_As_Percentage) -> 303 | [make_proc_values(Type, Max_Procs, Max_History, Slow_Factor_As_Percentage) | Acc]. 304 | 305 | 306 | do_init(Buffer_Params, Cxy_Params) -> 307 | _ = ets:new(?MODULE, [named_table, ordered_set, public, {write_concurrency, true}]), 308 | do_insert_limits(Buffer_Params, Cxy_Params). 309 | 310 | do_insert_limits(Buffer_Params, Cxy_Cma_Params) -> 311 | ets_buffer:create(Buffer_Params), 312 | _ = [begin 313 | ets:insert_new(?MODULE, Cxy_Params), 314 | ets:insert_new(?MODULE, Cma_Params) 315 | end || {Cxy_Params, Cma_Params} <- Cxy_Cma_Params], 316 | true. 317 | 318 | 319 | -spec add_task_types([{Task_Type, Type_Max, Timer_History_Count, Slow_Factor_As_Percentage}]) 320 | -> boolean() | {error, {add_duplicate_task_types, list()}} when 321 | Task_Type :: task_type(), 322 | Type_Max :: cxy_limit(), 323 | Timer_History_Count :: non_neg_integer(), 324 | Slow_Factor_As_Percentage :: 101 .. 100000. 325 | 326 | add_task_types(Limits) -> 327 | case [Args || Args = {Task_Type, _Max_Procs, _History_Count, _Slow_Factor_As_Percentage} <- Limits, 328 | ets:lookup(?MODULE, Task_Type) =/= []] of 329 | [] -> 330 | %% Validate Limits and construct ring buffer params for each concurrency type... 331 | case lists:foldl(fun(Args, Acc) -> valid_limits(Args, Acc) end, {[], [], []}, Limits) of 332 | { Buffer_Params, Cxy_Params, []} -> do_insert_limits(Buffer_Params, Cxy_Params); 333 | {_Buffer_Params, _Cxy_Params, Errors} -> {error, {invalid_add_args, lists:reverse(Errors)}} 334 | end; 335 | Dups -> {error, {add_duplicate_task_types, Dups}} 336 | end. 337 | 338 | 339 | -spec remove_task_types([task_type()]) 340 | -> pos_integer() | {error, {missing_task_types, [task_type()]}}. 341 | 342 | remove_task_types(Task_Types) -> 343 | case [Task_Type || Task_Type <- Task_Types, ets:lookup(?MODULE, Task_Type) =/= []] of 344 | Task_Types -> Deletes = [begin 345 | {Buff1, Buff2, Buff3} = make_buffer_names(Task_Type), 346 | [ets_buffer:delete(B) || B <- [Buff1, Buff2, Buff3]] 347 | end || Task_Type <- Task_Types, ets:delete(?MODULE, Task_Type)], 348 | length(Deletes); 349 | Found_Types -> {error, {missing_task_types, Task_Types -- Found_Types}} 350 | end. 351 | 352 | 353 | -spec adjust_task_limits([{task_type(), cxy_limit()}]) 354 | -> pos_integer() | {error, {missing_task_types, [task_type()]}}. 355 | 356 | adjust_task_limits(Task_Limits) -> 357 | case [TTL || TTL = {Task_Type, _} <- Task_Limits, ets:lookup(?MODULE, Task_Type) =/= []] of 358 | Task_Limits -> case [{Task_Type, Limit} || {Task_Type, Limit} <- Task_Limits, is_valid_limit(Limit)] of 359 | Task_Limits -> 360 | Changes = [TL || TL = {Task_Type, New_Limit} <- Task_Limits, 361 | change_max_proc_limit(Task_Type, New_Limit)], 362 | length(Changes); 363 | Legal_Limits -> 364 | {error, {invalid_task_limits, Task_Limits -- Legal_Limits}} 365 | end; 366 | Found_Tasks -> {error, {missing_task_types, Task_Limits -- Found_Tasks}} 367 | end. 368 | 369 | 370 | %% @doc 371 | %% Execute a task by spawning a function to run it, only if the task type 372 | %% does not have too many currently executing processes. If there are too 373 | %% many, execute the task inline. Returns neither a pid nor a result. 374 | %% @end 375 | 376 | -spec execute_task(atom(), atom(), atom(), list()) -> ok. 377 | -spec execute_task(atom(), atom(), atom(), list(), all_keys | dict_props()) -> ok. 
378 | 379 | execute_task(Task_Type, Mod, Fun, Args) -> 380 | internal_execute_task(Task_Type, Mod, Fun, Args, inline, none). 381 | 382 | execute_task(Task_Type, Mod, Fun, Args, Dict_Props) -> 383 | internal_execute_task(Task_Type, Mod, Fun, Args, inline, Dict_Props). 384 | 385 | 386 | %% @doc 387 | %% Execute a task by spawning a function to run it, only if the task type 388 | %% does not have too many currently executing processes. If there are too 389 | %% many, return {max_pids, Max} without executing, rather than ok. 390 | %% @end 391 | 392 | -spec maybe_execute_task(atom(), atom(), atom(), list()) -> ok | {max_pids, non_neg_integer()}. 393 | -spec maybe_execute_task(atom(), atom(), atom(), list(), 394 | all_keys | dict_props()) -> ok | {max_pids, non_neg_integer()}. 395 | 396 | maybe_execute_task(Task_Type, Mod, Fun, Args) -> 397 | internal_execute_task(Task_Type, Mod, Fun, Args, refuse, none). 398 | 399 | maybe_execute_task(Task_Type, Mod, Fun, Args, Dict_Props) -> 400 | internal_execute_task(Task_Type, Mod, Fun, Args, refuse, Dict_Props). 401 | 402 | 403 | internal_execute_task(Task_Type, Mod, Fun, Args, Over_Limit_Action, Dict_Props) -> 404 | [Max, Max_History] = reset_proc_counts(Task_Type), 405 | Start = Max_History > 0 andalso os:timestamp(), 406 | case {Max, incr_active_procs(Task_Type)} of 407 | 408 | %% Spawn a new process... 409 | {Unlimited, Below_Max} when Unlimited =:= -1; Below_Max =< Max -> 410 | Dict_Prop_Vals = get_calling_dictionary_values(Dict_Props), 411 | Wrapper_Args = [Mod, Fun, Args, Task_Type, Max_History, Start, spawn, Dict_Prop_Vals], 412 | _ = proc_lib:spawn(?MODULE, execute_wrapper, Wrapper_Args), 413 | ok; 414 | 415 | %% Execute inline. 416 | _Over_Max -> 417 | case Over_Limit_Action of 418 | refuse -> decr_active_procs(Task_Type), 419 | {max_pids, Max}; 420 | inline -> _ = setup_local_process_dictionary(Dict_Props), 421 | _ = execute_wrapper(Mod, Fun, Args, Task_Type, Max_History, Start, inline, []), 422 | ok 423 | end 424 | end. 425 | 426 | %% @doc 427 | %% Execute a task by spawning a function to run it, only if the task type 428 | %% does not have too many currently executing processes. If there are too 429 | %% many, execute the task inline. Returns a linked pid if spawned, or results 430 | %% if inlined. 431 | %% @end 432 | 433 | -spec execute_pid_link(atom(), atom(), atom(), list()) -> pid() | {inline, any()}. 434 | -spec execute_pid_link(atom(), atom(), atom(), list(), all_keys | dict_props()) -> pid() | {inline, any()}. 435 | 436 | execute_pid_link(Task_Type, Mod, Fun, Args) -> 437 | internal_execute_pid(Task_Type, Mod, Fun, Args, link, inline, none). 438 | 439 | execute_pid_link(Task_Type, Mod, Fun, Args, Dict_Props) -> 440 | internal_execute_pid(Task_Type, Mod, Fun, Args, link, inline, Dict_Props). 441 | 442 | %% @doc 443 | %% Execute a task by spawning a function to run it, only if the task type 444 | %% does not have too many currently executing processes. If there are too 445 | %% many, return {max_pids, Max_Count} instead of linked pid. 446 | %% @end 447 | 448 | -spec maybe_execute_pid_link(atom(), atom(), atom(), list()) -> pid() | {max_pids, non_neg_integer()}. 449 | -spec maybe_execute_pid_link(atom(), atom(), atom(), list(), 450 | all_keys | dict_props()) -> pid() | {max_pids, non_neg_integer()}. 451 | 452 | maybe_execute_pid_link(Task_Type, Mod, Fun, Args) -> 453 | internal_execute_pid(Task_Type, Mod, Fun, Args, link, refuse, none). 
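%%% Usage sketch for the pid-returning variants (hypothetical module and task type,
%%% assuming 'db_query' was registered via init/1 or add_task_types/1):
%%%
%%%   case cxy_ctl:execute_pid_link(db_query, db_worker, run, [<<"select 1">>]) of
%%%       Pid when is_pid(Pid) -> {spawned_and_linked, Pid};
%%%       {inline, Result}     -> {over_limit_ran_inline, Result}
%%%   end,
%%%   case cxy_ctl:maybe_execute_pid_link(db_query, db_worker, run, [<<"select 1">>]) of
%%%       Pid2 when is_pid(Pid2) -> {spawned_and_linked, Pid2};
%%%       {max_pids, Max}        -> {over_limit_refused, Max}
%%%   end.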
454 | 455 | maybe_execute_pid_link(Task_Type, Mod, Fun, Args, Dict_Props) -> 456 | internal_execute_pid(Task_Type, Mod, Fun, Args, link, refuse, Dict_Props). 457 | 458 | %% @doc 459 | %% Execute a task by spawning a function to run it, only if the task type 460 | %% does not have too many currently executing processes. If there are too 461 | %% many, execute the task inline. Returns a {pid(), reference()} if spawned 462 | %% so the process can be monitored, or results if inlined. 463 | %% @end 464 | 465 | -spec execute_pid_monitor(atom(), atom(), atom(), list()) -> {pid(), reference()} | {inline, any()}. 466 | -spec execute_pid_monitor(atom(), atom(), atom(), list(), 467 | all_keys | dict_props()) -> {pid(), reference()} | {inline, any()}. 468 | 469 | execute_pid_monitor(Task_Type, Mod, Fun, Args) -> 470 | internal_execute_pid(Task_Type, Mod, Fun, Args, monitor, inline, none). 471 | 472 | execute_pid_monitor(Task_Type, Mod, Fun, Args, Dict_Props) -> 473 | internal_execute_pid(Task_Type, Mod, Fun, Args, monitor, inline, Dict_Props). 474 | 475 | %% @doc 476 | %% Execute a task by spawning a function to run it, only if the task type 477 | %% does not have too many currently executing processes. If there are too 478 | %% many, return {max_pids, Max_Count} instead of {pid(), reference()}. 479 | %% @end 480 | 481 | -spec maybe_execute_pid_monitor(atom(), atom(), atom(), list()) -> {pid(), reference()} | {max_pids, non_neg_integer()}. 482 | -spec maybe_execute_pid_monitor(atom(), atom(), atom(), list(), 483 | all_keys | dict_props()) -> {pid(), reference()} | {max_pids, non_neg_integer()}. 484 | 485 | maybe_execute_pid_monitor(Task_Type, Mod, Fun, Args) -> 486 | internal_execute_pid(Task_Type, Mod, Fun, Args, monitor, refuse, none). 487 | 488 | maybe_execute_pid_monitor(Task_Type, Mod, Fun, Args, Dict_Props) -> 489 | internal_execute_pid(Task_Type, Mod, Fun, Args, monitor, refuse, Dict_Props). 490 | 491 | 492 | internal_execute_pid(Task_Type, Mod, Fun, Args, Spawn_Type, Over_Limit_Action, Dict_Props) -> 493 | [Max, Max_History] = reset_proc_counts(Task_Type), 494 | Start = Max_History > 0 andalso os:timestamp(), 495 | case {Max, incr_active_procs(Task_Type)} of 496 | 497 | %% Spawn a new process... 498 | {Unlimited, Below_Max} when Unlimited =:= -1; Below_Max =< Max -> 499 | Dict_Prop_Vals = get_calling_dictionary_values(Dict_Props), 500 | Wrapper_Args = [Mod, Fun, Args, Task_Type, Max_History, Start, spawn, Dict_Prop_Vals], 501 | case Spawn_Type of 502 | link -> spawn_link (?MODULE, execute_wrapper, Wrapper_Args); 503 | monitor -> spawn_monitor(?MODULE, execute_wrapper, Wrapper_Args) 504 | end; 505 | 506 | %% Too many processes already running... 507 | _Over_Max -> 508 | case Over_Limit_Action of 509 | refuse -> decr_active_procs(Task_Type), 510 | {max_pids, Max}; 511 | inline -> _ = setup_local_process_dictionary(Dict_Props), 512 | {inline, execute_wrapper(Mod, Fun, Args, Task_Type, Max_History, Start, inline, [])} 513 | end 514 | end. 515 | 516 | setup_local_process_dictionary(none) -> skip; 517 | setup_local_process_dictionary(all_keys) -> skip; 518 | setup_local_process_dictionary(Dict_Props) -> 519 | Dict = get(), 520 | [put(K, V) || {?VALID_DICT_VALUE_MARKER, {K,V}} <- Dict_Props, proplists:get_value(K, Dict) =:= undefined]. 521 | 522 | -spec get_calling_dictionary_values(none | all_keys | dict_props()) -> dict_prop_vals(). 
523 | 524 | get_calling_dictionary_values(none) -> []; 525 | get_calling_dictionary_values(all_keys) -> get(); 526 | get_calling_dictionary_values(List) when is_list(List) -> 527 | get_calling_dictionary_values(List, []). 528 | 529 | get_calling_dictionary_values([{?VALID_DICT_VALUE_MARKER, {Key, Default}} | More], Props) -> 530 | case get(Key) of 531 | undefined -> get_calling_dictionary_values(More, [{Key, Default} | Props]); 532 | Value -> get_calling_dictionary_values(More, [{Key, Value } | Props]) 533 | end; 534 | get_calling_dictionary_values([Key | More], Props) -> 535 | case get(Key) of 536 | undefined -> get_calling_dictionary_values(More, Props); 537 | Value -> get_calling_dictionary_values(More, [{Key, Value} | Props]) 538 | end; 539 | get_calling_dictionary_values([], Props) -> Props. 540 | 541 | 542 | -spec execute_wrapper(atom(), atom(), list(), atom(), integer(), false | erlang:timestamp(), spawn | inline, dict_prop_vals()) 543 | -> any() | no_return(). 544 | 545 | %% If Start is 'false', we don't want to record elapsed time history... 546 | execute_wrapper(Mod, Fun, Args, Task_Type, _Max_History, false, Spawn_Or_Inline, Dict_Prop_Pairs) -> 547 | Result = try 548 | _ = [put(Key, Val) || {Key, Val} <- Dict_Prop_Pairs], 549 | apply(Mod, Fun, Args) 550 | catch Error:Type:STrace -> {error, {mfa_failure, {{Error, Type}, {Mod, Fun, Args}, Task_Type, Spawn_Or_Inline}}, STrace} 551 | after decr_active_procs(Task_Type) 552 | end, 553 | case Result of 554 | {error, Call_Data, Trace} -> fail_wrapper(Spawn_Or_Inline, Call_Data, Trace); 555 | Result -> Result 556 | end; 557 | 558 | %% Otherwise, we incur the overhead cost of recording elapsed time history. 559 | execute_wrapper(Mod, Fun, Args, Task_Type, Max_History, Start, Spawn_Or_Inline, Dict_Prop_Pairs) -> 560 | MFA = {Mod, Fun, Args}, 561 | Spawn = os:timestamp(), 562 | Result = try 563 | _ = [put(Key, Val) || {Key, Val} <- Dict_Prop_Pairs], 564 | apply(Mod, Fun, Args) 565 | catch Error:Type:STrace -> {error, {mfa_failure, {{Error, Type}, MFA, Task_Type, Max_History, Start, Spawn_Or_Inline}}, STrace} 566 | after 567 | decr_active_procs(Task_Type), 568 | case Spawn_Or_Inline of 569 | spawn -> update_spawn_times (Task_Type, MFA, Start, Spawn, os:timestamp()); 570 | inline -> update_inline_times(Task_Type, MFA, Start, Spawn, os:timestamp()) 571 | end 572 | end, 573 | case Result of 574 | {error, Call_Data, Trace} -> fail_wrapper(Spawn_Or_Inline, Call_Data, Trace); 575 | Result -> Result 576 | end. 577 | 578 | -spec fail_wrapper(spawn | inline, any(), any()) -> no_return(). 579 | fail_wrapper(spawn, Call_Data, Stacktrace) -> erlang:error(spawn_failure, [Call_Data, Stacktrace]); 580 | fail_wrapper(inline, Call_Data, Stacktrace) -> exit ({inline_failure, [Call_Data, Stacktrace]}). 581 | 582 | 583 | %% @doc 584 | %% Provide a list of the registered concurrency limit types and their corresponding limit 585 | %% values for max_procs, active_procs, max_history size and slow_factor_as_percentage. 586 | %% @end 587 | 588 | -spec concurrency_types() -> [proplists:proplist()]. 589 | 590 | concurrency_types() -> 591 | [[{task_type, Task_Type}, {max_procs, int_to_max_procs(Max_Procs_Allowed)}, 592 | {active_procs, Active_Procs}, {max_history, Max_History}, {slow_factor_as_percentage, Slow_Factor}, 593 | {high_water_procs, -High_Water_Procs}] 594 | || {Task_Type, Max_Procs_Allowed, Active_Procs, Max_History, Slow_Factor, High_Water_Procs} <- ets:tab2list(?MODULE), 595 | is_atom(Task_Type)]. 
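%%% Illustrative result (hypothetical 'db_query' type registered with a limit of 10,
%%% a 20-entry history and a 300% slow factor; the runtime counts are made up):
%%%
%%%   [[{task_type, db_query}, {max_procs, 10}, {active_procs, 2},
%%%     {max_history, 20}, {slow_factor_as_percentage, 300},
%%%     {high_water_procs, 7}]] = cxy_ctl:concurrency_types().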
596 | 597 | 598 | %% @doc 599 | %% Provide the entire performance history for a given task_type as a tuple of three elements: 600 | %% the performance for spawn execution, for inline execution, and for calls detected as slow. Each entry includes the 601 | %% start time for the request, the number of microseconds to spawn the task, and the number 602 | %% of microseconds to execute the request. 603 | %% @end 604 | 605 | -type spawn_history_result() :: {spawn_execs, [proplists:proplist()]}. 606 | -type inline_history_result() :: {inline_execs, [proplists:proplist()]}. 607 | -type slow_history_result() :: {slow_execs, [proplists:proplist()]}. 608 | 609 | -type history_result() :: {spawn_history_result(), inline_history_result(), slow_history_result()}. 610 | 611 | -spec history(atom()) -> history_result() | ets_buffer:buffer_error(). 612 | -spec history(atom(), inline, pos_integer()) -> inline_history_result() | ets_buffer:buffer_error(); 613 | (atom(), spawn, pos_integer()) -> spawn_history_result() | ets_buffer:buffer_error(); 614 | (atom(), slow, pos_integer()) -> slow_history_result() | ets_buffer:buffer_error(). 615 | 616 | %% @doc Provide all the performance history for a given task_type. 617 | history(Task_Type) -> 618 | {Spawn_Type, Inline_Type, Slow_Type} = make_buffer_names(Task_Type), 619 | case get_buffer_times(Spawn_Type) of 620 | Spawn_Times_List when is_list(Spawn_Times_List) -> 621 | {{spawn_execs, Spawn_Times_List}, 622 | {inline_execs, get_buffer_times(Inline_Type)}, 623 | {slow_execs, get_buffer_times(Slow_Type)}} ; 624 | Error -> Error 625 | end. 626 | 627 | %% @doc Provide the most recent performance history for a given task_type. 628 | history(Task_Type, inline, Num_Items) -> 629 | Inline_Type = make_buffer_inline(Task_Type), 630 | case get_buffer_times(Inline_Type, Num_Items) of 631 | Inline_Times_List when is_list(Inline_Times_List) -> {inline_execs, Inline_Times_List}; 632 | Error -> Error 633 | end; 634 | history(Task_Type, spawn, Num_Items) -> 635 | Spawn_Type = make_buffer_spawn(Task_Type), 636 | case get_buffer_times(Spawn_Type, Num_Items) of 637 | Spawn_Times_List when is_list(Spawn_Times_List) -> {spawn_execs, Spawn_Times_List}; 638 | Error -> Error 639 | end; 640 | history(Task_Type, slow, Num_Items) -> 641 | Slow_Type = make_buffer_slow(Task_Type), 642 | case get_buffer_times(Slow_Type, Num_Items) of 643 | Slow_Times_List when is_list(Slow_Times_List) -> {slow_execs, Slow_Times_List}; 644 | Error -> Error 645 | end. 646 | 647 | slow_calls(Task_Type) -> 648 | {_Spawns, _Execs, Slow_Calls} = history(Task_Type), 649 | Slow_Calls. 650 | 651 | slow_calls(Task_Type, Num_Items) -> 652 | history(Task_Type, slow, Num_Items). 653 | 654 | get_buffer_times(Buffer_Name) -> 655 | case ets_buffer:history(Buffer_Name) of 656 | Times_List when is_list(Times_List) -> [format_buffer_times(Times) || Times <- Times_List]; 657 | Error -> Error 658 | end. 659 | 660 | get_buffer_times(Buffer_Name, Num_Items) -> 661 | case ets_buffer:history(Buffer_Name, Num_Items) of 662 | Times_List when is_list(Times_List) -> [format_buffer_times(Times) || Times <- Times_List]; 663 | Error -> Error 664 | end. 665 | 666 | format_buffer_times({Task_Fun, {_,_,Micro} = Start, Spawn_Elapsed, Exec_Elapsed}) -> 667 | [{task_fun, Task_Fun}, {start, calendar:now_to_universal_time(Start), Micro}, 668 | {spawn_time_micros, Spawn_Elapsed}, {exec_time_micros, Exec_Elapsed}]. 669 | 670 | -define(HW_READ_CMD, {?HIGH_WATER_PROCS_POS, 0}). 671 | -define(HW_RESET_CMD, {?HIGH_WATER_PROCS_POS, -1, 0, 0}).
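%%% Illustrative shape of the returned history (hypothetical values), with each ring
%%% buffer entry formatted by format_buffer_times/1 above; cxy_ctl:history(db_query)
%%% might return something like:
%%%
%%%   {{spawn_execs,  [[{task_fun, {db_worker, run, [<<"select 1">>]}},
%%%                     {start, {{2016,9,9},{12,0,0}}, 123456},
%%%                     {spawn_time_micros, 85},
%%%                     {exec_time_micros, 1042}]]},
%%%    {inline_execs, []},
%%%    {slow_execs,   []}}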
672 | 673 | %% @doc 674 | %% Return the highest number of concurrent processes for a given task type. 675 | %% The one-argument form just returns the high-water value, while the 676 | %% two-argument form allows the value to be reset. 677 | %% @end 678 | 679 | -spec high_water(task_type()) -> non_neg_integer(). 680 | -spec high_water(task_type(), cxy_clear()) -> non_neg_integer(). 681 | 682 | high_water(Task_Type) -> 683 | high_water(Task_Type, no_clear). 684 | 685 | high_water(Task_Type, ClearCmd) -> 686 | case ClearCmd of 687 | clear -> 688 | [Old_High_Water, 0] = ets:update_counter(?MODULE, Task_Type, [?HW_READ_CMD, ?HW_RESET_CMD]), 689 | -Old_High_Water; 690 | no_clear -> 691 | -ets:update_counter(?MODULE, Task_Type, ?HW_READ_CMD) 692 | end. 693 | -------------------------------------------------------------------------------- /src/cxy_fount_sup.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2016, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2016 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Supervisor to manage cxy_regulator + cxy_fount. These two servers 9 | %%% cooperate to spawn new workers, but control the pace at which workers 10 | %%% are allowed to start. 11 | %%% 12 | %%% @since v1.1.0 13 | %%% @end 14 | %%%------------------------------------------------------------------------------ 15 | -module(cxy_fount_sup). 16 | -author('Jay Nelson '). 17 | 18 | -behaviour(supervisor). 19 | 20 | %%% API 21 | -export([start_link/2, start_link/3, start_link/4, 22 | get_fount/1, get_regulator/1]). 23 | 24 | %%% Supervisor callbacks 25 | -export([init/1]). 26 | 27 | 28 | %%%=================================================================== 29 | %%% API functions 30 | %%%=================================================================== 31 | 32 | -spec start_link( module(), cxy_fount:fount_args()) -> {ok, pid()}. 33 | -spec start_link( module(), cxy_fount:fount_args(), cxy_fount:fount_options()) -> {ok, pid()}; 34 | (atom(), module(), cxy_fount:fount_args()) -> {ok, pid()}. 35 | -spec start_link(atom(), module(), cxy_fount:fount_args(), cxy_fount:fount_options()) -> {ok, pid()}. 36 | 37 | start_link(Fount_Behaviour, Init_Args) 38 | when is_atom(Fount_Behaviour), is_list(Init_Args) -> 39 | supervisor:start_link(?MODULE, {Fount_Behaviour, Init_Args}). 40 | 41 | start_link(Fount_Behaviour, Init_Args, Fount_Options) 42 | when is_atom(Fount_Behaviour), 43 | is_list(Init_Args), is_list(Fount_Options) -> 44 | supervisor:start_link(?MODULE, {Fount_Behaviour, Init_Args, Fount_Options}); 45 | start_link(Fount_Name, Fount_Behaviour, Init_Args) 46 | when is_atom(Fount_Name), is_atom(Fount_Behaviour), is_list(Init_Args) -> 47 | supervisor:start_link({local, make_sup_name(Fount_Name)}, ?MODULE, 48 | {Fount_Name, Fount_Behaviour, Init_Args}). 49 | 50 | 51 | start_link(Fount_Name, Fount_Behaviour, Init_Args, Fount_Options) 52 | when is_atom(Fount_Name), is_atom(Fount_Behaviour), 53 | is_list(Init_Args), is_list(Fount_Options) -> 54 | supervisor:start_link({local, make_sup_name(Fount_Name)}, ?MODULE, 55 | {Fount_Name, Fount_Behaviour, Init_Args, Fount_Options}). 56 | 57 | make_sup_name(Fount_Name) -> 58 | list_to_atom(atom_to_list(Fount_Name) ++ "_sup"). 59 | 60 | 61 | -spec get_fount (pid()) -> pid(). 
62 | -spec get_regulator (pid()) -> pid(). 63 | 64 | get_fount(Fount_Sup) -> 65 | hd([Pid || {cxy_fount, Pid, worker, _Modules} <- supervisor:which_children(Fount_Sup)]). 66 | 67 | get_regulator(Fount_Sup) -> 68 | hd([Pid || {cxy_regulator, Pid, worker, _Modules} <- supervisor:which_children(Fount_Sup)]). 69 | 70 | 71 | %%%=================================================================== 72 | %%% Supervisor callbacks 73 | %%%=================================================================== 74 | 75 | -type restart() :: {supervisor:strategy(), non_neg_integer(), pos_integer()}. 76 | -type sup_init_return() :: {ok, {restart(), [supervisor:child_spec()]}}. 77 | 78 | -define(CHILD(__Mod, __Args), {__Mod, {__Mod, start_link, __Args}, temporary, 2000, worker, [__Mod]}). 79 | 80 | %%% Init without or with Fount_Name. 81 | -spec init({ module(), cxy_fount:fount_args()}) -> sup_init_return(); 82 | ({atom(), module(), cxy_fount:fount_args()}) -> sup_init_return(); 83 | ({ module(), cxy_fount:fount_args(), cxy_fount:fount_options()}) -> sup_init_return(); 84 | ({atom(), module(), cxy_fount:fount_args(), cxy_fount:fount_options()}) -> sup_init_return(). 85 | 86 | init({Fount_Behaviour, Init_Args}) 87 | when is_atom(Fount_Behaviour), is_list(Init_Args) -> 88 | Fount_Args = [self(), Fount_Behaviour, Init_Args, []], 89 | Regulator_Args = [], 90 | init_internal(Fount_Args, Regulator_Args); 91 | 92 | init({Fount_Behaviour, Init_Args, Fount_Options}) 93 | when is_atom(Fount_Behaviour), 94 | is_list(Init_Args), is_list(Fount_Options) -> 95 | Fount_Args = [self(), Fount_Behaviour, Init_Args, Fount_Options], 96 | Regulator_Args = case proplists:lookup(time_slice, Fount_Options) of 97 | none -> []; 98 | Time_Slice_Option -> [[Time_Slice_Option]] 99 | end, 100 | init_internal(Fount_Args, Regulator_Args); 101 | 102 | init({Fount_Name, Fount_Behaviour, Init_Args}) 103 | when is_atom(Fount_Name), is_atom(Fount_Behaviour), is_list(Init_Args) -> 104 | Fount_Args = [self(), Fount_Name, Fount_Behaviour, Init_Args, []], 105 | Regulator_Args = [], 106 | init_internal(Fount_Args, Regulator_Args); 107 | 108 | init({Fount_Name, Fount_Behaviour, Init_Args, Fount_Options}) 109 | when is_atom(Fount_Name), is_atom(Fount_Behaviour), 110 | is_list(Init_Args), is_list(Fount_Options) -> 111 | Fount_Args = [self(), Fount_Name, Fount_Behaviour, Init_Args, Fount_Options], 112 | Regulator_Args = case proplists:lookup(time_slice, Fount_Options) of 113 | none -> []; 114 | Time_Slice_Option -> [[Time_Slice_Option]] 115 | end, 116 | init_internal(Fount_Args, Regulator_Args). 117 | 118 | %%% Internal init only differs in the cxy_fount:start_link args. 119 | init_internal(Fount_Args, Regulator_Args) -> 120 | Fount_Fsm = ?CHILD(cxy_fount, Fount_Args), 121 | Regulator_Fsm = ?CHILD(cxy_regulator, Regulator_Args), 122 | {ok, { {rest_for_one, 5, 60}, [Regulator_Fsm, Fount_Fsm]} }. 123 | -------------------------------------------------------------------------------- /src/cxy_regulator.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2016, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2016 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% A cxy_regulator is used in conjunction with cxy_fount. 
The fount 9 | %%% requests slabs of newly spawned processes, whilst the regulator 10 | %%% controls the rate at which the processes are generated. Throttling 11 | %%% the rate ensures that the amount of work can be constrained to 12 | %%% avoid overloading a VM node. 13 | %%% 14 | %%% The regulator is implemented as a gen_statem so that it can be 15 | %%% paused and resumed for maintenance purposes. 16 | %%% @since 1.1.0 17 | %%% @end 18 | %%%------------------------------------------------------------------------------ 19 | -module(cxy_regulator). 20 | -author('Jay Nelson '). 21 | 22 | -behaviour(gen_statem). 23 | 24 | 25 | %%% API 26 | -export([start_link/0, start_link/1, pause/1, resume/1, status/1]). 27 | -export([allow_spawn/2]). 28 | 29 | %% gen_statem callbacks 30 | -export([init/1, callback_mode/0, terminate/3, code_change/4, format_status/2]). 31 | 32 | %%% state functions 33 | -export(['NORMAL'/3, 'OVERMAX'/3, 'PAUSED'/3]). 34 | 35 | -type state_name() :: 'NORMAL' | 'OVERMAX' | 'PAUSED'. 36 | 37 | -type regulator_ref() :: pid(). 38 | -export_type([regulator_ref/0]). 39 | 40 | -type thruput() :: normal | overmax. 41 | -type allocate_slab_args() :: {pid(), module(), tuple(), erlang:timestamp(), pos_integer()}. 42 | -type allocate_slab_request() :: {allocate_slab, allocate_slab_args()}. 43 | 44 | -record(epoch_slab_counts, { 45 | epoch = 0 :: non_neg_integer(), 46 | slots = {} :: tuple() 47 | }). 48 | -type epoch_slab_counts() :: #epoch_slab_counts{}. 49 | 50 | -record(cr_state, { 51 | init_time = os:timestamp() :: erlang:timestamp(), 52 | thruput = normal :: thruput(), 53 | slab_counts = #epoch_slab_counts{} :: epoch_slab_counts(), 54 | pending_requests = queue:new() :: queue:queue() 55 | }). 56 | -type cr_state() :: #cr_state{}. 57 | 58 | 59 | %%%=================================================================== 60 | %%% API 61 | %%%=================================================================== 62 | 63 | -spec start_link() -> {ok, regulator_ref()}. 64 | -spec start_link(proplists:proplist()) -> {ok, regulator_ref()}. 65 | 66 | start_link() -> gen_statem:start_link(?MODULE, {[]}, []). 67 | start_link(Config) -> gen_statem:start_link(?MODULE, {Config}, []). 68 | 69 | 70 | -type status_attr() :: {current_state, atom()} % FSM State function name 71 | | {thruput, thruput()}. % Paused thruput state 72 | 73 | -spec pause (regulator_ref()) -> paused. 74 | -spec resume (regulator_ref()) -> thruput(). 75 | -spec status (regulator_ref()) -> [status_attr(), ...]. 76 | 77 | pause (Regulator) -> gen_statem:call(Regulator, pause). 78 | resume (Regulator) -> gen_statem:call(Regulator, resume). 79 | 80 | status (Regulator) -> 81 | {status, _, _, [_Pdict, _State, _Parent, _Dbg, Status]} = sys:get_status(Regulator), 82 | Status. 83 | 84 | %%%=================================================================== 85 | %%% gen_statem callbacks 86 | %%%=================================================================== 87 | 88 | -spec init({proplists:proplist()}) -> {ok, 'NORMAL', cr_state()}. 89 | -spec format_status(normal | terminate, list()) -> proplists:proplist(). 90 | 91 | default_num_slots () -> 100. 92 | make_slot_stats (N) -> list_to_tuple(lists:duplicate(N, 0)). 93 | 94 | init({Config}) -> 95 | Num_Slots = proplists:get_value(time_slice, Config, default_num_slots()), 96 | Slab_Counts = #epoch_slab_counts{slots=make_slot_stats(Num_Slots)}, 97 | {ok, 'NORMAL', #cr_state{slab_counts=Slab_Counts}}. 
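%%% Usage sketch (illustrative values): the regulator is normally started by
%%% cxy_fount_sup, but it can also be driven directly:
%%%
%%%   {ok, Reg}         = cxy_regulator:start_link([{time_slice, 20}]),  %% 20 slots per second
%%%   paused            = cxy_regulator:pause(Reg),
%%%   {resumed, normal} = cxy_regulator:resume(Reg).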
98 | 99 | format_status(_Reason, [_Dict, State_Name, State]) -> 100 | generate_status(State_Name, State). 101 | 102 | generate_status(State_Name, State) -> 103 | [{current_state, State_Name} | generate_status(State)]. 104 | 105 | generate_status(#cr_state{init_time=Started, thruput=Thruput, 106 | slab_counts=SC, pending_requests=PR}) -> 107 | [ 108 | {init_time, Started}, 109 | {thruput, Thruput}, 110 | {slab_counts, SC}, 111 | {pending_requests, queue:len(PR)} 112 | ]. 113 | 114 | 115 | %%%------------------------------------------------------------------------------ 116 | %%% Spawn pace regulation logic 117 | %%% Slots per second slices the spawning to timing buckets. 118 | %%% Default is 100 slots per second timing, with one slab allowed per slot. 119 | %%% Config 'time_slice' property on init changes from 100 to any 1 to N value. 120 | %%%------------------------------------------------------------------------------ 121 | millis_per_micro() -> 1000. 122 | micros_per_slot(Slots) -> (timer:seconds(1) * millis_per_micro()) div Slots. 123 | overload_pause_millis(Slots) -> (micros_per_slot(Slots) div 2) div millis_per_micro(). 124 | 125 | time_slot(Start_Time, Num_Slots) -> 126 | Micros_Since_Start = timer:now_diff(os:timestamp(), Start_Time), 127 | Raw_Epoch = (Micros_Since_Start div micros_per_slot(Num_Slots)), 128 | Epoch = Raw_Epoch div Num_Slots + 1, % don't allow 0 129 | Slot = (Raw_Epoch rem Num_Slots) + 1, % tuples number 1-N 130 | {Epoch, Slot}. 131 | 132 | allow_slab_generation(Slot, #epoch_slab_counts{slots=Slot_Stats} = ESC) 133 | when is_tuple(Slot_Stats), 134 | is_integer(Slot), Slot > 0, Slot =< tuple_size(Slot_Stats) -> 135 | case element(Slot, Slot_Stats) of 136 | 1 -> {false, ESC}; % Disallow 137 | 0 -> New_Slots = setelement(Slot, Slot_Stats, 1), % Mark slot 138 | {true, ESC#epoch_slab_counts{slots=New_Slots}} % Allow and mark 139 | end. 140 | 141 | get_epoch_slots(#epoch_slab_counts{epoch=Slab_Epoch} = ESC, Slab_Epoch) -> ESC; 142 | get_epoch_slots(Old_Epoch_Counts, Current_Epoch) -> 143 | Num_Slots = tuple_size(Old_Epoch_Counts#epoch_slab_counts.slots), 144 | #epoch_slab_counts{epoch=Current_Epoch, slots=make_slot_stats(Num_Slots)}. 145 | 146 | allow_spawn(Server_Start_Time, #epoch_slab_counts{slots=Slot_Stats} = ESC) -> 147 | Num_Slots = tuple_size(Slot_Stats), 148 | {Epoch, Slot} = time_slot(Server_Start_Time, Num_Slots), 149 | New_ESC = get_epoch_slots(ESC, Epoch), 150 | allow_slab_generation(Slot, New_ESC). 151 | 152 | 153 | %%%------------------------------------------------------------------------------ 154 | %%% Asynch state functions (triggered by gen_statem:cast/call/2) 155 | %%%------------------------------------------------------------------------------ 156 | 157 | -spec 'NORMAL' ({call, term()} | cast, allocate_slab_request(), cr_state()) -> term(). 158 | -spec 'OVERMAX' ({call, term()} | cast, allocate_slab_request(), cr_state()) -> term(). 159 | -spec 'PAUSED' ({call, term()} | cast, allocate_slab_request(), cr_state()) -> term(). 160 | 161 | %%% 'NORMAL' means no throttling is occurring 162 | 'NORMAL' ({call, From}, pause, State) -> {next_state, 'PAUSED', State, [{reply, From, paused}]}; 163 | 'NORMAL' ({call, From}, Event, State) -> handle_event({call, From}, Event, State); 164 | 165 | 'NORMAL'(cast, {allocate_slab, Args}, State) -> 166 | allocate({allocate_slab, Args}, State); 167 | 168 | 'NORMAL'(cast, queued_request, State) -> 169 | pop_pending(normal, State); 170 | 171 | 'NORMAL'(cast, _Unknown, _State) -> 172 | keep_state_and_data. 
173 | 174 | %%% 'OVERMAX' means spawning is stopped by the regulator 175 | 'OVERMAX' ({call, From}, pause, State) -> {next_state, 'PAUSED', State, [{reply, From, paused}]}; 176 | 'OVERMAX' ({call, From}, Event, State) -> handle_event({call, From}, Event, State); 177 | 178 | %%% Queue up requests if 'OVERMAX' or 'PAUSED'. 179 | 'OVERMAX'(cast, {allocate_slab, Args}, State) -> 180 | queue_request({allocate_slab, Args}, 'OVERMAX', State); 181 | 182 | 'OVERMAX'(cast, queued_request, State) -> 183 | pop_pending(normal, State); 184 | 185 | 'OVERMAX'(cast, _Unknown, _State) -> 186 | keep_state_and_data. 187 | 188 | %%% 'PAUSED' means manually stopped, will resume either 'NORMAL' or 'OVERMAX' 189 | 'PAUSED' ({call, From}, resume, #cr_state{thruput=normal} = State) -> 190 | _ = pace_next_slab(0), 191 | {next_state, 'NORMAL', State, [{reply, From, {resumed, normal}}]}; 192 | 193 | 'PAUSED' ({call, From}, resume, #cr_state{thruput=overmax} = State) -> 194 | _ = pace_next_slab(0), 195 | {next_state, 'OVERMAX', State, [{reply, From, {resumed, overmax}}]}; 196 | 197 | 'PAUSED' ({call, From}, Event, State) -> 198 | handle_event({call, From}, Event, State); 199 | 200 | %%% Paused swallows queued_request events silently, 'resume' required to restart events. 201 | %%% Slab requests are queued up even if the queue is currently empty. 202 | 'PAUSED'(cast, {allocate_slab, Args}, State) -> 203 | queue_request({allocate_slab, Args}, 'PAUSED', State); 204 | 205 | 'PAUSED'(cast, queued_request, _State) -> 206 | keep_state_and_data; 207 | 208 | 'PAUSED'(cast, _Unknown, _State) -> 209 | keep_state_and_data. 210 | 211 | %%%------------------------------------------------------------------------------ 212 | %%% Asynch internal support functions 213 | %%%------------------------------------------------------------------------------ 214 | allocate({allocate_slab, {Fount, Module, Mod_State, Timestamp, Slab_Size}} = Request, 215 | #cr_state{init_time=Init_Time, slab_counts=Slab_Counts} = State) -> 216 | case allow_spawn(Init_Time, Slab_Counts) of 217 | {false, New_Slab_Counts} -> 218 | Num_Slots = tuple_size(Slab_Counts#epoch_slab_counts.slots), 219 | _ = pace_slab(Num_Slots), 220 | queue_request(Request, 'OVERMAX', State#cr_state{slab_counts=New_Slab_Counts}); 221 | {true, New_Slab_Counts} -> 222 | allocate_slab(Fount, Module, Mod_State, Timestamp, Slab_Size, []), 223 | {next_state, 'NORMAL', State#cr_state{slab_counts=New_Slab_Counts}} 224 | end. 225 | 226 | queue_request(Slab_Request, Next_State_Name, #cr_state{pending_requests=PR} = State) -> 227 | New_Pending = queue:in({os:timestamp(), Slab_Request}, PR), 228 | New_State = case Next_State_Name of 229 | 'OVERMAX' -> State#cr_state{pending_requests=New_Pending, thruput=overmax}; 230 | 'PAUSED' -> State#cr_state{pending_requests=New_Pending} 231 | end, 232 | {next_state, Next_State_Name, New_State}. 233 | 234 | pop_pending(normal, #cr_state{pending_requests=PR, slab_counts=Slab_Counts} = State) -> 235 | Num_Slots = tuple_size(Slab_Counts#epoch_slab_counts.slots), 236 | New_State = State#cr_state{thruput=normal}, 237 | case queue:out(PR) of 238 | %% Nothing queued, just change state. 239 | {empty, _} -> {next_state, 'NORMAL', New_State}; 240 | %% Something queued, handle it. 241 | {{value, {_Timestamp, Request}}, PR2} -> 242 | _ = pace_slab(Num_Slots), 243 | 'NORMAL'(cast, Request, New_State#cr_state{pending_requests=PR2}) 244 | end. 245 | 246 | pace_slab(Num_Slots) -> pace_next_slab(overload_pause_millis(Num_Slots)). 
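%%% Worked example of the pacing above, using the default 100 slots per second:
%%%
%%%   micros_per_slot(100)       =:= 10000,   %% each slot covers 10 ms
%%%   overload_pause_millis(100) =:= 5,       %% retry after half a slot
%%%
%%% so a slab request that finds its slot already used is queued, the FSM moves to
%%% 'OVERMAX', and a 'queued_request' cast arrives roughly 5 ms later to drain it.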
247 | 248 | pace_next_slab( 0) -> gen_statem:cast(self(), queued_request); 249 | pace_next_slab(Pause_Millis) -> 250 | timer:apply_after(Pause_Millis, gen_statem, cast, [self(), queued_request]). 251 | 252 | 253 | %%% Rely on the client behaviour to create new pids. This means using 254 | %%% spawn or any of the gen_*:start patterns since the pids are unsupervised. 255 | %%% The resulting pids must be linked to the cxy_fount parent so that they are 256 | %%% destroyed if the parent terminates. While idle, the slab allocated pids 257 | %%% should avoid crashing because they can take out the entire cxy_fount. 258 | %%% Once a pid receives a task_pid command, it becomes unlinked and free to 259 | %%% complete its task on its own timeline, independently from the fount. 260 | allocate_slab(Fount_Pid, _Module, _Mod_State, Start_Time, 0, Slab) -> 261 | Elapsed_Time = timer:now_diff(os:timestamp(), Start_Time), 262 | gen_statem:cast(Fount_Pid, {slab, Slab, Start_Time, Elapsed_Time}); 263 | 264 | allocate_slab(Fount_Pid, Module, Mod_State, Start_Time, Num_To_Spawn, Slab) 265 | when is_pid(Fount_Pid), is_atom(Module), is_integer(Num_To_Spawn), Num_To_Spawn > 0 -> 266 | 267 | %% Module behaviour needs to explicitly link to the parent_pid, 268 | %% since this function is executing in the caller's process space, 269 | %% rather than the gen_statem of the cxy_fount parent_pid process space. 270 | case Module:start_pid(Fount_Pid, Mod_State) of 271 | Allocated_Pid when is_pid(Allocated_Pid) -> 272 | allocate_slab(Fount_Pid, Module, Mod_State, Start_Time, Num_To_Spawn-1, [Allocated_Pid | Slab]) 273 | end. 274 | 275 | -type from() :: {pid(), reference()}. 276 | -type status() :: proplists:proplist(). 277 | 278 | -spec handle_event (status, from(), State) 279 | -> {reply, status(), State_Name, State} 280 | when State_Name :: state_name(), State :: cr_state(). 281 | 282 | handle_event({call, From}, Event, _State) -> 283 | {keep_state_and_data, [{reply, From, {ignored, Event}}]}. 284 | 285 | %%%=================================================================== 286 | %%% Unused functions 287 | %%%=================================================================== 288 | 289 | -spec code_change (any(), State_Name, State, any()) 290 | -> {ok, State_Name, State} when State_Name :: state_name(), State :: cr_state(). 291 | code_change (_OldVsn, State_Name, State, _Extra) -> {ok, State_Name, State}. 292 | 293 | %%% Pre-spawned pids are linked and die when FSM dies. 294 | -spec terminate(atom(), state_name(), cr_state()) -> ok. 295 | terminate(_Reason, _State_Name, _State) -> ok. 296 | 297 | -spec callback_mode() -> atom(). 298 | callback_mode() -> 299 | state_functions. 300 | -------------------------------------------------------------------------------- /src/cxy_synch.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2014-2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson [http://duomark.com/] 4 | %%% @reference 2014-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Synchronization barriers force multiple processes to pause until all 9 | %%% participants reach the same point. They then may proceed independently. 
10 | %%% 11 | %%% @since 0.9.8 12 | %%% @end 13 | %%%------------------------------------------------------------------------------ 14 | -module(cxy_synch). 15 | -author('Jay Nelson '). 16 | 17 | %% External API 18 | -export([ 19 | before_task/2, 20 | before_task/4, 21 | cleanup_star_pattern/2 22 | ]). 23 | 24 | -include("tracing_levels.hrl"). 25 | 26 | -type pid_count() :: non_neg_integer(). 27 | -type timeout_ms() :: pos_integer(). 28 | -type bare_mfa() :: {module(), atom(), list()}. 29 | -type bare_fun() :: fun(() -> any()). 30 | -type barrier_fun() :: fun(() -> any()). 31 | -type synch_error() :: {error, no_start} 32 | | {error, {not_synched, {pid_count(), pid_count()}}}. 33 | 34 | -export_type([bare_mfa/0, bare_fun/0, pid_count/0, timeout_ms/0, synch_error/0]). 35 | 36 | 37 | %%%------------------------------------------------------------------------------ 38 | %%% External API 39 | %%%------------------------------------------------------------------------------ 40 | 41 | -spec before_task(pid_count(), bare_mfa() | bare_fun()) -> ok | synch_error(). 42 | -spec before_task(pid_count(), timeout_ms(), timeout_ms(), bare_mfa() | bare_fun()) -> ok | synch_error(). 43 | 44 | before_task(Num_Pids_To_Synch, Task_Fun) 45 | when is_integer(Num_Pids_To_Synch), Num_Pids_To_Synch > 0 -> 46 | before_task(Num_Pids_To_Synch, 1000, 1000, Task_Fun). 47 | 48 | before_task(Num_Pids_To_Synch, Spawn_Timeout, Synch_Timeout, Task_Fun) -> 49 | {Synch_Result, Task_Pids} 50 | = case Task_Fun of 51 | Task_Fun when is_function(Task_Fun, 0) -> 52 | synch_pids(Num_Pids_To_Synch, Spawn_Timeout, Synch_Timeout); 53 | {_Module, _Task_Fun, _Args} 54 | when is_atom(_Module), is_atom(_Task_Fun), is_list(_Args) -> 55 | synch_pids(Num_Pids_To_Synch, Spawn_Timeout, Synch_Timeout) 56 | end, 57 | open_barrier_gate(Synch_Result, Task_Pids, Task_Fun). 58 | 59 | 60 | %%%------------------------------------------------------------------------------ 61 | %%% Synchronization support functions 62 | %%%------------------------------------------------------------------------------ 63 | 64 | synch_pids(Num_Pids, Spawn_Timeout, Synch_Timeout) -> 65 | 66 | %% A barrier function is used to allow all spawned pids to wait at the same point... 67 | {Self, Coord_Ref} = {self(), make_ref()}, 68 | {Barrier_Fun, Synch_Ref} = create_barrier_fun(Num_Pids, Spawn_Timeout), 69 | Spawn_Workers_Fun = fun() -> spawn_link_wait_pids(Self, Coord_Ref, Synch_Ref, 70 | Num_Pids, Synch_Timeout, Barrier_Fun) end, 71 | 72 | %% The synchronization messages are collected by the coordinator... 73 | trace(coord_start, {Coord_Ref}), 74 | {Coordinator_Pid, Coord_Monitor_Ref} = spawn_monitor(Spawn_Workers_Fun), 75 | wait_for_synch(Coord_Ref, Coord_Monitor_Ref, Coordinator_Pid, Spawn_Timeout, Synch_Timeout, []). 76 | 77 | wait_for_synch(Coord_Ref, Coord_Mref, Coord_Pid, Spawn_Timeout, Synch_Timeout, Spawned_Pids) -> 78 | 79 | {Timeout_Trace_Type, Receive_Timeout} 80 | = case Spawned_Pids of 81 | [] -> {spawn_timeout, Spawn_Timeout}; 82 | _ -> {synch_timeout, Synch_Timeout} 83 | end, 84 | 85 | receive 86 | %% First message is spawned to allow for spawn timeout and tracing... 87 | {Coord_Ref, spawned, New_Pids} -> 88 | wait_for_synch(Coord_Ref, Coord_Mref, Coord_Pid, Spawn_Timeout, Synch_Timeout, New_Pids); 89 | 90 | %% The second message determines the result of synching. 91 | {Coord_Ref, {error, _} = Err} -> {Err, Spawned_Pids}; 92 | {Coord_Ref, synched, _Num_Pids} -> {synched, Spawned_Pids}; 93 | 94 | %% But the Coordinator may go down before synching is complete. 
95 | {'DOWN', Coord_Mref, process, Coord_Pid, Error} -> 96 | trace(coordinator_dead, {Coord_Ref, Error}), 97 | {error, {synchronization_coordinator_failure, Error}} 98 | 99 | after Receive_Timeout -> 100 | trace(Timeout_Trace_Type, {Receive_Timeout}), 101 | {error, {Timeout_Trace_Type, Receive_Timeout}} 102 | end. 103 | 104 | spawn_link_wait_pids(Caller, Coord_Ref, Synch_Ref, Num_Pids, Synch_Timeout, Barrier_Fun) -> 105 | process_flag(trap_exit, true), 106 | Self = self(), 107 | Pids = spawn_link_times(fun() -> coordinate(Self, Synch_Ref, Synch_Timeout) end, Num_Pids, []), 108 | trace(pids_spawned, {Coord_Ref, Num_Pids}), 109 | Caller ! {Coord_Ref, spawned, Pids}, 110 | trace(wait_at_barrier, {}), 111 | case Barrier_Fun() of 112 | {error, Err} -> trace(synch_error, {Coord_Ref, Err}), Caller ! {Coord_Ref, {error, Err}}; 113 | synched -> trace(synched, {Coord_Ref, Num_Pids}), Caller ! {Coord_Ref, synched, Num_Pids} 114 | end. 115 | 116 | %% List Comprehension requires lists:seq(1,N) when N is large. 117 | spawn_link_times(_Fun, 0, Pids) -> Pids; 118 | spawn_link_times( Fun, N, Pids) -> 119 | New_Pid = spawn_link(Fun), 120 | trace(spawn, {New_Pid}), 121 | spawn_link_times(Fun, N-1, [New_Pid | Pids]). 122 | 123 | 124 | %% All spawned workers are linked to the coordinator, 125 | %% so killing it takes all of them with the coordinator, 126 | %% and we make sure to flush the message queue of the 127 | %% downed coordinator message. 128 | cleanup_star_pattern(Coordinator_Pid, Coord_Monitor_Ref) -> 129 | exit(Coordinator_Pid, kill), 130 | erlang:yield(), 131 | demonitor(Coord_Monitor_Ref, [flush]). 132 | 133 | open_barrier_gate({error, _} = Error, _Pids, _Task_Fun) -> Error; 134 | open_barrier_gate(synched, Pids, Task_Fun) -> 135 | _ = [Pid ! {start, Task_Fun} || Pid <- Pids], 136 | ok. 137 | 138 | 139 | %%%----------------------------------------------------------------------- 140 | %%% Coordination among workers, barrier function and tracing utilities. 141 | %%%----------------------------------------------------------------------- 142 | 143 | -spec coordinate(pid(), reference(), timeout_ms()) 144 | -> synched 145 | | {error, no_start} 146 | | {error, {not_synched, {pos_integer(), pos_integer()}}}. 147 | 148 | -spec create_barrier_fun(pid_count(), timeout_ms()) -> {barrier_fun(), reference()}. 149 | 150 | coordinate(Coordinator, Synch_Ref, Synch_Timeout) -> 151 | trace(ready, {Synch_Ref, self()}), 152 | Coordinator ! {Synch_Ref, ready, self()}, 153 | receive {start, Task_Fun} -> 154 | trace(start, {self()}), 155 | Result = case Task_Fun of 156 | {Mod, Fun, Args} -> Mod:Fun(Args); 157 | Task_Fun -> Task_Fun() 158 | end, 159 | trace(completed, {self(), Result}), 160 | Result 161 | after Synch_Timeout -> 162 | trace(no_start, {Synch_Timeout, self()}), 163 | {error, no_start} 164 | end. 165 | 166 | create_barrier_fun(Num_Pids_To_Synch, Spawn_Timeout) 167 | when is_integer(Num_Pids_To_Synch), Num_Pids_To_Synch > 0, 168 | is_integer(Spawn_Timeout), Spawn_Timeout > 0 -> 169 | 170 | %% Create a recursive function with a receive barrier reachable within Timeout milliseconds. 171 | %% Using anonymous fun() to be compatible with R16 and prior VMs. 172 | %% TODO: Total Time Elapsed should be less than Timeout, not just the last message rcvd. 
173 | Synchronization_Ref = make_ref(), 174 | {fun() -> Barrier_Fun = fun(_, _Start_Time, _Timeout, 0, Num_Pids) -> 175 | trace(synched, {Num_Pids}), 176 | synched; 177 | (F, Start_Time, Timeout, Remaining, Num_Pids) -> 178 | case remaining_timeout(Start_Time, Timeout) of 179 | Expired when Expired =< 0 -> 180 | trace(expired, {Remaining, Num_Pids}), 181 | {error, {not_synched, {Remaining, Num_Pids}}}; 182 | Remaining_Time -> 183 | trace(remaining_time, {Remaining_Time}), 184 | receive 185 | {Synchronization_Ref, ready, Pid} -> 186 | Remaining_Pid_Count = Remaining - 1, 187 | trace(pid_synch, {Pid, Remaining_Pid_Count}), 188 | F(F, Start_Time, Timeout, Remaining_Pid_Count, Num_Pids) 189 | after Remaining_Time -> 190 | trace(barrier_timeout, {Timeout}), 191 | {error, {not_synched, {Remaining, Num_Pids}}} 192 | end 193 | end 194 | end, 195 | Start_Time = os:timestamp(), 196 | Barrier_Fun(Barrier_Fun, Start_Time, Spawn_Timeout, Num_Pids_To_Synch, Num_Pids_To_Synch) 197 | end, Synchronization_Ref}. 198 | 199 | remaining_timeout(Start_Time, Original_Timeout) -> 200 | (Original_Timeout - timer:now_diff(os:timestamp(), Start_Time) div 1000). 201 | 202 | %% Tracing of messages back to application... 203 | trace(coord_start, {Ref}) -> et:trace_me(?TRACE_TIMINGS, app, coord, coord_start, [Ref]); 204 | trace(pids_spawned, {Ref, Count}) -> et:trace_me(?TRACE_TIMINGS, coord, app, pids_spawned, [Ref, Count]); 205 | trace(synched, {Ref, Count}) -> et:trace_me(?TRACE_TIMINGS, coord, app, synched, [Ref, Count]); 206 | 207 | trace(start, {Pid}) -> et:trace_me(?TRACE_TIMINGS, coord, Pid, start, [Pid]); 208 | trace(no_start, {Time, Pid}) -> et:trace_me(?TRACE_TIMINGS, Pid, coord, no_start, [Time]); 209 | 210 | trace(coordinator_dead, {Ref, Reason}) -> et:trace_me(?TRACE_TIMINGS, coord, app, coord_dead, [Ref, Reason]); 211 | trace(spawn_timeout, {Time}) -> et:trace_me(?TRACE_TIMINGS, coord, app, spawn_timeout, [Time]); 212 | trace(synch_timeout, {Time}) -> et:trace_me(?TRACE_TIMINGS, coord, app, synch_timeout, [Time]); 213 | trace(synch_error, {Ref, Error}) -> et:trace_me(?TRACE_TIMINGS, coord, app, synch_error, [Ref, Error]); 214 | 215 | %% Tracing of messages within the barrier coordinator... 216 | trace(barrier_timeout, {Time}) -> et:trace_me(?TRACE_TIMINGS, barrier, coord, barrier_timeout, [Time]); 217 | trace(expired, {Unack, All}) -> et:trace_me(?TRACE_TIMINGS, barrier, coord, expired, [Unack, All]); 218 | trace(synched, {All}) -> et:trace_me(?TRACE_TIMINGS, barrier, coord, synched, [All]); 219 | trace(wait_at_barrier, {}) -> et:trace_me(?TRACE_TIMINGS, coord, barrier, wait, []); 220 | trace(ready, {Ref, Pid}) -> et:trace_me(?TRACE_TIMINGS, Pid, barrier, ready, [Ref, Pid]); 221 | 222 | %% Tracing of individual pid synchronization, only used for debugging. 223 | trace(spawn, {Pid}) -> et:trace_me(?TRACE_DEBUG, coord, Pid, spawn, []); 224 | trace(pid_synch, {Pid, Count}) -> et:trace_me(?TRACE_DEBUG, Pid, coord, pid_synch, [Pid, Count]); 225 | trace(remaining_time, {RT}) -> et:trace_me(?TRACE_DEBUG, barrier, barrier, remaining_time, [RT]); 226 | trace(completed, {Pid, Result}) -> et:trace_me(?TRACE_DEBUG, Pid, app, completed, [Pid, Result]). 
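%%% Usage sketch (the worker fun is hypothetical): synchronize 8 processes at a
%%% barrier and release them all onto the same task, either with the default
%%% 1000 ms timeouts or with explicit spawn/synch timeouts in milliseconds:
%%%
%%%   ok = cxy_synch:before_task(8, fun() -> do_work() end),
%%%   ok = cxy_synch:before_task(8, 2000, 5000, fun() -> do_work() end).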
227 | -------------------------------------------------------------------------------- /src/epocxy.app.src: -------------------------------------------------------------------------------- 1 | %%-*- mode: erlang -*- 2 | %% -*- tab-width: 4;erlang-indent-level: 4;indent-tabs-mode: nil -*- 3 | %% ex: ts=4 sw=4 et 4 | 5 | 6 | {application, epocxy, 7 | [ 8 | {id, "epocxy"}, 9 | {vsn, "1.1.1"}, 10 | {description, "Erlang Patterns of Concurrency"}, 11 | {modules, [ 12 | batch_feeder, 13 | cxy_cache, cxy_cache_fsm, cxy_cache_sup, 14 | cxy_ctl, 15 | cxy_fount, cxy_fount_sup, cxy_regulator, 16 | cxy_synch, 17 | ets_buffer 18 | ]}, 19 | {registered, [cxy_cache_sup]}, 20 | {applications, [kernel, stdlib, sasl]}, 21 | {included_applications, []}, 22 | {env, []} 23 | ]}. 24 | -------------------------------------------------------------------------------- /test/epocxy.coverspec: -------------------------------------------------------------------------------- 1 | %% -*- mode: erlang -*- 2 | %% -*- tab-width: 4;erlang-indent-level: 4;indent-tabs-mode: nil -*- 3 | %% ex: ts=4 sw=4 et 4 | 5 | %%%------------------------------------------------------------------------------ 6 | %%% @copyright (c) 2015-2016, DuoMark International, Inc. 7 | %%% @author Jay Nelson 8 | %%% @reference 2015-2016 Development sponsored by TigerText, Inc. [http://tigertext.com/] 9 | %%% @reference The license is based on the template for Modified BSD from 10 | %%% OSI 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | {export, "./epocxy/logs/cover"}. 14 | {level, details}. 15 | {incl_dirs, ["../src/"]}. 16 | {incl_mods, [ 17 | batch_feeder, 18 | ets_buffer, 19 | cxy_ctl, 20 | cxy_cache, 21 | cxy_cache_fsm, 22 | cxy_cache_sup, 23 | cxy_fount, 24 | cxy_fount_sup, 25 | cxy_regulator, 26 | cxy_synch 27 | ]}. 28 | -------------------------------------------------------------------------------- /test/epocxy.spec: -------------------------------------------------------------------------------- 1 | %% -*- mode: erlang -*- 2 | %% -*- tab-width: 4;erlang-indent-level: 4;indent-tabs-mode: nil -*- 3 | %% ex: ts=4 sw=4 et 4 | 5 | %%%------------------------------------------------------------------------------ 6 | %%% @copyright (c) 2015-2016, DuoMark International, Inc. 7 | %%% @author Jay Nelson 8 | %%% @reference 2015-2016 Development sponsored by TigerText, Inc. [http://tigertext.com/] 9 | %%% @reference The license is based on the template for Modified BSD from 10 | %%% OSI 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | {alias, epocxy, "./epocxy/"}. 14 | {include, ["../include"]}. 15 | {logdir, "./epocxy/logs/"}. 16 | {cover, "./epocxy.coverspec"}. 17 | {suites, epocxy, [ 18 | batch_feeder_SUITE, 19 | ets_buffer_SUITE, 20 | cxy_ctl_SUITE, 21 | cxy_cache_SUITE, 22 | 23 | %% cxy_fount has multiple components 24 | cxy_regulator_SUITE, 25 | cxy_fount_SUITE 26 | ]}. 27 | -------------------------------------------------------------------------------- /test/epocxy/batch_feeder_SUITE.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2015 Development sponsored by TigerText, Inc. 
[http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Validation of batch_feeder using common test and PropEr. 9 | %%% 10 | %%% @since 0.9.9 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(batch_feeder_SUITE). 14 | -auth('jay@duomark.com'). 15 | -vsn(''). 16 | 17 | -behaviour(batch_feeder). 18 | 19 | -export([first_batch/1, prep_batch/3, exec_batch/3]). 20 | 21 | %%% Common_test exports 22 | -export([all/0, 23 | init_per_suite/1, end_per_suite/1, 24 | init_per_testcase/2, end_per_testcase/2 25 | ]). 26 | 27 | %%% Test case exports 28 | -export([check_processing/1]). 29 | 30 | -include("epocxy_common_test.hrl"). 31 | 32 | 33 | %%%=================================================================== 34 | %%% Test cases 35 | %%%=================================================================== 36 | 37 | -type test_case() :: atom(). 38 | -type test_group() :: atom(). 39 | 40 | -spec all() -> [test_case() | {group, test_group()}]. 41 | all() -> [check_processing]. 42 | 43 | -type config() :: proplists:proplist(). 44 | -spec init_per_suite (config()) -> config(). 45 | -spec end_per_suite (config()) -> config(). 46 | 47 | init_per_suite (Config) -> Config. 48 | end_per_suite (Config) -> Config. 49 | 50 | -spec init_per_testcase (atom(), config()) -> config(). 51 | -spec end_per_testcase (atom(), config()) -> config(). 52 | 53 | init_per_testcase (_Test_Case, Config) -> Config. 54 | end_per_testcase (_Test_Case, Config) -> Config. 55 | 56 | %% Test Modules is ?TM 57 | -define(TM, batch_feeder). 58 | 59 | %%%=================================================================== 60 | %%% check_processing/1 61 | %%%=================================================================== 62 | -spec check_processing(config()) -> ok. 63 | check_processing(_Config) -> 64 | Test_Start = "Check that continuation-based processing visits each step", 65 | ct:comment(Test_Start), ct:log(Test_Start), 66 | Test_Fn 67 | = ?FORALL({Num_Ids, Batch_Size}, {range(5,10), range(1,5)}, 68 | begin 69 | Ids = {all_ids, lists:seq(1, Num_Ids)}, 70 | Props = [{sum, 0}, {batch_size, Batch_Size}, {collector, self()}, Ids], 71 | ct:log("Testing with context ~p", [Props]), 72 | 73 | done = ?TM:process_data({?MODULE, Props}), 74 | ct:log("Message queue:~n~p~n", [process_info(self(), messages)]), 75 | Iters = round((Num_Ids / Batch_Size) + 0.49), 76 | receive_processed(Iters, Batch_Size, Batch_Size, 1, Num_Ids) 77 | end), 78 | true = proper:quickcheck(Test_Fn, ?PQ_NUM(50)), 79 | 80 | Test_Complete = "Continuation functions worked", 81 | ct:comment(Test_Complete), ct:log(Test_Complete), 82 | ok. 83 | 84 | receive_processed( 0, _, _, _, Max_Pos) -> receive_sum(Max_Pos); 85 | receive_processed(Iters, Num_Msgs, Curr_Msg, Pos, Max_Pos) -> 86 | ct:log("NM: ~p I: ~p CM: ~p P: ~p", [Num_Msgs, Iters, Curr_Msg, Pos]), 87 | receive 88 | Msg -> 89 | {processed, Iteration, {Iteration, Pos}} = Msg, 90 | New_Pos = Pos + 1, 91 | case {Curr_Msg - 1, New_Pos =< Max_Pos} of 92 | {0, _} -> (Pos > 1 orelse Num_Msgs =:= 1) 93 | andalso Iters > 1 andalso receive_sum(Pos), 94 | receive_processed(Iters-1, Num_Msgs, Num_Msgs, New_Pos, Max_Pos); 95 | {_, false} -> receive_processed(Iters-1, Num_Msgs, Num_Msgs, New_Pos, Max_Pos); 96 | {N, true} -> receive_processed(Iters, Num_Msgs, N, New_Pos, Max_Pos) 97 | end 98 | after 100 -> timeout 99 | end. 
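%% Editorial note on the iteration count used above: with Num_Ids = 7 and
%% Batch_Size = 3, round(7/3 + 0.49) = round(2.82...) = 3, so the expression
%% behaves as a ceiling of Num_Ids / Batch_Size over the generated ranges
%% (Num_Ids in 5..10, Batch_Size in 1..5).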
100 | 101 | receive_sum(High_Water) -> 102 | Sum = lists:sum(lists:seq(1, High_Water)), 103 | receive Msg -> ct:log("Received sum message ~p", [Msg]), 104 | {sum, Sum} = Msg, ct:log("Received sum ~p", [Sum]), true 105 | after 100 -> ct:log("Timeout waiting for sum ~p", [High_Water]), timeout 106 | end. 107 | 108 | 109 | %%%=================================================================== 110 | %%% batch_feeder behaviour implementation 111 | %%%=================================================================== 112 | 113 | %%% This behaviour implementation uses {?MODULE, proplists:proplist()} 114 | %%% to define the context of batch generation. The initial set of 115 | %%% integers is a list contained on the 'all_ids' property. A 2nd 116 | %%% property 'batch_size' determines how many integers to slice off 117 | %%% for each batch. In this implementation, the batch_size never 118 | %%% varies. 119 | 120 | %%% prep_batch pairs {Batch_Num, Id} and adds a sum of Ids seen 121 | %%% to the proplist context pushing it on the front to shadow 122 | %%% older sums (but leaving them there for debugging inspection 123 | %%% if necessary for the test suite). 124 | 125 | %%% exec_batch wraps {processed, Batch_Num, Elem} where Elem is 126 | %%% the pair from prep_batch. It takes a timestamp and puts that 127 | %%% on the context and messages the sum to a collector. 128 | 129 | %%% This is only a demonstration to show how side-effects can 130 | %%% be maintained in the context, and concurrency or messaging 131 | %%% can be embedded into the iteration phases. 132 | 133 | first_batch({_Module, Env} = Context) -> 134 | Num_Items = proplists:get_value(batch_size, Env), 135 | All_Ids = proplists:get_value(all_ids, Env), 136 | {Batch, Rest} = ?TM:get_next_batch_list(Num_Items, All_Ids), 137 | {{Batch, Context}, make_continuation_fn(Rest)}. 138 | 139 | prep_batch(Iteration, Batch, {Module, Env} = _Context) -> 140 | Sum = lists:sum(Batch) + proplists:get_value(sum, Env), 141 | {[{Iteration, Elem} || Elem <- Batch], {Module, [{sum, Sum} | Env]}}. 142 | 143 | exec_batch(Iteration, Batch, {Module, Env} = _Context) -> 144 | Pid = proplists:get_value(collector, Env), 145 | [Pid ! {processed, Iteration, Elem} || Elem <- Batch], 146 | {Module, New_Env} = New_Context = {Module, [{timestamp, os:timestamp()} | Env]}, 147 | New_Sum = proplists:get_value(sum, New_Env), 148 | ct:log("Context: ~p Sum: ~p Processing: ~p", [New_Context, New_Sum, Batch]), 149 | Pid ! {sum, New_Sum}, 150 | {ok, New_Context}. 151 | 152 | 153 | %%%------------------------------------------------------------------------------ 154 | %%% Support functions 155 | %%%------------------------------------------------------------------------------ 156 | 157 | make_continuation_fn([]) -> 158 | fun(_Iteration, _Context) -> done end; 159 | make_continuation_fn(Batch_Remaining) -> 160 | fun(_Iteration, {_Module, Env} = Context) -> 161 | Num_Items = proplists:get_value(batch_size, Env), 162 | {Next_Batch, More} = ?TM:get_next_batch_list(Num_Items, Batch_Remaining), 163 | {{Next_Batch, Context}, make_continuation_fn(More)} 164 | end. 165 | -------------------------------------------------------------------------------- /test/epocxy/cxy_cache_SUITE.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2013-2015, DuoMark International, Inc. 
3 | %%% @author Jay Nelson 4 | %%% @reference 2013-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Tests for cxy_cache use both common_test and PropEr to check for errors. 9 | %%% Common_test is the driving framework and is used to validate simple cases 10 | %%% of calling API functions with pre-canned valid values. The PropEr tests 11 | %%% are designed to comprehensively generate values which stress the workings 12 | %%% of the caching API. 13 | %%% 14 | %%% Simple tests precede PropEr tests in sequence groups so that breakage in 15 | %%% the basic API are found more quickly without invoking PropEr generators. 16 | %%% 17 | %%% @since 0.9.6 18 | %%% @end 19 | %%%------------------------------------------------------------------------------ 20 | -module(cxy_cache_SUITE). 21 | -auth('jay@duomark.com'). 22 | -vsn(''). 23 | 24 | -export([all/0, groups/0, 25 | init_per_suite/1, end_per_suite/1, 26 | init_per_group/1, end_per_group/1, 27 | init_per_testcase/2, end_per_testcase/2 28 | ]). 29 | 30 | -export([ 31 | proper_check_create/1, 32 | vf_check_one_fetch/1, vf_check_many_fetches/1, 33 | vr_force_obj_refresh/1, vr_force_key_refresh/1, 34 | vd_clear_and_delete/1, 35 | check_fsm_cache/1 36 | ]). 37 | 38 | -include("epocxy_common_test.hrl"). 39 | 40 | -type test_case() :: atom(). 41 | -type test_group() :: atom(). 42 | 43 | -spec all() -> [test_case() | {group, test_group()}]. 44 | all() -> [ 45 | proper_check_create, % Establish all atoms as valid cache names 46 | {group, verify_fetch}, % Tests fetching from the cache, even when empty 47 | {group, verify_delete}, % Tests clearing and deleting from the cache 48 | {group, verify_refresh}, % Tests refreshing items in the cache 49 | check_fsm_cache % Certifies the cache supervisor and FSM ets ownership 50 | ]. 51 | 52 | -spec groups() -> [{test_group(), [sequence], [test_case() | {group, test_group()}]}]. 53 | groups() -> [ 54 | {verify_delete, [sequence], [vd_clear_and_delete ]}, 55 | {verify_fetch, [sequence], [vf_check_one_fetch, vf_check_many_fetches ]}, 56 | {verify_refresh, [sequence], [vr_force_obj_refresh, vr_force_key_refresh ]} 57 | ]. 58 | 59 | 60 | -type config() :: proplists:proplist(). 61 | -spec init_per_suite (config()) -> config(). 62 | -spec end_per_suite (config()) -> config(). 63 | 64 | init_per_suite (Config) -> Config. 65 | end_per_suite (Config) -> Config. 66 | 67 | -spec init_per_group (config()) -> config(). 68 | -spec end_per_group (config()) -> config(). 69 | 70 | init_per_group (Config) -> Config. 71 | end_per_group (Config) -> Config. 72 | 73 | -spec init_per_testcase (atom(), config()) -> config(). 74 | -spec end_per_testcase (atom(), config()) -> config(). 75 | 76 | init_per_testcase (_Test_Case, Config) -> Config. 77 | end_per_testcase (_Test_Case, Config) -> Config. 78 | 79 | -define(TM, cxy_cache). 80 | 81 | 82 | %%%------------------------------------------------------------------------------ 83 | %%% Unit tests for cxy_cache core 84 | %%%------------------------------------------------------------------------------ 85 | 86 | -include("cxy_cache.hrl"). 87 | 88 | %% Validate any atom can be used as a cache_name and info/1 will report properly. 89 | -spec proper_check_create(config()) -> ok. 
90 | proper_check_create(_Config) -> 91 | ct:log("Test using an atom as a cache name"), 92 | Test_Cache_Name = ?FORALL(Cache_Name, ?SUCHTHAT(Cache_Name, atom(), Cache_Name =/= ''), 93 | check_create_test(Cache_Name)), 94 | true = proper:quickcheck(Test_Cache_Name, ?PQ_NUM(5)), 95 | ct:comment("Successfully tested atoms as cache_names"), 96 | ok. 97 | 98 | %% Checks that create cache and info reporting are consistent. 99 | check_create_test(Cache_Name) -> 100 | ct:comment("Testing cache_name: ~p", [Cache_Name]), 101 | ct:log("Testing cache_name: ~p", [Cache_Name]), 102 | {Cache_Name, []} = ?TM:info(Cache_Name), 103 | 104 | %% Test invalid args to reserve... 105 | ct:comment("Testing invalid args for cxy_cache:reserve/2 and cache_name: ~p", [Cache_Name]), 106 | Cache_Module = list_to_atom(atom_to_list(Cache_Name) ++ "_module"), 107 | %% The following two tests cause dialyzer errors, uncomment and recomment to eliminate baseline build errors 108 | %% true = try ?TM:reserve(atom_to_list(Cache_Name), Cache_Module) catch error:function_clause -> true end, 109 | %% true = try ?TM:reserve(Cache_Name, atom_to_list(Cache_Module)) catch error:function_clause -> true end, 110 | 111 | %% Test that valid args can only reserve once... 112 | ct:comment("Testing cxy_cache:reserve can only succeed once for cache_name: ~p", [Cache_Name]), 113 | Cache_Name = ?TM:reserve(Cache_Name, Cache_Module), 114 | {Cache_Name, Cache_Info_Rsrv} = ?TM:info(Cache_Name), 115 | [undefined, undefined] = [proplists:get_value(Prop, Cache_Info_Rsrv, missing_prop) 116 | || Prop <- [new_gen_tid, old_gen_tid]], 117 | {error, already_exists} = ?TM:reserve(Cache_Name, Cache_Module), 118 | {error, already_exists} = ?TM:reserve(Cache_Name, any_other_name), 119 | 120 | %% Check that valid info is reported after the cache is created. 121 | ct:comment("Ensure valid cxy_cache:info/1 after creating cache_name: ~p", [Cache_Name]), 122 | true = ?TM:create(Cache_Name), 123 | {Cache_Name, Cache_Info} = ?TM:info(Cache_Name), 124 | true = is_list(Cache_Info), 125 | 126 | %% Verify the info is initialized and an ets table is created for each generation. 127 | ct:comment("Validate two generations of ets table for cache_name: ~p", [Cache_Name]), 128 | [0, 0] = [proplists:get_value(Prop, Cache_Info) || Prop <- [new_gen_count, old_gen_count]], 129 | [set, set] = [ets:info(proplists:get_value(Prop, Cache_Info), type) 130 | || Prop <- [new_gen_tid, old_gen_tid]], 131 | eliminate_cache(Cache_Name), 132 | true. 133 | 134 | vf_check_one_fetch(_Config) -> 135 | ct:log("Test basic cache access"), 136 | Cache_Name = frogs, 137 | validate_create_and_fetch(Cache_Name, frog_obj, frog, "frog-124"), 138 | eliminate_cache(Cache_Name), 139 | ct:comment("Successfully tested basic cache access"), 140 | ok. 141 | 142 | validate_create_and_fetch(Cache_Name, Cache_Obj_Type, Obj_Record_Type, Obj_Instance_Key) -> 143 | reserve_and_create_cache(Cache_Name, Cache_Obj_Type, 5), 144 | [#cxy_cache_meta{new_gen=New, old_gen=Old}] = ets:lookup(?TM, Cache_Name), 145 | 146 | %% First time creates new value (fetch_count always indicates next access count)... 
147 | false = ?TM:is_cached(Cache_Name, Obj_Instance_Key), 148 | Before_Obj_Insert = erlang:now(), 149 | {Obj_Record_Type, Obj_Instance_Key} = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 150 | [] = ets:lookup(Old, Obj_Instance_Key), 151 | [#cxy_cache_value{key=Obj_Instance_Key, version=Obj_Create_Time, 152 | value={Obj_Record_Type, Obj_Instance_Key}}] = ets:lookup(New, Obj_Instance_Key), 153 | [#cxy_cache_meta{fetch_count=1}] = ets:lookup(?TM, Cache_Name), 154 | true = ?TM:is_cached(Cache_Name, Obj_Instance_Key), 155 | true = timer:now_diff(Obj_Create_Time, Before_Obj_Insert) > 0, 156 | false = ?TM:maybe_make_new_generation(Cache_Name), 157 | true = ?TM:is_cached(Cache_Name, Obj_Instance_Key), 158 | ok. 159 | 160 | vf_check_many_fetches(_Config) -> 161 | ct:log("Test fetches and new generations"), 162 | All_Obj_Types = [{fox_obj, fox}, {frog_obj, frog}, {rabbit_obj, rabbit}], 163 | Test_Generations 164 | = ?FORALL({Cache_Name, Obj_Type_Pair, Instances}, 165 | {?SUCHTHAT(Cache_Name, atom(), Cache_Name =/= ''), 166 | union(All_Obj_Types), 167 | ?SUCHTHAT(Instances, {non_empty(string()), non_empty(string())}, 168 | element(1,Instances) =/= element(2,Instances))}, 169 | begin 170 | {Instance1, Instance2} = Instances, 171 | {Obj_Type, Obj_Rec_Type} = Obj_Type_Pair, 172 | Result = validate_new_generations(Cache_Name, Obj_Type, Obj_Rec_Type, Instance1, Instance2), 173 | eliminate_cache(Cache_Name), 174 | Result 175 | end), 176 | true = proper:quickcheck(Test_Generations, ?PQ_NUM(5)), 177 | ct:comment("Successfully tested new generations"), 178 | ok. 179 | 180 | validate_new_generations(Cache_Name, Cache_Obj_Type, Obj_Record_Type, Obj_Key1, Obj_Key2) -> 181 | ct:comment("Testing new generations of cache ~p with object type ~p and instances ~p and ~p", 182 | [Cache_Name, {Cache_Obj_Type, Obj_Record_Type}, Obj_Key1, Obj_Key2]), 183 | ct:log("Testing new generations of cache ~p with object type ~p and instances ~p and ~p", 184 | [Cache_Name, {Cache_Obj_Type, Obj_Record_Type}, Obj_Key1, Obj_Key2]), 185 | ok = validate_create_and_fetch(Cache_Name, Cache_Obj_Type, Obj_Record_Type, Obj_Key1), 186 | [#cxy_cache_meta{new_gen=New, old_gen=Old}] = ets:lookup(?TM, Cache_Name), 187 | 188 | %% Second time fetches existing value... 189 | ct:comment("Testing initial fetch on new generation for cache: ~p", [Cache_Name]), 190 | {Obj_Record_Type, Obj_Key1} = ?TM:fetch_item(Cache_Name, Obj_Key1), 191 | [] = ets:lookup(Old, Obj_Key1), 192 | [Initial_Obj_Value1] = ets:lookup(New, Obj_Key1), 193 | [#cxy_cache_meta{fetch_count=2}] = ets:lookup(?TM, Cache_Name), 194 | false = ?TM:maybe_make_new_generation(Cache_Name), 195 | 196 | %% Retrieve 3 more times still no new generation... 197 | ct:comment("Test 3 more fetches don't trigger a new generation for cache: ~p", [Cache_Name]), 198 | Exp3 = lists:duplicate(3, {Obj_Record_Type, Obj_Key1}), 199 | Exp3 = [?TM:fetch_item(Cache_Name, Obj_Key1) || _N <- lists:seq(1,3)], 200 | [] = ets:lookup(Old, Obj_Key1), 201 | [Initial_Obj_Value1] = ets:lookup(New, Obj_Key1), 202 | [#cxy_cache_meta{fetch_count=5}] = ets:lookup(?TM, Cache_Name), 203 | false = ?TM:maybe_make_new_generation(Cache_Name), 204 | 205 | %% Once more to get a new generation, then use a new key to insert in the new generation only... 
206 | ct:comment("Bump fetch counts to qualify as a new generation for cache: ~p", [Cache_Name]), 207 | {Obj_Record_Type, Obj_Key1} = ?TM:fetch_item(Cache_Name, Obj_Key1), 208 | 0 = ets:info(Old, size), 209 | [#cxy_cache_meta{new_gen=New, old_gen=Old}] = ets:lookup(?TM, Cache_Name), 210 | 211 | %% Force check which triggers generation rotation... 212 | ct:comment("Create a new generation for cache: ~p", [Cache_Name]), 213 | true = ?TM:is_cached(Cache_Name, Obj_Key1), 214 | true = ?TM:maybe_make_new_generation(Cache_Name), 215 | [#cxy_cache_meta{new_gen=New2, old_gen=New}] = ets:lookup(?TM, Cache_Name), 216 | 0 = ets:info(New2, size), 217 | true = ?TM:is_cached(Cache_Name, Obj_Key1), 218 | false = ?TM:is_cached(Cache_Name, Obj_Key2), 219 | {Obj_Record_Type, Obj_Key2} = ?TM:fetch_item(Cache_Name, Obj_Key2), 220 | 1 = ets:info(New2, size), 221 | [] = ets:lookup(New2, Obj_Key1), 222 | [Initial_Obj_Value2] = ets:lookup(New2, Obj_Key2), 223 | 1 = ets:info(New, size), 224 | [Initial_Obj_Value1] = ets:lookup(New, Obj_Key1), 225 | [] = ets:lookup(New, Obj_Key2), 226 | [#cxy_cache_meta{fetch_count=1}] = ets:lookup(?TM, Cache_Name), 227 | true = ?TM:is_cached(Cache_Name, Obj_Key1), 228 | true = ?TM:is_cached(Cache_Name, Obj_Key2), 229 | 230 | %% Now check if migration of key Obj_Key1 works properly... 231 | ct:comment("Try to migrate a value from old generation to new generation in cache: ~p", [Cache_Name]), 232 | {Obj_Record_Type, Obj_Key1} = ?TM:fetch_item(Cache_Name, Obj_Key1), 233 | 2 = ets:info(New2, size), 234 | %% Both objects exist in the newest generation... 235 | [Initial_Obj_Value1] = ets:lookup(New2, Obj_Key1), 236 | [Initial_Obj_Value2] = ets:lookup(New2, Obj_Key2), 237 | %% And the now old generation still has a copy of the first key inserted 238 | %% because we copy forward without deleting from old generation. 239 | %% (The old value will have to be deleted in future on migration when we 240 | %% want to visit all trashed objects on old generation expiration so that 241 | %% we don't garbage collect items that are still active.) 242 | 1 = ets:info(New, size), 243 | [Initial_Obj_Value1] = ets:lookup(New, Obj_Key1), 244 | [] = ets:lookup(New, Obj_Key2), 245 | [#cxy_cache_meta{fetch_count=2}] = ets:lookup(?TM, Cache_Name), 246 | 247 | true. 248 | 249 | vd_clear_and_delete(_Config) -> 250 | ct:comment("Testing clear and delete of instances from a cache"), 251 | validate_clear_and_delete_cache(frog_cache, frog_obj, frog, "frog-3127"), 252 | ct:comment("Successfully tested clear and delete"), 253 | ok. 254 | 255 | validate_clear_and_delete_cache(Cache_Name, Cache_Obj_Type, Obj_Record_Type, Obj_Instance_Key) -> 256 | 257 | %% Create cache and fetch one item... 
258 | ct:comment("Put a single item into new cache: ~p", [Cache_Name]), 259 | reserve_and_create_cache(Cache_Name, Cache_Obj_Type, 5), 260 | Fetch1 = ets:tab2list(?TM), 261 | [#cxy_cache_meta{fetch_count=0, started=Started, new_gen_time=NG_Time, old_gen_time=OG_Time}] = Fetch1, 262 | {Cache_Name, Info1} = ?TM:info(Cache_Name), 263 | 0 = proplists:get_value(new_gen_count, Info1), 264 | 265 | ct:comment("Check cache count statistics when fetching an item from cache: ~p", [Cache_Name]), 266 | Expected_Frog = {Obj_Record_Type, Obj_Instance_Key}, 267 | Expected_Frog = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 268 | Fetch2 = ets:tab2list(?TM), 269 | [#cxy_cache_meta{fetch_count=1, started=Started, new_gen_time=NG_Time, old_gen_time=OG_Time}] = Fetch2, 270 | {Cache_Name, Info2} = ?TM:info(Cache_Name), 271 | 1 = proplists:get_value(new_gen_count, Info2), 272 | 273 | %% Delete the item and fetch it 3 more times.. 274 | ct:comment("Verify cxy_cache:delete_item/2 works in cache: ~p", [Cache_Name]), 275 | true = ?TM:delete_item(Cache_Name, Obj_Instance_Key), 276 | {Cache_Name, Info3} = ?TM:info(Cache_Name), 277 | 0 = proplists:get_value(new_gen_count, Info3), 278 | 279 | [Expected_Frog, Expected_Frog, Expected_Frog] 280 | = [?TM:fetch_item(Cache_Name, Obj_Instance_Key) || _N <- lists:seq(1,3)], 281 | Fetch3 = ets:tab2list(?TM), 282 | [#cxy_cache_meta{fetch_count=4, started=Started, new_gen_time=NG_Time, old_gen_time=OG_Time}] = Fetch3, 283 | true = Started =/= NG_Time, 284 | {Cache_Name, Info4} = ?TM:info(Cache_Name), 285 | 1 = proplists:get_value(new_gen_count, Info4), 286 | 287 | ct:comment("Check get_and_clear_counts matches and clears for cache ~p", [Cache_Name]), 288 | {Cache_Name, Cleared_Counts1} = ?TM:get_and_clear_counts(Cache_Name), 289 | [2,0,0,1,0,2] 290 | = [proplists:get_value(Property, Cleared_Counts1) 291 | || Property <- [gen1_hits, gen2_hits, refresh_count, delete_count, error_count, miss_count]], 292 | 293 | {Cache_Name, Cleared_Counts2} = ?TM:get_and_clear_counts(Cache_Name), 294 | [0,0,0,0,0,0] 295 | = [proplists:get_value(Property, Cleared_Counts2) 296 | || Property <- [gen1_hits, gen2_hits, refresh_count, delete_count, error_count, miss_count]], 297 | {Cache_Name, Info5} = ?TM:info(Cache_Name), 298 | 1 = proplists:get_value(new_gen_count, Info5), 299 | 300 | %% Unknown cache not accessible... 301 | Missing_Cache = foo, 302 | ct:comment("Verify a missing cache reports clear, delete and info for cache: ~p", [Missing_Cache]), 303 | false = ?TM:clear(Missing_Cache), 304 | false = ?TM:delete(Missing_Cache), 305 | {foo, []} = ?TM:info(Missing_Cache), 306 | 307 | %% Clear cache and verify it has new metadata... 308 | ct:comment("Verify the cache counts after clearing cache: ~p", [Cache_Name]), 309 | true = ?TM:clear(Cache_Name), 310 | [#cxy_cache_meta{fetch_count=0, started=New_Time, new_gen_time=New_Time, old_gen_time=New_Time, 311 | new_gen=New_Gen, old_gen=Old_Gen}] = ets:tab2list(?TM), 312 | true = New_Time > Started andalso New_Time > NG_Time andalso New_Time > OG_Time, 313 | [set,0] = [ets:info(New_Gen, Attr) || Attr <- [type, size]], 314 | [set,0] = [ets:info(Old_Gen, Attr) || Attr <- [type, size]], 315 | {Cache_Name, Info6} = ?TM:info(Cache_Name), 316 | 0 = proplists:get_value(new_gen_count, Info6), 317 | 318 | %% Unknown cache still not accessible... 
319 | ct:comment("Ensure still no information for missing cache: ~p", [Missing_Cache]), 320 | false = ?TM:clear(Missing_Cache), 321 | false = ?TM:delete(Missing_Cache), 322 | {foo, []} = ?TM:info(Missing_Cache), 323 | 324 | %% Remove cache and complete test. 325 | eliminate_cache(Cache_Name), 326 | [0, undefined, undefined] = [ets:info(Tab, size) || Tab <- [?TM, Old_Gen, New_Gen]], 327 | ok. 328 | 329 | vr_force_obj_refresh(_Config) -> 330 | ct:comment("Testing refresh of an object instance in a cache"), 331 | 332 | %% Create cache and fetch one item... 333 | Cache_Name = frog_cache, 334 | Cache_Obj_Type = frog_obj, 335 | reserve_and_create_cache(Cache_Name, Cache_Obj_Type, 3), 336 | 337 | %% Test refreshing a missing item... 338 | ct:comment("Refresh a missing object with a new object in cache: ~p", [Cache_Name]), 339 | Exact_Version = erlang:now(), 340 | Exact_Key = "missing-frog", 341 | Exact_Object = {frog, Exact_Key}, 342 | Exact_Object = refresh(obj, Cache_Name, Exact_Key, {Exact_Version, Exact_Object}), 343 | true = check_version(obj, Cache_Name, Exact_Key, Exact_Version), 344 | 345 | %% Test refreshing an already present item... 346 | ct:comment("Refresh an existing object with a new object in cache: ~p", [Cache_Name]), 347 | validate_force_refresh(obj, Cache_Name, frog, "frog-with-spots", erlang:now()), 348 | 349 | %% Remove cache and complete test. 350 | eliminate_cache(Cache_Name), 351 | ct:comment("Successfully tested fetch_item_version for objects"), 352 | ok. 353 | 354 | vr_force_key_refresh(_Config) -> 355 | ct:comment("Testing refresh of a key instance in a cache"), 356 | 357 | %% Create cache and fetch one item... 358 | Cache_Name = frog_cache, 359 | Cache_Obj_Type = frog_obj, 360 | ct:comment("Put a single item into new cache: ~p", [Cache_Name]), 361 | reserve_and_create_cache(Cache_Name, Cache_Obj_Type, 3), 362 | validate_force_refresh(key, Cache_Name, frog, "frog-without-spots", erlang:now()), 363 | 364 | %% Remove cache and complete test. 365 | eliminate_cache(Cache_Name), 366 | ct:comment("Successfully tested fetch_item_version for keys"), 367 | ok. 368 | 369 | validate_force_refresh(Type, Cache_Name, Obj_Record_Type, Obj_Instance_Key, Old_Time) -> 370 | Expected_Frog = {Obj_Record_Type, Obj_Instance_Key}, 371 | Expected_Frog = ?TM:fetch_item (Cache_Name, Obj_Instance_Key), 372 | Frog_Version_1 = ?TM:fetch_item_version (Cache_Name, Obj_Instance_Key), 373 | true = timer:now_diff(Frog_Version_1, Old_Time) > 0, 374 | 375 | %% Now refresh it to a newer version... 376 | ct:comment("Refreshing to a newer version in cache: ~p", [Cache_Name]), 377 | New_Time = erlang:now(), 378 | true = timer:now_diff(New_Time, Frog_Version_1) > 0, 379 | Expected_Frog = refresh(Type, Cache_Name, Obj_Instance_Key, {New_Time, Expected_Frog}), 380 | Expected_Frog = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 381 | true = check_version(Type, Cache_Name, Obj_Instance_Key, New_Time), 382 | 383 | %% Then check that refreshing to an older version has no effect. 384 | ct:comment("Refreshing to an older version in cache: ~p", [Cache_Name]), 385 | Expected_Frog = refresh(Type, Cache_Name, Obj_Instance_Key, {Old_Time, Expected_Frog}), 386 | Expected_Frog = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 387 | true = check_version(Type, Cache_Name, Obj_Instance_Key, New_Time), 388 | 389 | %% Now test the old generation with refresh... 
390 | ct:comment("Create a new generation for cache: ~p", [Cache_Name]), 391 | no_value_available = ?TM:fetch_item_version(Cache_Name, missing_object), 392 | Expected_Frog = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 393 | true = ?TM:maybe_make_new_generation(Cache_Name), 394 | true = check_version(Type, Cache_Name, Obj_Instance_Key, New_Time), 395 | no_value_available = ?TM:fetch_item_version(Cache_Name, missing_object), 396 | 397 | %% Refresh the old generation item... 398 | Expected_Frog = refresh(Type, Cache_Name, Obj_Instance_Key, {Old_Time, Expected_Frog}), 399 | true = check_version(Type, Cache_Name, Obj_Instance_Key, New_Time), 400 | 401 | Expected_Frog = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 402 | Expected_Frog = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 403 | Expected_Frog = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 404 | Expected_Frog = ?TM:fetch_item(Cache_Name, Obj_Instance_Key), 405 | true = ?TM:maybe_make_new_generation(Cache_Name), 406 | Newer_Time = erlang:now(), 407 | Expected_Frog = refresh(Type, Cache_Name, Obj_Instance_Key, {Newer_Time, Expected_Frog}), 408 | true = check_version(Type, Cache_Name, Obj_Instance_Key, Newer_Time), 409 | 410 | ok. 411 | 412 | refresh(key, Cache_Name, Obj_Instance_Key, _Object) -> 413 | ?TM:refresh_item(Cache_Name, Obj_Instance_Key); 414 | refresh(obj, Cache_Name, Obj_Instance_Key, Object) -> 415 | ?TM:refresh_item(Cache_Name, Obj_Instance_Key, Object). 416 | 417 | check_version(key, Cache_Name, Obj_Instance_Key, New_Time) -> 418 | timer:now_diff(New_Time, ?TM:fetch_item_version (Cache_Name, Obj_Instance_Key)) < 0; 419 | check_version(obj, Cache_Name, Obj_Instance_Key, New_Time) -> 420 | timer:now_diff(New_Time, ?TM:fetch_item_version (Cache_Name, Obj_Instance_Key)) =:= 0. 421 | 422 | 423 | %%%------------------------------------------------------------------------------ 424 | %%% Thread testing of cxy_cache_sup, cxy_cache_fsm and cxy_cache together. 425 | %%%------------------------------------------------------------------------------ 426 | 427 | -define(SUP, cxy_cache_sup). 428 | -define(FSM, cxy_cache_fsm). 429 | 430 | check_fsm_cache(_Config) -> 431 | 432 | %% Create a simple_one_for_one supervisor... 433 | ct:comment("Testing cxy_cache_fsm and cxy_cache together"), 434 | {ok, Sup} = ?SUP:start_link(), 435 | Sup = whereis(?SUP), 436 | undefined = ets:info(?TM, named_table), 437 | 438 | %% The first cache instance causes the creation of cache ets metadata table. 439 | %% Make sure that the supervisor owns the metadata ets table 'cxy_cache'... 440 | {ok, Fox_Cache} = ?SUP:start_cache(fox_cache, fox_obj, time, 1000000), 441 | [set, true, public, Sup] = [ets:info(?TM, P) || P <- [type, named_table, protection, owner]], 442 | 1 = ets:info(?TM, size), 443 | {ok, Rabbit_Cache} = ?SUP:start_cache(rabbit_cache, rabbit_obj, time, 1300000), 444 | 2 = ets:info(?TM, size), 445 | 446 | %% Verify the owner of the generational ets tables is the respective FSM instance... 447 | [#cxy_cache_meta{new_gen=Fox2, old_gen=Fox1}] = ets:lookup(?TM, fox_cache), 448 | [#cxy_cache_meta{new_gen=Rabbit2, old_gen=Rabbit1}] = ets:lookup(?TM, rabbit_cache), 449 | [Fox_Cache, Fox_Cache, Rabbit_Cache, Rabbit_Cache] 450 | = [ets:info(Tab, owner) || Tab <- [Fox2, Fox1, Rabbit2, Rabbit1]], 451 | 452 | %% Wait for a new generation (1.3 seconds minimum)... 
453 | timer:sleep(1500), % Additional time for timeout jitter 454 | [#cxy_cache_meta{new_gen=Fox3, old_gen=Fox2}] = ets:lookup(?TM, fox_cache), 455 | [#cxy_cache_meta{new_gen=Rabbit3, old_gen=Rabbit2}] = ets:lookup(?TM, rabbit_cache), 456 | true = (Fox3 =/= Fox2 andalso Rabbit3 =/= Rabbit2), 457 | [Fox_Cache, Fox_Cache, Rabbit_Cache, Rabbit_Cache] 458 | = [ets:info(Tab, owner) || Tab <- [Fox3, Fox2, Rabbit3, Rabbit2]], 459 | [undefined, undefined] = [ets:info(Tab) || Tab <- [Fox1, Rabbit1]], 460 | 461 | 2 = ets:info(?TM, size), 462 | true = ?TM:delete(fox_cache), 463 | 1 = ets:info(?TM, size), 464 | true = ?TM:delete(rabbit_cache), 465 | 0 = ets:info(?TM, size), 466 | unlink(Sup), 467 | 468 | ct:comment("Successfully tested cxy_cache_fsm and cxy_cache"), 469 | ok. 470 | 471 | 472 | %%%------------------------------------------------------------------------------ 473 | %%% Support functions 474 | %%%------------------------------------------------------------------------------ 475 | 476 | %% Functions for triggering new generations. 477 | gen_count_fun (Thresh) -> fun(Name, Count, Time) -> ?TM:new_gen_count_threshold (Name, Count, Time, Thresh) end. 478 | %%gen_time_fun (Thresh) -> fun(Name, Count, Time) -> ?TM:new_gen_time_threshold (Name, Count, Time, Thresh) end. 479 | 480 | %% Create a new cache (each testcase creates the ets metadata table on first reserve call). 481 | %% Generation logic is to create a new generation every Gen_Count fetches. 482 | reserve_and_create_cache(Cache_Name, Cache_Obj, Gen_Count) -> 483 | %% undefined = ets:info(?TM, named_table), 484 | Gen_Fun = gen_count_fun(Gen_Count), 485 | Cache_Name = ?TM:reserve(Cache_Name, Cache_Obj, Gen_Fun), 486 | true = validate_cache_metatable(Cache_Name, Cache_Obj, Gen_Fun), 487 | true = ?TM:create(Cache_Name), 488 | true = validate_cache_generations(Cache_Name), 489 | true. 490 | 491 | validate_cache_metatable(Cache_Name, Cache_Obj, Gen_Fun) -> 492 | [Exp1] = ets:tab2list(?TM), 493 | Exp2 = #cxy_cache_meta{cache_name=Cache_Name, cache_module=Cache_Obj, 494 | new_gen=undefined, old_gen=undefined, new_generation_function=Gen_Fun}, 495 | true = metas_match(Exp1, Exp2), 496 | [set, true, public] = [ets:info(?TM, Prop) || Prop <- [type, named_table, protection]], 497 | true. 498 | 499 | validate_cache_generations(Cache_Name) -> 500 | [Metadata] = ets:lookup(?TM, Cache_Name), 501 | #cxy_cache_meta{cache_name=Cache_Name, new_gen=Tid1, old_gen=Tid2} = Metadata, 502 | [set, false, public] = [ets:info(Tid1, Prop) || Prop <- [type, named_table, protection]], 503 | [set, false, public] = [ets:info(Tid2, Prop) || Prop <- [type, named_table, protection]], 504 | true. 505 | 506 | %% Delete cache and verify that all ets cache meta data is gone. 507 | %% This only works if there is just one (or zero) cache(s) registered. 508 | eliminate_cache(Cache_Name) -> 509 | true = ?TM:delete(Cache_Name), 510 | true = ets:info(?TM, named_table), 511 | [] = ets:tab2list(?TM). 512 | 513 | %% Verify that two metadata records match provided that the 2nd was created later than the 1st. 
514 | metas_match(#cxy_cache_meta{ 515 | cache_name=Name, fetch_count=Fetch, gen1_hit_count=Hit_Count1, gen2_hit_count=Hit_Count2, 516 | miss_count=Miss_Count, error_count=Err_Count, cache_module=Mod, new_gen=New, old_gen=Old, 517 | new_generation_function=Gen_Fun, new_generation_thresh=Thresh, started=Start1} = _Earlier, 518 | #cxy_cache_meta{ 519 | cache_name=Name, fetch_count=Fetch, gen1_hit_count=Hit_Count1, gen2_hit_count=Hit_Count2, 520 | miss_count=Miss_Count, error_count=Err_Count, cache_module=Mod, new_gen=New, old_gen=Old, 521 | new_generation_function=Gen_Fun, new_generation_thresh=Thresh, started=Start2} = _Later) -> 522 | Start1 < Start2; 523 | 524 | %% Logs and fails if there is any field mismatch. 525 | metas_match(A,B) -> ct:log("~w~n", [A]), 526 | ct:log("~w~n", [B]), 527 | false. 528 | -------------------------------------------------------------------------------- /test/epocxy/cxy_ctl_SUITE.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2013-2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2013-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Tests for cxy_ctl using common test. 9 | %%% 10 | %%% @since 0.9.6 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(cxy_ctl_SUITE). 14 | -auth('jay@duomark.com'). 15 | -vsn(''). 16 | 17 | -export([all/0, init_per_suite/1, end_per_suite/1]). 18 | -export([ 19 | check_proc_dict_helper/1, 20 | check_no_timer_limits/1, check_with_timer_limits/1, 21 | check_atom_limits/1, check_limit_errors/1, 22 | check_concurrency_types/1, 23 | check_execute_task/1, check_maybe_execute_task/1, 24 | check_execute_pid_link/1, check_maybe_execute_pid_link/1, 25 | check_execute_pid_monitor/1, check_maybe_execute_pid_monitor/1, 26 | check_multiple_init_calls/1, check_copying_dict/1, 27 | check_high_water/1 28 | ]). 29 | 30 | %% Spawned functions 31 | -export([put_pdict/2, fetch_ages/0, fetch_ets_ages/1]). 32 | 33 | -include_lib("common_test/include/ct.hrl"). 34 | 35 | -spec all() -> [atom()]. 36 | 37 | all() -> [ 38 | check_proc_dict_helper, 39 | check_no_timer_limits, check_with_timer_limits, 40 | check_atom_limits, check_limit_errors, 41 | check_concurrency_types, 42 | check_execute_task, check_maybe_execute_task, 43 | check_execute_pid_link, check_maybe_execute_pid_link, 44 | check_execute_pid_monitor, check_maybe_execute_pid_monitor, 45 | check_multiple_init_calls, check_copying_dict, 46 | check_high_water 47 | ]. 48 | 49 | -type config() :: proplists:proplist(). 50 | -spec init_per_suite(config()) -> config(). 51 | -spec end_per_suite(config()) -> config(). 52 | 53 | init_per_suite(Config) -> Config. 54 | end_per_suite(Config) -> Config. 55 | 56 | %% Test Modules is ?TM 57 | -define(TM, cxy_ctl). 58 | -define(MAX_SLOW_FACTOR, 100000). 59 | 60 | -spec check_proc_dict_helper(config()) -> ok. 61 | check_proc_dict_helper(_Config) -> 62 | {'$$dict_prop', {boo, 22}} = ?TM:make_process_dictionary_default_value(boo, 22), 63 | {'$$dict_prop', {{k,v}, {key,value}}} = ?TM:make_process_dictionary_default_value({k,v}, {key,value}), 64 | ok. 65 | 66 | -spec check_no_timer_limits(config()) -> ok. 
67 | check_no_timer_limits(_Config) -> 68 | Limits = [{a, 15, 0, ?MAX_SLOW_FACTOR}, {b, 35, 0, ?MAX_SLOW_FACTOR}], 69 | true = ?TM:init(Limits), 70 | All_Entries = ets:tab2list(?TM), 71 | 4 = length(All_Entries), 72 | true = lists:member({a, 15, 0, 0, ?MAX_SLOW_FACTOR, 0}, All_Entries), 73 | true = lists:member({b, 35, 0, 0, ?MAX_SLOW_FACTOR, 0}, All_Entries), 74 | true = lists:member({{cma,a}, 0, 0, 0, ?MAX_SLOW_FACTOR}, All_Entries), 75 | true = lists:member({{cma,b}, 0, 0, 0, ?MAX_SLOW_FACTOR}, All_Entries), 76 | ok. 77 | 78 | -spec check_with_timer_limits(config()) -> ok. 79 | check_with_timer_limits(_Config) -> 80 | Limits = [{a, 15, 5, ?MAX_SLOW_FACTOR}, {b, 35, 0, ?MAX_SLOW_FACTOR}, {c, 17, 4, ?MAX_SLOW_FACTOR}], 81 | true = ?TM:init(Limits), 82 | All_Entries = ets:tab2list(?TM), 83 | 6 = length(All_Entries), 84 | true = lists:member({a, 15, 0, 5, ?MAX_SLOW_FACTOR, 0}, All_Entries), 85 | true = lists:member({b, 35, 0, 0, ?MAX_SLOW_FACTOR, 0}, All_Entries), 86 | true = lists:member({c, 17, 0, 4, ?MAX_SLOW_FACTOR, 0}, All_Entries), 87 | true = lists:member({{cma,a}, 0, 0, 5, ?MAX_SLOW_FACTOR}, All_Entries), 88 | true = lists:member({{cma,b}, 0, 0, 0, ?MAX_SLOW_FACTOR}, All_Entries), 89 | true = lists:member({{cma,c}, 0, 0, 4, ?MAX_SLOW_FACTOR}, All_Entries), 90 | ok. 91 | 92 | -spec check_atom_limits(config()) -> ok. 93 | check_atom_limits(_Config) -> 94 | Limits = [{a, unlimited, 0, ?MAX_SLOW_FACTOR}, {b, unlimited, 5, ?MAX_SLOW_FACTOR}, 95 | {c, inline_only, 0, ?MAX_SLOW_FACTOR}, {d, inline_only, 7, ?MAX_SLOW_FACTOR}], 96 | true = ?TM:init(Limits), 97 | All_Entries = ets:tab2list(?TM), 98 | 8 = length(All_Entries), 99 | true = lists:member({a, -1, 0, 0, ?MAX_SLOW_FACTOR, 0}, All_Entries), 100 | true = lists:member({b, -1, 0, 5, ?MAX_SLOW_FACTOR, 0}, All_Entries), 101 | true = lists:member({c, 0, 0, 0, ?MAX_SLOW_FACTOR, 0}, All_Entries), 102 | true = lists:member({d, 0, 0, 7, ?MAX_SLOW_FACTOR, 0}, All_Entries), 103 | true = lists:member({{cma,a}, 0, 0, 0, ?MAX_SLOW_FACTOR}, All_Entries), 104 | true = lists:member({{cma,b}, 0, 0, 5, ?MAX_SLOW_FACTOR}, All_Entries), 105 | true = lists:member({{cma,c}, 0, 0, 0, ?MAX_SLOW_FACTOR}, All_Entries), 106 | true = lists:member({{cma,d}, 0, 0, 7, ?MAX_SLOW_FACTOR}, All_Entries), 107 | ok. 108 | 109 | -spec check_limit_errors(config()) -> ok. 110 | check_limit_errors(_Config) -> 111 | Limits1 = [{a, unlimited, -1, ?MAX_SLOW_FACTOR}, 112 | {b, 5, 0, ?MAX_SLOW_FACTOR}, 113 | {c, unlimited, 0, ?MAX_SLOW_FACTOR}], 114 | {error, {invalid_init_args, [{a, unlimited, -1, ?MAX_SLOW_FACTOR}]}} = ?TM:init(Limits1), 115 | Limits2 = [{a, unlimited, -1, ?MAX_SLOW_FACTOR}, 116 | {b, foo, 0, ?MAX_SLOW_FACTOR}, 117 | {c, 0, bar, ?MAX_SLOW_FACTOR}], 118 | {error, {invalid_init_args, Limits2}} = ?TM:init(Limits2), 119 | 120 | %% Call init with good values to test add/remove/adjust... 
121 | Limits = [{a, unlimited, 0, ?MAX_SLOW_FACTOR}, {b, 17, 5, ?MAX_SLOW_FACTOR}, 122 | {c, 8, 0, ?MAX_SLOW_FACTOR}, {d, inline_only, 7, ?MAX_SLOW_FACTOR}], 123 | true = ?TM:init(Limits), 124 | 125 | {error, {add_duplicate_task_types, Limits}} = ?TM:add_task_types(Limits), 126 | Limits3 = [{g, foo, 1, ?MAX_SLOW_FACTOR}, {h, 17, -1, ?MAX_SLOW_FACTOR}], 127 | {error, {invalid_add_args, Limits3}} = ?TM:add_task_types(Limits3), 128 | 129 | Limits3a = [{TT, L} || {TT, L, _H, _Slow} <- Limits3], 130 | {error, {missing_task_types, [{g, foo}, {h, 17}]}} = ?TM:adjust_task_limits(Limits3a), 131 | Limits4 = [{a, foo}, {b, -1}], 132 | {error, {invalid_task_limits, Limits4}} = ?TM:adjust_task_limits(Limits4), 133 | ok. 134 | 135 | -spec check_concurrency_types(config()) -> ok. 136 | check_concurrency_types(_Config) -> 137 | Limits = [{a, unlimited, 0, ?MAX_SLOW_FACTOR}, {b, 17, 5, ?MAX_SLOW_FACTOR}, 138 | {c, 8, 0, ?MAX_SLOW_FACTOR}, {d, inline_only, 7, ?MAX_SLOW_FACTOR}], 139 | true = ?TM:init(Limits), 140 | Types = ?TM:concurrency_types(), 141 | [[a, unlimited, 0, 0], [b, 17, 0, 5], [c, 8, 0, 0], [d, inline_only, 0, 7]] 142 | = [[proplists:get_value(P, This_Type_Props) 143 | || P <- [task_type, max_procs, active_procs, max_history]] 144 | || This_Type_Props <- Types], 145 | ok. 146 | 147 | %% execute_task runs a background task without feedback. 148 | -spec check_execute_task(config()) -> ok. 149 | check_execute_task(_Config) -> 150 | {Inline_Type, Spawn_Type} = {ets_inline, ets_spawn}, 151 | Limits = [{Inline_Type, inline_only, 2, ?MAX_SLOW_FACTOR}, {Spawn_Type, 3, 5, ?MAX_SLOW_FACTOR}], 152 | true = ?TM:init(Limits), 153 | Ets_Table = ets:new(check_execute_task, [public, named_table]), 154 | 155 | _ = try 156 | %% Inline update the shared ets table... 157 | ok = ?TM:execute_task(Inline_Type, ets, insert_new, [Ets_Table, {joe, 5}]), 158 | [{joe, 5}] = ets:lookup(Ets_Table, joe), 159 | ok = ?TM:execute_task(Inline_Type, ets, insert, [Ets_Table, {joe, 7}]), 160 | [{joe, 7}] = ets:lookup(Ets_Table, joe), 161 | true = ets:delete(Ets_Table, joe), 162 | 163 | %% Spawn update the shared ets table. 164 | ok = ?TM:execute_task(Spawn_Type, ets, insert_new, [Ets_Table, {joe, 4}]), 165 | erlang:yield(), 166 | [{joe, 4}] = ets:lookup(Ets_Table, joe), 167 | ok = ?TM:execute_task(Spawn_Type, ets, insert, [Ets_Table, {joe, 6}]), 168 | erlang:yield(), 169 | [{joe, 6}] = ets:lookup(Ets_Table, joe), 170 | true = ets:delete(Ets_Table, joe) 171 | after true = ets:delete(Ets_Table) 172 | end, 173 | 174 | ok. 175 | 176 | %% maybe_execute_task runs a background task without feedback but not more than limit. 177 | -spec check_maybe_execute_task(config()) -> ok. 178 | check_maybe_execute_task(_Config) -> 179 | {Overmax_Type, Spawn_Type} = {ets_overmax, ets_spawn}, 180 | Limits = [{Overmax_Type, inline_only, 0, ?MAX_SLOW_FACTOR}, {Spawn_Type, 3, 5, ?MAX_SLOW_FACTOR}], 181 | true = ?TM:init(Limits), 182 | Ets_Table = ets:new(check_maybe_execute_task, [public, named_table]), 183 | 184 | _ = try 185 | %% Over max should refuse to run... 
186 | {max_pids, 0} = ?TM:maybe_execute_task(Overmax_Type, ets, insert_new, [Ets_Table, {joe, 5}]), 187 | erlang:yield(), 188 | [] = ets:lookup(Ets_Table, joe), 189 | {max_pids, 0} = ?TM:maybe_execute_task(Overmax_Type, ets, insert_new, [Ets_Table, {joe, 7}]), 190 | erlang:yield(), 191 | [] = ets:lookup(Ets_Table, joe), 192 | true = ets:delete(Ets_Table, joe), 193 | [0] = [proplists:get_value(active_procs, Props) 194 | || [{task_type, Type} | _] = Props <- cxy_ctl:concurrency_types(), 195 | Type =:= Overmax_Type], 196 | 197 | %% Spawn update the shared ets table. 198 | ok = ?TM:maybe_execute_task(Spawn_Type, ets, insert_new, [Ets_Table, {joe, 4}]), 199 | erlang:yield(), 200 | [{joe, 4}] = ets:lookup(Ets_Table, joe), 201 | ok = ?TM:maybe_execute_task(Spawn_Type, ets, insert, [Ets_Table, {joe, 6}]), 202 | erlang:yield(), 203 | [{joe, 6}] = ets:lookup(Ets_Table, joe), 204 | true = ets:delete(Ets_Table, joe), 205 | [0] = [proplists:get_value(active_procs, Props) 206 | || [{task_type, Type} | _] = Props <- cxy_ctl:concurrency_types(), 207 | Type =:= Overmax_Type] 208 | 209 | after true = ets:delete(Ets_Table) 210 | end, 211 | 212 | ok. 213 | 214 | %% execute_pid_link runs a task with a return value of Pid or {inline, Result}. 215 | -spec check_execute_pid_link(config()) -> ok. 216 | check_execute_pid_link(_Config) -> 217 | {Inline_Type, Spawn_Type} = {pdict_inline, pdict_spawn}, 218 | Limits = [{Inline_Type, inline_only, 2, ?MAX_SLOW_FACTOR}, {Spawn_Type, 3, 5, ?MAX_SLOW_FACTOR}], 219 | true = ?TM:init(Limits), 220 | 221 | %% When inline, update our process dictionary... 222 | Old_Joe = erase(joe), 223 | _ = try 224 | {inline, undefined} = ?TM:execute_pid_link(Inline_Type, erlang, put, [joe, 5]), 225 | 5 = get(joe), 226 | {inline, 5} = ?TM:execute_pid_link(Inline_Type, erlang, put, [joe, 7]), 227 | 7 = get(joe) 228 | after put(joe, Old_Joe) 229 | end, 230 | 231 | %% When spawned, it affects a new process dictionary, not ours. 232 | Self = self(), 233 | Old_Joe = erase(joe), 234 | _ = try 235 | undefined = get(joe), 236 | New_Pid = ?TM:execute_pid_link(Spawn_Type, ?MODULE, put_pdict, [joe, 5]), 237 | false = (New_Pid =:= Self), 238 | {links, [Self]} = process_info(New_Pid, links), 239 | {monitors, []} = process_info(New_Pid, monitors), 240 | false = (New_Pid =:= self()), 241 | New_Pid ! {Self, get_pdict, joe}, 242 | undefined = get(joe), 243 | ok = receive {get_pdict, New_Pid, 5} -> ok 244 | after 100 -> timeout 245 | end 246 | after put(joe, Old_Joe) 247 | end, 248 | 249 | ok. 250 | 251 | %% maybe_execute_pid_link runs a task with a return value of Pid or {max_pids, Max}. 252 | -spec check_maybe_execute_pid_link(config()) -> ok. 253 | check_maybe_execute_pid_link(_Config) -> 254 | {Overmax_Type, Spawn_Type} = {pdict_overmax, pdict_spawn}, 255 | Limits = [{Overmax_Type, inline_only, 0, ?MAX_SLOW_FACTOR}, {Spawn_Type, 3, 5, ?MAX_SLOW_FACTOR}], 256 | true = ?TM:init(Limits), 257 | 258 | %% When inline, update our process dictionary... 
259 | Old_Joe = erase(joe), 260 | _ = try 261 | {max_pids, 0} = ?TM:maybe_execute_pid_link(Overmax_Type, erlang, put, [joe, 5]), 262 | erlang:yield(), 263 | undefined = get(joe), 264 | {max_pids, 0} = ?TM:maybe_execute_pid_link(Overmax_Type, erlang, put, [joe, 7]), 265 | erlang:yield(), 266 | undefined = get(joe), 267 | [0] = [proplists:get_value(active_procs, Props) 268 | || [{task_type, Type} | _] = Props <- cxy_ctl:concurrency_types(), 269 | Type =:= Overmax_Type] 270 | 271 | after put(joe, Old_Joe) 272 | end, 273 | 274 | %% When spawned, it affects a new process dictionary, not ours. 275 | Self = self(), 276 | Old_Joe = erase(joe), 277 | _ = try 278 | undefined = get(joe), 279 | New_Pid = ?TM:maybe_execute_pid_link(Spawn_Type, ?MODULE, put_pdict, [joe, 5]), 280 | false = (New_Pid =:= Self), 281 | {links, [Self]} = process_info(New_Pid, links), 282 | {monitors, []} = process_info(New_Pid, monitors), 283 | New_Pid ! {Self, get_pdict, joe}, 284 | undefined = get(joe), 285 | ok = receive {get_pdict, New_Pid, 5} -> ok 286 | after 100 -> timeout 287 | end, 288 | [0] = [proplists:get_value(active_procs, Props) 289 | || [{task_type, Type} | _] = Props <- cxy_ctl:concurrency_types(), 290 | Type =:= Overmax_Type] 291 | 292 | after put(joe, Old_Joe) 293 | end, 294 | 295 | ok. 296 | 297 | %% execute_pid_monitor runs a task with a return value of {Pid, Monitor_Ref} or {inline, Result}. 298 | -spec check_execute_pid_monitor(config()) -> ok. 299 | check_execute_pid_monitor(_Config) -> 300 | {Inline_Type, Spawn_Type} = {pdict_inline, pdict_spawn}, 301 | Limits = [{Inline_Type, inline_only, 0, ?MAX_SLOW_FACTOR}, {Spawn_Type, 3, 5, ?MAX_SLOW_FACTOR}], 302 | true = ?TM:init(Limits), 303 | 304 | %% When inline, update our process dictionary... 305 | Old_Joe = erase(joe), 306 | _ = try 307 | {inline, undefined} = ?TM:execute_pid_monitor(Inline_Type, erlang, put, [joe, 5]), 308 | 5 = get(joe), 309 | {inline, 5} = ?TM:execute_pid_monitor(Inline_Type, erlang, put, [joe, 7]), 310 | 7 = get(joe) 311 | after put(joe, Old_Joe) 312 | end, 313 | 314 | %% When spawned, it affects a new process dictionary, not ours. 315 | Self = self(), 316 | Old_Joe = erase(joe), 317 | _ = try 318 | undefined = get(joe), 319 | {New_Pid, _Monitor_Ref} = ?TM:execute_pid_monitor(Spawn_Type, ?MODULE, put_pdict, [joe, 5]), 320 | false = (New_Pid =:= self()), 321 | {links, []} = process_info(New_Pid, links), 322 | New_Pid ! {Self, get_pdict, joe}, 323 | undefined = get(joe), 324 | ok = receive {get_pdict, New_Pid, 5} -> ok 325 | after 100 -> timeout 326 | end 327 | after put(joe, Old_Joe) 328 | end, 329 | 330 | ok. 331 | 332 | %% maybe_execute_pid_monitor runs a task with a return value of {Pid, Monitor_Ref} or {max_pids, Max}. 333 | -spec check_maybe_execute_pid_monitor(config()) -> ok. 334 | check_maybe_execute_pid_monitor(_Config) -> 335 | {Overmax_Type, Spawn_Type} = {pdict_overmax, pdict_spawn}, 336 | Limits = [{Overmax_Type, inline_only, 0, ?MAX_SLOW_FACTOR}, {Spawn_Type, 3, 5, ?MAX_SLOW_FACTOR}], 337 | true = ?TM:init(Limits), 338 | 339 | %% When inline, update our process dictionary... 
340 | Old_Joe = erase(joe), 341 | _ = try 342 | {max_pids, 0} = ?TM:maybe_execute_pid_monitor(Overmax_Type, erlang, put, [joe, 5]), 343 | erlang:yield(), 344 | undefined = get(joe), 345 | {max_pids, 0} = ?TM:maybe_execute_pid_monitor(Overmax_Type, erlang, put, [joe, 7]), 346 | erlang:yield(), 347 | undefined = get(joe), 348 | [0] = [proplists:get_value(active_procs, Props) 349 | || [{task_type, Type} | _] = Props <- cxy_ctl:concurrency_types(), 350 | Type =:= Overmax_Type] 351 | 352 | after put(joe, Old_Joe) 353 | end, 354 | 355 | %% When spawned, it affects a new process dictionary, not ours. 356 | Self = self(), 357 | Old_Joe = erase(joe), 358 | _ = try 359 | undefined = get(joe), 360 | {New_Pid, _Monitor_Ref} = ?TM:maybe_execute_pid_monitor(Spawn_Type, ?MODULE, put_pdict, [joe, 5]), 361 | false = (New_Pid =:= Self), 362 | {links, []} = process_info(New_Pid, links), 363 | New_Pid ! {Self, get_pdict, joe}, 364 | undefined = get(joe), 365 | ok = receive {get_pdict, New_Pid, 5} -> ok 366 | after 100 -> timeout 367 | end, 368 | [0] = [proplists:get_value(active_procs, Props) 369 | || [{task_type, Type} | _] = Props <- cxy_ctl:concurrency_types(), 370 | Type =:= Overmax_Type] 371 | 372 | after put(joe, Old_Joe) 373 | end, 374 | 375 | ok. 376 | 377 | -spec check_multiple_init_calls(config()) -> ok. 378 | check_multiple_init_calls(_Config) -> 379 | Limits1 = [{a, unlimited, 0, ?MAX_SLOW_FACTOR}, {b, 17, 5, ?MAX_SLOW_FACTOR}, 380 | {c, 8, 0, ?MAX_SLOW_FACTOR}, {d, inline_only, 7, ?MAX_SLOW_FACTOR}], 381 | true = ?TM:init(Limits1), 382 | {error, init_already_executed} = ?TM:init(Limits1), 383 | {error, init_already_executed} = ?TM:init([]), 384 | 385 | Cxy_Limits = [L || {_, L, _, _} <- Limits1], 386 | Cxy_Limits = [proplists:get_value(max_procs, P) || P <- ?TM:concurrency_types()], 387 | 388 | Dup1 = {b, 217, 15, ?MAX_SLOW_FACTOR}, 389 | Dup2 = {d, inline_only, 17, ?MAX_SLOW_FACTOR}, 390 | Limits2 = [{f, unlimited, 0, ?MAX_SLOW_FACTOR}, Dup1, {e, 18, 10, ?MAX_SLOW_FACTOR}, Dup2], 391 | {error, {add_duplicate_task_types, [Dup1, Dup2]}} = ?TM:add_task_types(Limits2), 392 | 393 | Error_Dups = [T || {T, _, _, _} <- Limits2 -- [Dup1, Dup2]], 394 | {error, {missing_task_types, Error_Dups}} = ?TM:remove_task_types([T || {T, _, _, _} <- Limits2]), 395 | 2 = ?TM:remove_task_types([element(1,Dup1), element(1,Dup2)]), 396 | Missing_Task_Types = [T || {T, _, _, _} <- Limits2], 397 | {error, {missing_task_types, Missing_Task_Types}} = ?TM:remove_task_types([T || {T, _, _, _} <- Limits2]), 398 | true = ?TM:add_task_types(Limits2), 399 | {error, {add_duplicate_task_types, Limits1}} = ?TM:add_task_types(Limits1), 400 | 401 | [unlimited,217,8,inline_only,18,unlimited] 402 | = [proplists:get_value(max_procs, P) || P <- ?TM:concurrency_types()], 403 | ok. 404 | 405 | -spec put_pdict(atom(), any()) -> {get_pdict, pid(), any()}. 406 | put_pdict(Key, Value) -> 407 | put(Key, Value), 408 | get_pdict(Key). 409 | 410 | get_pdict(Key) -> 411 | receive {From, get_pdict, Key} -> From ! {get_pdict, self(), get(Key)} end. 412 | 413 | get_pdict() -> 414 | Vals = filter_pdict(), 415 | receive {From, get_pdict} -> From ! {get_pdict, self(), Vals} after 300 -> pdict_timeout end. 416 | 417 | filter_pdict() -> lists:sort([{K, V} || {{cxy_ctl, K}, V} <- get()]). 418 | 419 | -spec fetch_ages() -> pdict_timeout | {get_pdict, pid(), proplists:proplist()}. 420 | fetch_ages() -> get_pdict(). 421 | 422 | -spec fetch_ets_ages(atom() | ets:tid()) -> ok. 
423 | fetch_ets_ages(Ets_Table) -> 424 | Vals = [{K, V} || {{cxy_ctl, K}, V} <- get()], 425 | ets:insert(Ets_Table, {results, Vals}), 426 | ok. 427 | 428 | -spec check_copying_dict(config()) -> ok. 429 | check_copying_dict(_Config) -> 430 | {Inline_Type, Spawn_Type} = {pd_inline, pd_spawn}, 431 | Limits = [{Inline_Type, inline_only, 2, ?MAX_SLOW_FACTOR}, {Spawn_Type, 3, 5, ?MAX_SLOW_FACTOR}], 432 | true = ?TM:init(Limits), 433 | 434 | %% Init the current process dictionary... 435 | put({cxy_ctl, ann}, 13), 436 | put({cxy_ctl, joe}, 5), 437 | put({cxy_ctl, sam}, 7), 438 | Stable_Pre_Call_Dict = filter_pdict(), 439 | 440 | _ = try 441 | Self = self(), 442 | Ets_Table = ets:new(execute_task, [public, named_table]), 443 | Joe = ?TM:make_process_dictionary_default_value({cxy_ctl, joe}, 8), 444 | Sue = ?TM:make_process_dictionary_default_value({cxy_ctl, sue}, 4), 445 | 446 | ok = ?TM:execute_task(Spawn_Type, ?MODULE, fetch_ets_ages, [Ets_Table], all_keys), 447 | erlang:yield(), 448 | [{results, Props1}] = ets:tab2list(Ets_Table), 449 | [{ann,13},{joe, 5},{sam, 7}] = lists:sort(Props1), 450 | Stable_Pre_Call_Dict = filter_pdict(), 451 | 452 | ok = ?TM:execute_task(Spawn_Type, ?MODULE, fetch_ets_ages, [Ets_Table], [{cxy_ctl, joe}, {cxy_ctl, sam}]), 453 | erlang:yield(), 454 | [{results, Props2}] = ets:tab2list(Ets_Table), 455 | [{joe, 5},{sam, 7}] = lists:sort(Props2), 456 | Stable_Pre_Call_Dict = filter_pdict(), 457 | 458 | ok = ?TM:execute_task(Spawn_Type, ?MODULE, fetch_ets_ages, [Ets_Table], [Joe, Sue]), 459 | erlang:yield(), 460 | [{results, Props3}] = ets:tab2list(Ets_Table), 461 | [{joe, 5},{sue, 4}] = lists:sort(Props3), 462 | Stable_Pre_Call_Dict = filter_pdict(), 463 | 464 | 465 | Pid1b = ?TM:execute_pid_link(Spawn_Type, ?MODULE, fetch_ages, [], all_keys), 466 | Pid1b ! {Self, get_pdict}, 467 | ok = receive {get_pdict, Pid1b, Props4} -> [{ann,13},{joe, 5},{sam, 7}] = lists:sort(Props4), ok; 468 | Any1b -> {any, Any1b} 469 | after 300 -> test_timeout 470 | end, 471 | Stable_Pre_Call_Dict = filter_pdict(), 472 | 473 | Pid2b = ?TM:execute_pid_link(Spawn_Type, ?MODULE, fetch_ages, [], [{cxy_ctl, joe}, {cxy_ctl, sam}]), 474 | Pid2b ! {Self, get_pdict}, 475 | ok = receive {get_pdict, Pid2b, Props5} -> [{joe, 5},{sam, 7}] = lists:sort(Props5), ok; 476 | Any2b -> {any, Any2b} 477 | after 300 -> test_timeout 478 | end, 479 | Stable_Pre_Call_Dict = filter_pdict(), 480 | 481 | Joe = ?TM:make_process_dictionary_default_value({cxy_ctl, joe}, 8), 482 | Sue = ?TM:make_process_dictionary_default_value({cxy_ctl, sue}, 4), 483 | Pid3b = ?TM:execute_pid_link(Spawn_Type, ?MODULE, fetch_ages, [], [Joe, Sue]), 484 | Pid3b ! {Self, get_pdict}, 485 | ok = receive {get_pdict, Pid3b, Props6} -> [{joe, 5},{sue, 4}] = lists:sort(Props6), ok; 486 | Any3b -> {any, Any3b} 487 | after 300 -> test_timeout 488 | end, 489 | Stable_Pre_Call_Dict = filter_pdict(), 490 | 491 | %% Inlines mess up the local dictionary, so do them last... 
492 | ok = ?TM:execute_task(Inline_Type, ?MODULE, fetch_ets_ages, [Ets_Table], all_keys), 493 | [{results, Props7}] = ets:tab2list(Ets_Table), 494 | [{ann,13},{joe, 5},{sam, 7}] = lists:sort(Props7), 495 | Stable_Pre_Call_Dict = filter_pdict(), 496 | 497 | {inline, ok} = ?TM:execute_pid_link(Inline_Type, ?MODULE, fetch_ets_ages, [Ets_Table], all_keys), 498 | [{results, Props8}] = ets:tab2list(Ets_Table), 499 | [{ann,13},{joe, 5},{sam, 7}] = lists:sort(Props8), 500 | Stable_Pre_Call_Dict = filter_pdict(), 501 | 502 | ok = ?TM:execute_task(Inline_Type, ?MODULE, fetch_ets_ages, [Ets_Table], [{cxy_ctl, joe}, {cxy_ctl, sam}]), 503 | [{results, Props9}] = ets:tab2list(Ets_Table), 504 | [{ann,13},{joe, 5},{sam, 7}] = lists:sort(Props9), 505 | Stable_Pre_Call_Dict = filter_pdict(), 506 | 507 | {inline, ok} = ?TM:execute_pid_link(Inline_Type, ?MODULE, fetch_ets_ages, [Ets_Table], [{cxy_ctl, joe}, {cxy_ctl, sam}]), 508 | [{results, Props10}] = ets:tab2list(Ets_Table), 509 | [{ann,13},{joe, 5},{sam, 7}] = lists:sort(Props10), 510 | Stable_Pre_Call_Dict = filter_pdict(), 511 | 512 | ok = ?TM:execute_task(Inline_Type, ?MODULE, fetch_ets_ages, [Ets_Table], [Joe, Sue]), 513 | [{results, Props11}] = ets:tab2list(Ets_Table), 514 | [{ann,13},{joe, 5},{sam, 7},{sue,4}] = lists:sort(Props11), 515 | [{sue,4}] = filter_pdict() -- Stable_Pre_Call_Dict, 516 | 517 | erase({cxy_ctl, sue}), 518 | 519 | Stable_Pre_Call_Dict = filter_pdict(), 520 | {inline, ok} = ?TM:execute_pid_link(Inline_Type, ?MODULE, fetch_ets_ages, [Ets_Table], [Joe, Sue]), 521 | [{results, Props12}] = ets:tab2list(Ets_Table), 522 | [{ann,13},{joe, 5},{sam, 7},{sue,4}] = lists:sort(Props12), 523 | [{sue,4}] = filter_pdict() -- Stable_Pre_Call_Dict, 524 | 525 | true = ets:delete(Ets_Table) 526 | 527 | after [13, 5, 7] = [erase(K) || K <- [{cxy_ctl, ann}, {cxy_ctl, joe}, {cxy_ctl, sam}]] 528 | end, 529 | 530 | ok. 531 | 532 | check_high_water(_Config) -> 533 | {Inline_Type, Spawn_Type} = {high_water_inline, high_water_spawn}, 534 | Spawn_Max = 2, 535 | Limits = [{Inline_Type, inline_only, 0, ?MAX_SLOW_FACTOR}, {Spawn_Type, 2, 0, ?MAX_SLOW_FACTOR}], 536 | true = ?TM:init(Limits), 537 | 538 | %% Inline task, never run more than one at a time. 539 | ?TM:high_water(Inline_Type, clear), 540 | 0 = ?TM:high_water(Inline_Type), 541 | ok = ?TM:execute_task(Inline_Type, timer, sleep, [200]), erlang:yield(), 542 | ok = ?TM:execute_task(Inline_Type, timer, sleep, [200]), erlang:yield(), 543 | 1 = ?TM:high_water(Inline_Type), 544 | 1 = ?TM:high_water(Inline_Type, no_clear), 545 | 1 = ?TM:high_water(Inline_Type, clear), 546 | 0 = ?TM:high_water(Inline_Type), 547 | 548 | %% Spawn task, run concurrently. 549 | ?TM:high_water(Spawn_Type, clear), 550 | 0 = ?TM:high_water(Spawn_Type), 551 | ok = ?TM:execute_task(Spawn_Type, timer, sleep, [200]), erlang:yield(), 552 | ok = ?TM:execute_task(Spawn_Type, timer, sleep, [200]), erlang:yield(), 553 | 2 = ?TM:high_water(Spawn_Type, clear), 554 | 555 | %% Spawn task, run concurrently but not more than max. 556 | ok = ?TM:execute_task(Spawn_Type, timer, sleep, [200]), erlang:yield(), 557 | ok = ?TM:execute_task(Spawn_Type, timer, sleep, [200]), erlang:yield(), 558 | ok = ?TM:execute_task(Spawn_Type, timer, sleep, [200]), erlang:yield(), 559 | ok = ?TM:execute_task(Spawn_Type, timer, sleep, [200]), erlang:yield(), 560 | ok = ?TM:execute_task(Spawn_Type, timer, sleep, [200]), erlang:yield(), 561 | %% 3 because 2 are spawned and 1 runs inline concurrently 562 | 3 = ?TM:high_water(Spawn_Type), 563 | 564 | ok. 
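%% Editorial sketch (hedged): the calling pattern exercised throughout this suite,
%% shown as a minimal application-side example. The task-type names and the MFA are
%% illustrative only; the limit tuples follow the
%% {Task_Type, Max_Procs, History_Size, Slow_Factor} shape used in the tests above.
%%
%%   setup_and_run() ->
%%       true = cxy_ctl:init([{web_request, 100,         10, 100000},
%%                            {logger,      inline_only,  0, 100000}]),
%%       ok   = cxy_ctl:execute_task(web_request, io, format, ["hello~n"]),
%%       cxy_ctl:high_water(web_request).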
565 | -------------------------------------------------------------------------------- /test/epocxy/cxy_fount_SUITE.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2015-2016, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2015-2016 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Validation of cxy_fount using common test and PropEr. 9 | %%% 10 | %%% @since 0.9.9 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(cxy_fount_SUITE). 14 | -auth('jay@duomark.com'). 15 | -vsn(''). 16 | 17 | %%% Common_test exports 18 | -export([all/0, groups/0, 19 | init_per_suite/1, end_per_suite/1, 20 | init_per_group/1, end_per_group/1, 21 | init_per_testcase/2, end_per_testcase/2 22 | ]). 23 | 24 | %%% Test case exports 25 | -export([ 26 | check_construction/1, check_edge_pid_allocs/1, 27 | check_reservoir_refills/1, check_faulty_behaviour/1, 28 | report_speed/1 29 | ]). 30 | 31 | -include("epocxy_common_test.hrl"). 32 | 33 | 34 | %%%=================================================================== 35 | %%% Test cases 36 | %%%=================================================================== 37 | 38 | -type test_case() :: atom(). 39 | -type test_group() :: atom(). 40 | 41 | -spec all() -> [test_case() | {group, test_group()}]. 42 | all() -> [ 43 | %% Uncomment to test failing user-supplied module. 44 | %% {group, check_behaviour} % Ensure behaviour crashes properly. 45 | 46 | {group, check_create} % Verify construction and reservoir refill. 47 | ]. 48 | 49 | -spec groups() -> [{test_group(), [sequence], [test_case() | {group, test_group()}]}]. 50 | groups() -> [ 51 | %% Uncomment to test failing user-supplied module. 52 | %% {check_behaviour, [sequence], [check_faulty_behaviour]}, 53 | 54 | {check_create, [sequence], [check_construction, check_edge_pid_allocs, 55 | check_reservoir_refills, report_speed]} 56 | ]. 57 | 58 | 59 | -type config() :: proplists:proplist(). 60 | -spec init_per_suite (config()) -> config(). 61 | -spec end_per_suite (config()) -> config(). 62 | 63 | init_per_suite (Config) -> Config. 64 | end_per_suite (Config) -> Config. 65 | 66 | -spec init_per_group (config()) -> config(). 67 | -spec end_per_group (config()) -> config(). 68 | 69 | init_per_group (Config) -> Config. 70 | end_per_group (Config) -> Config. 71 | 72 | -spec init_per_testcase (atom(), config()) -> config(). 73 | -spec end_per_testcase (atom(), config()) -> config(). 74 | 75 | init_per_testcase (_Test_Case, Config) -> Config. 76 | end_per_testcase (_Test_Case, Config) -> Config. 77 | 78 | %% Test Modules is ?TM 79 | -define(TM, cxy_fount). 80 | 81 | %%%=================================================================== 82 | %%% check_construction/1 83 | %%%=================================================================== 84 | -spec check_construction(config()) -> ok. 
85 | check_construction(_Config) -> 86 | Test = "Check that founts can be constructed and refill fully", 87 | ct:comment(Test), ct:log(Test), 88 | 89 | Full_Fn = ?FORALL({Slab_Size, Depth}, {range(1,37), range(3,17)}, 90 | verify_full_fount(Slab_Size, Depth)), 91 | true = proper:quickcheck(Full_Fn, ?PQ_NUM(5)), 92 | 93 | Test_Complete = "Fount construction and full refill verified", 94 | ct:comment(Test_Complete), ct:log(Test_Complete), 95 | ok. 96 | 97 | fount_dims(Slab_Size, Depth) -> 98 | "(" ++ integer_to_list(Slab_Size) ++ " pids x " ++ integer_to_list(Depth) ++ " slabs)". 99 | 100 | verify_full_fount(Slab_Size, Depth) -> 101 | Fount_Dims = fount_dims(Slab_Size, Depth), 102 | Case1 = "Verify construction of a " ++ Fount_Dims ++ " fount results in a full reservoir", 103 | ct:comment(Case1), ct:log(Case1), 104 | {Sup, Fount} = start_fount(cxy_fount_hello_behaviour, Slab_Size, Depth), 105 | 106 | Case2 = "Verify that an empty " ++ Fount_Dims ++ " fount refills itself", 107 | ct:comment(Case2), ct:log(Case2), 108 | Full_Fount_Size = Depth * Slab_Size, 109 | Pids1 = ?TM:get_pids(Fount, Full_Fount_Size), 110 | 'FULL' = verify_reservoir_is_full(Fount, Depth), 111 | Pids2 = ?TM:get_pids(Fount, Full_Fount_Size), 112 | 'FULL' = verify_reservoir_is_full(Fount, Depth), 113 | 114 | Case3 = "Verify that fetches get different pids", 115 | ct:comment(Case3), ct:log(Case3), 116 | true = sets:is_disjoint(sets:from_list(Pids1), sets:from_list(Pids2)), 117 | 118 | unlink(Sup), 119 | exit(Sup, kill), 120 | Complete = "Fount " ++ Fount_Dims ++ " construction verified", 121 | ct:comment(Complete), ct:log(Complete), 122 | true. 123 | 124 | start_fount(Behaviour, Slab_Size, Depth) -> 125 | %% Time slice cannot be smaller than 1/500th second 126 | Options = [{slab_size, Slab_Size}, {num_slabs, Depth}, {time_slice, 500}], 127 | {ok, Sup} = cxy_fount_sup:start_link(Behaviour, [none], Options), 128 | Fount = cxy_fount_sup:get_fount(Sup), 129 | 'FULL' = verify_reservoir_is_full(Fount, Depth), 130 | {Sup, Fount}. 131 | 132 | verify_reservoir_is_full(Fount, Num_Of_Spawn_Slabs) -> 133 | Time_To_Sleep = (Num_Of_Spawn_Slabs + 1) * 6, % 1/167th second wait per slab 134 | timer:sleep(Time_To_Sleep), 135 | Status = ?TM:get_status(Fount), 136 | Final_State = proplists:get_value(current_state, Status), 137 | finish_full(Time_To_Sleep, Final_State, Status). 138 | 139 | finish_full(Time_Slept, Final_State, Status) -> 140 | ct:log("Slept ~p milliseconds before reservoir was ~p", [Time_Slept, Final_State]), 141 | [FC, Num_Slabs, Max_Slabs, Slab_Size, Pid_Count, Max_Pids] 142 | = [proplists:get_value(P, Status) 143 | || P <- [fount_count, slab_count, max_slabs, slab_size, pid_count, max_pids]], 144 | Ok_Full_Count = (Max_Pids - Pid_Count) < Slab_Size, 145 | Ok_Slab_Count = (Max_Slabs - 1) =:= Num_Slabs andalso FC > 0, 146 | 147 | %% Provide extra data on failure 148 | {true, true, FC, Num_Slabs, Slab_Size, Max_Pids, Final_State, Time_Slept} 149 | = {Ok_Full_Count, Ok_Slab_Count, FC, Num_Slabs, Slab_Size, 150 | Max_Pids, Final_State, Time_Slept}, 151 | Final_State. 152 | 153 | 154 | %%%=================================================================== 155 | %%% check_edge_pid_allocs/1 looks for simple edge case number of pids 156 | %%%=================================================================== 157 | -spec check_edge_pid_allocs(config()) -> ok. 
158 | check_edge_pid_allocs(_Config) -> 159 | Test = "Check that founts can dole out various sized lists of pids", 160 | ct:comment(Test), ct:log(Test), 161 | 162 | %% Depth at least 3 to avoid 'FULL' with depth 3 or less and partial fount. 163 | Edge_Fn = ?FORALL({Slab_Size, Depth}, {range(1,37), range(4,17)}, 164 | verify_edges(Slab_Size, Depth)), 165 | true = proper:quickcheck(Edge_Fn, ?PQ_NUM(5)), 166 | 167 | Test_Complete = "Reservoir refill edge conditions verified", 168 | ct:comment(Test_Complete), ct:log(Test_Complete), 169 | ok. 170 | 171 | verify_edge(Fount, Alloc_Size, Depth) -> 172 | Pids = ?TM:get_pids(Fount, Alloc_Size), 173 | 'FULL' = verify_reservoir_is_full(Fount, Depth), 174 | Pids. 175 | 176 | verify_edges(Slab_Size, Depth) -> 177 | Fount_Dims = fount_dims(Slab_Size, Depth), 178 | Case1 = "Verify reservoir " ++ Fount_Dims ++ " fount refill edge conditions", 179 | ct:comment(Case1), ct:log(Case1), 180 | 181 | {Sup, Fount} = start_fount(cxy_fount_hello_behaviour, Slab_Size, Depth), 182 | 183 | Multiples = [{N, N * Slab_Size} || N <- [1,2,3]], 184 | Case2_Msg = "Verify ~s fount allocation in slab multiples ~p", 185 | Case2 = lists:flatten(io_lib:format(Case2_Msg, [Fount_Dims, Multiples])), 186 | ct:comment(Case2), ct:log(Case2), 187 | [Pids1, Pids2, Pids3] 188 | = [verify_edge(Fount, Alloc_Size, Alloc_Depth) 189 | || {Alloc_Depth, Alloc_Size} <- Multiples], 190 | 191 | %% Deviate from slab modulo arithmetic... 192 | Pids4 = [?TM:get_pid(Fount), ?TM:get_pid(Fount)], 193 | 194 | [Pids5, Pids6, Pids7] 195 | = [verify_edge(Fount, Alloc_Size, Alloc_Depth) 196 | || {Alloc_Depth, Alloc_Size} <- Multiples], 197 | 198 | %% Make sure all pids are unique 199 | All_Pids = lists:append([Pids1, Pids2, Pids3, Pids4, Pids5, Pids6, Pids7]), 200 | Num_Pids = length(All_Pids), 201 | Num_Pids = sets:size(sets:from_list(All_Pids)), 202 | 203 | Get8_Count = max(1, Slab_Size div 3), 204 | Get9_Count = max(1, Slab_Size div 2), 205 | Case3 = "Verify " ++ Fount_Dims ++ " allocation in < slab counts (" 206 | ++ integer_to_list(Get8_Count) ++ "," ++ integer_to_list(Get9_Count) ++ ")", 207 | ct:comment(Case3), ct:log(Case3), 208 | Pids8 = ?TM:get_pids(Fount, Get8_Count), 209 | Pids9 = ?TM:get_pids(Fount, Get9_Count), 210 | {Get8_Count, Get9_Count} = {length(Pids8), length(Pids9)}, 211 | true = sets:is_disjoint(sets:from_list(Pids8), sets:from_list(Pids9)), 212 | 'FULL' = verify_reservoir_is_full(Fount, 1), 213 | 214 | Max_Pids = Slab_Size * Depth, 215 | Get10_Count = min(Max_Pids, round(Slab_Size * 2.4)), 216 | Get11_Count = min(Max_Pids, round(Slab_Size * 1.7)), 217 | Case4 = "Verify " ++ Fount_Dims ++ " allocation in > slab counts (" 218 | ++ integer_to_list(Get10_Count) ++ "," ++ integer_to_list(Get11_Count) ++ ")", 219 | ct:comment(Case4), ct:log(Case4), 220 | 'FULL' = verify_reservoir_is_full(Fount, 1), 221 | Pids10 = ?TM:get_pids(Fount, Get10_Count), 222 | 'FULL' = verify_reservoir_is_full(Fount, 3), 223 | Pids11 = ?TM:get_pids(Fount, Get11_Count), 224 | {{Get10_Count, Get10_Count}, {Get11_Count, Get11_Count}} 225 | = {{Get10_Count, length(Pids10)}, {Get11_Count, length(Pids11)}}, 226 | true = sets:is_disjoint(sets:from_list(Pids10), sets:from_list(Pids11)), 227 | 'FULL' = verify_reservoir_is_full(Fount, 2), 228 | 229 | unlink(Sup), 230 | exit(Sup, kill), 231 | Test_Complete = "Fount " ++ Fount_Dims ++ " pid allocation verified", 232 | ct:comment(Test_Complete), ct:log(Test_Complete), 233 | true. 
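%% Illustrative sketch (editorial addition, not part of the original suite): the
%% caller-side pattern the edge-case tests above exercise. get_pid/1 hands out a
%% single pre-spawned worker and get_pids/2 a batch; the workers here are the
%% hello-behaviour workers created by start_fount/3. The name
%% example_hello_round_trip/1 is hypothetical and the function is not exported.
example_hello_round_trip(Fount) ->
    Worker  = ?TM:get_pid (Fount),       % one worker straight from the reservoir
    Workers = ?TM:get_pids(Fount, 3),    % a small batch; the fount refills by slab
    [true = cxy_fount_hello_behaviour:say_to(Pid, hello)
     || Pid <- [Worker | Workers]].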
234 | 235 | 236 | %%%=================================================================== 237 | %%% check_reservoir_refills/1 238 | %%%=================================================================== 239 | -spec check_reservoir_refills(config()) -> ok. 240 | check_reservoir_refills(_Config) -> 241 | Test = "Check that repeated fount requests are quickly replaced", 242 | ct:comment(Test), ct:log(Test), 243 | 244 | Test_Allocators = 245 | ?FORALL({Slab_Size, Depth, Num_Pids}, 246 | {range(1,20), range(3,10), non_empty(list(range(1, 30)))}, 247 | verify_slab_refills(Slab_Size, Depth, Num_Pids)), 248 | true = proper:quickcheck(Test_Allocators, ?PQ_NUM(100)), 249 | 250 | Test_Complete = "Verified repeated fount requests are quickly replaced", 251 | ct:comment(Test_Complete), ct:log(Test_Complete), 252 | ok. 253 | 254 | verify_slab_refills(Slab_Size, Depth, Num_Pids) -> 255 | ct:log("PropEr testing slab_size ~p, depth ~p with ~w get_pid_fetches", 256 | [Slab_Size, Depth, Num_Pids]), 257 | {Sup, Fount} = start_fount(cxy_fount_hello_behaviour, Slab_Size, Depth), 258 | 259 | ct:log("Testing get"), 260 | [verify_pids(Fount, N, Slab_Size, Depth, get) || N <- Num_Pids], 261 | ct:log("Testing task"), 262 | [verify_pids(Fount, N, Slab_Size, Depth, task) || N <- Num_Pids], 263 | 264 | unlink(Sup), 265 | exit(Sup, kill), 266 | true. 267 | 268 | verify_pids(Fount, Num_Pids, Slab_Size, Depth, Task_Or_Get) -> 269 | erlang:yield(), 270 | Pids = case Task_Or_Get of 271 | get -> ?TM:get_pids (Fount, Num_Pids); 272 | task -> ?TM:task_pids (Fount, lists:duplicate(Num_Pids, hello)) 273 | end, 274 | case Num_Pids of 275 | Num_Pids when Num_Pids > Slab_Size * Depth -> 276 | [] = Pids, 277 | 'FULL' = verify_reservoir_is_full(Fount, 1); 278 | Num_Pids when Num_Pids > Slab_Size * (Depth - 1) -> 279 | case length(Pids) of 280 | 0 -> 'FULL' = verify_reservoir_is_full(Fount, 1); 281 | Num_Pids -> 282 | Num_Pids = length(Pids), 283 | _ = unlink_workers(Pids, Task_Or_Get), 284 | 'FULL' = verify_reservoir_is_full(Fount, Num_Pids div Slab_Size) 285 | end; 286 | Num_Pids -> 287 | _ = unlink_workers(Pids, Task_Or_Get), 288 | 'FULL' = verify_reservoir_is_full(Fount, Num_Pids div Slab_Size) 289 | end. 290 | 291 | unlink_workers(Pids, Task_Or_Get) -> 292 | [begin 293 | case process_info(Pid, links) of 294 | undefined -> skip; 295 | {links, Links} -> 296 | false = lists:member(Pid, Links), 297 | Task_Or_Get =:= task 298 | orelse cxy_fount_hello_behaviour:say_to(Pid, hello) 299 | %% Killing crashes the test, but normal end above doesn't??? 300 | %% exit(Pid, kill) 301 | end 302 | end || Pid <- Pids]. 303 | 304 | %%%=================================================================== 305 | %%% check_faulty_behaviour/1 306 | %%%=================================================================== 307 | -spec check_faulty_behaviour(config()) -> ok. 
308 | check_faulty_behaviour(_Config) -> 309 | Test = "Check that non-pid returns crash the fount", 310 | ct:comment(Test), ct:log(Test), 311 | 312 | Case1 = "Verify a bad behaviour crashes the fount", 313 | ct:comment(Case1), ct:log(Case1), 314 | Old_Trap = process_flag(trap_exit, true), 315 | try 316 | Slab_Size = 10, Num_Slabs = 3, 317 | Behaviour = cxy_fount_fail_behaviour, 318 | Fount_Options = [{slab_size, Slab_Size}, {num_slabs, Num_Slabs}, {time_slice, 500}], 319 | {ok, Sup1} = cxy_fount_sup:start_link(Behaviour, [none], Fount_Options), 320 | Fount1 = cxy_fount_sup:get_fount(Sup1), 321 | crashed = bad_pid(Fount1), 322 | 323 | {ok, Sup2} = cxy_fount_sup:start_link(Behaviour, [none], [{time_slice, 500}]), 324 | Fount2 = cxy_fount_sup:get_fount(Sup2), 325 | crashed = bad_pid(Fount2) 326 | 327 | after true = process_flag(trap_exit, Old_Trap) 328 | end, 329 | 330 | Test_Complete = "Fount failure verified", 331 | ct:comment(Test_Complete), ct:log(Test_Complete), 332 | ok. 333 | 334 | bad_pid(Fount) -> 335 | receive {'EXIT', Fount, 336 | {{case_clause, bad_pid}, 337 | [{cxy_fount, allocate_slab, 5, 338 | [{file, "src/cxy_fount.erl"}, {line,_}]}]}} -> crashed 339 | after 1000 -> timeout 340 | end. 341 | 342 | 343 | %%%=================================================================== 344 | %%% report_speed/1 345 | %%%=================================================================== 346 | -spec report_speed(config()) -> ok. 347 | report_speed(_Config) -> 348 | Test = "Report the spawning speed", 349 | ct:comment(Test), ct:log(Test), 350 | 351 | lists:foreach( 352 | fun({Slab_Size, Num_Slabs}) -> 353 | {Sup, Fount} = start_fount(cxy_fount_hello_behaviour, Slab_Size, Num_Slabs), 354 | 'FULL' = verify_reservoir_is_full(Fount, Num_Slabs), % Give it a chance to fill up 355 | ct:log("Spawn rate per process with ~p pids for ~p slabs: ~p microseconds", 356 | [Slab_Size, Num_Slabs, cxy_fount:get_spawn_rate_per_process(Fount)]), 357 | ct:log("Spawn rate per slab with ~p pids for ~p slabs: ~p microseconds", 358 | [Slab_Size, Num_Slabs, cxy_fount:get_spawn_rate_per_slab(Fount)]), 359 | ct:log("Replacement rate per process with ~p pids for ~p slabs: ~p microseconds", 360 | [Slab_Size, Num_Slabs, cxy_fount:get_total_rate_per_process(Fount)]), 361 | ct:log("Replacement rate per slab with ~p pids for ~p slabs: ~p microseconds", 362 | [Slab_Size, Num_Slabs, cxy_fount:get_total_rate_per_slab(Fount)]), 363 | unlink(Sup), 364 | exit(Sup, kill) 365 | end, [{5,100}, {20, 50}, {40, 50}, {60, 50}, {80, 50}, {100, 50}, 366 | {150, 50}, {200, 50}, {250, 50}, {300, 50}, {500, 50}]), 367 | 368 | Test_Complete = "Fount reporting speed reported", 369 | ct:comment(Test_Complete), ct:log(Test_Complete), 370 | ok. 371 | -------------------------------------------------------------------------------- /test/epocxy/cxy_fount_fail_behaviour.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2015-2016, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2015-2016 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Example behaviour for cxy_fount failure testing. 9 | %%% 10 | %%% @since 0.9.9 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(cxy_fount_fail_behaviour). 
14 | -behaviour(cxy_fount). 15 | 16 | %% Behaviour API 17 | -export([init/1, start_pid/2, send_msg/2]). 18 | 19 | init (_) -> {}. 20 | start_pid (_Fount, {}) -> bad_pid. 21 | send_msg (Worker, _Msg) -> Worker. 22 | -------------------------------------------------------------------------------- /test/epocxy/cxy_fount_hello_behaviour.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2015-2016, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2015-2016 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Example behaviour for cxy_fount testing. 9 | %%% 10 | %%% @since 0.9.9 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(cxy_fount_hello_behaviour). 14 | -behaviour(cxy_fount). 15 | 16 | %% Behaviour API 17 | -export([init/1, start_pid/2, send_msg/2]). 18 | 19 | %% For testing only 20 | -export([say_to/2]). 21 | 22 | -type fount () :: cxy_fount:fount_ref(). 23 | -type stamp () :: erlang:timestamp(). 24 | -type pid_msg () :: any(). 25 | 26 | -spec init (any()) -> stamp(). 27 | -spec start_pid (fount(), stamp()) -> pid() | {error, Reason::any()}. 28 | -spec send_msg (Worker, pid_msg()) -> Worker | {error, Reason::any()} 29 | when Worker :: pid(). 30 | 31 | init(_) -> 32 | os:timestamp(). 33 | 34 | start_pid(Fount, Started) -> 35 | cxy_fount:spawn_worker(Fount, fun wait_for_hello/1, [Started]). 36 | 37 | send_msg(Worker, Msg) -> 38 | spawn_link(fun() -> say_to(Worker, Msg) end), 39 | Worker. 40 | 41 | %% Idle workers may wait a while before being used in a test. 42 | wait_for_hello(Started) -> 43 | receive 44 | {Ref, From, hello} -> reply(Started, From, Ref, goodbye); 45 | {Ref, From, Unexpected} -> reply(Started, From, Ref, {unexpected, Unexpected}) 46 | after 30000 -> wait_for_hello_timeout 47 | end. 48 | 49 | reply(Started, From, Ref, Msg) -> 50 | Elapsed = timer:now_diff(os:timestamp(), Started), 51 | From ! {Ref, Msg, now(), {elapsed, Elapsed}}. 52 | 53 | %% Just verify the goodbye response comes after saying hello. 54 | say_to(Worker, Msg) -> 55 | Ref = make_ref(), 56 | Now1 = now(), 57 | Worker ! {Ref, self(), Msg}, 58 | %% now() is used to guarantee monotonic increasing time 59 | receive {Ref, goodbye, Now2, {elapsed, _Elapsed}} -> true = Now1 < Now2 60 | after 1000 -> throw(say_hello_timeout) 61 | end. 62 | -------------------------------------------------------------------------------- /test/epocxy/cxy_regulator_SUITE.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2015-2016, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2015-2016 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Validation of cxy_regulator using common test and PropEr. 9 | %%% 10 | %%% @since 0.9.9 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(cxy_regulator_SUITE). 14 | -author('Jay Nelson '). 15 | -vsn(''). 
16 | 17 | %%% Common_test exports 18 | -export([all/0, groups/0, 19 | init_per_suite/1, end_per_suite/1, 20 | init_per_group/1, end_per_group/1, 21 | init_per_testcase/2, end_per_testcase/2 22 | ]). 23 | 24 | %%% Test case exports 25 | -export([ 26 | check_construction/1, check_pause_resume/1, 27 | check_add_slabs/1 28 | ]). 29 | 30 | -include("epocxy_common_test.hrl"). 31 | 32 | 33 | %%%=================================================================== 34 | %%% Test cases 35 | %%%=================================================================== 36 | 37 | -type test_case() :: atom(). 38 | -type test_group() :: atom(). 39 | 40 | -spec all() -> [test_case() | {group, test_group()}]. 41 | all() -> [ 42 | %% Uncomment to test failing user-supplied module. 43 | %% {group, check_behaviour} % Ensure behaviour crashes properly. 44 | 45 | {group, check_create}, % Verify construction and pause/resume 46 | {group, check_slabs} 47 | ]. 48 | 49 | -spec groups() -> [{test_group(), [sequence], [test_case() | {group, test_group()}]}]. 50 | groups() -> [ 51 | {check_create, [sequence], [check_construction, check_pause_resume]}, 52 | {check_slabs, [sequence], [check_add_slabs]} 53 | ]. 54 | 55 | 56 | -type config() :: proplists:proplist(). 57 | -spec init_per_suite (config()) -> config(). 58 | -spec end_per_suite (config()) -> config(). 59 | 60 | init_per_suite (Config) -> Config. 61 | end_per_suite (Config) -> Config. 62 | 63 | -spec init_per_group (config()) -> config(). 64 | -spec end_per_group (config()) -> config(). 65 | 66 | init_per_group (Config) -> Config. 67 | end_per_group (Config) -> Config. 68 | 69 | -spec init_per_testcase (atom(), config()) -> config(). 70 | -spec end_per_testcase (atom(), config()) -> config(). 71 | 72 | init_per_testcase (_Test_Case, Config) -> [{time_slice, 100} | Config]. 73 | end_per_testcase (_Test_Case, Config) -> Config. 74 | 75 | %% Test Module is ?TM 76 | -define(TM, cxy_regulator). 77 | 78 | %%%=================================================================== 79 | %%% check_construction/1 80 | %%%=================================================================== 81 | -spec check_construction(config()) -> ok. 82 | check_construction(Config) -> 83 | Test = "Check that a regulator can be constructed", 84 | ct:comment(Test), ct:log(Test), 85 | 86 | Pid1 = start_regulator(Config), 87 | 88 | Config2 = [{time_slice, 30} | Config], 89 | Pid2 = start_regulator(Config2), 90 | 91 | Config3 = [{time_slice, 300} | Config], 92 | Pid3 = start_regulator(Config3), 93 | 94 | _ = [begin unlink(Pid), exit(Pid, kill) end || Pid <- [Pid1, Pid2, Pid3]], 95 | 96 | Test_Complete = "Regulator construction verified", 97 | ct:comment(Test_Complete), ct:log(Test_Complete), 98 | ok. 99 | 100 | start_regulator(Config) -> 101 | Tuple_Slots = proplists:get_value(time_slice, Config), 102 | Init_Tuple = list_to_tuple(lists:duplicate(Tuple_Slots, 0)), 103 | ct:log("Time slice: ~p~nTuple: ~p", [Tuple_Slots, Init_Tuple]), 104 | 105 | {ok, Pid} = ?TM:start_link(Config), 106 | Full_Status = get_full_status(Pid), 107 | 'NORMAL' = get_status_internal (Full_Status), 108 | normal = get_thruput_internal (Full_Status), 109 | {epoch_slab_counts, 0, Init_Tuple} 110 | = get_slab_counts_internal (Full_Status), 111 | 0 = get_pending_requests_internal (Full_Status), 112 | Pid. 113 | 114 | get_full_status (Pid) -> ?TM:status(Pid). 115 | 116 | get_status (Pid) -> get_status_internal (get_full_status(Pid)). 117 | get_thruput (Pid) -> get_thruput_internal (get_full_status(Pid)). 
118 | get_init_time (Pid) -> get_init_time_internal (get_full_status(Pid)). 119 | get_slab_counts (Pid) -> get_slab_counts_internal (get_full_status(Pid)). 120 | get_pending_requests (Pid) -> get_pending_requests_internal (get_full_status(Pid)). 121 | 122 | get_status_internal (Props) -> proplists:get_value(current_state, Props). 123 | get_thruput_internal (Props) -> proplists:get_value(thruput, Props). 124 | get_init_time_internal (Props) -> proplists:get_value(init_time, Props). 125 | get_slab_counts_internal (Props) -> proplists:get_value(slab_counts, Props). 126 | get_pending_requests_internal (Props) -> proplists:get_value(pending_requests, Props). 127 | 128 | 129 | %%%=================================================================== 130 | %%% check_pause_resume/1 131 | %%%=================================================================== 132 | -spec check_pause_resume(config()) -> ok. 133 | check_pause_resume(Config) -> 134 | Test = "Check that regulators can be paused and resumed", 135 | ct:comment(Test), ct:log(Test), 136 | 137 | Pid = start_regulator(Config), 138 | 139 | paused = ?TM:pause(Pid), 140 | 'PAUSED' = get_status(Pid), 141 | {ignored, pause} = ?TM:pause(Pid), 142 | 'PAUSED' = get_status(Pid), 143 | 144 | {resumed, normal} = ?TM:resume(Pid), 145 | 'NORMAL' = get_status(Pid), 146 | {ignored, resume} = ?TM:resume(Pid), 147 | 'NORMAL' = get_status(Pid), 148 | 149 | Test_Complete = "Reservoir refill edge conditions verified", 150 | ct:comment(Test_Complete), ct:log(Test_Complete), 151 | ok. 152 | 153 | 154 | %%%=================================================================== 155 | %%% check_add_slabs/1 156 | %%%=================================================================== 157 | -spec check_add_slabs(config()) -> ok. 158 | check_add_slabs(Config) -> 159 | Test = "Check that regulators can add slabs", 160 | ct:comment(Test), ct:log(Test), 161 | 162 | Regulator = start_regulator(Config), 163 | Num_Pids_Per_Slab = 10, 164 | Num_Slabs = 7, 165 | {Status1, Msgs} = receive_add_slab(Config, Regulator, Num_Pids_Per_Slab, Num_Slabs), 166 | Exp_Results = lists:duplicate(Num_Slabs, true), 167 | Exp_Results = lists:foldl( 168 | fun({Pid_Num, Msg}, Results) -> 169 | Result = validate_msg(Config, Num_Pids_Per_Slab, Pid_Num, Msg), 170 | [Result | Results] 171 | end, 172 | [], lists:zip(lists:seq(1, Num_Slabs), lists:sort(Msgs))), 173 | 174 | 6 = get_pending_requests_internal (Status1), 175 | 'OVERMAX' = get_status_internal (Status1), 176 | overmax = get_thruput_internal (Status1), 177 | 178 | Status2 = cxy_regulator:status(Regulator), 179 | 180 | 0 = get_pending_requests_internal (Status2), 181 | 'NORMAL' = get_status_internal (Status2), 182 | normal = get_thruput_internal (Status2), 183 | 184 | Test_Complete = "Slab delivered via gen_fsm:send_event to Fount", 185 | ct:comment(Test_Complete), ct:log(Test_Complete), 186 | ok. 187 | 188 | validate_msg(Config, Num_Pids, Pid_Num, {'$gen_cast', {slab, Pids, _Time_Stamp, Elapsed}}) -> 189 | Time_Slice = proplists:get_value(time_slice, Config), 190 | Time_Slice_Millis = timer:seconds(1) div Time_Slice, 191 | Num_Pids = length([Pid || Pid <- Pids, is_pid(Pid)]), 192 | case Pid_Num of 193 | 194 | %% First spawned slab is immediate... 195 | 1 -> (Elapsed div timer:seconds(1)) < 100; 196 | 197 | %% Subsequent slabs are delayed by the time slice regulator. 
198 | N -> Avg_Elapsed = (Elapsed div ((N-1) * timer:seconds(1))), 199 | Min_Allowed = 0.8 * Time_Slice_Millis, 200 | Max_Allowed = 1.5 * Time_Slice_Millis, 201 | Avg_Elapsed > Min_Allowed 202 | andalso Avg_Elapsed < Max_Allowed 203 | end. 204 | 205 | receive_add_slab(Config, Regulator, Slab_Size, Num_Slabs) -> 206 | Self = self(), 207 | Fake_Fount = spawn_link(fun() -> fake_fount(Self, []) end), 208 | Cmd = allocate_slab_cmd(Fake_Fount, Slab_Size), 209 | _ = [gen_statem:cast(Regulator, Cmd) || _N <- lists:seq(1, Num_Slabs)], 210 | wait_for_add_slab(Config, Fake_Fount, Regulator). 211 | 212 | allocate_slab_cmd(Fount, Slab_Size) -> 213 | {allocate_slab, {Fount, cxy_fount_hello_behaviour, {}, os:timestamp(), Slab_Size}}. 214 | 215 | wait_for_add_slab(Config, Fake_Fount, Regulator) -> 216 | Time_Slice = proplists:get_value(time_slice, Config), 217 | Time_Slice_Millis = timer:seconds(1) div Time_Slice, 218 | _ = receive after Time_Slice_Millis -> continue end, 219 | Status = cxy_regulator:status(Regulator), 220 | _ = receive after timer:seconds(1) -> Fake_Fount ! stop end, 221 | receive {fount_msgs, Fake_Fount, Msgs} -> {Status, Msgs} 222 | after Time_Slice_Millis -> timeout 223 | end. 224 | 225 | fake_fount(Receiver, Msgs) -> 226 | receive 227 | stop -> deliver_msgs (Receiver, Msgs); 228 | Msg -> fake_fount (Receiver, [Msg | Msgs]) 229 | end. 230 | 231 | deliver_msgs(Receiver, Msgs) -> 232 | Receiver ! {fount_msgs, self(), Msgs}, 233 | delivered. 234 | -------------------------------------------------------------------------------- /test/epocxy/epocxy_common_test.hrl: -------------------------------------------------------------------------------- 1 | -include_lib("common_test/include/ct.hrl"). 2 | -include_lib("proper/include/proper.hrl"). 3 | -define(PQ_OPTS, [long_result, {numtests, 100}]). 4 | -define(PQ_NUM(__Times), [long_result, {numtests, __Times}]). 5 | -define(PQ(__Fn), proper:quickcheck(__Fn, ?PQ_OPTS)). 6 | -------------------------------------------------------------------------------- /test/epocxy/fox_obj.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2013-2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2013-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Stub object for testing cxy_cache. 9 | %%% 10 | %%% @since 0.9.6 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(fox_obj). 14 | -auth('jay@duomark.com'). 15 | -vsn(''). 16 | 17 | -behaviour(cxy_cache). 18 | 19 | -export([create_key_value/1]). 20 | 21 | -spec create_key_value(cxy_cache:cached_key()) -> {cxy_cache:cached_value_vsn(), cxy_cache:cached_value()}. 22 | create_key_value(Key) -> {erlang:now(), new_fox(Key)}. 23 | 24 | new_fox(Name) -> {fox, Name}. 25 | 26 | -------------------------------------------------------------------------------- /test/epocxy/frog_obj.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2013-2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2013-2015 Development sponsored by TigerText, Inc. 
[http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Stub object for testing cxy_cache. 9 | %%% 10 | %%% @since 0.9.6 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(frog_obj). 14 | -auth('jay@duomark.com'). 15 | -vsn(''). 16 | 17 | -behaviour(cxy_cache). 18 | 19 | -export([create_key_value/1]). 20 | 21 | -spec create_key_value(cxy_cache:cached_key()) -> {cxy_cache:cached_value_vsn(), cxy_cache:cached_value()}. 22 | create_key_value(Key) -> {erlang:now(), new_frog(Key)}. 23 | 24 | new_frog(Name) -> {frog, Name}. 25 | 26 | -------------------------------------------------------------------------------- /test/epocxy/rabbit_obj.erl: -------------------------------------------------------------------------------- 1 | %%%------------------------------------------------------------------------------ 2 | %%% @copyright (c) 2013-2015, DuoMark International, Inc. 3 | %%% @author Jay Nelson 4 | %%% @reference 2013-2015 Development sponsored by TigerText, Inc. [http://tigertext.com/] 5 | %%% @reference The license is based on the template for Modified BSD from 6 | %%% OSI 7 | %%% @doc 8 | %%% Stub object for testing cxy_cache. 9 | %%% 10 | %%% @since 0.9.6 11 | %%% @end 12 | %%%------------------------------------------------------------------------------ 13 | -module(rabbit_obj). 14 | -auth('jay@duomark.com'). 15 | -vsn(''). 16 | 17 | -behaviour(cxy_cache). 18 | 19 | -export([create_key_value/1]). 20 | 21 | -spec create_key_value(cxy_cache:cached_key()) -> {cxy_cache:cached_value_vsn(), cxy_cache:cached_value()}. 22 | create_key_value(Key) -> {erlang:now(), new_rabbit(Key)}. 23 | 24 | new_rabbit(Name) -> {rabbit, Name}. 25 | --------------------------------------------------------------------------------
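All three cxy_cache stubs above (fox_obj, frog_obj and rabbit_obj) stamp new values with erlang:now/0, which is deprecated on OTP 18 and later. A behaviourally equivalent stub, sketched here with a hypothetical module name and assuming cxy_cache only compares version stamps with ordinary term ordering, can use erlang:unique_integer([monotonic]) (strictly increasing, no deprecation warning) or erlang:timestamp() instead:

    -module(owl_obj).
    -behaviour(cxy_cache).

    -export([create_key_value/1]).

    -spec create_key_value(cxy_cache:cached_key()) -> {cxy_cache:cached_value_vsn(), cxy_cache:cached_value()}.
    %% unique_integer([monotonic]) gives a strictly increasing version stamp
    %% without calling the deprecated erlang:now/0.
    create_key_value(Key) -> {erlang:unique_integer([monotonic]), new_owl(Key)}.

    new_owl(Name) -> {owl, Name}.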