├── LICENSE.txt
├── README.md
├── archive
│   ├── README.md
│   └── metric_incentive.py
├── examples.py
├── images
│   ├── convergence_divergence.png
│   ├── divergence_example.png
│   ├── momentum_beta.png
│   ├── nesterov_examples.png
│   └── polyak_examples.png
├── pyed
│   ├── __init__.py
│   ├── analysis.py
│   ├── bomze.py
│   ├── bomze.txt
│   ├── dynamics.py
│   ├── geometries.py
│   ├── incentives.py
│   └── information.py
├── requirements.txt
├── setup.py
└── tests
    ├── __init__.py
    ├── test_dynamics.py
    └── test_math.py
/LICENSE.txt:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 |
3 | Copyright (c) 2015 Marc Harper
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # pyed
3 |
4 | pyed is a Python package for computing trajectories of evolutionary dynamics
5 | on Riemannian geometries, optionally with momentum. For example, the library can be used to generate
6 | phase portraits of discrete replicator equations and their generalizations for various parameters.
7 |
8 | ## Examples
9 |
10 | The tests and the [examples.py](examples.py) file contain several examples. There is also a
11 | [jupyter notebook](https://github.com/marcharper/pyed/blob/notebooks/Momentum%20Paper%20Figures.ipynb)
12 | that produces several example trajectories:
13 |
14 |
15 |

16 |
17 |

18 |
19 |

20 |
21 |

22 |
23 |
24 |
25 | ## Tests
26 |
27 | This repository uses `pytest`. You can run the tests from the command line with
28 | ```
29 | pytest
30 | ```
31 |
32 | in the root directory of the repository.
33 |
34 |
35 | # Repository history
36 | This repository was originally named metric-incentive-dynamics, located at
37 | https://github.com/marcharper/metric-incentive-dynamics,
38 | and contained code to compute plots for a publication. See the [archived readme](archive/README.md)
39 | for more information. That code was extended to support momentum and made into an installable
40 | Python package in the current repository.
41 |
42 |
43 |
--------------------------------------------------------------------------------
/archive/README.md:
--------------------------------------------------------------------------------
1 |
2 | # metric-incentive-dynamics
3 |
4 | Repository History
5 | ------------------
6 |
7 | Last updated: 2020-05-27
8 |
9 | This Python script computes time-scale dynamics for the metric-incentive
10 | dynamic; it was used to create the plots in the publication
11 | [Lyapunov Functions for Time-Scale Dynamics on Riemannian Geometries of the Simplex](http://link.springer.com/article/10.1007/s13235-014-0124-0),
12 | also on the Arxiv preprint server with the name [Stability of Evolutionary Dynamics on Time Scales](https://arxiv.org/abs/1210.5539).
13 |
14 | This code was originally at the URL https://github.com/marcharper/metric-incentive-dynamics before being archived here.
15 | If you are trying to make similar plots, use the
16 | [pyed](https://github.com/marcharper/pyed) library at https://github.com/marcharper/pyed .
17 |
18 | Basic Usage
19 | -----------
20 |
21 | The main function is compute_trajectory in the file metric_incentive.py, which takes several parameters:
22 |
23 | ```
24 | def compute_trajectory(initial_state, incentive, iterations=2000, h=1/200., G=None, escort=None, exit_on_uniform=True, verbose=False, fitness=None):
25 | ```
26 |
27 | Let us consider each in turn. The *initial_state* parameter is a numpy array that indicates the starting point of the dynamic. For instance, to start at the center of the simplex in a three-type dynamic, use:
28 |
29 | ```
30 | from metric_incentive import *
31 | import numpy
32 | initial_state = normalize(numpy.array([1,1,1]))
33 | ```
34 |
35 | Strictly speaking, the normalization is not necessary (compute_trajectory will do it for you); nevertheless, the normalization function *normalize* is available.
36 |
37 | The second parameter, *incentive* is a function that takes a state and produces a vector (a numpy array) of the incentive values. Several incentives are included, such as *replicator_incentive*, which takes a fitness landscape (again a function taking a state to a vector) and produces an incentive:
38 |
39 | ```
40 | m = rock_scissors_paper(a=1, b=-2)
41 | print(m)
42 | fitness = linear_fitness(m)
43 | print(fitness(initial_state))
44 | incentive = replicator_incentive(fitness)
45 | print(incentive(normalize(numpy.array([1,1,4]))))
46 | ```
47 |
48 | This outputs:
49 |
50 | ```
51 | [[0, 2, 1], [1, 0, 2], [2, 1, 0]]
52 | array([ 1., 1., 1.])
53 | array([ 0.16666667, 0.25 , 0.33333333])
54 | ```
55 |
56 | `compute_trajectory` has several additional arguments, described below; a short sketch combining several of them follows the list:
57 |
58 | - *iterations* (optional: default=2000) is the maximum number of iterations that the dynamic will step through unless an exit condition is reached.
59 | - *h* (optional, but you probably want to change it) is a constant (should be between 0 and 1) corresponding to the time scale, or a generator that produces successive values that are not necessarily the same.
60 | - *G* (optional) is a Riemannian metric given as a function of a simplex variable. Again there are several helpers, such as *shahshahani_metric()* included in the code (which is the default). This function must return a numpy matrix at each input point on the simplex.
61 | - *escort* is another optional functional parameter that can be used instead of an entire metric (technically it defines a diagonal metric). If you specify both a metric and an escort, just the metric is used (and a warning is given).
62 | - *exit_on_uniform* gives an exit condition to stop iteration early if the incentive vector is uniform, or very nearly so (which indicates convergence).
63 | - *verbose*, if True, outputs each step of the dynamic to standard out.
64 | - *fitness* is optional and used for reporting when *verbose* is True.
65 |
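Putting several of these arguments together, here is a minimal sketch (assuming the archived metric_incentive.py is importable; the parameter values are illustrative):

```
from metric_incentive import *
import numpy

initial_state = normalize(numpy.array([1, 1, 4]))
fitness = linear_fitness(rock_scissors_paper(a=1, b=-2))
incentive = replicator_incentive(fitness)

# Decreasing time scale and an escort in place of a full metric.
h = fictitious_play_generator(0.5)
t = compute_trajectory(initial_state, incentive, iterations=5000, h=h,
                       escort=power_escort(2), fitness=fitness)
print(len(t), t[-1])
```
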
66 | Example
67 | -------
68 |
69 | The function *basic_example* shows how to compute a trajectory, plot it in the simplex (for n=3), and plot candidate Lyapunov functions on the trajectory.
70 |
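Since the script calls *basic_example* when run as a program, you can also run it directly (from the directory containing metric_incentive.py):

```
python metric_incentive.py
```
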
--------------------------------------------------------------------------------
/archive/metric_incentive.py:
--------------------------------------------------------------------------------
1 | import inspect
2 | import math
3 | import warnings
4 |
5 | import numpy
6 | from matplotlib import pyplot
7 |
8 | import ternary
9 |
10 | ### Globals ##
11 | # Plotting options for matplotlib, color list to maintain colors across plots.
12 | #colors = ['r','g','b','k', 'y']
13 | colors = "bgrcmyk"
14 | ## Greyscale
15 | #shade_count = 10
16 | #colors = map(str, [0.5 + x / (2. * shade_count) for x in range(1, shade_count+1)])
17 |
18 | ### Math helpers ##
19 |
20 | def product(xs):
21 | s = 1.
22 | for x in xs:
23 | s *= x
24 | return s
25 |
26 | def normalize(x):
27 | """Normalizes a numpy array by dividing by the sum."""
28 | s = float(numpy.sum(x))
29 | return x / s
30 |
31 | def shannon_entropy(p):
32 | s = 0.
33 | for i in range(len(p)):
34 | try:
35 | s += p[i] * math.log(p[i])
36 | except ValueError:
37 | continue
38 | return -1.*s
39 |
40 | def uniform_mutation_matrix(n, ep):
41 | return (1. - ep) * numpy.eye(n) + ep / (n - 1.) * (numpy.ones(n) - numpy.eye(n))
42 |
43 | ### Information Divergences ##
44 |
45 | def kl_divergence(p, q):
46 | s = 0.
47 | for i in range(len(p)):
48 | try:
49 | t = p[i] * math.log(p[i] / q[i])
50 | s += t
51 | except ValueError:
52 | continue
53 | return s
54 |
55 | def q_divergence(q):
56 | """Returns the divergence function corresponding to the parameter value q."""
57 | if q == 0:
58 | def d(x, y):
59 | return 0.5 * numpy.dot((x-y),(x-y))
60 | return d
61 | if q == 1:
62 | return kl_divergence
63 | if q == 2:
64 | def d(x,y):
65 | s = 0.
66 | for i in range(len(x)):
67 | s += math.log(x[i] / y[i]) + 1 - x[i] / y[i]
68 | return -s
69 | return d
70 | q = float(q)
71 | def d(x, y):
72 | s = 0.
73 | for i in range(len(x)):
74 | s += (math.pow(y[i], 2 - q) - math.pow(x[i], 2 - q)) / (2 - q)
75 | s -= math.pow(y[i], 1 - q) * (y[i] - x[i])
76 | s = -s / (1 - q)
77 | return s
78 | return d
79 |
80 | ### Escorts ###
81 |
82 | def DEFAULT_ESCORT(x):
83 | """Gives Shahshahani metric and KL-divergence."""
84 | return x
85 |
86 | def twisted_escort(x):
87 | l = list(x)
88 | return numpy.array([l[1],l[2],l[0]])
89 |
90 | ## Just use power escort with p = 0
91 | #def projection_escort(x):
92 | #return numpy.ones(len(x))
93 |
94 | def power_escort(q):
95 | """Returns an escort function for the power q."""
96 | def g(x):
97 | y = []
98 | for i in range(len(x)):
99 | y.append(math.pow(x[i], q))
100 | return numpy.array(y)
101 | return g
102 |
103 | def exponential_escort(x):
104 | return numpy.exp(x)
105 |
106 | ### Metrics ##
107 |
108 | # Can also use metric_from_escort to get the Euclidean metric.
109 | def euclidean_metric(n=3):
110 |     I = numpy.identity(n)
111 | def G(x):
112 | return I
113 | return G
114 |
115 | def metric_from_escort(escort):
116 | def G(x):
117 | #return numpy.linalg.inv(numpy.diag(escort(x)))
118 | return numpy.diag(1./ escort(x))
119 | return G
120 |
121 | def shahshahani_metric():
122 | return metric_from_escort(DEFAULT_ESCORT)
123 |
124 | DEFAULT_METRIC = shahshahani_metric()
125 |
126 | ### Incentives ##
127 |
128 | def rock_scissors_paper(a=1, b=1):
129 | return [[0,-b,a], [a, 0, -b], [-b, a, 0]]
130 |
131 | def linear_fitness(m):
132 | """f(x) = mx for a matrix m."""
133 | m = numpy.array(m)
134 | def f(x):
135 | return numpy.dot(m, x)
136 | return f
137 |
138 | def replicator_incentive(fitness):
139 | def g(x):
140 | return x * fitness(x)
141 | return g
142 |
143 | def DEFAULT_INCENTIVE(f):
144 | return replicator_incentive(f)
145 |
146 | def replicator_incentive_power(fitness, q):
147 | def g(x):
148 | y = []
149 | #print q, x
150 | for i in range(len(x)):
151 | #y.append(math.pow(max(x[i],0), q))
152 | y.append(math.pow(x[i], q))
153 | y = numpy.array(y)
154 | return y * fitness(x)
155 | return g
156 |
157 | def best_reply_incentive(fitness):
158 | """Compute best reply to fitness landscape at state."""
159 | def g(state):
160 | f = fitness(state)
161 | try:
162 | dim = state.size
163 | except AttributeError:
164 | state = numpy.array(state)
165 | dim = state.size
166 | replies = []
167 | for i in range(dim):
168 | x = numpy.zeros(dim)
169 | x[i] = 1
170 | replies.append(numpy.dot(x, f))
171 | replies = numpy.array(replies)
172 | i = numpy.argmax(replies)
173 | x = numpy.zeros(dim)
174 | x[i] = 1
175 | return x
176 | return g
177 |
178 | def logit_incentive(fitness, eta):
179 | def f(x):
180 | return normalize(numpy.exp(fitness(x) / eta))
181 | return f
182 |
183 | ### Simulation ###
184 |
185 | # Functions to check exit conditions.
186 | def is_uniform(x, epsilon=0.000000001):
187 | """Determine if the vector is uniform within epsilon tolerance. Useful to stop a simulation if the fitness landscape has become essentially uniform."""
188 | x_0 = x[0]
189 | for i in range(1, len(x)):
190 | if abs(x[i] - x_0) > epsilon:
191 | return False
192 | return True
193 |
194 | def is_in_simplex(x):
195 | """Checks if a distribution has exited the simplex."""
196 | stop = True
197 | for j in range(x.size):
198 | #print x[j]
199 | if x[j] < 0:
200 | stop = False
201 | break
202 | return stop
203 |
204 | ### Generators for time-scales. ##
205 |
206 | def constant_generator(h):
207 | while True:
208 | yield h
209 |
210 | def fictitious_play_generator(h):
211 | i = 1
212 | while True:
213 | yield float(h) / (i + 1)
214 | i += 1
215 |
216 | ## Functions to actually compute trajectories
217 |
218 | def dynamics(state, incentive=None, G=None, h=1.0, mu=None):
219 | """Compute the next iteration of the dynamic."""
220 | if not incentive:
221 | incentive = DEFAULT_INCENTIVE
222 | if not G:
223 | G = shahshahani_metric()
224 | if mu is None:
225 | mu = numpy.eye(len(state))
226 | ones = numpy.ones(len(state))
227 | g = numpy.dot(numpy.linalg.inv(G(state)), ones)
228 | i = incentive(state)
229 | next_state = state + h * (numpy.dot(i, mu) - g / numpy.dot(g, ones) * numpy.sum(i))
230 | return next_state
231 |
232 | def compute_trajectory(initial_state, incentive, iterations=2000, h=1/200., G=None, escort=None, exit_on_uniform=True, verbose=False, fitness=None, project=True, mu=None):
233 | """Computes a trajectory of a dynamic until convergence or other exit condition is reached."""
234 | # Check if the time-scale is constant or not, and if it is, make it into a generator.
235 | if not inspect.isgenerator(h):
236 | h_gen = constant_generator(h)
237 | else:
238 | h_gen = h
239 | # If an escort is given, translate to a metric.
240 | if escort:
241 | if G:
242 |             warnings.warn("Both an escort and a metric were supplied to the simulation. Proceeding with the metric only.")
243 | else:
244 | G = metric_from_escort(escort)
245 | # Make sure we are starting in the simplex.
246 | x = normalize(initial_state)
247 | t = []
248 | for j, h in enumerate(h_gen):
249 | # Record each point for later analysis.
250 | t.append(x)
251 | if verbose:
252 | if fitness:
253 |                 print(j, x, incentive(x), fitness(x))
254 |             else:
255 |                 print(j, x, incentive(x))
256 | ## Exit conditions.
257 | # Is the landscape uniform, indicating convergence?
258 | if exit_on_uniform:
259 | if fitness:
260 | if is_uniform(fitness(x)):
261 | break
262 | if is_uniform(incentive(x)):
263 | break
264 | # Are we out of the simplex?
265 | if not is_in_simplex(x):
266 | break
267 | if j >= iterations:
268 | break
269 | ## End Exit Conditions.
270 | # Iterate the dynamic.
271 | x = dynamics(x, incentive=incentive, G=G, h=h, mu=mu)
272 | # Check to make sure that the distribution has not left the simplex due to round-off.
273 | # May conflict with out of simplex exit condition, but is useful for non-forward-invariant dynamics (such as projection dynamics). Note that this is very similar to Sandholm's projection and may be better handled that way.
274 | if project:
275 | for i in range(len(x)):
276 | x[i] = max(0, x[i])
277 | #Re-normalize in case any values were rounded to 0.
278 | x = normalize(x)
279 | return t
280 |
281 | def two_population_trajectory(params, iterations=2000, exit_on_uniform=True, verbose=False):
282 | """Multipopulation trajectory -- each population has its own incentive, metric, and time-scale. This function only accepts metrics G and generators for h."""
283 | t = [tuple(normalize(p[0]) for p in params)]
284 | for j in range(iterations):
285 | current_state = t[-1]
286 |         h = [next(p[2]) for p in params]
287 | i = params[0][1](current_state[1])
288 | G = params[0][-1]
289 | ones = numpy.ones(len(current_state[0]))
290 | g = numpy.dot(numpy.linalg.inv(G(current_state[0])), ones)
291 | #print i, g, h[0]
292 | x = current_state[0] + h[0] * (i - g / numpy.dot(g, ones) * numpy.sum(i))
293 | i = params[1][1](current_state[0])
294 | G = params[1][-1]
295 | ones = numpy.ones(len(current_state[1]))
296 | g = numpy.dot(numpy.linalg.inv(G(current_state[1])), ones)
297 | y = current_state[1] + h[1] * (i - g / numpy.dot(g, ones) * numpy.sum(i))
298 |
299 | for i in range(len(x)):
300 | x[i] = max(0, x[i])
301 | for i in range(len(y)):
302 | y[i] = max(0, y[i])
303 | x = normalize(x)
304 | y = normalize(y)
305 | t.append((x, y))
306 | if verbose:
307 |             print(x, y)
308 | return t
309 |
310 | ### Analysis ##
311 |
312 | def relative_prediction_power(new, old):
313 | return product(new) / product(old)
314 |
315 | def compute_iss_diff(e, x, incentive):
316 | """Computes the difference of the LHS and RHS of the ISS condition."""
317 | i = incentive(x)
318 | s = numpy.sum(incentive(x))
319 | lhs = sum(e[j] / x[j] * i[j] for j in range(x.size))
320 | return lhs - s
321 |
322 | def eiss_diff_func(e, incentive, escort=None):
323 | if not escort:
324 | escort = DEFAULT_ESCORT
325 | def f(x):
326 | es = escort(x)
327 | inc = incentive(x)
328 | s = sum((e[i] - x[i])*inc[i] / es[i] for i in range(len(x)))
329 | return s
330 | return f
331 |
332 | def G_iss_diff_func(e, incentive, G=None):
333 | if not G:
334 | G = DEFAULT_METRIC
335 | def f(x):
336 |         g = G(x)
337 |         inc = incentive(x)
338 |         return numpy.dot((e - x), numpy.dot(g, inc))
339 | return f
340 |
341 | ### Examples and Tests ##
342 |
343 | def divergence_test(a=0, b=4, steps=100):
344 | """Compare the q-divergences for various values to illustrate that q_1-div > q_2-div if q_1 > q_2."""
345 | points = []
346 | x = numpy.array([1./3, 1./3, 1./3])
347 | y = numpy.array([1./2, 1./4, 1./4])
348 | d = float(b - a) / steps
349 | for i in range(0, steps):
350 | q = a + i * d
351 | div = q_divergence(q)
352 | points.append((q, div(x, y)))
353 | pyplot.plot([x for (x,y) in points], [y for (x,y) in points])
354 | pyplot.show()
355 |
356 | def basic_example():
357 | # Projection dynamic.
358 | initial_state = normalize(numpy.array([1,1,4]))
359 | m = rock_scissors_paper(a=1., b=-2.)
360 | fitness = linear_fitness(m)
361 | incentive = replicator_incentive_power(fitness, 0)
362 | mu = uniform_mutation_matrix(3, ep=0.2)
363 | t = compute_trajectory(initial_state, incentive, escort=power_escort(0), iterations=10000, verbose=True, mu=mu)
364 | figure, tax = ternary.figure()
365 | tax.plot(t, linewidth=2, color="black")
366 | tax.boundary()
367 |
368 | ## Lyapunov Quantities
369 | pyplot.figure()
370 | # Replicator Lyapunov
371 | e = normalize(numpy.array([1,1,1]))
372 | v = [kl_divergence(e, x) for x in t]
373 | pyplot.plot(range(len(t)), v, color='b')
374 | d = q_divergence(0)
375 | v = [d(e, x) for x in t]
376 | pyplot.plot(range(len(t)), v, color='r')
377 | pyplot.show()
378 |
379 | if __name__ == "__main__":
380 | basic_example()
381 |
382 |
--------------------------------------------------------------------------------
/examples.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 |
4 | import ternary
5 | import pyed
6 |
7 |
8 | def divergence_test(a=0, b=4, steps=100):
9 | """Compare the q-divergences for various values to illustrate that q_1-div > q_2-div if q_1 > q_2."""
10 | points = []
11 | x = np.array([1./3, 1./3, 1./3])
12 | y = np.array([1./2, 1./4, 1./4])
13 | d = float(b - a) / steps
14 | for i in range(0, steps):
15 | q = a + i * d
16 | div = pyed.information.q_divergence(q)
17 | points.append((q, div(x, y)))
18 | plt.plot([x for (x,y) in points], [y for (x,y) in points])
19 | plt.show()
20 |
21 |
22 | def basic_example():
23 | # Projection dynamic.
24 | initial_state = pyed.normalize(np.array([1, 1, 4]))
25 | m = pyed.incentives.rock_paper_scissors(a=1., b=-2.)
26 | fitness = pyed.incentives.linear_fitness(m)
27 | incentive = pyed.incentives.replicator_incentive_power(fitness, 0)
28 | mu = pyed.incentives.uniform_mutation_matrix(3, ep=0.2)
29 | t = pyed.dynamics.compute_trajectory(
30 | initial_state, incentive, escort=pyed.geometries.power_escort(0), iterations=10000, verbose=True, mu=mu)
31 | figure, tax = ternary.figure()
32 | tax.plot(t, linewidth=2, color="black")
33 | tax.boundary()
34 |
35 | ## Lyapunov Quantities
36 | plt.figure()
37 | # Replicator Lyapunov
38 | e = pyed.normalize(np.array([1, 1, 1]))
39 | v = [pyed.information.kl_divergence(e, x) for x in t]
40 | plt.plot(range(len(t)), v, color='b')
41 | d = pyed.information.q_divergence(0)
42 | v = [d(e, x) for x in t]
43 | plt.plot(range(len(t)), v, color='r')
44 | plt.show()
45 |
46 |
47 | if __name__ == "__main__":
48 | divergence_test()
49 | basic_example()
50 |
--------------------------------------------------------------------------------
/images/convergence_divergence.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcharper/pyed/d2b9d6e470cdbda6627a75456730508c085966ea/images/convergence_divergence.png
--------------------------------------------------------------------------------
/images/divergence_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcharper/pyed/d2b9d6e470cdbda6627a75456730508c085966ea/images/divergence_example.png
--------------------------------------------------------------------------------
/images/momentum_beta.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcharper/pyed/d2b9d6e470cdbda6627a75456730508c085966ea/images/momentum_beta.png
--------------------------------------------------------------------------------
/images/nesterov_examples.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcharper/pyed/d2b9d6e470cdbda6627a75456730508c085966ea/images/nesterov_examples.png
--------------------------------------------------------------------------------
/images/polyak_examples.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcharper/pyed/d2b9d6e470cdbda6627a75456730508c085966ea/images/polyak_examples.png
--------------------------------------------------------------------------------
/pyed/__init__.py:
--------------------------------------------------------------------------------
1 | from . import bomze, dynamics, geometries, incentives, information
2 | from .dynamics import normalize
3 |
--------------------------------------------------------------------------------
/pyed/analysis.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from .geometries import DEFAULT_ESCORT, DEFAULT_METRIC
3 |
4 |
5 | def product(xs):
6 | s = 1.
7 | for x in xs:
8 | s *= x
9 | return s
10 |
11 |
12 | def relative_prediction_power(new, old):
13 | return product(new) / product(old)
14 |
15 |
16 | def compute_iss_diff(e, x, incentive):
17 | """Computes the difference of the LHS and RHS of the ISS condition."""
18 | i = incentive(x)
19 | s = np.sum(incentive(x))
20 | lhs = sum(e[j] / x[j] * i[j] for j in range(x.size))
21 | return lhs - s
22 |
23 |
24 | def eiss_diff_func(e, incentive, escort=None):
25 | if not escort:
26 | escort = DEFAULT_ESCORT
27 |
28 | def f(x):
29 | es = escort(x)
30 | inc = incentive(x)
31 | s = sum((e[i] - x[i]) * inc[i] / es[i] for i in range(len(x)))
32 | return s
33 | return f
34 |
35 |
36 | def G_iss_diff_func(e, incentive, G=None):
37 | if not G:
38 | G = DEFAULT_METRIC
39 |
40 | def f(x):
41 |         g = G(x)
42 |         return np.dot((e - x), np.dot(g, incentive(x)))
43 | return f
44 |
45 |
--------------------------------------------------------------------------------
/pyed/bomze.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 |
4 | def bomze_matrices(filename="bomze.txt"):
5 | """
6 | Yields the 48 matrices from I.M. Bomze's classification of three player phase
7 | portraits.
8 |
9 | Bomze, Immanuel M. "Lotka-Volterra equation and replicator dynamics: new issues in classification."
10 | Biological cybernetics 72.5 (1995): 447-453.
11 | """
12 |
13 | this_dir, this_filename = os.path.split(__file__)
14 |
15 | handle = open(os.path.join(this_dir, filename))
16 | for line in handle:
17 | a, b, c, d, e, f, g, h, i = map(float, line.split())
18 | yield [[a, b, c], [d, e, f], [g, h, i]]
19 |
--------------------------------------------------------------------------------
/pyed/bomze.txt:
--------------------------------------------------------------------------------
1 | 0 0 0 0 0 0 0 0 0
2 | 0 0 1 0 0 1 1 1 0
3 | 0 0 1 0 0 1 1 0 0
4 | 0 0 1 0 0 1 1 -1 0
5 | 0 0 0 0 0 0 1 -1 0
6 | 0 1 2 1 0 1 2 -1 0
7 | 0 1 -1 1 0 -2 -1 2 0
8 | 0 1 1 1 0 1 1 1 0
9 | 0 -1 3 -1 0 3 1 1 0
10 | 0 1 1 -1 0 3 1 1 0
11 | 0 1 3 -1 0 5 1 3 0
12 | 0 1 -1 -1 0 1 -1 1 0
13 | 0 6 -4 -3 0 5 -1 3 0
14 | 0 1 -1 1 0 -3 -1 3 0
15 | 0 3 -1 3 0 -1 1 1 0
16 | 0 3 -1 1 0 1 3 -1 0
17 | 0 -1 1 1 0 -1 -1 1 0
18 | 0 2 -1 -1 0 2 2 -1 0
19 | 0 0 0 0 0 -1 0 1 0
20 | 0 0 0 0 0 -1 0 0 0
21 | 0 0 0 0 0 -1 0 -1 0
22 | 0 0 -2 0 0 -1 -1 -1 0
23 | 0 0 -1 0 0 1 -1 1 0
24 | 0 0 1 0 0 0 1 -1 0
25 | 0 0 1 0 0 0 1 1 0
26 | 0 0 -2 0 0 -1 -1 0 0
27 | 0 0 -1 0 0 1 -1 0 0
28 | 0 0 -1 0 0 -2 -1 0 0
29 | 0 0 -1 0 0 -2 -1 1 0
30 | 0 0 -1 0 0 -1 0 0 0
31 | 0 0 -2 0 0 -1 0 0 0
32 | 0 0 -1 0 0 1 0 0 0
33 | 0 0 -1 0 0 0 0 -1 0
34 | 0 0 -1 0 0 0 1 -1 0
35 | 0 -1 3 -1 0 1 3 1 0
36 | 0 3 1 3 0 1 1 1 0
37 | 0 1 -1 -3 0 1 -1 1 0
38 | 0 1 1 -1 0 1 1 1 0
39 | 0 1 1 -1 0 3 1 3 0
40 | 0 -1 -1 1 0 1 -1 1 0
41 | 0 1 -1 1 0 -1 1 1 0
42 | 0 1 -1 1 0 1 1 -1 0
43 | 0 1 1 1 0 1 -1 -1 0
44 | 0 1 1 -1 0 1 -1 -1 0
45 | 0 1 1 0 0 2 0 3 0
46 | 0 -2 1 0 0 -2 0 -1 0
47 | 0 -1 1 0 0 -1 0 1 0
48 | 0 2 0 2 0 0 1 1 0
49 | 0 1 0 1 0 0 -1 2 0
--------------------------------------------------------------------------------
/pyed/dynamics.py:
--------------------------------------------------------------------------------
1 | import inspect
2 | import warnings
3 |
4 | import numpy as np
5 |
6 | from . import incentives, geometries, information
7 |
8 |
9 | def normalize(x):
10 | """Normalizes a numpy array by dividing by the sum."""
11 | s = float(np.sum(x))
12 | return x / s
13 |
14 |
15 | ### Simulation ###
16 |
17 | # Functions to check exit conditions.
18 | def is_uniform(x, epsilon=1e-9):
19 | """Determine if the vector is uniform within epsilon tolerance. Useful to
20 | stop a simulation if the fitness landscape has become essentially uniform."""
21 | x_0 = x[0]
22 | for i in range(1, len(x)):
23 | if abs(x[i] - x_0) > epsilon:
24 | return False
25 | return True
26 |
27 |
28 | def is_in_simplex(x):
29 | """Checks if a distribution has exited the simplex. Assumes that the distribution has been normalized."""
30 | for j in range(x.size):
31 | if x[j] < 0 or x[j] > 1:
32 | return False
33 | return True
34 |
35 | ### Generators for time-scales. ##
36 |
37 |
38 | def constant_generator(h):
39 | while True:
40 | yield h
41 |
42 |
43 | def fictitious_play_generator(h):
44 | i = 1
45 | while True:
46 | yield float(h) / (i + 1)
47 | i += 1
48 |
49 | ## Functions to actually compute trajectories
50 |
51 |
52 | def potential(state, incentive=None, G=None, mu=None):
53 | """Compute the potential function for a given state.
54 |
55 | incentive
56 | function of population vector that incorporates the payoff / fitness landscape
57 | G
58 | Riemannian metric, a matrix-valued function of the population vector
59 | mu, optional
60 | mutation vector
61 |
62 | """
63 | if not incentive:
64 | incentive = incentives.DEFAULT_INCENTIVE
65 | if not G:
66 | G = geometries.DEFAULT_METRIC
67 | if mu is None:
68 | mu = np.eye(len(state))
69 | ones = np.ones(len(state))
70 | g = np.dot(np.linalg.inv(G(state)), ones)
71 | i = incentive(state)
72 | return np.dot(i, mu) - g / np.dot(g, ones) * np.sum(i)
73 |
74 |
75 | def dynamics(state, z, incentive=None, G=None, h=1.0, mu=None, momentum=0):
76 | """Compute the next iteration of the dynamic."""
77 | U = potential(state, incentive=incentive, G=G, mu=mu)
78 | next_z = momentum * z + U
79 | next_state = state + h * next_z
80 | return next_state, next_z
81 |
82 |
83 | def nesterov_dynamics(state, z, incentive=None, G=None, h=1.0, mu=None, momentum=0):
84 | """Compute the next iteration of the dynamic with Nesterov momentum."""
85 | lookahead_state = normalize(state + momentum * z)
86 | U = potential(lookahead_state, incentive=incentive, G=G, mu=mu)
87 | next_z = momentum * z + U
88 | next_state = state + h * next_z
89 | return next_state, next_z
90 |
91 |
92 | def compute_trajectory(
93 | initial_state, incentive, iterations=2000, h=1/200., G=None,
94 | escort=None, exit_on_uniform=True, exit_on_divergence_tol=False,
95 | divergence_tol=.001, stable_state=None, verbose=False,
96 | fitness=None, project=True, mu=None, initial_z=(),
97 | momentum=None, divergence=None, nesterov=False):
98 | """Computes a trajectory of a dynamic until convergence or other exit condition is reached.
99 |
100 | initial_state
101 | initial population distribution
102 | incentive
103 | function of population vector that incorporates the payoff / fitness landscape
104 | iterations
105 | maximum number of iterations to run, if exit conditions are not met
106 | h, float
107 | step size aka learning rate
108 | G
109 | Riemannian metric, a matrix-valued function of the population vector
110 | escort
111 |         function which generates a Riemannian metric
112 | exit_on_uniform, bool
113 | whether to exit when the population becomes uniform
114 | exit_on_divergence_tol, bool
115 | whether to exit when the population reaches near convergence, according to the divergence tolerance
116 | divergence_tol, float
117 | tolerance for determining convergence
118 | stable_state, None
119 | an ESS (population state) to measure convergence to
120 | verbose, bool
121 | whether to report verbose information
122 | fitness
123 | a function of the population vector
124 | project, bool
125 |         if the trajectory exits the simplex, project back into it
126 | mu, optional
127 | mutation vector
128 | initial_z
129 | if momentum is used, supply an initial z vector
130 | momentum, float
131 | value of momentum to use
132 | divergence
133 | supplied divergence to measure convergence
134 | nesterov
135 | whether to use the Nesterov version of momentum
136 | """
137 | ## Setup and check inputs
138 | # Check if the time-scale is constant or not, and if it is, make it into a generator.
139 | if not len(initial_z):
140 | n = len(initial_state)
141 | initial_z = initial_state.copy() - np.array([1. /n] * n)
142 | if not momentum:
143 | momentum = 0
144 | if not divergence:
145 | divergence = information.kl_divergence
146 | if not inspect.isgenerator(h):
147 | h_gen = constant_generator(h)
148 | else:
149 | h_gen = h
150 | # If an escort is given, translate to a metric.
151 | if escort:
152 | if G:
153 | warnings.warn(
154 |                 "Both an escort and a metric were supplied to the simulation. Proceeding with the metric only.")
155 | else:
156 | G = geometries.metric_from_escort(escort)
157 | # Make sure we are starting in the simplex.
158 | x = normalize(initial_state)
159 | z = initial_z
160 | t = []
161 | if not stable_state:
162 | stable_state = np.ones_like(initial_state) / initial_state.shape[0]
163 |
164 | ## Iterate the dynamics.
165 | for j, h in enumerate(h_gen):
166 | # Record each point for later analysis.
167 | t.append(x)
168 | if verbose:
169 | if fitness:
170 | print(j, x, incentive(x), fitness(x))
171 | else:
172 | print(j, x, incentive(x))
173 | ## Exit conditions.
174 | # Is the landscape uniform, indicating convergence?
175 | if exit_on_uniform:
176 | if fitness:
177 | if is_uniform(fitness(x)):
178 | return t
179 | if is_uniform(incentive(x)):
180 | return t
181 | if exit_on_divergence_tol:
182 | if divergence(stable_state, x) < divergence_tol:
183 | return t
184 | # Are we out of the simplex?
185 | if not is_in_simplex(x):
186 | break
187 | if j >= iterations:
188 | break
189 | ## End Exit Conditions.
190 |
191 | # Iterate the dynamic.
192 | if nesterov:
193 | x, z = nesterov_dynamics(x, z, incentive=incentive, G=G, h=h, mu=mu, momentum=momentum)
194 | else:
195 | x, z = dynamics(x, z, incentive=incentive, G=G, h=h, mu=mu, momentum=momentum)
196 | # Check to make sure that the distribution has not left the simplex
197 | # due to round-off.
198 | # May conflict with out of simplex exit condition, but is useful for
199 | # non-forward-invariant dynamics (such as projection dynamics). Note
200 | # that this is very similar to Sandholm's projection and may be
201 | # better handled that way.
202 | if project:
203 | x = np.clip(x, a_min=0, a_max=np.inf)
204 | # Re-normalize in case any values were rounded to 0.
205 | x = normalize(x)
206 | return t
207 |
208 |
209 | def replicator_trajectory(initial_state, fitness, iterations=2000, h=1/200., verbose=False, momentum=None,
210 | exit_on_uniform=True, exit_on_divergence_tol=True, divergence_tol=.001, nesterov=False):
211 | """Convenience function for replicator dynamics."""
212 | incentive = incentives.replicator_incentive_power(fitness, 1)
213 | return compute_trajectory(
214 | initial_state,
215 | incentive,
216 | iterations=iterations,
217 | h=h,
218 | verbose=verbose,
219 | momentum=momentum,
220 | exit_on_uniform=exit_on_uniform,
221 | exit_on_divergence_tol=exit_on_divergence_tol,
222 | divergence_tol=divergence_tol,
223 | nesterov=nesterov
224 | )
225 |
226 |
227 | def projection_trajectory(initial_state, fitness, iterations=2000, h=1/200., verbose=False, momentum=None,
228 | exit_on_uniform=True, exit_on_divergence_tol=True, divergence_tol=.001, nesterov=False):
229 |     """Convenience function for projection dynamics."""
230 |     incentive = incentives.replicator_incentive_power(fitness, 0)
231 | return compute_trajectory(
232 | initial_state,
233 | incentive,
234 | iterations=iterations,
235 | escort=geometries.power_escort(0),
236 | h=h,
237 | verbose=verbose,
238 | momentum=momentum,
239 | exit_on_uniform=exit_on_uniform,
240 | exit_on_divergence_tol=exit_on_divergence_tol,
241 | divergence_tol=divergence_tol,
242 | nesterov=nesterov
243 | )
244 |
245 |
246 | def two_population_trajectory(params, iterations=2000, verbose=False):
247 | """Multipopulation trajectory -- each population has its own incentive, metric, and time-scale. This function only
248 | accepts metrics G and generators for h."""
249 | t = [tuple(normalize(p[0]) for p in params)]
250 | for _ in range(iterations):
251 | current_state = t[-1]
252 |         h = [next(p[2]) for p in params]
253 | i = params[0][1](current_state[1])
254 | G = params[0][-1]
255 | ones = np.ones(len(current_state[0]))
256 | g = np.dot(np.linalg.inv(G(current_state[0])), ones)
257 | x = current_state[0] + h[0] * (i - g / np.dot(g, ones) * np.sum(i))
258 | i = params[1][1](current_state[0])
259 | G = params[1][-1]
260 | ones = np.ones(len(current_state[1]))
261 | g = np.dot(np.linalg.inv(G(current_state[1])), ones)
262 | y = current_state[1] + h[1] * (i - g / np.dot(g, ones) * np.sum(i))
263 |
264 | for i in range(len(x)):
265 | x[i] = max(0, x[i])
266 | for i in range(len(y)):
267 | y[i] = max(0, y[i])
268 | x = normalize(x)
269 | y = normalize(y)
270 | t.append((x, y))
271 | if verbose:
272 | print(x, y)
273 | return t
274 |
275 |
--------------------------------------------------------------------------------
/pyed/geometries.py:
--------------------------------------------------------------------------------
1 | from math import pow
2 | import numpy as np
3 |
4 |
5 | def DEFAULT_ESCORT(x):
6 | """Gives Shahshahani metric and KL-divergence."""
7 | return x
8 |
9 |
10 | def twisted_escort(x):
11 | l = list(x)
12 | return np.array([l[1], l[2], l[0]])
13 |
14 |
15 | def power_escort(q):
16 | """Returns an escort function for the power q."""
17 |
18 | def g(x):
19 | y = []
20 | for i in range(len(x)):
21 | y.append(pow(x[i], q))
22 | return np.array(y)
23 |
24 | return g
25 |
26 |
27 | def projection_escort(x):
28 |     return np.ones(len(x))  # power escort with q = 0 (Euclidean metric)
29 |
30 |
31 | def exponential_escort(x):
32 | return np.exp(x)
33 |
34 |
35 | # Can also use metric_from_escort to get the Euclidean metric.
36 | def euclidean_metric(n=3):
37 | I = np.identity(n)
38 |
39 | def G(x):
40 | return I
41 | return G
42 |
43 |
44 | def metric_from_escort(escort):
45 | def G(x):
46 | return np.diag(1. / escort(x))
47 | return G
48 |
49 |
50 | def shahshahani_metric():
51 | """Also known as the Fisher information metric."""
52 | return metric_from_escort(DEFAULT_ESCORT)
53 |
54 |
55 | DEFAULT_METRIC = shahshahani_metric()
56 |
--------------------------------------------------------------------------------
/pyed/incentives.py:
--------------------------------------------------------------------------------
1 | """Fitness landscapes, incentives, and various matrices."""
2 |
3 | from math import pow
4 | import numpy as np
5 |
6 |
7 | def normalize(x):
8 | """Normalizes a numpy array by dividing by the sum."""
9 | s = float(np.sum(x))
10 | return x / s
11 |
12 |
13 | def uniform_mutation_matrix(n, ep):
14 | return (1. - ep) * np.eye(n) + ep / (n - 1.) * (np.ones(n) - np.eye(n))
15 |
16 |
17 | def rock_paper_scissors(a=1, b=1):
18 | return [[0, a, b], [b, 0, a], [a, b, 0]]
19 |
20 |
21 | def linear_fitness(m):
22 | """f(x) = mx for a matrix m."""
23 | m = np.array(m)
24 |
25 | def f(x):
26 | return np.dot(m, x)
27 | return f
28 |
29 |
30 | def replicator_incentive(fitness):
31 | def g(x):
32 | return x * fitness(x)
33 | return g
34 |
35 |
36 | def replicator_incentive_power(fitness, q):
37 | def g(x):
38 | y = []
39 | for i in range(len(x)):
40 | y.append(pow(x[i], q))
41 | y = np.array(y)
42 | return y * fitness(x)
43 | return g
44 |
45 |
46 | def best_reply_incentive(fitness):
47 | """Compute best reply to fitness landscape at state."""
48 | def g(state):
49 | f = fitness(state)
50 | try:
51 | dim = state.size
52 | except AttributeError:
53 | state = np.array(state)
54 | dim = state.size
55 | replies = []
56 | for i in range(dim):
57 | x = np.zeros(dim)
58 | x[i] = 1
59 | replies.append(np.dot(x, f))
60 | replies = np.array(replies)
61 | i = np.argmax(replies)
62 | x = np.zeros(dim)
63 | x[i] = 1
64 | return x
65 | return g
66 |
67 |
68 | def logit_incentive(fitness, eta):
69 | def f(x):
70 | return normalize(np.exp(fitness(x) / eta))
71 | return f
72 |
73 |
74 | def fermi_incentive(fitness, beta):
75 | def f(x):
76 | return normalize(np.exp(fitness(x) * beta))
77 | return f
78 |
79 |
80 | def DEFAULT_INCENTIVE(f):
81 | return replicator_incentive(f)
82 |
83 |
--------------------------------------------------------------------------------
/pyed/information.py:
--------------------------------------------------------------------------------
1 | from math import log, pow
2 | import numpy as np
3 | from scipy import stats
4 |
5 |
6 | def shannon_entropy(p):
7 | return stats.entropy(p)
8 |
9 |
10 | def kl_divergence(p, q):
11 | """Standard KL divergence."""
12 | return stats.entropy(p, q)
13 |
14 |
15 | def q_divergence(q):
16 | """Returns the q-divergence function corresponding to the parameter value q."""
17 | if q == 0:
18 | def d(x, y):
19 | return 0.5 * np.dot((x - y), (x - y))
20 | return d
21 |
22 | if q == 1:
23 | return kl_divergence
24 |
25 | if q == 2:
26 | def d(x,y):
27 | s = 0.
28 | for i in range(len(x)):
29 | s += log(x[i] / y[i]) + 1 - x[i] / y[i]
30 | return -s
31 | return d
32 |
33 | q = float(q)
34 |
35 | def d(x, y):
36 | s = 0.
37 | for i in range(len(x)):
38 | s += (pow(y[i], 2 - q) - pow(x[i], 2 - q)) / (2 - q)
39 | s -= pow(y[i], 1 - q) * (y[i] - x[i])
40 | s = -s / (1 - q)
41 | return s
42 | return d
43 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | matplotlib>=2.0.0
2 | numpy>=1.9.2
3 | python-ternary>=1.0.0
4 | scipy>=1.4
5 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 | __version__ = "0.0.1"
4 |
5 | # Read in the requirements.txt file
6 | with open("requirements.txt") as f:
7 | requirements = []
8 | for library in f.read().splitlines():
9 | requirements.append(library)
10 |
11 | setup(
12 | name="pyed",
13 | version=__version__,
14 | install_requires=requirements,
15 | author="Marc Harper",
16 | author_email="marcharper@gmail.com",
17 | packages=["pyed"],
18 | license="The MIT License (MIT)",
19 | description="Evolutionary Dynamics Trajectories",
20 |     long_description_content_type="text/markdown",
21 | include_package_data=True,
22 | classifiers=[
23 | "Programming Language :: Python :: 3.5",
24 | "Programming Language :: Python :: 3.6",
25 | "Programming Language :: Python :: 3 :: Only",
26 | ],
27 | python_requires=">=3.5",
28 | package_data={'pyed': ['bomze.txt']},
29 | )
30 |
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcharper/pyed/d2b9d6e470cdbda6627a75456730508c085966ea/tests/__init__.py
--------------------------------------------------------------------------------
/tests/test_dynamics.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import pytest
3 |
4 | import pyed
5 | from pyed import dynamics
6 |
7 | # Set random seed
8 | np.random.seed(10)
9 |
10 |
11 | def dynamics_example(a=1, b=2, power=1, initial_state=None, momentum=0., iterations=1000, alpha=1e-2):
12 | if initial_state is None:
13 | initial_state = pyed.normalize(np.array([1, 1, 4]))
14 | m = pyed.incentives.rock_paper_scissors(a=a, b=b)
15 | fitness = pyed.incentives.linear_fitness(m)
16 | incentive = pyed.incentives.replicator_incentive_power(fitness, power)
17 | mu = pyed.incentives.uniform_mutation_matrix(3, ep=0.2)
18 | t = pyed.dynamics.compute_trajectory(
19 | initial_state, incentive, escort=pyed.geometries.power_escort(power), iterations=iterations, verbose=False,
20 | mu=mu, momentum=momentum, h=alpha)
21 |
22 | e = pyed.normalize(np.array([1, 1, 1]))
23 | d = pyed.information.q_divergence(power)
24 | v = [d(e, x) for x in t]
25 | return t[-1], v[-1]
26 |
27 |
28 | def test_convergence():
29 | # These examples converge to the interior of the simplex
30 | parameters = [
31 | (1, 1, 1), # replicator with hawk-dove like landscape,
32 | (2, 1, 1), # replicator with rock-paper-scissors like landscape,
33 | (1, 1, 0), # projection with rock-paper-scissors like landscape,
34 | (2, 1, 0), # projection with rock-paper-scissors like landscape,
35 | (1, 1, 2), # poincare with hawk-dove like landscape,
36 | ]
37 | for a, b, power in parameters:
38 | initial_state = pyed.normalize(np.array([1, 1, 4]))
39 | x, v = dynamics_example(a, b, power, initial_state=initial_state, iterations=10000, alpha=0.1)
40 | # assert (dynamics.is_uniform(x) == True)
41 | assert abs(v) < 1e-8
42 |
43 |
44 | def test_cycling():
45 | # These cycle indefinitely
46 | parameters = [
47 | (1, -1, 1), # replicator with hawk-dove like landscape,
48 | (1, -1, 0), # projection with rock-paper-scissors like landscape,
49 | ]
50 | for a, b, power in parameters:
51 | initial_state = pyed.normalize(np.array([2, 1, 4]))
52 | x, v = dynamics_example(a, b, power, initial_state=initial_state)
53 | assert (dynamics.is_uniform(x) == False)
54 | assert v > 1e-3
55 |
56 |
57 | def test_divergence():
58 | # These diverge to the boundary
59 | parameters = [
60 | (1, -2, 1), # replicator with hawk-dove like landscape,
61 | (1, -2, 0), # projection with rock-paper-scissors like landscape,
62 | ]
63 | for a, b, power in parameters:
64 | initial_state = pyed.normalize(np.array([1, 1, 8]))
65 | x, v = dynamics_example(a, b, power, initial_state=initial_state, iterations=20000)
66 | assert (dynamics.is_uniform(x) == False)
67 | assert abs(np.prod(x)) < 1e-2
68 | assert v > 1e-3
69 |
70 |
71 | def test_polyak_momentum():
72 | # These examples converge to the interior of the simplex
73 | parameters = [
74 | (1, 1, 1), # replicator with hawk-dove like landscape,
75 | (2, 1, 1), # replicator with rock-paper-scissors like landscape,
76 | (1, 1, 0), # projection with rock-paper-scissors like landscape,
77 | (2, 1, 0), # projection with rock-paper-scissors like landscape,
78 | (1, 1, 2), # poincare with hawk-dove like landscape,
79 | ]
80 | for a, b, power in parameters:
81 | for momentum in [-0.2, -0.1, 0.1, 0.5]:
82 | initial_state = pyed.normalize(np.array([1, 1, 4]))
83 | x, v = dynamics_example(
84 | a, b, power, initial_state=initial_state, momentum=momentum, iterations=20000, alpha=0.01)
85 | assert abs(v) < 1e-4
86 |
87 | # These examples diverge because of the momentum
88 | parameters = [
89 | (1, 1, 1), # replicator with hawk-dove like landscape,
90 | (2, 1, 1), # replicator with rock-paper-scissors like landscape,
91 | (1, 1, 0), # projection with rock-paper-scissors like landscape,
92 | (2, 1, 0), # projection with rock-paper-scissors like landscape,
93 | # (1, 1, 2), # poincare with hawk-dove like landscape,
94 | ]
95 | for a, b, power in parameters:
96 | for momentum in [1.1, 1.2]:
97 | initial_state = pyed.normalize(np.array([1, 1, 4]))
98 | x, v = dynamics_example(
99 | a, b, power, initial_state=initial_state, momentum=momentum, iterations=1000, alpha=0.01)
100 | assert abs(v) > 1e-2
101 |
102 |
103 |
--------------------------------------------------------------------------------
/tests/test_math.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import pytest
3 |
4 | from pyed import dynamics
5 |
6 |
7 | def test_normalize():
8 | assert np.allclose(dynamics.normalize(np.ones(3)), (1/3) * np.ones(3))
9 |
10 |
11 | def test_is_uniform_false():
12 | assert dynamics.is_uniform(np.array([1, 1.1, 1.001])) == False
13 |
14 |
15 | def test_is_uniform():
16 | assert dynamics.is_uniform(np.array([1, 1.05, 1.001]), epsilon=.1) == True
17 |
18 |
--------------------------------------------------------------------------------