`__.
43 |
--------------------------------------------------------------------------------
/docs/source/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | scipy
3 | sympy
4 | matplotlib
5 | ipython
6 | sphinx
7 | jupyter_client
8 | nbsphinx
9 | nbconvert==6.5.1
--------------------------------------------------------------------------------
/docs/source/tutorials.rst:
--------------------------------------------------------------------------------
1 | .. _tutorials:
2 |
3 | *********
4 | Tutorials
5 | *********
6 |
7 | .. toctree::
8 | tutorials/firststeps.ipynb
9 | tutorials/modelselection.ipynb
10 | tutorials/hyperparameteroptimization.ipynb
11 | tutorials/hyperstudy.ipynb
12 | tutorials/changepointstudy.ipynb
13 | tutorials/onlinestudy.ipynb
14 | tutorials/priordistributions.ipynb
15 | tutorials/customobservationmodels.ipynb
16 | tutorials/probabilityparser.ipynb
17 | tutorials/multiprocessing.ipynb
18 |
--------------------------------------------------------------------------------
/docs/source/tutorials/multiprocessing.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Multiprocessing\n",
8 | "\n",
9 | "Conducting extensive data studies based on the `HyperStudy` or `ChangepointStudy` classes may involve several 10.000 or 100.000 individual fits (see e.g. [here](changepointstudy.html#Analyzing-structural-breaks-in-time-series-models)). Since these individual fits with different hyper-parameter values are independent of each other, the computational workload may be distributed among the individual cores of a multi-core processor. To keep things simple, *bayesloop* uses [object serialization](https://docs.python.org/2/library/pickle.html) to create duplicates of the current `HyperStudy` or `ChangepointStudy` instance and distributes them across the predefined number of cores. In general, this procedure may be handled by the built-in Python module [multiprocessing](https://docs.python.org/2/library/multiprocessing.html). However, *multiprocessing* relies on the built-in module [pickle](https://docs.python.org/2/library/pickle.html) for object serialization, which fails to serialize the classes defined in *bayesloop*. We therefore use a different version of the *multiprocessing* module that is part of the [pathos](https://github.com/uqfoundation/pathos) module.\n",
10 | "\n",
11 | "The latest version of *pathos* can be installed directly via [pip](https://pypi.python.org/pypi/pip), but requires [git](https://de.wikipedia.org/wiki/Git):\n",
12 | "```\n",
13 | "pip install git+https://github.com/uqfoundation/pathos\n",
14 | "```\n",
15 | "**Note**: Windows users need to install a C compiler *before* installing pathos. One possible solution for 64-bit systems is to install [Microsoft Visual C++ 2008 SP1 Redistributable Package (x64)](http://www.microsoft.com/en-us/download/confirmation.aspx?id=2092) and [Microsoft Visual C++ Compiler for Python 2.7](http://www.microsoft.com/en-us/download/details.aspx?id=44266).\n",
16 | "\n",
17 | "Once installed correctly, the number of cores to use in a hyper-study or change-point study can be specified by using the keyword argument `nJobs` within the `fit` method. Example:\n",
18 | "```\n",
19 | "S.fit(silent=True, nJobs=4)\n",
20 | "```\n",
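    "\n",
    "As a rough, self-contained sketch (the count data and the transition model below are illustrative assumptions, not part of the original example), a complete parallelized hyper-study could look like this:\n",
    "```\n",
    "import numpy as np\n",
    "import bayesloop as bl\n",
    "\n",
    "S = bl.HyperStudy()\n",
    "S.loadData(np.random.poisson(3, 100))  # hypothetical count data\n",
    "\n",
    "S.set(bl.om.Poisson('rate', bl.oint(0, 6, 1000)))\n",
    "S.set(bl.tm.GaussianRandomWalk('sigma', np.linspace(0, 0.2, 5), target='rate'))\n",
    "\n",
    "# the 5 hyper-parameter values result in 5 independent fits,\n",
    "# which are distributed among 4 cores\n",
    "S.fit(silent=True, nJobs=4)\n",
    "```"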
21 | ]
22 | }
23 | ],
24 | "metadata": {
25 | "anaconda-cloud": {},
26 | "kernelspec": {
27 | "display_name": "Python 2",
28 | "language": "python",
29 | "name": "python2"
30 | },
31 | "language_info": {
32 | "codemirror_mode": {
33 | "name": "ipython",
34 | "version": 2
35 | },
36 | "file_extension": ".py",
37 | "mimetype": "text/x-python",
38 | "name": "python",
39 | "nbconvert_exporter": "python",
40 | "pygments_lexer": "ipython2",
41 | "version": "2.7.10"
42 | }
43 | },
44 | "nbformat": 4,
45 | "nbformat_minor": 0
46 | }
47 |
--------------------------------------------------------------------------------
/docs/source/tutorials/onlinestudy.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Online study\n",
8 | "\n",
9 | "All study types of *bayesloop* introduced so far are used for retrospective data analysis, i.e. the complete data set is already available at the time of the analysis. Many applications, however, from algorithmic trading to the monitoring of heart function or blood sugar levels call for on-line analysis methods that can take into account new information as it arrives from external sources. For this purpose, *bayesloop* provides the class `OnlineStudy`, which enables the inference of time-varying parameters in a sequential fashion, much like a [particle filter](https://en.wikipedia.org/wiki/Particle_filter). In contrast to particle filters, the `OnlineStudy` can account for different *scenarios* of parameter dynamics (i.e. different transition models) and can apply on-line model selection to objectively determine which scenario is more likely to describe the current data point, or all past data points.\n",
10 | "\n",
11 | "In this case, we avoid constructing some artificial usage example and directly point the reader at this [case study on stock market fluctuations](../examples/stoackmarketfluctuations.html). In this detailed example, we investigate the intra-day price fluctuations of the exchange-traded fund SPY. Based on two different transition models, one for *normal* market function and a second one for *chaotic* market fluctuations, we identify price corrections that are induced by news announcements of economic indicators."
12 | ]
13 | }
14 | ],
15 | "metadata": {
16 | "anaconda-cloud": {},
17 | "kernelspec": {
18 | "display_name": "Python 2",
19 | "language": "python",
20 | "name": "python2"
21 | },
22 | "language_info": {
23 | "codemirror_mode": {
24 | "name": "ipython",
25 | "version": 2
26 | },
27 | "file_extension": ".py",
28 | "mimetype": "text/x-python",
29 | "name": "python",
30 | "nbconvert_exporter": "python",
31 | "pygments_lexer": "ipython2",
32 | "version": "2.7.10"
33 | }
34 | },
35 | "nbformat": 4,
36 | "nbformat_minor": 0
37 | }
38 |
--------------------------------------------------------------------------------
/docs/source/tutorials/priordistributions.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Prior distributions\n",
8 | "\n",
9 | "One important aspect of Bayesian inference has not yet been discussed in this tutorial: [prior distributions](https://en.wikipedia.org/wiki/Prior_probability). In Bayesian statistics, one has to provide probability (density) values for every possible parameter value *before* taking into account the data at hand. This prior distribution thus reflects all *prior* knowledge of the system that is to be investigated. In the case that no prior knowledge is available, a *non-informative* prior in the form of the so-called [Jeffreys prior](https://en.wikipedia.org/wiki/Jeffreys_prior) allows to minimize the effect of the prior on the results. The next two sub-sections discuss how one can set custom prior distributions for the parameters of the observation model and for hyper-parameters in a hyper-study or change-point study."
10 | ]
11 | },
12 | {
13 | "cell_type": "code",
14 | "execution_count": 1,
15 | "metadata": {
16 | "collapsed": false
17 | },
18 | "outputs": [
19 | {
20 | "name": "stdout",
21 | "output_type": "stream",
22 | "text": [
23 | "+ Created new study.\n",
24 | "+ Successfully imported example data.\n"
25 | ]
26 | }
27 | ],
28 | "source": [
29 | "%matplotlib inline\n",
30 | "import matplotlib.pyplot as plt # plotting\n",
31 | "import seaborn as sns # nicer plots\n",
32 | "sns.set_style('whitegrid') # plot styling\n",
33 | "\n",
34 | "import numpy as np\n",
35 | "import bayesloop as bl\n",
36 | "\n",
37 | "# prepare study for coal mining data\n",
38 | "S = bl.Study()\n",
39 | "S.loadExampleData()"
40 | ]
41 | },
42 | {
43 | "cell_type": "markdown",
44 | "metadata": {},
45 | "source": [
46 | "## Parameter prior\n",
47 | "\n",
48 | "*bayesloop* employs a forward-backward algorithm that is based on [Hidden Markov models](http://www.cs.sjsu.edu/~stamp/RUA/HMM.pdf). This inference algorithm iteratively produces a parameter distribution for each time step, but it has to start these iterations from a specified probability distribution - the parameter prior. All built-in observation models already have a predefined prior, stored in the attribute `prior`. Here, the prior distribution is stored as a Python function that takes as many arguments as there are parameters in the observation model. The prior distributions can be looked up directly within `observationModels.py`. For the `Poisson` model discussed in this tutorial, the default prior distribution is defined in a method called `jeffreys` as\n",
49 | "```\n",
50 | "def jeffreys(x):\n",
51 | " return np.sqrt(1. / x)\n",
52 | "```\n",
53 | "corresponding to the non-informative Jeffreys prior, $p(\\lambda) \\propto 1/\\sqrt{\\lambda}$. This type of prior can also be determined automatically for arbitrary user-defined observation models, see [here](customobservationmodels.html#Sympy.stats-random-variables).\n",
54 | "\n",
55 | "### Prior functions and arrays\n",
56 | "\n",
57 | "To change the predefined prior of a given observation model, one can add the keyword argument `prior` when defining an observation model. There are different ways of defining a parameter prior in *bayesloop*: If `prior=None` is set, *bayesloop* will assign equal probability to all parameter values, resulting in a uniform prior distribution within the specified parameter boundaries. One can also directly supply a Numpy array with prior probability (density) values. The shape of the array must match the shape of the parameter grid! Another way to define a custom prior is to provide a function that takes exactly as many arguments as there are parameters in the defined observation model. *bayesloop* will then evaluate the function for all parameter values and assign the corresponding probability values.\n",
58 | "\n",
59 | "\n",
60 | "**Note:** In all of the cases described above, *bayesloop* will re-normalize the provided prior values, so they do not need to be passed in a normalized form. Below, we describe the possibility of using probability distributions from the SymPy stats module as prior distributions, which are not re-normalized by *bayesloop*.\n",
61 | "
\n",
62 | "\n",
63 | "Next, we illustrate the difference between the Jeffreys prior and a flat, uniform prior with a very simple inference example: We fit the coal mining example data set using the `Poisson` observation model and further assume the rate parameter to be static:"
64 | ]
65 | },
66 | {
67 | "cell_type": "code",
68 | "execution_count": 2,
69 | "metadata": {
70 | "collapsed": false
71 | },
72 | "outputs": [
73 | {
74 | "name": "stdout",
75 | "output_type": "stream",
76 | "text": [
77 | "+ Transition model: Static/constant parameter values. Hyper-Parameter(s): []\n",
78 | "Fit with built-in Jeffreys prior:\n",
79 | "+ Observation model: Poisson. Parameter(s): ['accident_rate']\n",
80 | "+ Started new fit:\n",
81 | " + Formatted data.\n",
82 | " + Set prior (function): jeffreys. Values have been re-normalized.\n",
83 | "\n",
84 | " + Finished forward pass.\n",
85 | " + Log10-evidence: -88.00564\n",
86 | "\n",
87 | " + Finished backward pass.\n",
88 | " + Computed mean parameter values.\n",
89 | "-----\n",
90 | "\n",
91 | "Fit with custom flat prior:\n",
92 | "+ Observation model: Poisson. Parameter(s): ['accident_rate']\n",
93 | "+ Started new fit:\n",
94 | " + Formatted data.\n",
95 | " + Set prior (function): . Values have been re-normalized.\n",
96 | "\n",
97 | " + Finished forward pass.\n",
98 | " + Log10-evidence: -87.98915\n",
99 | "\n",
100 | " + Finished backward pass.\n",
101 | " + Computed mean parameter values.\n"
102 | ]
103 | }
104 | ],
105 | "source": [
106 | "# we assume a static rate parameter for simplicity\n",
107 | "S.set(bl.tm.Static())\n",
108 | "\n",
109 | "print 'Fit with built-in Jeffreys prior:'\n",
110 | "S.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000)))\n",
111 | "S.fit()\n",
112 | "jeffreys_mean = S.getParameterMeanValues('accident_rate')[0]\n",
113 | "print('-----\\n')\n",
114 | " \n",
115 | "print 'Fit with custom flat prior:'\n",
116 | "S.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000), \n",
117 | " prior=lambda x: 1.))\n",
118 | "# alternatives: prior=None, prior=np.ones(1000)\n",
119 | "S.fit()\n",
120 | "flat_mean = S.getParameterMeanValues('accident_rate')[0]"
121 | ]
122 | },
123 | {
124 | "cell_type": "markdown",
125 | "metadata": {},
126 | "source": [
127 | "First note that the model evidence indeed slightly changes due to the different choices of the parameter prior. Second, one may notice that the posterior mean value of the flat-prior-fit does not exactly match the arithmetic mean of the data. This small deviation shows that a flat/uniform prior is not completely non-informative for a Poisson model! The fit using the Jeffreys prior, however, succeeds in reproducing the *frequentist* estimate, i.e. the arithmetic mean:"
128 | ]
129 | },
130 | {
131 | "cell_type": "code",
132 | "execution_count": 3,
133 | "metadata": {
134 | "collapsed": false
135 | },
136 | "outputs": [
137 | {
138 | "name": "stdout",
139 | "output_type": "stream",
140 | "text": [
141 | "arithmetic mean = 1.69090909091\n",
142 | "flat-prior mean = 1.7\n",
143 | "Jeffreys prior mean = 1.69090909091\n"
144 | ]
145 | }
146 | ],
147 | "source": [
148 | "print('arithmetic mean = {}'.format(np.mean(S.rawData)))\n",
149 | "print('flat-prior mean = {}'.format(flat_mean))\n",
150 | "print('Jeffreys prior mean = {}'.format(jeffreys_mean))"
151 | ]
152 | },
153 | {
154 | "cell_type": "markdown",
155 | "metadata": {},
156 | "source": [
157 | "### SymPy prior\n",
158 | "\n",
159 | "The second option is based on the [SymPy](http://www.sympy.org/en/index.html) module that introduces symbolic mathematics to Python. Its sub-module [sympy.stats](http://docs.sympy.org/dev/modules/stats.html) covers a wide range of discrete and continuous random variables. The keyword argument `prior` also accepts a list of `sympy.stats` random variables, one for each parameter (if there is only one parameter, the list can be omitted). The multiplicative joint probability density of these random variables is then used as the prior distribution. The following example defines an exponential prior for the `Poisson` model, favoring small values of the rate parameter: "
160 | ]
161 | },
162 | {
163 | "cell_type": "code",
164 | "execution_count": 4,
165 | "metadata": {
166 | "collapsed": false
167 | },
168 | "outputs": [
169 | {
170 | "name": "stdout",
171 | "output_type": "stream",
172 | "text": [
173 | "+ Observation model: Poisson. Parameter(s): ['accident_rate']\n",
174 | "+ Started new fit:\n",
175 | " + Formatted data.\n",
176 | " + Set prior (sympy): exp(-x)\n",
177 | "\n",
178 | " + Finished forward pass.\n",
179 | " + Log10-evidence: -87.94640\n",
180 | "\n",
181 | " + Finished backward pass.\n",
182 | " + Computed mean parameter values.\n"
183 | ]
184 | }
185 | ],
186 | "source": [
187 | "import sympy.stats\n",
188 | "S.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000), \n",
189 | " prior=sympy.stats.Exponential('expon', 1)))\n",
190 | "S.fit()"
191 | ]
192 | },
193 | {
194 | "cell_type": "markdown",
195 | "metadata": {},
196 | "source": [
197 | "Note that one needs to assign a name to each `sympy.stats` variable. In this case, the output of *bayesloop* shows the mathematical formula that defines the prior. This is possible because of the symbolic representation of the prior by `SymPy`.\n",
198 | "\n",
199 | "\n",
200 | "**Note:** The support interval of a prior distribution defined via SymPy can deviate from the parameter interval specified in *bayesloop*. In the example above, we specified the parameter interval ]0, 6[, while the exponential prior has the support ]0, $\\infty$[. SymPy priors are not re-normalized with respect to the specified parameter interval. Be aware that the resulting model evidence value will only be correct if no parameter values outside of the parameter boundaries gain significant probability values. In most cases, one can simply check whether the parameter distribution has sufficiently *fallen off* at the parameter boundaries.\n",
201 | "
\n",
202 | "\n",
203 | "## Hyper-parameter priors\n",
204 | "\n",
205 | "As shown before, [hyper-studies](hyperstudy.html) and [change-point studies](changepointstudy.html) can be used to determine the full distribution of hyper-parameters (the parameters of the transition model). As for the time-varying parameters of the observation model, one might have prior knowledge about the values of certain hyper-parameters that can be included into the study to refine the resulting distribution of these hyper-parameters. Hyper-parameter priors can be defined just as regular priors, either by an arbitrary function or by a list of `sympy.stats` random variables.\n",
206 | "\n",
207 | "In a first example, we return to the simple change-point model of the coal-mining data set and perform to fits of the change-point: first, we specify no hyper-prior for the time step of our change-point, assuming equal probability for each year in our data set. Second, we define a Normal distribution around the year 1920 with a (rather unrealistic) standard deviation of 5 years as the hyper-prior using a SymPy random variable. For both fits, we plot the change-point distribution to show the differences induced by the different priors:"
208 | ]
209 | },
210 | {
211 | "cell_type": "code",
212 | "execution_count": 5,
213 | "metadata": {
214 | "collapsed": false
215 | },
216 | "outputs": [
217 | {
218 | "name": "stdout",
219 | "output_type": "stream",
220 | "text": [
221 | "Fit with flat hyper-prior:\n",
222 | "+ Created new study.\n",
223 | " --> Hyper-study\n",
224 | " --> Change-point analysis\n",
225 | "+ Successfully imported example data.\n",
226 | "+ Observation model: Poisson. Parameter(s): ['accident_rate']\n",
227 | "+ Transition model: Change-point. Hyper-Parameter(s): ['tChange']\n",
228 | "+ Detected 1 change-point(s) in transition model: ['tChange']\n",
229 | "+ Set hyper-prior(s): ['uniform']\n",
230 | "+ Started new fit.\n",
231 | " + 109 analyses to run.\n",
232 | "\n",
233 | " + Computed average posterior sequence\n",
234 | " + Computed hyper-parameter distribution\n",
235 | " + Log10-evidence of average model: -75.71555\n",
236 | " + Computed local evidence of average model\n",
237 | " + Computed mean parameter values.\n",
238 | "+ Finished fit.\n"
239 | ]
240 | },
241 | {
242 | "data": {
243 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAgEAAAERCAYAAADi2HRnAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3X9wFPX9x/HXHccm0QuJKEQEGtCSilooCSqiIKBMaaUg\nkJCAgEp0quOPsVLHMiqGaoi1OFp/pIw6Kj/N1OIPiGOtaMSfRblOgqgkFhU1HoLyI7njkiPcfv/g\ny0kgOVbNJoHP8/FXbj/7uXvfeza5V3b3dj22bdsCAADG8XZ0AQAAoGMQAgAAMBQhAAAAQxECAAAw\nFCEAAABDEQIAADCUz80nt21bRUVFqq6ulmVZKi4uVt++fePj5eXlWrJkiXw+n7KyslRUVCRJmjx5\nsvx+vySpT58+WrBggZtlAgBgJFdDwJo1axSNRlVWVqaqqiqVlJSotLRUktTY2KgHH3xQ5eXlsixL\nc+bMUUVFhc4//3xJ0pIlS9wsDQAA47l6OCAQCGjEiBGSpMGDB2vjxo3xMcuyVFZWJsuyJElNTU1K\nSkrSpk2btGfPHhUWFuqKK65QVVWVmyUCAGAsV/cEhEIhpaamfv9iPp9isZi8Xq88Ho+6d+8uSVq6\ndKkikYiGDx+umpoaFRYWKi8vT59//rmuvvpqvfzyy/J6OX0BAIC25GoI8Pv9CofD8ccHAsABtm3r\n3nvv1ZYtW/Twww9Lkvr166fMzMz4z+np6dq+fbsyMjLcLBUAAOO4GgKys7NVUVGhcePGqbKyUllZ\nWc3G77jjDiUnJ8fPE5CklStXqqamRnfeeae++eYbhcNh9ejRI+HrBAIBV+oHAKCzysnJ+cnP4XHz\nBkIHfztAkkpKSvThhx8qEonozDPPVG5ubvxNeDwezZo1S6NGjdKtt96qYDAor9erP/7xj/rVr36V\n8HUCgUCbNONYR5+co1fO0Cdn6JNz9MqZtuqTq3sCPB6P5s+f32xZ//794z9/9NFHLc6777773CwL\nAACIiwUBAGAsQgAAAIYiBAAAYChCAAAAhnL1xEDgaBeLxRQMBhOu06tXLy5mBeCoRAgAEggGg5q2\neJqsNKvF8ejuqJ6+/Gn17t27nSsDgJ+OEAAcgZVmKaV7SkeXAQBtjn2YAAAYihAAAIChCAEAABiK\nEAAAgKEIAQAAGIoQAACAoQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGIoQAACAoQgBAAAYihAA\nAIChCAEAABiKEAAAgKEIAQAAGIoQAACAoQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGIoQAACA\noQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGIoQAACAoXxuPrlt2yoqKlJ1dbUsy1JxcbH69u0b\nHy8vL9eSJUvk8/mUlZWloqKiI84BAABtw9U9AWvWrFE0GlVZWZnmzJmjkpKS+FhjY6MefPBBLVu2\nTCtWrFB9fb0qKioSzgEAAG3H1RAQCAQ0YsQISdLgwYO1cePG+JhlWSorK5NlWZKkpqYmJSUlJZwD\nAADajqshIBQKKTU1Nf7Y5/MpFotJkjwej7p37y5JWrp0qSKRiIYPH55wDgAAaDuunhPg9/sVDofj\nj2OxmLze73OHbdu69957tWXLFj388MOO5rQmEAi0YeXHLvrkXCAQ0LZt21RfV6+oN9riOo11jdqw\nYYO2bt3aztV1HmxTztAn5+hV+3E1BGRnZ6uiokLjxo1TZWWlsrKymo3fcccdSk5OVmlpqeM5rcnJ\nyWnT2o9FgUCAPjl0oFe1tbVK/TRVKekpLa4XiUU0aNAg9e7du50r7BzYppyhT87RK2faKii5GgLG\njh2rt99+WwUFBZKkkpISlZeXKxKJ6Mwzz9Szzz6rnJwczZw5Ux6PR7NmzWpxDgAAaHuuhgCPx6P5\n8+c3W9a/f//4zx999FGL8w6dAwAA2h4XCwIAwFCEAAAADEUIAADAUIQAAAAMRQgAAMBQhAAAAAxF\nCAAAwFCEAAAADEUIAADAUK5eMRA4GsRiMQWDwWbLtm3bptra2v3L7Q4qDABcRgiA8YLBoKYtniYr\nzYovq6+rV+qnqar/ol5WT0spavkGQgBwNCMEAJKsNEsp3b//oI96o0pJT1HDroYOrAoA3MU5AQAA\nGIoQAACAoQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGIoQAACAoQgBAAAYihAAAIChCAEAABiK\nEAAAgKEIAQAAGIoQAACAoQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGIoQAACAoQgBAAAYihAA\nAIChCAEAABiKEAAAgKF8bj65bdsqKipSdXW1LMtScXGx+vbt22ydSCSi2bNna8GCBerfv78kafLk\nyfL7/ZKkPn36aMGCBW6WCQCAkVwNAWvWrFE0GlVZWZmqqqpUUlKi0tLS+PjGjRt155136ptvvokv\ni0ajkqQlS5a4WRoAAMZz9XBAIBDQiBEjJEmDBw/Wxo0bm43v3btXpaWlOvXUU+PLNm3apD179qiw\nsFBXXHGFqqqq3CwRAABjubonIBQKKTU19fsX8/kUi8Xk9e7PHkOGDJG0/7DBAcnJySosLFReXp4+\n//xzXX311Xr55ZfjcwAAQNtw9Mn6+OOPa/v27T/4yf1+v8LhcPzxwQGgNf369dOECRPiP6enp/+o\n1wYAAIk52hPQ0NCgGTNmKDMzU5MmTdLFF1+srl27HnFedna2KioqNG7cOFVWViorK+uIc1auXKma\nmpr4uQLhcFg9evQ44rxAIODkrRiPPh1u27Ztqq+rV9QbbbZ8967dCteHpSbJs8vT4tzGukZt2LBB\nW7dubY9SOyW2KWfok3P0qv04CgHXX3+9rr/+eq1fv17l5eV66KGHNGzYMOXl5WngwIGtzhs7dqze\nfvttFRQUSJJKSkpUXl6uSCSivLy8+Hoez/d/YHNzczV37lxNnz5dXq9XCxYscHQoICcnx8lbMVog\nEKBPLaitrVXqp6lKSU+JL9u9a7fS0tMU2xGTJ9mjtPS0FudGYhENGjRIvXv3bq9yOxW2KWfok3P0\nypm2CkqOzwmIRCL66quv9OWXX8rr9apbt266++67lZ2drTlz5rQ4x+PxaP78+c2WHfga4MEO/iZA\n165dtXDhQqdlAY7EYjEFg8EWx4LBoGS3OAQAxzRHIWDOnDlat26dRo4cqWuvvVZDhw6VtP/rfBdc\ncEGrIQDoLILBoKYtniYrzTpsrP6Lelk9LaUopYWZAHDschQCzjvvPN1111067rjj4sui0agsy9KL\nL77oWnFAW7LSLKV0P/yDvmFXQwdUAwAdz9G3A5555plmASAWi2nKlCmS5OikPQAA0Pkk3BMwa9Ys\nvffee5Kk008//ftJPp/GjBnjbmUAAMBVCUPAgRP27r77bt1+++3tUhAAAGgfCUNARUWFRo8erTPP\nPFPPP//8YeOXXnqpa4UBAAB3J
QwBH3zwgUaPHh0/JHAoQgAAAEevhCHgxhtvlLT/Ij8AAODYkjAE\njBkzptnV/A716quvtnlBAACgfSQMAUuXLm2vOgAAQDtLGAJqamo0evToFk8KlGTs9dIBADgWODox\ncN26dS2Oc2IgAABHrx90YmAoFFLXrl2VlJTkfmUAAMBVju4dUFNTo1tvvVVff/21JOnUU0/Vvffe\nq759+7paHAAAcI+jewfMmzdPN910k9atW6d169Zp9uzZmjt3rtu1AQAAFzkKAY2Njbrwwgvjj8eO\nHatQKORaUQAAwH0JQ8DXX3+tr7/+WqeffroeffRR7dixQ7t379ayZcs0dOjQ9qoRAAC4IOE5ATNm\nzJDH45Ft21q3bp3KysriYx6Ph5sKAQBwFEsYAl577bX2qgMAALQzR98O+PTTT7VixQrt2bNHtm0r\nFovpq6++0vLly92uDwAAuMTRiYF/+MMf1K1bN3388ccaOHCgvvvuOw0YMMDt2gAAgIsc7QmIxWK6\n8cYb1dTUpDPOOEMFBQUqKChwuzYAAOAiR3sCUlJSFI1G1a9fP3344YeyLEuNjY1u1wYAAFzkKARM\nmDBB11xzjUaNGqVly5bpqquuUkZGhtu1AQAAFzk6HDBjxgxdeuml8vv9Wrp0qT744AOdf/75btcG\nAABc5CgE7N27V88995zee+89+Xw+DR8+XCkpKW7XBgAAXOQoBPz5z39WKBTSpEmTZNu2nn/+eVVX\nV3OxIAAAjmKOQkBlZaVWr14dfzx69GhNnDjRtaIAAID7HJ0YmJGRoS+//DL+eNu2berRo4drRQEA\nAPcl3BMwc+ZMeTwe7dy5UxMmTNDZZ58tr9er//73v1wsCACAo1zCEHDDDTe0uHz27NmuFAMAANpP\nwhBwzjnnxH9eu3at/vOf/6ipqUnnnnuuLr74YteLAwAA7nF0TsBjjz2mhx9+WL169VKfPn20aNEi\nLVq0yO3aAACAixx9O2DVqlV65plnlJycLEmaOnWqJk+erGuuucbV4gAAgHsc7QmwbTseACQpKSlJ\nPp+j/AAAADopR5/kw4YN0w033KBJkyZJkp5//nmde+65rhYGAADc5SgE3HbbbXr66af1/PPPy7Zt\nDRs2TPn5+W7XBgAAXOQoBBQWFuqJJ57Q9OnTf9CT27atoqIiVVdXy7IsFRcXq2/fvs3WiUQimj17\nthYsWKD+/fs7mgMAAH46R+cENDQ0KBgM/uAnX7NmjaLRqMrKyjRnzhyVlJQ0G9+4caNmzJjR7GqE\nR5oDAADahqM9ATt27NCYMWN04oknKikpKb781VdfTTgvEAhoxIgRkqTBgwdr48aNzcb37t2r0tJS\n3XLLLY7nAACAtuEoBPz973+PXyyoS5cuuvDCC3XeeecdcV4oFFJqaur3L+bzKRaLyevdvwNiyJAh\nkvYfNnA6BwAAtA1HIWDRokVqbGzU1KlTFYvF9MILL+iTTz7RbbfdlnCe3+9XOByOP3byYf5j5kj7\n9yDgyI7mPsViMX377betjp900kmtbivbtm1TfV29ot7oYWPh+rDUJHl2eZot371rd6tjBzTWNWrD\nhg3aunXrD3gnx5ajeZtqT/TJOXrVfhyFgKqqKv3rX/+KPx4zZozGjx9/xHnZ2dmqqKjQuHHjVFlZ\nqaysLFfmSFJOTo6j9UwWCASO6j7V1tZqzitzZKVZh41Fd0f19OVPq3fv3q3OTf00VSnpKYeNxXbE\n5En2KC09Lb5s967dSktPa3HsYJFYRIMGDWr1dY91R/s21V7ok3P0ypm2CkqOQkCvXr20ZcsWZWZm\nSpK+/fZbZWRkHHHe2LFj9fbbb6ugoECSVFJSovLyckUiEeXl5cXX83g8CecAB1hpllK6H/5BDgD4\n4RyFgKamJk2cOFFDhw6Vz+dTIBBQjx49NGvWLEnSkiVLWpzn8Xg0f/78Zsv69+9/2HoHz29pDgAA\naHuOQsChtxTmVsIAABz9HIWAg28pDOB7dsxOeA2NXr168c0WAJ0WdwECfoLGukZdt/o6pfZMPWzs\nSCcrAkBHIwQAP5HVjZMVARyd2E8JAIChCAEAABiKEAAAgKEIAQAAGIoTA9FpxGKxhF+3CwaDkt3q\nMADgByIEoNMIBoOatnhai/cGkKT6L+pl9bSUIs7EB4C2QAhAp5Lo3gANuxrauRoAOLZxTgAAAIYi\nBAAAYChCAAAAhiIEAABgKEIAAACGIgQAAGAoQgAAAIYiBAAAYChCAAAAhiIEAABgKEIAAACGIgQA\nAGAoQgAAAIYiBAAAYChCAAAAhiIEAABgKEIAAACGIgQAAGAoQgAAAIYiBAAAYChCAAAAhiIEAABg\nKEIAAACGIgQAAGAoQgAAAIYiBAAAYCifm09u27aKiopUXV0ty7JUXFysvn37xsdfe+01lZaWyufz\nacqUKcrLy5MkTZ48WX6/X5LUp08fLViwwM0yAQAwkqshYM2aNYpGoyorK1NVVZVKSkpUWloqSWpq\natI999yjZ599VklJSZo2bZouuuii+If/kiVL3CwNAADjuXo4IBAIaMSIEZKkwYMHa+PGjfGxzZs3\nKzMzU36/X127dlVOTo7ef/99bdq0SXv27FFhYaGuuOIKVVVVuVkiAADGcnVPQCgUUmpq6vcv5vMp\nFovJ6/UeNnb88cervr5ep556qgoLC5WXl6fPP/9cV199tV5++WV5vZy+AABAW3I1BPj9foXD4fjj\nAwHgwFgoFIqPhcNhdevWTZmZmfrZz34mSerXr5/S09O1fft2ZWRkJHytQCDgwjs49nTmPm3btk31\ndfWKeqMtjofrw1KT5NnlOWyssa5RGzZs0NatW3/wc7f2vLt37U74mj+1pmNFZ96mOhP65By9aj+u\nhoDs7GxVVFRo3LhxqqysVFZWVnzstNNO05YtW1RXV6fk5GStX79ehYWFWrlypWpqanTnnXfqm2++\nUTgcVo8ePY74Wjk5OW6+lWNCIBDo1H2qra1V6qepSklPaXE8tiMmT7JHaelph41FYhENGjRIvXv3\n/sHP3dLz7t61W2npaQlf86fWdCzo7NtUZ0GfnKNXzrRVUHI1BIwdO1Zvv/22CgoKJEklJSUqLy9X\nJBJRXl6e5s6dq9mzZ8u2beXm5qpnz57Kzc3V3LlzNX36dHm9Xi1YsIBDAQAAuMDVEODxeDR//vxm\ny/r37x//edSoURo1alSz8a5du2rhwoVulgUAAMTFggAAMBYhAAAAQxECAAAwFCEAAABDEQIAADAU\nIQAAAEO5+hVBwGR2zFYwGEy4Tq9evbgOBoAOQwgAXNJY16jrVl+n1J6pLY5Hd0f19OVPH9NXFATQ\nuRECcEw40n/dwWBQstuxoP9ndbOU0r3lyyADQEcjBOCYcKT/uuu/qJfV01KK+EAGgAMIAThmJPqv\nu2FXQztX465YLJZwzwfnGgBwghAAHIWCwaCmLZ4mK806bIxzDQA4RQgAjlJWGucbAPhp2F
8IAICh\nCAEAABiKEAAAgKEIAQAAGIoQAACAoQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGIoQAACAoQgB\nAAAYihsIAR3EjtncDhhAhyIEAB2ksa5R162+Tqk9Uw8b43bAANoDIQDoQFY3bgcMoOOwrxEAAEMR\nAgAAMBQhAAAAQxECAAAwFCEAAABD8e0AtKtYLNbqd+ODwaBkt3NBAGAwQgDaVTAY1LTF02SlWYeN\n1X9RL6unpRTxlbkjXUiIwASgLRAC0O6stJa/G9+wq6EDqumcEl1ISCIwAWgbroYA27ZVVFSk6upq\nWZal4uJi9e3bNz7+2muvqbS0VD6fT1OmTFFeXt4R5wCmSHQhoUSBicsRA3DK1RCwZs0aRaNRlZWV\nqaqqSiUlJSotLZUkNTU16Z577tGzzz6rpKQkTZs2TRdddJECgUCrcwAcGZcjBuCUqyEgEAhoxIgR\nkqTBgwdr48aN8bHNmzcrMzNTfr9fkjR06FC99957qqysbHUO2keik/cOjEtq8b/JRGMSx7LbC5cj\nBuCEqyEgFAopNfX7/0Z8Pp9isZi8Xu9hY8cdd5zq6+sVDodbnYP2kejkPWn/8Wglq8X/NBONHRjn\nWHbHOdKhgiOFOA4lAMcWV0OA3+9XOByOPz74w9zv9ysUCsXHwuGw0tLSEs5JpLa2tg0rPzZt27bN\nUZ8SfUi0hWhdVJHkyGHL99btlaJqcexI4209t7GuUZFYpFPV1BZzQ1+FdPXyq+U/0d/i84aCIclS\ni+PRUFSP5D6iXr16xZc53aZMR5+co1fty9UQkJ2drYqKCo0bN06VlZXKysqKj5122mnasmWL6urq\nlJycrPXr16uwsFCSWp2TyNatW115D8eSnj17OuqTx+PR/b++v/UVhiWYnGjsaJzbGWv6KXOP9LwO\nHLwNOd2mTEefnKNX7ctj27ZrR2gPPtNfkkpKSvThhx8qEokoLy9Pr7/+uh5++GHZtq3c3FxNmzat\nxTn9+/d3q0QAAIzlaggAAACdF2f4AABgKEIAAACGIgQAAGAoQgAAAIbq9CGgqqpKM2fOlCR9/PHH\nys/P12WXXabbbrtNkrRp0ybNnDlTs2bN0syZMzVo0CC99dZbamxs1I033qjLLrtMv//977Vz586O\nfBuuO1KfJOmJJ57Q5MmTlZeXpzVr1kiScX2SnPXq0Ucf1aWXXqqZM2fq9ddfl2Rerw7u04cffqi8\nvDzNmDFDd999d3ydf/zjH5oyZYoKCgrok1rvkyTt2LFDv/71rxWNRiWZ1yfJWa+eeuopTZ06Vfn5\n+XrkkUckmdcrJ31avny5cnNzNXXqVL300kuSfmSf7E7sscces8ePH2/n5+fbtm3b1113nf3GG2/Y\ntm3bc+bMsSsqKpqt/9JLL9m33HKLbdu2/eSTT9oPPfSQbdu2/eKLL9p33313+xXezpz0qa6uzh41\napTd1NRk79692x49erRt22b1ybad9aq6utqeOHGiHY1G7cbGRnvSpEl2Q0ODUb06tE+TJ0+2Kysr\nbdu27fvvv99etWqVvX37dnv8+PH23r177fr6env8+PF2NBqlT4f0ybZt+80337QvvfRSOycnx25s\nbLRtm9+9g3v1wAMP2KtWrbK/+OILe8qUKfE5BQUFdnV1tVG9crJN7dixwx4/fry9b98+OxQK2Rde\neKFt2z9um+rUewIyMzPjSVCSBg4cqJ07d8q2bYXDYfl831/rKBKJ6KGHHor/NxcIBDRy5EhJ0siR\nI/Xuu++2b/HtyEmfUlJS1Lt3b4XDYe3Zsyd+FUaT+iQ569XmzZt1zjnnqGvXrrIsS5mZmdq0aZNR\nvTq0T998840GDx4saf9FwNavX68NGzYoJydHPp9Pfr9f/fr1o0+H9CkQCEiSunTpoqeeekppaWnx\ndU3qk5S4V0OGDFEgENApp5yixx9/PL7Ovn37lJSUZFSvnGxTJ5xwgl544QV5vV5t375dSUlJkn7c\nNtWpQ8DYsWPVpUuX+ON+/fqpuLhYl1xyiXbs2KFzzjknPvbPf/5Tv/nNb+K/ZKFQKH5zouOPP77Z\nJYqPNU77lJGRod/+9reaMmVKfFeTSX2SnPUqKytL69ev1549e7Rz505VVlYqEokY1atD+9S3b1+t\nX79e0v4rejY0NLR4/49QKKRwOEyftL9Pkcj+Szefd955SktLk33QZVlM2p4kZ73q0qWL0tPTJUl/\n+ctfdMYZZygzM9OoXjndprxer5YvX678/HxNmDBB0o/bply9bHBbKy4u1ooVK3Taaadp+fLluuee\nezRv3jxJ0urVq/XQQw/F1z34HgSH3pToWNdSny644AJ9++23qqiokG3bKiws1JAhQ5Sammpsn6TW\nt6np06frqquuUq9evTRo0CCdcMIJRvdqwYIFKi4u1r59+5STk6OkpCSlpqYedv+Pbt26Gf2711Kf\nDubxeOI/m9wnqfVeRaNRzZ07V6mpqbrzzjslmd2rRNvUZZddpvz8fF111VVat27dj/ob1an3BBwq\nPT09nnIyMjJUV1cnaX/62bt3rzIyMuLrZmdna+3atZKktWvXaujQoe1fcAdpqU9paWlKTk6O7+I+\n8Afc5D5JLfdq586dCofDWrFihebPn6+tW7cqKytLQ4YMMbZXa9eu1X333acnn3xSu3bt0vDhw/XL\nX/5SgUBA0WhU9fX1+vTTTzVgwAD6dEifDnbwngDTf/da69W1116rgQMHqqioKB6aTO5VS3367LPP\ndMMNN0jaf6gpKSlJXbp0+VF9Oqr2BNx111266aab5PP5ZFmW7rrrLknSZ599pt69ezdbd9q0abr1\n1ls1ffp0WZal++67ryNK7hAt9emUU07RWWedpalTp8rr9SonJ0fDhw9Xdna2sX2SWu7VCSecoM2b\nNys3N1eWZemWW26Rx+MxepvKzMzU5ZdfrpSUFJ177rnx444zZ87U9OnTZdu2br75ZlmWRZ9a6NMB\nB+8JMLlPUsu9WrNmjdavX6+9e/dq7dq18ng8mjNnjtG9am2bOv3005Wfny+Px6ORI0dq6NChOuus\ns35wn7h3AAAAhjqqDgcAAIC2QwgAAMBQhAAAAAxFCAAAwFCEAAAADEUIAADAUIQAwBChUEjXXXed\nJGnv3r164IEH9Lvf/U6TJk1SQUFB/DrjtbW1GjNmTEeWCqCdHFUXCwLw4+3atUubNm2SJP3pT39S\ncnKyVq5cKcuyVFNTo9mzZ2vx4sVKTk5udlEbAMcuLhYEGOLaa6/VW2+9pQsvvFDvvPOO3n333WbX\nIX///ffVu3dv2batqVOnatiwYaqpqVFaWpoeeeQRpaWladmyZVq1apUikYi8Xq/uv/9+nXrqqRoz\nZowmTpyot956Sw0NDfGbv9TU1Gju3LmKxWLKycnRG2+8oX//+9/67rvvNG/ePG3dulVer1c333yz\nzjvvvA7sDmAmDgcAhrj99tvVs2dPTZgwQQMGD
Djs5jZnn322TjnlFEnSjh07dOWVV2r16tXq3r27\nXnzxRYVCIb322mtatmyZVq9erYsuukgrVqyIz+/evbueeeYZ5efna9GiRZL273G46aab9Nxzz6lP\nnz7at2+fpP03bsrNzdXKlStVWlqqefPmac+ePe3UCQAHcDgAMIzX61UsFku4TkZGhs466yxJ0oAB\nA7Rz5075/X4tXLhQ5eXl+vzzz/Xmm29q4MCB8TkXXHBBfP1XXnlFu3fvVm1trUaMGCFJys3N1dKl\nSyVJ77zzjj777DP97W9/k7T/vvFffPGFTj/99DZ/vwBaRwgADHPWWWdp8+bNikajsiwrvnzx4sXq\n0aOHBg8e3Ox+5h6PR7Zta+vWrZo5c6ZmzJihkSNH6qSTTtLHH38cX+/AnoUD6x/8HIeKxWJavHix\nunXrJknatm2bevTo0dZvFcARcDgAMITP59O+fft08skna/To0brrrrsUjUYlSR999JEef/xxZWVl\nSWp+y9sDPvjgg/gdzQYNGqQ33ngj4R4Fv9+vzMxMvfnmm5KkVatWxU84HDZsmJYvXy5J+t///qcJ\nEyYoEom06fsFcGTsCQAMceKJJ+rkk0/W5ZdfrkcffVR//etfNXHiRCUlJSk5OVkLFy7Uz3/+c9XW\n1rb47YDkqdIHAAAAtUlEQVQLLrhATz/9tC655BIlJSVp0KBB+uSTTySp1W8TlJSU6LbbbtP999+v\nX/ziF0pOTpa0//yEefPmacKECZKkhQsX6rjjjnPpnQNoDd8OAOCaRx55RPn5+TrppJP0yiuvaPXq\n1XrwwQc7uiwA/489AQBcc8opp+jKK6+Uz+dTWlqaiouLO7okAAdhTwAAAIbixEAAAAxFCAAAwFCE\nAAAADEUIAADAUIQAAAAMRQgAAMBQ/wf6rNXeFefZaAAAAABJRU5ErkJggg==\n",
244 | "text/plain": [
245 | ""
246 | ]
247 | },
248 | "metadata": {},
249 | "output_type": "display_data"
250 | },
251 | {
252 | "name": "stdout",
253 | "output_type": "stream",
254 | "text": [
255 | "-----\n",
256 | "\n",
257 | "Fit with custom normal prior:\n",
258 | "+ Transition model: Change-point. Hyper-Parameter(s): ['tChange']\n",
259 | "+ Detected 1 change-point(s) in transition model: ['tChange']\n",
260 | "+ Set hyper-prior(s): ['sqrt(2)*exp(-(x - 1920)**2/50)/(10*sqrt(pi))']\n",
261 | "+ Started new fit.\n",
262 | " + 109 analyses to run.\n",
263 | "\n",
264 | " + Computed average posterior sequence\n",
265 | " + Computed hyper-parameter distribution\n",
266 | " + Log10-evidence of average model: -80.50692\n",
267 | " + Computed local evidence of average model\n",
268 | " + Computed mean parameter values.\n",
269 | "+ Finished fit.\n"
270 | ]
271 | },
272 | {
273 | "data": {
274 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAgEAAAERCAYAAADi2HRnAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3Xtw1NX9//HXLssm0Q2JCEYMfANaUlELJUFFCggoU1op\n14QEBFSiUx0vY6WOZVQIhRBrcbSK0UFH5Z4pRRHiWBWJeC3KdhJEJbF4j0FQIMkuS5awn98f/FgJ\nSTYr5pPbeT7+yu7Zszn73oW89nzO53wclmVZAgAAxnG29QAAAEDbIAQAAGAoQgAAAIYiBAAAYChC\nAAAAhiIEAABgKJedT25ZlnJzc1VWVia32628vDz16dMn3F5UVKSVK1fK5XIpNTVVubm5kqQpU6bI\n4/FIknr37q0lS5bYOUwAAIxkawjYsmWLgsGgCgsLVVpaqvz8fBUUFEiSamtr9eijj6qoqEhut1tz\n585VcXGxfvOb30iSVq5caefQAAAwnq2HA7xer0aMGCFJGjRokHbt2hVuc7vdKiwslNvtliTV1dUp\nJiZGu3fv1uHDh5WTk6Prr79epaWldg4RAABj2ToT4PP5FB8f/+Mvc7kUCoXkdDrlcDjUvXt3SdKq\nVasUCAQ0bNgwlZeXKycnR5mZmfriiy9000036ZVXXpHTyfIFAABakq0hwOPxyO/3h2+fCAAnWJal\nBx98UF9++aWWLVsmSerbt69SUlLCPycmJmr//v1KSkqyc6gAABjH1hCQlpam4uJijRs3TiUlJUpN\nTa3Xfv/99ys2Nja8TkCSNmzYoPLyci1YsEDfffed/H6/evbsGfH3eL1eW8YPAEB7lZ6e/rOfw2Hn\nBYROPjtAkvLz8/XRRx8pEAjo4osvVkZGRvhFOBwOzZ49W6NGjdI999yjyspKOZ1O/fnPf9avf/3r\niL/H6/W2SDE6O+oUPWoVHeoUHeoUPWoVnZaqk60zAQ6HQwsXLqx3X79+/cI/f/zxx432e+ihh+wc\nFgAAEJsFAQBgLEIAAACGIgQAAGAoQgAAAIYiBAAAYChCAAAAhiIEAABgKEIAAACGIgQAAGAoQgAA\nAIYiBAAAYChCAAAAhrL1AkIAYKJQKKTKysrw7X379qmioiJ8u1evXnI6+Q6GtkcIAIAWVllZqekr\npsud4JYk1VTXKP6zeElSsCqoddetU3JyclsOEZBECAAAW7gT3IrrHidJCjqDikuMa+MRAQ0xHwUA\ngKEIAQAAGIoQAACAoQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGIoQAACAoQgBAAAYihAAAICh\nCAEAABiKEAAAgKEIAQAAGIoQAACAoQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGIoQAACAoQgB\nAAAYymXnk1uWpdzcXJWVlcntdisvL099+vQJtxcVFWnlypVyuVxKTU1Vbm5us30AAEDLsHUmYMuW\nLQoGgyosLNTcuXOVn58fbqutrdWjjz6q1atXa+3ataqpqVFxcXHEPgAAoOXYGgK8Xq9GjBghSRo0\naJB27doVbnO73SosLJTb7ZYk1dXVKSYmJmIfAADQcmwNAT6fT/Hx8eHbLpdLoVBIkuRwONS9e3dJ\n0qpVqxQIBDRs2LCIfQAAQMuxdU2Ax+OR3+8P3w6FQnI6f8wdlmXpwQcf1Jdffqlly5ZF1acpXq+3\nBUfeeVGn6FGr6FCnhvbt26ea6hoFncHwfVWHqiRJtdW12rlzp/bu3dtWw2v3+Ey1HltDQFpamoqL\nizVu3DiVlJQoNTW1Xvv999+v2NhYFRQURN2nKenp6S069s7I6/VSpyhRq+hQp8ZVVFQo/rN4xSXG\nSToeABISEyRJgVBAAwcOVHJyclsOsd3iMxWdlgpKtoaAsWPH6p133lF2drYkKT8/X0VFRQoEArr4\n4ov1/PPPKz09XbNmzZLD4dDs2bMb7QMAAFqerSHA4XBo4cKF9e7r169f+OePP/640X6n9gEAAC2P\nzYIAADAUIQAAAEMRAgAAMBQhAAAAQxECAAAwFCEAAABDEQIAADAUIQAAAEMRAgAAMBQhAAAAQxEC\nAAAwFCEAAABDEQIAADAUIQAAAEMRAgAAMBQhAAAAQ7naegAAfrpQKKTKysom23v16iWnk4wPIDJC\nANABVVZWavqK6XInuBu0BauCWnfdOiUnJ7fByAB0JIQAoINyJ7gV1z2urYcBoANjvhAAAEMRAgAA\nMBQhAAAAQxECAAAwFCEAAABDEQIAADAUIQAAAEMRAgAAMBQhAAAAQxECAAAwFCEAAABDEQIAADAU\nIQAAAEMRAgAAMBQhAAAAQxECAAAwFCEAAABDuex8csuylJubq7KyMrndbuXl5alPnz71HhMIBDRn\nzhwtWbJE/fr1kyRNmTJFHo9HktS7d28tWbLEzmECAGAkW0PAli1bFAwGVVhYqNLSUuXn56ugoCDc\nvmvXLi1YsEDfffdd+L5gMChJWrlypZ1DAwDAeLYeDvB6vRoxYoQkadCgQdq1a1e99qNHj6qgoEDn\nn39++L7du3fr8OHDysnJ0fXXX6/S0lI7hwgAgLFsnQnw+XyKj4//8Ze5XAqFQnI6j2ePwYMHSzp+\n2OCE2NhY5eTkKDMzU1988YVuuukmvfLKK+E+AACgZUT1l/Xpp5/W/v37f/KTezwe+f3+8O2TA0BT\n+vbtqwkTJoR/TkxMPK3fDQAAIotqJuDIkSOaOXOmUlJSNHnyZF199dXq2rVrs/3S0tJUXFyscePG\nqaSkRKmpqc322bBhg8rLy8NrBfx+v3r27NlsP6/XG81LMR51il57rtW+fftUU12joDPYoK22ulY7\nd+7U3r17W2Us7blObaWx96fqUJWk1n9/OiI+U60nqhBw22236bbbbtOOHTtUVFSkxx57TEOHDlVm\nZqYGDBjQZL+xY8fqnXfeUXZ2tiQpPz9fRUVFCgQCyszMDD/O4XCEf87IyNC8efM0Y8YMOZ1OLVmy\nJKpDAenp6dG8FKN5vV7qFKX2XquKigrFfxavuMS4Bm2BUEADBw5UcnKy7eNo73VqK6e+P1WHqpSQ\nmCCpdd+fjojPVHRaKihFvSYgEAjom2++0ddffy2n06lu3bpp8eLFSktL09y5cxvt43A4tHDhwnr3\nnTgN8GQnnwnQtWtXLV26NNphAQCA0xRVCJg7d662b9+ukSNH6pZbbtGQIUMkHT+db/jw4U2GAAAA\n0H5FFQKuuOIKLVq0SGeccUb4vmAwKLfbrZdeesm2wQEAAPtEdXbA+vXr6wWAUCikqVOnSlJUi/YA\nAED7E3EmYPbs2Xr//fclSRdeeOGPnVwujRkzxt6RAQAAW0UMAScW7C1evFj33XdfqwwIAAC0jogh\noLi4WKNHj9bFF1+sjRs3NmifNGmSbQMDAAD2ihgCPvzwQ40ePTp8SOBUhAAAADquiCHgjjvukHR8\nkx8AANC5RAwBY8aMqbeb36lef
/31Fh8QAABoHRFDwKpVq1prHAAAoJVFDAHl5eUaPXp0o4sCJbH3\nNQAAHVhUCwO3b9/eaDsLAwEA6Lh+0sJAn8+nrl27KiYmxv6RAQAAW0V17YDy8nLdc889+vbbbyVJ\n559/vh588EH16dPH1sEBAAD7RHXtgPnz5+vOO+/U9u3btX37ds2ZM0fz5s2ze2wAAMBGUYWA2tpa\nXXnlleHbY8eOlc/ns21QAADAfhFDwLfffqtvv/1WF154oZYvX64DBw6oqqpKq1ev1pAhQ1prjAAA\nwAYR1wTMnDlTDodDlmVp+/btKiwsDLc5HA4uKgQAQAcWMQRs3bq1tcYBAABaWVRnB3z22Wdau3at\nDh8+LMuyFAqF9M0332jNmjV2jw8AANgkqoWBf/rTn9StWzd98sknGjBggH744Qf179/f7rEBAAAb\nRTUTEAqFdMcdd6iurk4XXXSRsrOzlZ2dbffYAACAjaKaCYiLi1MwGFTfvn310Ucfye12q7a21u6x\nAQAAG0UVAiZMmKCbb75Zo0aN0urVq3XjjTcqKSnJ7rEBAAAbRXU4YObMmZo0aZI8Ho9WrVqlDz/8\nUL/5zW/sHhsAALBRVCHg6NGjeuGFF/T+++/L5XJp2LBhiouLs3tsAADARlGFgL/+9a/y+XyaPHmy\nLMvSxo0bVVZWxmZBAAB0YFGFgJKSEm3evDl8e/To0Zo4caJtgwIAAPaLamFgUlKSvv766/Dtffv2\nqWfPnrYNCgAA2C/iTMCsWbPkcDh08OBBTZgwQZdeeqmcTqf++9//slkQAAAdXMQQcPvttzd6/5w5\nc2wZDAAAaD0RQ8Bll10W/nnbtm36z3/+o7q6Ol1++eW6+uqrbR8cAACwT1RrAp566iktW7ZMvXr1\nUu/evfXkk0/qySeftHtsAADARlGdHbBp0yatX79esbGxkqRp06ZpypQpuvnmm20dHAAAsE9UMwGW\nZYUDgCTFxMTI5YoqPwAAgHYqqr/kQ4cO1e23367JkydLkjZu3KjLL7/c1oEBAAB7RRUC7r33Xq1b\nt04bN26UZVkaOnSosrKy7B4bAACwUVQhICcnR88884xmzJjxk57csizl5uaqrKxMbrdbeXl56tOn\nT73HBAIBzZkzR0uWLFG/fv2i6gMAAH6+qNYEHDlyRJWVlT/5ybds2aJgMKjCwkLNnTtX+fn59dp3\n7dqlmTNn1tuNsLk+AACgZUQ1E3DgwAGNGTNGZ599tmJiYsL3v/766xH7eb1ejRgxQpI0aNAg7dq1\nq1770aNHVVBQoLvvvjvqPgAAoGVEFQKeeOKJ8GZBXbp00ZVXXqkrrrii2X4+n0/x8fE//jKXS6FQ\nSE7n8QmIwYMHSzp+2CDaPgAAoGVEFQKefPJJ1dbWatq0aQqFQnrxxRf16aef6t57743Yz+PxyO/3\nh29H88f8dPpIx2cQ0DzqFL32XKt9+/apprpGQWewQVttda127typvXv3tspY2nOd2kpj70/VoSpJ\nrf/+dER8plpPVCGgtLRU//73v8O3x4wZo/HjxzfbLy0tTcXFxRo3bpxKSkqUmppqSx9JSk9Pj+px\nJvN6vdQpSu29VhUVFYr/LF5xiXEN2gKhgAYOHKjk5GTbx9He69RWTn1/qg5VKSExQVLrvj8dEZ+p\n6LRUUIoqBPTq1UtffvmlUlJSJEnff/+9kpKSmu03duxYvfPOO8rOzpYk5efnq6ioSIFAQJmZmeHH\nORyOiH0AAEDLiyoE1NXVaeLEiRoyZIhcLpe8Xq969uyp2bNnS5JWrlzZaD+Hw6GFCxfWu69fv34N\nHndy/8b6AACAlhdVCDj1ksJcShgAgI4vqhBw8iWFAQBA58B5dwAAGIoQAACAoQgBAAAYihAAAICh\noloYCAD4USgUinhRtcrKSslqshloNwgBAPATVVZWavqK6XInuBttr/mqRu5z3IpTwx0dgfaEEAAA\np8Gd4FZc98b/yB85dKSVRwOcHtYEAABgKEIAAACGIgQAAGAoQgAAAIYiBAAAYChCAAAAhiIEAABg\nKEIAAACGIgQAAGAoQgAAAIYiBAAAYChCAAAAhiIEAABgKEIAAACG4lLCANCKrJClysrKiI/p1auX\nnE6+o8F+hAAAaEW11bW6dfOtij8nvtH2YFVQ665bp+Tk5FYeGUxECACAVubu5lZc97i2HgbAmgAA\nAExFCAAAwFCEAAAADEUIAADAUCwMBDqZ5k5B4/QzACcQAoBOJtIpaJx+BuBkhACgE+IUtI6LmRy0\nJkIAALQjzOSgNRECAKCdYSYHrYU5JQAADEUIAADAULYeDrAsS7m5uSorK5Pb7VZeXp769OkTbt+6\ndasKCgrkcrk0depUZWZmSpKmTJkij8cjSerdu7eWLFli5zABADCSrSFgy5YtCgaDKiwsVGlpqfLz\n81VQUCBJqqur0wMPPKDnn39eMTExmj59uq666qrwH/+VK1faOTQAAIxn6+EAr9erESNGSJIGDRqk\nXbt2hdv27NmjlJQUeTwede3aVenp6frggw+0e/duHT58WDk5Obr++utVWlpq5xABADCWrTMBPp9P\n8fE/nubicrkUCoXkdDobtJ155pmqqanR+eefr5ycHGVmZuqLL77QTTfdpFdeeYXzYgEAaGG2hgCP\nxyO/3x++fSIAnGjz+XzhNr/fr27duiklJUX/93//J0nq27evEhMTtX//fiUlJUX8XV6v14ZX0PlQ\np+i151rt27dPNdU1CjqDDdr8NX6pTnIccjRoq62u1c6dO7V3794WG0t7rpNdItVfavw9qDpU1WRb\nc31POHLoiLZu3aoePXo0ObYePXp0+C9NJn6m2oqtISAtLU3FxcUaN26cSkpKlJqaGm674IIL9OWX\nX6q6ulqxsbHasWOHcnJytGHDBpWXl2vBggX67rvv5Pf71bNnz2Z/V3p6up0vpVPwer3UKUrtvVYV\nFRWK/yxecYkNzyUPHQjJEetQQmJCg7ZAKKCBAwe22GYz7b1OdolUf6nhe1B1qCr8c6T3p7n20IGQ\nlv1vmeKrG24kJHWOzYRM/Uz9VC0VlGwNAWPHjtU777yj7OxsSVJ+fr6KiooUCASUmZmpefPmac6c\nObIsSxkZGTrnnHOUkZGhefPmacaMGXI6nVqyZEmHT7UA0FLYSAgtydYQ4HA4tHDhwnr39evXL/zz\nqFGjNGrUqHrtXbt21dKlS+0cFgAAEJsFAQBgLEIAAACGIgQAAGAoQgAAAIYiBAAAYChbzw4AALQe\nK2SpsrKyyfZevXpxyjXqIQQAQCdRW12rWzffqvhzGm4m1Bk2EkLLIwQAQCfCZkL4KZgXAgDAUIQA\nAAAMxeEAAGhEKBRqcpFdZWWlZLXygAAbEAIAoBGVlZWavmK63AnuBm01X9XIfY5bceLYOzo2QgAA\nNMGd0PgiuyOHjrTBaICWx5oAAAAMRQgAAMBQhAAAAAxFCAAAwFAsDAQQFum0OIm954HOhh
AAICzS\naXHsPQ90PoQAAPU0dVocgM6HEAAYpLlLzbITHmAWQgBgkEiXmpXYCQ8wDSEAMEykS82yEx5gFpb5\nAgBgKEIAAACGIgQAAGAoQgAAAIZiYSDQDjW3cx+n8gFoCYQAoB2KtHOfxKl8J4sUmEKhkCQ1udUx\n2yDDdIQAoI1E+uNVWVnJqXxRihSYar6qkWLV6L4IbIMMEAKANtPcH6+O9k3/53wjl37et/Kmtjo+\ncuiIHLEOtkFW87tFMitiJkIAYJNojus39W2/PX7Tb+yPyL59+1RRUSHp+Ov50yt/UkxiTIO+kb6R\nS3wrbw2Rdouk/uYiBAA26WzH9Rv7I1JTXaP4z47fDr8evpG3W02FzuZmCSRmCjorQgDwM5h2XP/U\n1xN0BhWXePx2R3w9OK65a0owU9B5EQKACKKZ0o80Bd6RvunDbJECKzovQgAQQdRT+h3kuD4AnMzW\nEGBZlnJzc1VWVia32628vDz16dMn3L5161YVFBTI5XJp6tSpyszMbLYP0NqaWnku8Ye+pbTFyvXm\nficbMsEEtoaALVu2KBgMqrCwUKWlpcrPz1dBQYEkqa6uTg888ICef/55xcTEaPr06brqqqvk9Xqb\n7AOgc2qLlevNHQfncA5MYGsI8Hq9GjFihCRp0KBB2rVrV7htz549SklJkcfjkSQNGTJE77//vkpK\nSprsA9ihseP+J05949tg6zndles/5z3qbAs37RLpPbB7DwjYy9YQ4PP5FB//Y8p2uVwKhUJyOp0N\n2s444wzV1NTI7/c32Qc4Xc2t4j91cd+JU9/4Ntj2+Mbe9iK9B83tAVF7sFYPj3tYvXr1arSdgNC2\nbA0BHo9Hfr8/fPvkP+Yej0c+ny/c5vf7lZCQELFPJCc2LEHTTt7YxTSVlZW69V+3yu1puMDPV+mT\nu4dbMWq4wl+SgtVBBWIDjbYdrT4qBdVoe6S2turb0s9bW12rQChg/++NbbRbWFPvUXt5f6KtU1uN\nOarnbeY9aEqwJqib1twkz9mehm2+oB7PeLxeQDD5/6m24LAsy7bJzldffVXFxcXKz89XSUmJCgoK\ntHz5cknH1wRcc801Wr9+vWJjYzV9+nQ98cQTKikpabJPU7xer10vAQCAdik9Pf1nP4etIeDklf6S\nlJ+fr48++kiBQECZmZl64403tGzZMlmWpYyMDE2fPr3RPv369bNriAAAGMvWEAAAANovVmMAAGAo\nQgAAAIYiBAAAYChCAAAAhmr3IaC0tFSzZs2SJH3yySfKysrStddeq3vvvVeStHv3bs2aNUuzZ8/W\nrFmzNHDgQL399tuqra3VHXfcoWuvvVZ//OMfdfDgwbZ8GbZrrk6S9Mwzz2jKlCnKzMzUli1bJMm4\nOknR1Wr58uWaNGmSZs2apTfeeEOSebU6uU4fffSRMjMzNXPmTC1evDj8mH/+85+aOnWqsrOzqZOa\nrpMkHThwQL/97W8VDAYlmVcnKbpaPffcc5o2bZqysrL0+OOPSzKvVtHUac2aNcrIyNC0adP08ssv\nSzrNOlnt2FNPPWWNHz/eysrKsizLsm699VbrzTfftCzLsubOnWsVFxfXe/zLL79s3X333ZZlWdaz\nzz5rPfbYY5ZlWdZLL71kLV68uPUG3sqiqVN1dbU1atQoq66uzqqqqrJGjx5tWZZZdbKs6GpVVlZm\nTZw40QoGg1Ztba01efJk68iRI0bV6tQ6TZkyxSopKbEsy7Iefvhha9OmTdb+/fut8ePHW0ePHrVq\namqs8ePHW8FgkDqdUifLsqy33nrLmjRpkpWenm7V1tZalsW/vZNr9cgjj1ibNm2yvvrqK2vq1Knh\nPtnZ2VZZWZlRtYrmM3XgwAFr/Pjx1rFjxyyfz2ddeeWVlmWd3meqXc8EpKSkhJOgJA0YMEAHDx6U\nZVny+/1yuX7c8DAQCOixxx4Lf5vzer0aOXKkJGnkyJF67733WnfwrSiaOsXFxSk5OVl+v1+HDx8O\n78JoUp2k6Gq1Z88eXXbZZeratavcbrdSUlK0e/duo2p1ap2+++47DRo0SJKUlpamHTt2aOfOnUpP\nT5fL5ZLH41Hfvn2p0yl1OrGRWZcuXfTcc88pISEh/FiT6iRFrtXgwYPl9Xp13nnn6emnnw4/5tix\nY4qJiTGqVtF8ps466yy9+OKLcjqd2r9/v2Jiju92ejp1atchYOzYserSpUv4dt++fZWXl6drrrlG\nBw4c0GWXXRZu+9e//qXf/e534X9kPp8vfHGiM888s94WxZ1NtHVKSkrS73//e02dOjU81WRSnaTo\napWamqodO3bo8OHDOnjwoEpKShQIBIyq1al16tOnj3bs2CFJKi4u1pEjRxq9/ofP55Pf76dOOl6n\nQOD4NrxXXHGFEhISZJ20LYtJnycpulp16dJFiYmJkqS//e1vuuiii5SSkmJUraL9TDmdTq1Zs0ZZ\nWVmaMGGCpNP7TNl67YCWlpeXp7Vr1+qCCy7QmjVr9MADD2j+/PmSpM2bN+uxxx4LP/bkaxCcelGi\nzq6xOg0fPlzff/+9iouLZVmWcnJyNHjwYMXHxxtbJ6npz9SMGTN04403qlevXho4cKDOOusso2u1\nZMkS5eXl6dixY0pPT1dMTIzi4+MbXP+jW7duRv/ba6xOJ3M4HOGfTa6T1HStgsGg5s2bp/j4eC1Y\nsECS2bWK9Jm69tprlZWVpRtvvFHbt28/rf+j2vVMwKkSExPDKScpKUnV1dWSjqefo0ePKikpKfzY\ntLQ0bdu2TZK0bds2DRkypPUH3EYaq1NCQoJiY2PDU9wn/gM3uU5S47U6ePCg/H6/1q5dq4ULF2rv\n3r1KTU3V4MGDja3Vtm3b9NBDD+nZZ5/VoUOHNGzYMP3qV7+S1+tVMBhUTU2NPvvsM/Xv3586nVKn\nk508E2D6v72manXLLbdowIABys3NDYcmk2vVWJ0+//xz3X777ZKOH2qKiYlRly5dTqtOHWomYNGi\nRbrzzjvlcrnkdru1aNEiSdLnn3+u5OTkeo+dPn267rnnHs2YMUNut1sPPfRQWwy5TTRWp/POO0+X\nXHKJpk2bJqfTqfT0dA0bNkxpaWnG1klqvFZnnXWW9uzZo4yMDLndbt19991yOBxGf6ZSUlJ03XXX\nKS4uTpdffnn4uOOsWbM0Y8YMWZalu+66S263mzo1UqcTTp4JMLlOUuO12rJli3bs2KGjR49q27Zt\ncjgcmjt3rtG1auozdeGFFyorK0sOh0MjR47UkCFDdMkll/zkOnHtAAAADNWhDgcAAICWQwgAAMBQ\nhAAAAAxFCAAAwFCEAAAADEUIAADAUIQAwBA+n0+33nqrJOno0aN65JFH9Ic//EGTJ09WdnZ2eJ/x\niooKjRkzpi2HCqCVdKjNggCcvkOHDmn37t2SpL/85S+KjY3Vhg0b5Ha7VV5erjlz5mjFihWKjY2t\nt6kNgM6LzYIAQ9xyyy16++23deWVV+rdd9/Ve
++9V28f8g8++EDJycmyLEvTpk3T0KFDVV5eroSE\nBD3++ONKSEjQ6tWrtWnTJgUCATmdTj388MM6//zzNWbMGE2cOFFvv/22jhw5Er74S3l5uebNm6dQ\nKKT09HS9+eabevXVV/XDDz9o/vz52rt3r5xOp+666y5dccUVbVgdwEwcDgAMcd999+mcc87RhAkT\n1L9//wYXt7n00kt13nnnSZIOHDigG264QZs3b1b37t310ksvyefzaevWrVq9erU2b96sq666SmvX\nrg337969u9avX6+srCw9+eSTko7PONx555164YUX1Lt3bx07dkzS8Qs3ZWRkaMOGDSooKND8+fN1\n+PDhVqoEgBM4HAAYxul0KhQKRXxMUlKSLrnkEklS//79dfDgQXk8Hi1dulRFRUX64osv9NZbb2nA\ngAHhPsOHDw8//rXXXlNVVZUqKio0YsQISVJGRoZWrVolSXr33Xf1+eef6x//+Iek49eN/+qrr3Th\nhRe2+OsF0DRCAGCYSy65RHv27FEwGJTb7Q7fv2LFCvXs2VODBg2qdz1zh8Mhy7K0d+9ezZo1SzNn\nztTIkSPVo0cPffLJJ+HHnZhZOPH4k5/jVKFQSCtWrFC3bt0kSfv27VPPnj1b+qUCaAaHAwBDuFwu\nHTt2TOdbGReuAAABRElEQVSee65Gjx6tRYsWKRgMSpI+/vhjPf3000pNTZVU/5K3J3z44YfhK5oN\nHDhQb775ZsQZBY/Ho5SUFL311luSpE2bNoUXHA4dOlRr1qyRJP3vf//ThAkTFAgEWvT1AmgeMwGA\nIc4++2yde+65uu6667R8+XL9/e9/18SJExUTE6PY2FgtXbpUv/jFL1RRUdHo2QHDhw/XunXrdM01\n1ygmJkYDBw7Up59+KklNnk2Qn5+ve++9Vw8//LB++ctfKjY2VtLx9Qnz58/XhAkTJElLly7VGWec\nYdMrB9AUzg4AYJvHH39cWVlZ6tGjh1577TVt3rxZjz76aFsPC8D/x0wAANucd955uuGGG+RyuZSQ\nkKC8vLy2HhKAkzATAACAoVgYCACAoQgBAAAYihAAAIChCAEAABiKEAAAgKEIAQAAGOr/AY0EBcdb\nKFqMAAAAAElFTkSuQmCC\n",
275 | "text/plain": [
276 | ""
277 | ]
278 | },
279 | "metadata": {},
280 | "output_type": "display_data"
281 | }
282 | ],
283 | "source": [
284 | "print 'Fit with flat hyper-prior:'\n",
285 | "S = bl.ChangepointStudy()\n",
286 | "S.loadExampleData()\n",
287 | "\n",
288 | "L = bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000))\n",
289 | "T = bl.tm.ChangePoint('tChange', 'all')\n",
290 | "\n",
291 | "S.set(L, T)\n",
292 | "S.fit()\n",
293 | "\n",
294 | "plt.figure(figsize=(8,4))\n",
295 | "S.plot('tChange', facecolor='g', alpha=0.7)\n",
296 | "plt.xlim([1870, 1930])\n",
297 | "plt.show()\n",
298 | "print('-----\\n')\n",
299 | " \n",
300 | "print 'Fit with custom normal prior:'\n",
301 | "T = bl.tm.ChangePoint('tChange', 'all', prior=sympy.stats.Normal('norm', 1920, 5))\n",
302 | "S.set(T)\n",
303 | "S.fit()\n",
304 | "\n",
305 | "plt.figure(figsize=(8,4))\n",
306 | "S.plot('tChange', facecolor='g', alpha=0.7)\n",
307 | "plt.xlim([1870, 1930]);"
308 | ]
309 | },
310 | {
311 | "cell_type": "markdown",
312 | "metadata": {},
313 | "source": [
314 | "Since we used a quite narrow prior (containing a lot of information) in the second case, the resulting distribution is strongly shifted towards the prior. The following example revisits the two break-point-model from [here](changepointstudy.html#Analyzing-structural-breaks-in-time-series-models) and a linear decrease with a varying slope as a hyper-parameter. Here, we define a Gaussian prior for the slope hyper-parameter, which is centered around the value -0.2 with a standard deviation of 0.4, via a lambda-function. For simplification, we set the break-points to fixed years."
315 | ]
316 | },
317 | {
318 | "cell_type": "code",
319 | "execution_count": 6,
320 | "metadata": {
321 | "collapsed": false
322 | },
323 | "outputs": [
324 | {
325 | "name": "stdout",
326 | "output_type": "stream",
327 | "text": [
328 | "+ Created new study.\n",
329 | " --> Hyper-study\n",
330 | "+ Successfully imported example data.\n",
331 | "+ Observation model: Poisson. Parameter(s): ['accident_rate']\n",
332 | "+ Transition model: Serial transition model. Hyper-Parameter(s): ['slope', 't_1', 't_2']\n",
333 | "+ Set hyper-prior(s): [' (re-normalized)', 'uniform', 'uniform']\n",
334 | "+ Started new fit.\n",
335 | " + 30 analyses to run.\n",
336 | "\n",
337 | " + Computed average posterior sequence\n",
338 | " + Computed hyper-parameter distribution\n",
339 | " + Log10-evidence of average model: -74.84129\n",
340 | " + Computed local evidence of average model\n",
341 | " + Computed mean parameter values.\n",
342 | "+ Finished fit.\n"
343 | ]
344 | }
345 | ],
346 | "source": [
347 | "S = bl.HyperStudy()\n",
348 | "S.loadExampleData()\n",
349 | "\n",
350 | "L = bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000))\n",
351 | "T = bl.tm.SerialTransitionModel(bl.tm.Static(),\n",
352 | " bl.tm.BreakPoint('t_1', 1880),\n",
353 | " bl.tm.Deterministic(lambda t, slope=np.linspace(-2.0, 0.0, 30): t*slope, \n",
354 | " target='accident_rate',\n",
355 | " prior=lambda slope: np.exp(-0.5*((slope + 0.2)/(2*0.4))**2)/0.4),\n",
356 | " bl.tm.BreakPoint('t_2', 1900),\n",
357 | " bl.tm.Static()\n",
358 | " )\n",
359 | "\n",
360 | "S.set(L, T)\n",
361 | "S.fit()"
362 | ]
363 | },
364 | {
365 | "cell_type": "markdown",
366 | "metadata": {},
367 | "source": [
368 | "Finally, note that you can mix SymPy- and function-based hyper-priors for nested transition models."
369 | ]
370 | }
371 | ],
372 | "metadata": {
373 | "anaconda-cloud": {},
374 | "kernelspec": {
375 | "display_name": "Python 2",
376 | "language": "python",
377 | "name": "python2"
378 | },
379 | "language_info": {
380 | "codemirror_mode": {
381 | "name": "ipython",
382 | "version": 2
383 | },
384 | "file_extension": ".py",
385 | "mimetype": "text/x-python",
386 | "name": "python",
387 | "nbconvert_exporter": "python",
388 | "pygments_lexer": "ipython2",
389 | "version": "2.7.10"
390 | }
391 | },
392 | "nbformat": 4,
393 | "nbformat_minor": 0
394 | }
395 |
--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
1 | [metadata]
2 | long_description = file: README.md
3 | long_description_content_type = text/markdown
4 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from setuptools import setup
4 |
5 | setup(
6 | name='bayesloop',
7 | packages=['bayesloop'],
8 | version='1.5.7',
9 | description='Probabilistic programming framework that facilitates objective model selection for time-varying parameter models.',
10 | url='http://bayesloop.com',
11 | download_url = 'https://github.com/christophmark/bayesloop/tarball/1.5.7',
12 | author='Christoph Mark',
13 | author_email='christoph.mark@fau.de',
14 | license='The MIT License (MIT)',
15 | install_requires=['numpy>=1.11.0', 'scipy>=0.17.1', 'sympy>=1.0', 'matplotlib>=1.5.1', 'tqdm>=4.10.0', 'cloudpickle'],
16 | keywords=['bayes', 'inference', 'fitting', 'model selection', 'hypothesis testing', 'time series', 'time-varying', 'marginal likelihood'],
17 | classifiers=[],
18 | )
19 |
--------------------------------------------------------------------------------
/tests/test_changepointstudy.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 | import sympy.stats as stats
7 |
8 |
9 | class TestTwoParameterModel:
10 | def test_fit_1cp_1bp_2hp(self):
11 | # carry out fit
12 | S = bl.ChangepointStudy()
13 | S.loadData(np.array([1, 2, 3, 4, 5]))
14 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
15 |
16 | T = bl.tm.SerialTransitionModel(
17 | bl.tm.Static(),
18 | bl.tm.ChangePoint('ChangePoint', [0, 1]),
19 | bl.tm.CombinedTransitionModel(
20 | bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 2), target='mean'),
21 | bl.tm.RegimeSwitch('log10pMin', [-3, -1])
22 | ),
23 | bl.tm.BreakPoint('BreakPoint', 'all'),
24 | bl.tm.Static()
25 | )
26 |
27 | S.setTM(T)
28 | S.fit()
29 |
30 | # test parameter distributions
31 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
32 | [0.012437, 0.030168, 0.01761 , 0.001731, 0.001731],
33 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
34 |
35 | # test parameter mean values
36 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
37 | [0.968022, 1.956517, 3.476958, 4.161028, 4.161028],
38 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
39 |
40 | # test model evidence value
41 | np.testing.assert_almost_equal(S.logEvidence, -15.072007461556161, decimal=5,
42 | err_msg='Erroneous log-evidence value.')
43 |
44 | # test hyper-parameter distribution
45 | x, p = S.getHyperParameterDistribution('sigma')
46 | np.testing.assert_allclose(np.array([x, p]),
47 | [[0., 0.2], [0.4963324, 0.5036676]],
48 | rtol=1e-02, err_msg='Erroneous values in hyper-parameter distribution.')
49 |
50 | # test duration distribution
51 | d, p = S.getDurationDistribution(['ChangePoint', 'BreakPoint'])
52 | np.testing.assert_allclose(np.array([d, p]),
53 | [[1., 2., 3.], [0.01039273, 0.49395867, 0.49564861]],
54 | rtol=1e-02, err_msg='Erroneous values in duration distribution.')
55 |
56 | def test_fit_hyperpriors(self):
57 | # carry out fit
58 | S = bl.ChangepointStudy()
59 | S.loadData(np.array([1, 2, 3, 4, 5]))
60 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
61 |
62 | T = bl.tm.SerialTransitionModel(
63 | bl.tm.Static(),
64 | bl.tm.ChangePoint('ChangePoint', [0, 1], prior=np.array([0.3, 0.7])),
65 | bl.tm.CombinedTransitionModel(
66 | bl.tm.GaussianRandomWalk('sigma', bl.oint(0, 0.2, 2), target='mean', prior=lambda s: 1./s),
67 | bl.tm.RegimeSwitch('log10pMin', [-3, -1])
68 | ),
69 | bl.tm.BreakPoint('BreakPoint', 'all', prior=stats.Normal('Normal', 3., 1.)),
70 | bl.tm.Static()
71 | )
72 |
73 | S.setTM(T)
74 | S.fit()
75 |
76 | # test parameter distributions
77 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
78 | [0.033729, 0.050869, 0.020636, 0.001647, 0.001647],
79 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
80 |
81 | # test parameter mean values
82 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
83 | [0.98944 , 1.927195, 3.349921, 4.213695, 4.213695],
84 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
85 |
86 | # test model evidence value
87 | np.testing.assert_almost_equal(S.logEvidence, -15.709534690217343, decimal=5,
88 | err_msg='Erroneous log-evidence value.')
89 |
90 | # test hyper-parameter distribution
91 | x, p = S.getHyperParameterDistribution('sigma')
92 | np.testing.assert_allclose(np.array([x, p]),
93 | [[0.06666667, 0.13333333], [0.66515107, 0.33484893]],
94 | rtol=1e-02, err_msg='Erroneous values in hyper-parameter distribution.')
95 |
96 | # test duration distribution
97 | d, p = S.getDurationDistribution(['ChangePoint', 'BreakPoint'])
98 | np.testing.assert_allclose(np.array([d, p]),
99 | [[1., 2., 3.], [0.00373717, 0.40402616, 0.59223667]],
100 | rtol=1e-02, err_msg='Erroneous values in duration distribution.')
101 |
--------------------------------------------------------------------------------
/tests/test_fileio.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 |
7 |
8 | class TestFileIO:
9 | def test_save_load(self):
10 | S = bl.HyperStudy()
11 | S.loadData(np.array([1, 2, 3, 4, 5]))
12 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
13 | S.setTM(bl.tm.Static())
14 | S.fit()
15 |
16 | bl.save('study.bl', S)
17 | S = bl.load('study.bl')
18 |
--------------------------------------------------------------------------------
/tests/test_hyperstudy.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 | import sympy.stats as stats
7 |
8 |
9 | class TestTwoParameterModel:
10 | def test_fit_0hp(self):
11 | # carry out fit (this test is designed to fall back on the fit method of the Study class)
12 | S = bl.HyperStudy()
13 | S.loadData(np.array([1, 2, 3, 4, 5]))
14 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
15 | S.setTM(bl.tm.Static())
16 | S.fit()
17 |
18 | # test parameter distributions
19 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
20 | [0.013349, 0.013349, 0.013349, 0.013349, 0.013349],
21 | rtol=1e-04, err_msg='Erroneous posterior distribution values.')
22 |
23 | # test parameter mean values
24 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
25 | [3., 3., 3., 3., 3.],
26 | rtol=1e-05, err_msg='Erroneous posterior mean values.')
27 |
28 | # test model evidence value
29 | np.testing.assert_almost_equal(S.logEvidence, -16.1946904707, decimal=5,
30 | err_msg='Erroneous log-evidence value.')
31 |
32 | def test_fit_1hp(self):
33 | # carry out fit
34 | S = bl.HyperStudy()
35 | S.loadData(np.array([1, 2, 3, 4, 5]))
36 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
37 | S.setTM(bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 2), target='mean'))
38 | S.fit()
39 |
40 | # test parameter distributions
41 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
42 | [0.017242, 0.014581, 0.012691, 0.011705, 0.011586],
43 | rtol=1e-04, err_msg='Erroneous posterior distribution values.')
44 |
45 | # test parameter mean values
46 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
47 | [2.92089 , 2.952597, 3. , 3.047403, 3.07911 ],
48 | rtol=1e-05, err_msg='Erroneous posterior mean values.')
49 |
50 | # test model evidence value
51 | np.testing.assert_almost_equal(S.logEvidence, -16.0629517262, decimal=5,
52 | err_msg='Erroneous log-evidence value.')
53 |
54 | # test hyper-parameter distribution
55 | x, p = S.getHyperParameterDistribution('sigma')
56 | print(np.array([x, p]))
57 | np.testing.assert_allclose(np.array([x, p]),
58 | [[0., 0.2], [0.43828499, 0.56171501]],
59 | rtol=1e-05, err_msg='Erroneous values in hyper-parameter distribution.')
60 |
61 | def test_fit_2hp(self):
62 | # carry out fit
63 | S = bl.HyperStudy()
64 | S.loadData(np.array([1, 2, 3, 4, 5]))
65 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
66 |
67 | T = bl.tm.CombinedTransitionModel(bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 2), target='mean'),
68 | bl.tm.RegimeSwitch('log10pMin', [-3, -1]))
69 |
70 | S.setTM(T)
71 | S.fit()
72 |
73 | # test parameter distributions
74 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
75 | [0.005589, 0.112966, 0.04335 , 0.00976 , 0.002909],
76 | rtol=1e-04, err_msg='Erroneous posterior distribution values.')
77 |
78 | # test parameter mean values
79 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
80 | [0.963756, 2.105838, 2.837739, 3.734359, 4.595412],
81 | rtol=1e-05, err_msg='Erroneous posterior mean values.')
82 |
83 | # test model evidence value
84 | np.testing.assert_almost_equal(S.logEvidence, -10.7601875492, decimal=5,
85 | err_msg='Erroneous log-evidence value.')
86 |
87 | # test hyper-parameter distribution
88 | x, p = S.getHyperParameterDistribution('sigma')
89 | np.testing.assert_allclose(np.array([x, p]),
90 | [[0., 0.2], [0.48943645, 0.51056355]],
91 | rtol=1e-05, err_msg='Erroneous values in hyper-parameter distribution.')
92 |
93 | # test joint hyper-parameter distribution
94 | x, y, p = S.getJointHyperParameterDistribution(['log10pMin', 'sigma'])
95 | np.testing.assert_allclose(np.array([x, y]),
96 | [[-3., -1.], [0., 0.2]],
97 | rtol=1e-05, err_msg='Erroneous parameter values in joint hyper-parameter '
98 | 'distribution.')
99 |
100 | np.testing.assert_allclose(p,
101 | [[0.00701834, 0.0075608], [0.48241812, 0.50300274]],
102 | rtol=1e-05, err_msg='Erroneous probability values in joint hyper-parameter '
103 | 'distribution.')
104 |
105 | def test_fit_hyperprior_array(self):
106 | # carry out fit
107 | S = bl.HyperStudy()
108 | S.loadData(np.array([1, 2, 3, 4, 5]))
109 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
110 | S.setTM(bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 2), target='mean', prior=np.array([0.2, 0.8])))
111 | S.fit()
112 |
113 | # test parameter distributions
114 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
115 | [0.019149, 0.015184, 0.012369, 0.0109 , 0.010722],
116 | rtol=1e-04, err_msg='Erroneous posterior distribution values.')
117 |
118 | # test parameter mean values
119 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
120 | [2.882151, 2.929385, 3. , 3.070615, 3.117849],
121 | rtol=1e-04, err_msg='Erroneous posterior mean values.')
122 |
123 | # test model evidence value
124 | np.testing.assert_almost_equal(S.logEvidence, -15.9915077133, decimal=5,
125 | err_msg='Erroneous log-evidence value.')
126 |
127 | # test hyper-parameter distribution
128 | x, p = S.getHyperParameterDistribution('sigma')
129 | np.testing.assert_allclose(np.array([x, p]),
130 | [[0., 0.2], [0.16322581, 0.83677419]],
131 | rtol=1e-05, err_msg='Erroneous values in hyper-parameter distribution.')
132 |
133 | def test_fit_hyperprior_function(self):
134 | # carry out fit
135 | S = bl.HyperStudy()
136 | S.loadData(np.array([1, 2, 3, 4, 5]))
137 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
138 | S.setTM(bl.tm.GaussianRandomWalk('sigma', bl.cint(0.1, 0.3, 2), target='mean', prior=lambda s: 1./s))
139 | S.fit()
140 |
141 | # test parameter distributions
142 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
143 | [0.025476, 0.015577, 0.012088, 0.010889, 0.010749],
144 | rtol=1e-04, err_msg='Erroneous posterior distribution values.')
145 |
146 | # test parameter mean values
147 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
148 | [2.858477, 2.915795, 3. , 3.084205, 3.141523],
149 | rtol=1e-04, err_msg='Erroneous posterior mean values.')
150 |
151 | # test model evidence value
152 | np.testing.assert_almost_equal(S.logEvidence, -15.9898700147, decimal=5,
153 | err_msg='Erroneous log-evidence value.')
154 |
155 | # test hyper-parameter distribution
156 | x, p = S.getHyperParameterDistribution('sigma')
157 | np.testing.assert_allclose(np.array([x, p]),
158 | [[0.1, 0.3], [0.61609973, 0.38390027]],
159 | rtol=1e-05, err_msg='Erroneous values in hyper-parameter distribution.')
160 |
161 | def test_fit_hyperprior_sympy(self):
162 | # carry out fit
163 | S = bl.HyperStudy()
164 | S.loadData(np.array([1, 2, 3, 4, 5]))
165 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
166 | S.setTM(bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 2), target='mean', prior=stats.Exponential('e', 1.)))
167 | S.fit()
168 |
169 | # test parameter distributions
170 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
171 | [0.016898, 0.014472, 0.012749, 0.011851, 0.011742],
172 | rtol=1e-04, err_msg='Erroneous posterior distribution values.')
173 |
174 | # test parameter mean values
175 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
176 | [2.927888, 2.95679 , 3. , 3.04321 , 3.072112],
177 | rtol=1e-04, err_msg='Erroneous posterior mean values.')
178 |
179 | # test model evidence value
180 | np.testing.assert_almost_equal(S.logEvidence, -17.0866290887, decimal=5,
181 | err_msg='Erroneous log-evidence value.')
182 |
183 | # test hyper-parameter distribution
184 | x, p = S.getHyperParameterDistribution('sigma')
185 | np.testing.assert_allclose(np.array([x, p]),
186 | [[0., 0.2], [0.487971, 0.512029]],
187 | rtol=1e-05, err_msg='Erroneous values in hyper-parameter distribution.')
188 |
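189 | # Illustrative usage sketch (hypothetical helper, not part of the test suite):
190 | # the minimal HyperStudy workflow, restricted to calls exercised above. fit()
191 | # scans the hyper-parameter grid, and getHyperParameterDistribution returns
192 | # the marginal hyper-parameter posterior.
193 | def _example_hyperstudy(data):
194 |     S = bl.HyperStudy()
195 |     S.loadData(data)
196 |     S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20)))
197 |     S.setTM(bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 2), target='mean'))
198 |     S.fit()
199 |     return S.getHyperParameterDistribution('sigma')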
--------------------------------------------------------------------------------
/tests/test_observationmodels.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 | import scipy.stats
7 | import sympy.stats
8 | from sympy import Symbol
9 |
10 |
11 | class TestSymPy:
12 | def test_sympy_1p(self):
13 | # carry out fit
14 | S = bl.Study()
15 | S.loadData(np.array([1, 2, 3, 4, 5]))
16 |
17 | rate = Symbol('rate', positive=True)
18 | poisson = sympy.stats.Poisson('poisson', rate)
19 | L = bl.om.SymPy(poisson, 'rate', bl.oint(0, 7, 100))
20 |
21 | S.setOM(L)
22 | S.setTM(bl.tm.Static())
23 | S.fit()
24 |
25 | # test model evidence value
26 | np.testing.assert_almost_equal(S.logEvidence, -10.238278174965238, decimal=5,
27 | err_msg='Erroneous log-evidence value.')
28 |
29 | def test_sympy_2p(self):
30 | # carry out fit
31 | S = bl.Study()
32 | S.loadData(np.array([1, 2, 3, 4, 5]))
33 |
34 | mu = Symbol('mu')
35 | std = Symbol('std', positive=True)
36 | normal = sympy.stats.Normal('norm', mu, std)
37 |
38 | L = bl.om.SymPy(normal, 'mu', bl.cint(0, 7, 200), 'std', bl.oint(0, 1, 200), prior=lambda x, y: 1.)
39 |
40 | S.setOM(L)
41 | S.setTM(bl.tm.Static())
42 | S.fit()
43 |
44 | # test model evidence value
45 | np.testing.assert_almost_equal(S.logEvidence, -13.663836264357226, decimal=5,
46 | err_msg='Erroneous log-evidence value.')
47 |
48 |
49 | class TestSciPy:
50 | def test_scipy_1p(self):
51 | # carry out fit
52 | S = bl.Study()
53 | S.loadData(np.array([1, 2, 3, 4, 5]))
54 |
55 | L = bl.om.SciPy(scipy.stats.poisson, 'mu', bl.oint(0, 7, 100), fixedParameters={'loc': 0})
56 |
57 | S.setOM(L)
58 | S.setTM(bl.tm.Static())
59 | S.fit()
60 |
61 | # test model evidence value
62 | np.testing.assert_almost_equal(S.logEvidence, -10.238278174965238, decimal=5,
63 | err_msg='Erroneous log-evidence value.')
64 |
65 | def test_scipy_2p(self):
66 | # carry out fit
67 | S = bl.Study()
68 | S.loadData(np.array([1, 2, 3, 4, 5]))
69 |
70 | L = bl.om.SciPy(scipy.stats.norm, 'loc', bl.cint(0, 7, 200), 'scale', bl.oint(0, 1, 200))
71 |
72 | S.setOM(L)
73 | S.setTM(bl.tm.Static())
74 | S.fit()
75 |
76 | # test model evidence value
77 | np.testing.assert_almost_equal(S.logEvidence, -13.663836264357225, decimal=5,
78 | err_msg='Erroneous log-evidence value.')
79 |
80 |
81 | class TestNumPy:
82 | def test_numpy_1p(self):
83 | # carry out fit
84 | S = bl.Study()
85 | S.loadData(np.array([[1, 0.5], [2, 0.5], [3, 0.5], [4, 1.], [5, 1.]]))
86 |
87 | def likelihood(data, mu):
88 | x, std = data
89 |
90 |             pdf = np.exp(-(x - mu) ** 2. / (2 * std ** 2.)) / np.sqrt(2 * np.pi * std ** 2.)  # normalized Gaussian pdf
91 | return pdf
92 |
93 | L = bl.om.NumPy(likelihood, 'mu', bl.oint(0, 7, 100))
94 |
95 | S.setOM(L)
96 | S.setTM(bl.tm.Static())
97 | S.fit()
98 |
99 | # test model evidence value
100 | np.testing.assert_almost_equal(S.logEvidence, 148.92056578058387, decimal=5,
101 | err_msg='Erroneous log-evidence value.')
102 |
103 |     def test_numpy_2p(self):
104 | # carry out fit
105 | S = bl.Study()
106 | S.loadData(np.array([1, 2, 3, 4, 5]))
107 |
108 | def likelihood(data, mu, std):
109 | x = data
110 |
111 |             pdf = np.exp(-(x - mu) ** 2. / (2 * std ** 2.)) / np.sqrt(2 * np.pi * std ** 2.)  # normalized Gaussian pdf
112 | return pdf
113 |
114 | L = bl.om.NumPy(likelihood, 'mu', bl.oint(0, 7, 100), 'std', bl.oint(1, 2, 100))
115 |
116 | S.setOM(L)
117 | S.setTM(bl.tm.Static())
118 | S.fit()
119 |
120 | # test model evidence value
121 | np.testing.assert_almost_equal(S.logEvidence, 29.792823521784587, decimal=5,
122 | err_msg='Erroneous log-evidence value.')
123 |
124 |
125 | class TestBuiltin:
126 | def test_bernoulli(self):
127 | S = bl.Study()
128 | S.loadData(np.array([1, 0, 1, 0, 0]))
129 |
130 | L = bl.om.Bernoulli('p', bl.oint(0, 1, 100))
131 | T = bl.tm.Static()
132 | S.set(L, T)
133 |
134 | S.fit()
135 | np.testing.assert_almost_equal(S.logEvidence, -4.3494298741972859, decimal=5,
136 | err_msg='Erroneous log-evidence value.')
137 |
138 | def test_poisson(self):
139 | S = bl.Study()
140 | S.loadData(np.array([1, 0, 1, 0, 0]))
141 |
142 | L = bl.om.Poisson('rate', bl.oint(0, 1, 100))
143 | T = bl.tm.Static()
144 | S.set(L, T)
145 |
146 | S.fit()
147 | np.testing.assert_almost_equal(S.logEvidence, -4.433708287229158, decimal=5,
148 | err_msg='Erroneous log-evidence value.')
149 |
150 | def test_gaussian(self):
151 | S = bl.Study()
152 | S.loadData(np.array([1, 0, 1, 0, 0]))
153 |
154 | L = bl.om.Gaussian('mu', bl.oint(0, 1, 100), 'std', bl.oint(0, 1, 100), prior=lambda m, s: 1/s**3)
155 | T = bl.tm.Static()
156 | S.set(L, T)
157 |
158 | S.fit()
159 | np.testing.assert_almost_equal(S.logEvidence, -12.430583625665736, decimal=5,
160 | err_msg='Erroneous log-evidence value.')
161 |
162 | def test_laplace(self):
163 | S = bl.Study()
164 |         S.loadData(np.array([1, 0, 1, 0, 0]))
165 |
166 | L = bl.om.Laplace('mu', None, 'b', None)
167 | T = bl.tm.Static()
168 | S.set(L, T)
169 |
170 | S.fit()
171 | np.testing.assert_almost_equal(S.logEvidence, -10.658573159, decimal=5,
172 | err_msg='Erroneous log-evidence value.')
173 |
174 | def test_gaussianmean(self):
175 | S = bl.Study()
176 | S.loadData(np.array([[1, 0.5], [0, 0.4], [1, 0.3], [0, 0.2], [0, 0.1]]))
177 |
178 | L = bl.om.GaussianMean('mu', bl.oint(0, 1, 100))
179 | T = bl.tm.Static()
180 | S.set(L, T)
181 |
182 | S.fit()
183 | np.testing.assert_almost_equal(S.logEvidence, -6.3333705075036226, decimal=5,
184 | err_msg='Erroneous log-evidence value.')
185 |
186 | def test_whitenoise(self):
187 | S = bl.Study()
188 | S.loadData(np.array([1, 0, 1, 0, 0]))
189 |
190 | L = bl.om.WhiteNoise('std', bl.oint(0, 1, 100))
191 | T = bl.tm.Static()
192 | S.set(L, T)
193 |
194 | S.fit()
195 | np.testing.assert_almost_equal(S.logEvidence, -6.8161638661444073, decimal=5,
196 | err_msg='Erroneous log-evidence value.')
197 |
198 | def test_ar1(self):
199 | S = bl.Study()
200 | S.loadData(np.array([1, 0, 1, 0, 0]))
201 |
202 | L = bl.om.AR1('rho', bl.oint(-1, 1, 100), 'sigma', bl.oint(0, 1, 100))
203 | T = bl.tm.Static()
204 | S.set(L, T)
205 |
206 | S.fit()
207 | np.testing.assert_almost_equal(S.logEvidence, -4.3291291450463421, decimal=5,
208 | err_msg='Erroneous log-evidence value.')
209 |
210 | def test_scaledar1(self):
211 | S = bl.Study()
212 | S.loadData(np.array([1, 0, 1, 0, 0]))
213 |
214 | L = bl.om.ScaledAR1('rho', bl.oint(-1, 1, 100), 'sigma', bl.oint(0, 1, 100))
215 | T = bl.tm.Static()
216 | S.set(L, T)
217 |
218 | S.fit()
219 | np.testing.assert_almost_equal(S.logEvidence, -4.4178639067800738, decimal=5,
220 | err_msg='Erroneous log-evidence value.')
221 |
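222 | # Illustrative usage sketch (hypothetical helper, not part of the test suite):
223 | # a custom likelihood for bl.om.NumPy must be vectorized, as the parameters
224 | # are passed in as full grid arrays. Note the negative exponent of the
225 | # normalized Gaussian pdf.
226 | def _example_numpy_model():
227 |     def gaussian_likelihood(data, mu, std):
228 |         # normalized Gaussian pdf, broadcast over the (mu, std) parameter grid
229 |         return np.exp(-(data - mu) ** 2. / (2 * std ** 2.)) / np.sqrt(2 * np.pi * std ** 2.)
230 |     return bl.om.NumPy(gaussian_likelihood, 'mu', bl.oint(0, 7, 100), 'std', bl.oint(1, 2, 100))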
--------------------------------------------------------------------------------
/tests/test_onlinestudy.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 | import sympy.stats as stats
7 |
8 |
9 | class TestTwoParameterModel:
10 | def test_step_set1TM_0hp(self):
11 | # carry out fit
12 | S = bl.OnlineStudy(storeHistory=True)
13 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
14 | S.setTM(bl.tm.Static())
15 |
16 | data = np.array([1, 2, 3, 4, 5])
17 | for d in data:
18 | S.step(d)
19 |
20 | # test parameter distributions
21 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
22 | [0.0053811, 0.38690331, 0.16329865, 0.04887604, 0.01334921],
23 | rtol=1e-05, err_msg='Erroneous posterior distribution values.')
24 |
25 | # test parameter mean values
26 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
27 | [0.96310103, 1.5065597, 2.00218465, 2.500366, 3.],
28 | rtol=1e-05, err_msg='Erroneous posterior mean values.')
29 |
30 | # test model evidence value
31 | np.testing.assert_almost_equal(S.logEvidence, -16.1946904707, decimal=5,
32 | err_msg='Erroneous log-evidence value.')
33 |
34 | def test_step_add2TM_2hp_prior_hyperpriors_TMprior(self):
35 | # carry out fit
36 | S = bl.OnlineStudy(storeHistory=True)
37 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1./s))
38 |
39 | T1 = bl.tm.CombinedTransitionModel(bl.tm.GaussianRandomWalk('s1', [0.25, 0.5],
40 | target='mean',
41 | prior=stats.Exponential('e', 0.5)),
42 | bl.tm.GaussianRandomWalk('s2', bl.cint(0, 0.2, 2),
43 | target='sigma',
44 | prior=np.array([0.2, 0.8]))
45 | )
46 |
47 | T2 = bl.tm.Independent()
48 |
49 | S.addTransitionModel('T1', T1)
50 | S.addTransitionModel('T2', T2)
51 |
52 | S.setTransitionModelPrior([0.9, 0.1])
53 |
54 | data = np.array([1, 2, 3, 4, 5])
55 | for d in data:
56 | S.step(d)
57 |
58 | # test transition model distributions
59 | np.testing.assert_allclose(S.getCurrentTransitionModelDistribution(local=False)[1],
60 | [0.49402616, 0.50597384],
61 | rtol=1e-05, err_msg='Erroneous transition model probabilities.')
62 |
63 | np.testing.assert_allclose(S.getCurrentTransitionModelDistribution(local=True)[1],
64 | [0.81739495, 0.18260505],
65 | rtol=1e-05, err_msg='Erroneous local transition model probabilities.')
66 |
67 | # test hyper-parameter distributions
68 | np.testing.assert_allclose(S.getCurrentHyperParameterDistribution('s2')[1],
69 | [0.19047162, 0.80952838],
70 | rtol=1e-05, err_msg='Erroneous hyper-parameter distribution.')
71 |
72 | # test parameter distributions
73 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
74 | [0.05825921, 0.20129444, 0.07273516, 0.02125759, 0.0039255],
75 | rtol=1e-05, err_msg='Erroneous posterior distribution values.')
76 |
77 | # test parameter mean values
78 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
79 | [1.0771838, 1.71494272, 2.45992376, 3.34160617, 4.39337253],
80 | rtol=1e-05, err_msg='Erroneous posterior mean values.')
81 |
82 | # test model evidence value
83 | np.testing.assert_almost_equal(S.logEvidence, -9.46900822686, decimal=5,
84 | err_msg='Erroneous log-evidence value.')
85 |
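86 | # Illustrative usage sketch (hypothetical helper, not part of the test suite):
87 | # with storeHistory=False, an OnlineStudy keeps only the current posterior, so
88 | # an open-ended data stream can be processed with constant memory use.
89 | def _example_streaming(data_stream):
90 |     S = bl.OnlineStudy(storeHistory=False)
91 |     S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20)))
92 |     S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='mean'))
93 |     for d in data_stream:
94 |         S.step(d)
95 |     return S.logEvidence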
--------------------------------------------------------------------------------
/tests/test_parser.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 |
7 |
8 | class TestParameterParsing:
9 | def test_inequality(self):
10 | S = bl.Study()
11 | S.loadData(np.array([1, 2, 3, 4, 5]))
12 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 50)))
13 | S.setTM(bl.tm.Static())
14 | S.fit()
15 |
16 | S2 = bl.Study()
17 | S2.loadData(np.array([1, 2, 3, 4, 5]))
18 | S2.setOM(bl.om.Poisson('rate2', bl.oint(0, 6, 50)))
19 | S2.setTM(bl.tm.GaussianRandomWalk('sigma', 0.2, target='rate2'))
20 | S2.fit()
21 |
22 | P = bl.Parser(S, S2)
23 |
24 | np.testing.assert_almost_equal(P('log(rate2@1*2*1.2) + 4 + rate@2^2 > 20'), 0.19606860326174191, decimal=5,
25 | err_msg='Erroneous parsing result for inequality.')
26 | np.testing.assert_almost_equal(P('log(rate2*2*1.2) + 4 + rate^2 > 20', t=3), 0.19772797081330246, decimal=5,
27 | err_msg='Erroneous parsing result for inequality with fixed timestamp.')
28 |
29 | def test_distribution(self):
30 | S = bl.Study()
31 | S.loadData(np.array([1, 2, 3, 4, 5]))
32 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 50)))
33 | S.setTM(bl.tm.Static())
34 | S.fit()
35 |
36 | S2 = bl.Study()
37 | S2.loadData(np.array([1, 2, 3, 4, 5]))
38 | S2.setOM(bl.om.Poisson('rate2', bl.oint(0, 6, 50)))
39 | S2.setTM(bl.tm.GaussianRandomWalk('sigma', 0.2, target='rate2'))
40 | S2.fit()
41 |
42 | P = bl.Parser(S, S2)
43 | x, p = P('log(rate2@1*2*1.2)+ 4 + rate@2^2')
44 | np.testing.assert_allclose(p[100:105],
45 | [0.00732 , 0.007495, 0.005775, 0.003511, 0.003949],
46 | rtol=1e-03, err_msg='Erroneous derived probability distribution.')
47 |
48 |
49 | class TestHyperParameterParsing:
50 | def test_statichyperparameter(self):
51 | S = bl.HyperStudy()
52 | S.loadData(np.array([1, 2, 3, 4, 5]))
53 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 50)))
54 | S.setTM(bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 5), target='rate'))
55 | S.fit()
56 |
57 | p = S.eval('exp(0.99*log(sigma))+1 > 1.1')
58 |
59 | np.testing.assert_almost_equal(p, 0.60696006616644793, decimal=5,
60 | err_msg='Erroneous parsing result for inequality.')
61 |
62 | def test_dynamichyperparameter(self):
63 | S = bl.OnlineStudy(storeHistory=True)
64 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 50)))
65 | S.add('gradual', bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 5), target='rate'))
66 | S.add('static', bl.tm.Static())
67 |
68 | for d in np.arange(5):
69 | S.step(d)
70 |
71 | p = S.eval('exp(0.99*log(sigma@2))+1 > 1.1')
72 |
73 | np.testing.assert_almost_equal(p, 0.61228433813735061, decimal=5,
74 | err_msg='Erroneous parsing result for inequality.')
75 |
76 | S = bl.OnlineStudy(storeHistory=False)
77 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 50)))
78 | S.add('gradual', bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 5), target='rate'))
79 | S.add('static', bl.tm.Static())
80 |
81 | for d in np.arange(3):
82 | S.step(d)
83 |
84 | p = S.eval('exp(0.99*log(sigma))+1 > 1.1')
85 |
86 | np.testing.assert_almost_equal(p, 0.61228433813735061, decimal=5,
87 | err_msg='Erroneous parsing result for inequality.')
88 |
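89 | # Illustrative usage sketch (hypothetical helper, not part of the test suite):
90 | # a Parser evaluates arithmetic on fitted (hyper-)parameters. 'name@t' pins a
91 | # parameter to time step t, while the t= keyword applies one time step to the
92 | # whole expression; inequalities yield a probability, other expressions a
93 | # derived distribution.
94 | def _example_parser(S, S2):
95 |     P = bl.Parser(S, S2)
96 |     prob = P('rate@2 > rate2@1')   # probability that the inequality holds
97 |     x, p = P('rate + rate2', t=3)  # derived distribution at time step 3
98 |     return prob, x, p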
--------------------------------------------------------------------------------
/tests/test_plot.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 | import matplotlib.pyplot as plt
7 |
8 |
9 | class TestPlot:
10 | def test_plot_study(self):
11 | S = bl.Study()
12 | S.loadData(np.array([1, 2, 3, 4, 5]))
13 |
14 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
15 | T = bl.tm.Static()
16 | S.set(L, T)
17 |
18 | S.fit()
19 |
20 | S.plot('rate')
21 | plt.close()
22 |
23 | S.plot('rate', t=2)
24 | plt.close()
25 |
26 | def test_plot_hyperstudy(self):
27 | S = bl.HyperStudy()
28 | S.loadData(np.array([1, 2, 3, 4, 5]))
29 |
30 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
31 | T = bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 5), target='rate')
32 | S.set(L, T)
33 |
34 | S.fit()
35 |
36 | S.plot('rate')
37 | plt.close()
38 |
39 | S.plot('rate', t=2)
40 | plt.close()
41 |
42 | S.plot('sigma')
43 | plt.close()
44 |
45 | def test_plot_changepointstudy(self):
46 | S = bl.ChangepointStudy()
47 | S.loadData(np.array([1, 2, 3, 4, 5]))
48 |
49 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
50 | T = bl.tm.SerialTransitionModel(bl.tm.Static(),
51 | bl.tm.ChangePoint('t1', 'all'),
52 | bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 3), target='rate'),
53 | bl.tm.ChangePoint('t2', 'all'),
54 | bl.tm.Static())
55 | S.set(L, T)
56 |
57 | S.fit()
58 |
59 | S.plot('rate')
60 | plt.close()
61 |
62 | S.plot('rate', t=2)
63 | plt.close()
64 |
65 | S.plot('sigma')
66 | plt.close()
67 |
68 | S.getDD(['t1', 't2'], plot=True)
69 | plt.close()
70 |
71 | def test_plot_onlinestudy(self):
72 | S = bl.OnlineStudy(storeHistory=True)
73 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 50)))
74 | S.add('gradual', bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.2, 5), target='rate'))
75 | S.add('static', bl.tm.Static())
76 |
77 | for d in np.arange(5):
78 | S.step(d)
79 |
80 | S.plot('rate')
81 | plt.close()
82 |
83 | S.plot('rate', t=2)
84 | plt.close()
85 |
86 | S.plot('sigma')
87 | plt.close()
88 |
89 | S.plot('sigma', t=2)
90 | plt.close()
91 |
92 | S.plot('gradual')
93 | plt.close()
94 |
95 | S.plot('gradual', local=True)
96 | plt.close()
97 |
--------------------------------------------------------------------------------
/tests/test_study.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 | import sympy.stats as stats
7 |
8 |
9 | class TestOneParameterModel:
10 | def test_fit_0hp(self):
11 | # carry out fit
12 | S = bl.Study()
13 | S.loadData(np.array([1, 2, 3, 4, 5]))
14 | S.setOM(bl.om.Poisson('rate'))
15 | S.setTM(bl.tm.Static())
16 | S.fit()
17 |
18 | # test parameter distributions
19 | np.testing.assert_allclose(S.getParameterDistributions('rate', density=False)[1][:, 250],
20 | [0.00034, 0.00034, 0.00034, 0.00034, 0.00034],
21 | rtol=1e-3, err_msg='Erroneous posterior distribution values.')
22 |
23 | # test parameter mean values
24 | np.testing.assert_allclose(S.getParameterMeanValues('rate'),
25 | [3.09761, 3.09761, 3.09761, 3.09761, 3.09761],
26 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
27 |
28 | # test model evidence value
29 | np.testing.assert_almost_equal(S.logEvidence, -10.4463425036, decimal=2,
30 | err_msg='Erroneous log-evidence value.')
31 |
32 | def test_fit_1hp(self):
33 | # carry out fit
34 | S = bl.Study()
35 | S.loadData(np.array([1, 2, 3, 4, 5]))
36 | S.setOM(bl.om.Poisson('rate'))
37 | S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='rate'))
38 | S.fit()
39 |
40 | # test parameter distributions
41 | np.testing.assert_allclose(S.getParameterDistributions('rate', density=False)[1][:, 250],
42 | [0.000417, 0.000386, 0.000356, 0.000336, 0.000332],
43 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
44 |
45 | # test parameter mean values
46 | np.testing.assert_allclose(S.getParameterMeanValues('rate'),
47 | [3.073534, 3.08179 , 3.093091, 3.104016, 3.111173],
48 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
49 |
50 | # test model evidence value
51 | np.testing.assert_almost_equal(S.logEvidence, -10.4337420351, decimal=2,
52 | err_msg='Erroneous log-evidence value.')
53 |
54 | def test_fit_2hp(self):
55 | # carry out fit
56 | S = bl.Study()
57 | S.loadData(np.array([1, 2, 3, 4, 5]))
58 | S.setOM(bl.om.Poisson('rate'))
59 |
60 | T = bl.tm.CombinedTransitionModel(bl.tm.GaussianRandomWalk('sigma', 0.1, target='rate'),
61 | bl.tm.RegimeSwitch('log10pMin', -3))
62 |
63 | S.setTM(T)
64 | S.fit()
65 |
66 | # test parameter distributions
67 | np.testing.assert_allclose(S.getParameterDistributions('rate', density=False)[1][:, 250],
68 | [0.000412, 0.000376, 0.000353, 0.000336, 0.000332],
69 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
70 |
71 | # test parameter mean values
72 | np.testing.assert_allclose(S.getParameterMeanValues('rate'),
73 | [2.942708, 3.002756, 3.071995, 3.103038, 3.111179],
74 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
75 |
76 | # test model evidence value
77 | np.testing.assert_almost_equal(S.logEvidence, -10.4342948181, decimal=2,
78 | err_msg='Erroneous log-evidence value.')
79 |
80 | def test_fit_prior_array(self):
81 | # carry out fit
82 | S = bl.Study()
83 | S.loadData(np.array([1, 2, 3, 4, 5]))
84 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 1000), prior=np.ones(1000)))
85 | S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='rate'))
86 | S.fit()
87 |
88 | # test parameter distributions
89 | np.testing.assert_allclose(S.getParameterDistributions('rate', density=False)[1][:, 250],
90 | [0.000221, 0.000202, 0.000184, 0.000172, 0.000172],
91 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
92 |
93 | # test parameter mean values
94 | np.testing.assert_allclose(S.getParameterMeanValues('rate'),
95 | [3.174159, 3.180812, 3.190743, 3.200642, 3.20722 ],
96 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
97 |
98 | # test model evidence value
99 | np.testing.assert_almost_equal(S.logEvidence, -10.0866227472, decimal=2,
100 | err_msg='Erroneous log-evidence value.')
101 |
102 | def test_fit_prior_function(self):
103 | # carry out fit
104 | S = bl.Study()
105 | S.loadData(np.array([1, 2, 3, 4, 5]))
106 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 1000), prior=lambda x: 1./x))
107 | S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='rate'))
108 | S.fit()
109 |
110 | # test parameter distributions
111 | np.testing.assert_allclose(S.getParameterDistributions('rate', density=False)[1][:, 250],
112 | [0.000437, 0.000401, 0.000366, 0.000342, 0.000337],
113 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
114 |
115 | # test parameter mean values
116 | np.testing.assert_allclose(S.getParameterMeanValues('rate'),
117 | [2.967834, 2.977838, 2.990624, 3.002654, 3.010419],
118 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
119 |
120 | # test model evidence value
121 | np.testing.assert_almost_equal(S.logEvidence, -11.3966589329, decimal=2,
122 | err_msg='Erroneous log-evidence value.')
123 |
124 | def test_fit_prior_sympy(self):
125 | # carry out fit
126 | S = bl.Study()
127 | S.loadData(np.array([1, 2, 3, 4, 5]))
128 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 1000), prior=stats.Exponential('expon', 1.)))
129 | S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='rate'))
130 | S.fit()
131 |
132 | # test parameter distributions
133 | np.testing.assert_allclose(S.getParameterDistributions('rate', density=False)[1][:, 250],
134 | [0.000881, 0.00081 , 0.00074 , 0.00069 , 0.000674],
135 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
136 |
137 | # test parameter mean values
138 | np.testing.assert_allclose(S.getParameterMeanValues('rate'),
139 | [2.627709, 2.643611, 2.661415, 2.677185, 2.687023],
140 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
141 |
142 | # test model evidence value
143 | np.testing.assert_almost_equal(S.logEvidence, -11.1819034242, decimal=2,
144 | err_msg='Erroneous log-evidence value.')
145 |
146 | def test_optimize(self):
147 | # carry out fit
148 | S = bl.Study()
149 | S.loadData(np.array([1, 2, 3, 4, 5]))
150 | S.setOM(bl.om.Poisson('rate', bl.oint(0, 6, 1000), prior=stats.Exponential('expon', 1.)))
151 |
152 | T = bl.tm.CombinedTransitionModel(bl.tm.GaussianRandomWalk('sigma', 2.1, target='rate'),
153 | bl.tm.RegimeSwitch('log10pMin', -3))
154 |
155 | S.setTM(T)
156 | S.optimize()
157 |
158 | # test parameter distributions
159 | np.testing.assert_allclose(S.getParameterDistributions('rate', density=False)[1][:, 250],
160 | [1.820641e-03, 2.083830e-03, 7.730833e-04, 1.977125e-04, 9.441302e-05],
161 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
162 |
163 | # test parameter mean values
164 | np.testing.assert_allclose(S.getParameterMeanValues('rate'),
165 | [1.015955, 2.291846, 3.36402 , 4.113622, 4.390356],
166 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
167 |
168 | # test model evidence value
169 | np.testing.assert_almost_equal(S.logEvidence, -9.47362827569, decimal=2,
170 | err_msg='Erroneous log-evidence value.')
171 |
172 | # test optimized hyper-parameter values
173 | np.testing.assert_almost_equal(S.getHyperParameterValue('sigma'), 2.11216289063, decimal=2,
174 |                                        err_msg='Erroneous optimized hyper-parameter value.')
175 | np.testing.assert_almost_equal(S.getHyperParameterValue('log10pMin'), -3.0, decimal=3,
176 |                                        err_msg='Erroneous optimized hyper-parameter value.')
177 |
178 |
179 | class TestTwoParameterModel:
180 | def test_fit_0hp(self):
181 | # carry out fit
182 | S = bl.Study()
183 | S.loadData(np.array([1, 2, 3, 4, 5]))
184 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
185 | S.setTM(bl.tm.Static())
186 | S.fit()
187 |
188 | # test parameter distributions
189 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
190 | [0.013349, 0.013349, 0.013349, 0.013349, 0.013349],
191 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
192 |
193 | # test parameter mean values
194 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
195 | [3., 3., 3., 3., 3.],
196 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
197 |
198 | # test model evidence value
199 | np.testing.assert_almost_equal(S.logEvidence, -16.1946904707, decimal=2,
200 | err_msg='Erroneous log-evidence value.')
201 |
202 | def test_fit_1hp(self):
203 | # carry out fit
204 | S = bl.Study()
205 | S.loadData(np.array([1, 2, 3, 4, 5]))
206 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
207 | S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='mean'))
208 | S.fit()
209 |
210 | # test parameter distributions
211 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
212 | [0.013547, 0.013428, 0.013315, 0.013241, 0.013232],
213 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
214 |
215 | # test parameter mean values
216 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
217 | [2.995242, 2.997088, 3. , 3.002912, 3.004758],
218 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
219 |
220 | # test model evidence value
221 | np.testing.assert_almost_equal(S.logEvidence, -16.1865343702, decimal=2,
222 | err_msg='Erroneous log-evidence value.')
223 |
224 | def test_fit_2hp(self):
225 | # carry out fit
226 | S = bl.Study()
227 | S.loadData(np.array([1, 2, 3, 4, 5]))
228 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
229 |
230 | T = bl.tm.CombinedTransitionModel(bl.tm.GaussianRandomWalk('sigma', 0.1, target='mean'),
231 | bl.tm.RegimeSwitch('log10pMin', -3))
232 |
233 | S.setTM(T)
234 | S.fit()
235 |
236 | # test parameter distributions
237 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
238 | [0.018848, 0.149165, 0.025588, 0.006414, 0.005426],
239 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
240 |
241 | # test parameter mean values
242 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
243 | [1.005987, 2.710129, 3.306985, 3.497192, 3.527645],
244 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
245 |
246 | # test model evidence value
247 | np.testing.assert_almost_equal(S.logEvidence, -14.3305753098, decimal=2,
248 | err_msg='Erroneous log-evidence value.')
249 |
250 | def test_fit_prior_array(self):
251 | # carry out fit
252 | S = bl.Study()
253 | S.loadData(np.array([1, 2, 3, 4, 5]))
254 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=np.ones((20, 20))))
255 | S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='mean'))
256 | S.fit()
257 |
258 | # test parameter distributions
259 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
260 | [0.02045 , 0.020327, 0.020208, 0.020128, 0.020115],
261 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
262 |
263 | # test parameter mean values
264 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
265 | [2.99656 , 2.997916, 3. , 3.002084, 3.00344 ],
266 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
267 |
268 | # test model evidence value
269 | np.testing.assert_almost_equal(S.logEvidence, -10.9827282104, decimal=2,
270 | err_msg='Erroneous log-evidence value.')
271 |
272 | def test_fit_prior_function(self):
273 | # carry out fit
274 | S = bl.Study()
275 | S.loadData(np.array([1, 2, 3, 4, 5]))
276 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1./s))
277 | S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='mean'))
278 | S.fit()
279 |
280 | # test parameter distributions
281 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
282 | [0.018242, 0.018119, 0.018001, 0.017921, 0.01791 ],
283 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
284 |
285 | # test parameter mean values
286 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
287 | [2.996202, 2.997693, 3. , 3.002307, 3.003798],
288 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
289 |
290 | # test model evidence value
291 | np.testing.assert_almost_equal(S.logEvidence, -11.9842221343, decimal=2,
292 | err_msg='Erroneous log-evidence value.')
293 |
294 | def test_fit_prior_sympy(self):
295 | # carry out fit
296 | S = bl.Study()
297 | S.loadData(np.array([1, 2, 3, 4, 5]))
298 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20),
299 | prior=[stats.Uniform('u', 0, 6), stats.Exponential('e', 2.)]))
300 | S.setTM(bl.tm.GaussianRandomWalk('sigma', 0.1, target='mean'))
301 | S.fit()
302 |
303 | # test parameter distributions
304 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
305 | [0.014305, 0.014183, 0.014066, 0.01399 , 0.01398 ],
306 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
307 |
308 | # test parameter mean values
309 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
310 | [2.995526, 2.997271, 3. , 3.002729, 3.004474],
311 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
312 |
313 | # test model evidence value
314 | np.testing.assert_almost_equal(S.logEvidence, -12.4324853153, decimal=2,
315 | err_msg='Erroneous log-evidence value.')
316 |
317 | def test_optimize(self):
318 | # carry out fit
319 | S = bl.Study()
320 | S.loadData(np.array([1, 2, 3, 4, 5]))
321 | S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
322 |
323 | T = bl.tm.CombinedTransitionModel(bl.tm.GaussianRandomWalk('sigma', 1.07, target='mean'),
324 | bl.tm.RegimeSwitch('log10pMin', -3.90))
325 |
326 | S.setTM(T)
327 | S.optimize()
328 |
329 | # test parameter distributions
330 | np.testing.assert_allclose(S.getParameterDistributions('mean', density=False)[1][:, 5],
331 | [9.903855e-03, 1.887901e-02, 8.257234e-05, 5.142727e-06, 2.950377e-06],
332 | rtol=1e-02, err_msg='Erroneous posterior distribution values.')
333 |
334 | # test parameter mean values
335 | np.testing.assert_allclose(S.getParameterMeanValues('mean'),
336 | [0.979099, 1.951689, 3.000075, 4.048376, 5.020886],
337 | rtol=1e-02, err_msg='Erroneous posterior mean values.')
338 |
339 | # test model evidence value
340 | np.testing.assert_almost_equal(S.logEvidence, -8.010466752050611, decimal=2,
341 | err_msg='Erroneous log-evidence value.')
342 |
343 | # test optimized hyper-parameter values
344 | np.testing.assert_almost_equal(S.getHyperParameterValue('sigma'), 1.065854087589326, decimal=2,
345 |                                        err_msg='Erroneous optimized hyper-parameter value.')
346 | np.testing.assert_almost_equal(S.getHyperParameterValue('log10pMin'), -4.039735868499399, decimal=2,
347 |                                        err_msg='Erroneous optimized hyper-parameter value.')
348 |
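349 | # Illustrative usage sketch (hypothetical helper, not part of the test suite):
350 | # optimize() tunes the hyper-parameters of the current transition model by
351 | # maximizing the model evidence, starting from the values set at construction;
352 | # optimized values are then queried via getHyperParameterValue.
353 | def _example_optimize(data):
354 |     S = bl.Study()
355 |     S.loadData(data)
356 |     S.setOM(bl.om.Gaussian('mean', bl.cint(0, 6, 20), 'sigma', bl.oint(0, 2, 20), prior=lambda m, s: 1/s**3))
357 |     S.setTM(bl.tm.GaussianRandomWalk('sigma', 1.0, target='mean'))
358 |     S.optimize()
359 |     return S.getHyperParameterValue('sigma')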
--------------------------------------------------------------------------------
/tests/test_transitionmodels.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from __future__ import print_function, division
4 | import bayesloop as bl
5 | import numpy as np
6 |
7 |
8 | class TestBuiltin:
9 | def test_static(self):
10 | S = bl.Study()
11 | S.loadData(np.array([1, 2, 3, 4, 5]))
12 |
13 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
14 | T = bl.tm.Static()
15 | S.set(L, T)
16 |
17 | S.fit()
18 |
19 | # test model evidence value
20 | np.testing.assert_almost_equal(S.logEvidence, -10.372209708143769, decimal=5,
21 | err_msg='Erroneous log-evidence value.')
22 |
23 | def test_deterministic(self):
24 | S = bl.HyperStudy()
25 | S.loadData(np.array([1, 2, 3, 4, 5]))
26 |
27 |         def linear(t, a=[1, 2]):  # the list default declares the hyper-parameter grid for 'a'
28 | return 0.5 + 0.2*a*t
29 |
30 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
31 | T = bl.tm.Deterministic(linear, target='rate')
32 | S.set(L, T)
33 |
34 | S.fit()
35 |
36 | # test model evidence value
37 | np.testing.assert_almost_equal(S.logEvidence, -9.4050089375418136, decimal=3,
38 | err_msg='Erroneous log-evidence value.')
39 |
40 | def test_gaussianrandomwalk(self):
41 | S = bl.Study()
42 | S.loadData(np.array([1, 2, 3, 4, 5]))
43 |
44 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
45 | T = bl.tm.GaussianRandomWalk('sigma', 0.2, target='rate')
46 | S.set(L, T)
47 |
48 | S.fit()
49 |
50 | # test model evidence value
51 | np.testing.assert_almost_equal(S.logEvidence, -10.323144246611964, decimal=5,
52 | err_msg='Erroneous log-evidence value.')
53 |
54 | def test_bivariaterandomwalk(self):
55 | S = bl.Study()
56 | S.loadData(np.array([1, 2, 3, 4, 5]))
57 |
58 | L = bl.om.Gaussian('mu', bl.oint(0, 6, 20), 'sigma', bl.oint(0, 2, 20))
59 | T = bl.tm.BivariateRandomWalk('sigma1', 1., 'sigma2', 0.1, 'rho', 0.5)
60 | S.set(L, T)
61 |
62 | S.fit()
63 |
64 | # test model evidence value
65 | np.testing.assert_almost_equal(S.logEvidence, -7.330706514472251, decimal=5,
66 | err_msg='Erroneous log-evidence value.')
67 |
68 | def test_alphastablerandomwalk(self):
69 | S = bl.Study()
70 | S.loadData(np.array([1, 2, 3, 4, 5]))
71 |
72 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
73 | T = bl.tm.AlphaStableRandomWalk('c', 0.2, 'alpha', 1.5, target='rate')
74 | S.set(L, T)
75 |
76 | S.fit()
77 |
78 | # test model evidence value
79 | np.testing.assert_almost_equal(S.logEvidence, -10.122384638661309, decimal=5,
80 | err_msg='Erroneous log-evidence value.')
81 |
82 | def test_changepoint(self):
83 | S = bl.Study()
84 | S.loadData(np.array([1, 2, 3, 4, 5]))
85 |
86 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
87 | T = bl.tm.ChangePoint('t_change', 2)
88 | S.set(L, T)
89 |
90 | S.fit()
91 |
92 | # test model evidence value
93 | np.testing.assert_almost_equal(S.logEvidence, -12.894336092378385, decimal=5,
94 | err_msg='Erroneous log-evidence value.')
95 |
96 | def test_regimeswitch(self):
97 | S = bl.Study()
98 | S.loadData(np.array([1, 2, 3, 4, 5]))
99 |
100 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
101 | T = bl.tm.RegimeSwitch('p_min', -3)
102 | S.set(L, T)
103 |
104 | S.fit()
105 |
106 | # test model evidence value
107 | np.testing.assert_almost_equal(S.logEvidence, -10.372866559561402, decimal=5,
108 | err_msg='Erroneous log-evidence value.')
109 |
110 | def test_independent(self):
111 | S = bl.Study()
112 | S.loadData(np.array([1, 2, 3, 4, 5]))
113 |
114 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
115 | T = bl.tm.Independent()
116 | S.set(L, T)
117 |
118 | S.fit()
119 |
120 | # test model evidence value
121 | np.testing.assert_almost_equal(S.logEvidence, -11.087360077190617, decimal=5,
122 | err_msg='Erroneous log-evidence value.')
123 |
124 | def test_notequal(self):
125 | S = bl.Study()
126 | S.loadData(np.array([1, 2, 3, 4, 5]))
127 |
128 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
129 | T = bl.tm.NotEqual('p_min', -3)
130 | S.set(L, T)
131 |
132 | S.fit()
133 |
134 | # test model evidence value
135 | np.testing.assert_almost_equal(S.logEvidence, -10.569099863134156, decimal=5,
136 | err_msg='Erroneous log-evidence value.')
137 |
138 |
139 | class TestNested:
140 | def test_nested(self):
141 | S = bl.Study()
142 | S.loadData(np.array([1, 2, 3, 4, 5]))
143 |
144 | L = bl.om.Poisson('rate', bl.oint(0, 6, 100))
145 | T = bl.tm.SerialTransitionModel(
146 | bl.tm.Static(),
147 | bl.tm.ChangePoint('t_change', 1),
148 | bl.tm.CombinedTransitionModel(
149 | bl.tm.GaussianRandomWalk('sigma', 0.2, target='rate'),
150 | bl.tm.RegimeSwitch('p_min', -3)
151 | ),
152 | bl.tm.BreakPoint('t_break', 3),
153 | bl.tm.Independent()
154 | )
155 | S.set(L, T)
156 |
157 | S.fit()
158 |
159 | # test model evidence value
160 | np.testing.assert_almost_equal(S.logEvidence, -13.269918024215237, decimal=5,
161 | err_msg='Erroneous log-evidence value.')
162 |
--------------------------------------------------------------------------------