├── .gitignore
├── Chapter 02 - Probability.ipynb
├── Chapter 03 - Random Variables.ipynb
├── Chapter 04 - Expectation.ipynb
├── Chapter 05 - Inequalities.ipynb
├── Chapter 06 - Convergence of Random Variables.ipynb
├── Chapter 07 - Models, Statistical Inference and Learning.ipynb
├── Chapter 08 - Estimating the CDF and Statistical Functionals.ipynb
├── Chapter 09 - The Bootstrap.ipynb
├── Chapter 10 - Parametric Inference.ipynb
├── Chapter 11 - Hypothesis Testing and p-values.ipynb
├── Chapter 12 - Bayesian Inference.ipynb
├── Chapter 13 - Statistical Decision Theory.ipynb
├── Chapter 14 - Linear Regression.ipynb
├── Chapter 15 - Multivariate Models.ipynb
├── Chapter 16 - Inference about Independence.ipynb
├── Chapter 17 - Undirected Graphs and Conditional Independence.ipynb
├── Chapter 18 - Loglinear Models.ipynb
├── Chapter 19 - Causal Inference.ipynb
├── Chapter 20 - Directed Graphs.ipynb
├── Chapter 21 - Nonparametric Curve Estimation.ipynb
├── Chapter 22 - Smoothing Using Orthogonal Functions.ipynb
├── Chapter 23 - Classification.ipynb
├── Chapter 24 - Stochastic Processes.ipynb
├── Chapter 25 - Simulation Methods.ipynb
├── README.md
└── data
    ├── UStemperatures.txt
    ├── calcium.txt
    ├── carmileage.csv
    ├── cloud_seeding.csv
    ├── coris.csv
    ├── fijiquakes.csv
    ├── geysers.csv
    ├── glass.txt
    ├── montana.csv
    ├── nerve.csv
    └── spam.txt

/.gitignore: -------------------------------------------------------------------------------- 1 | .ipynb_checkpoints 2 | -------------------------------------------------------------------------------- /Chapter 07 - Models, Statistical Inference and Learning.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## 7. 
Models, Statistical Inference and Learning" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "### 7.2 Parametric and Nonparametric Models" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "A **statistical model** is a set of distributions $\\mathfrak{F}$." 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "metadata": {}, 27 | "source": [ 28 | "A **parametric model** is a set $\\mathfrak{F}$ that may be parametrized by a finite number of parameters. For example, if we assume that the data come from a normal distribution then\n", 29 | "\n", 30 | "$$ \\mathfrak{F} = \\left\\{ f(x; \\mu, \\sigma) = \n", 31 | "\\frac{1}{\\sigma\\sqrt{2\\pi}} \\exp \\left\\{ -\\frac{1}{2\\sigma^2} \\left(x - \\mu \\right)^2 \\right\\}, \\; \\; \n", 32 | "\\mu \\in \\mathbb{R}, \\; \\sigma > 0\n", 33 | "\\right\\} $$\n", 34 | "\n", 35 | "In general, a parametric model takes the form\n", 36 | "\n", 37 | "$$ \\mathfrak{F} = \\left\\{ f(x; \\theta) : \\; \\theta \\in \\Theta \\right\\} $$\n", 38 | "\n", 39 | "where $\\theta$ is an unknown parameter that takes values in the **parameter space** $\\Theta$.\n", 40 | "\n", 41 | "If $\\theta$ is a vector and we are only interested in one component of $\\theta$, we call the remaining parameters **nuisance parameters**.\n", 42 | "\n", 43 | "A **nonparametric model** is a set $\\mathfrak{F}$ that cannot be parametrized by a finite number of parameters."
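As a quick sanity check on the Normal family above, here is a short sketch (not part of the original notebook) that evaluates $f(x; \mu, \sigma)$ and confirms numerically that it integrates to 1; the helper names, the grid, and the choice $(\mu, \sigma) = (1, 2)$ are arbitrary:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density f(x; mu, sigma) of the N(mu, sigma^2) member of the parametric family."""
    return (1.0 / (sigma * math.sqrt(2 * math.pi))) * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def integrate(f, lo, hi, n=100_000):
    """Trapezoidal-rule integral of f over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * h)
    return total * h

# Any valid density should integrate to (approximately) 1 over a wide enough grid.
area = integrate(lambda x: normal_pdf(x, 1.0, 2.0), -20.0, 22.0)
print(round(area, 6))  # ≈ 1.0
```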
44 | ] 45 | }, 46 | { 47 | "cell_type": "markdown", 48 | "metadata": {}, 49 | "source": [ 50 | "#### Some notation\n", 51 | "\n", 52 | "If $\\mathfrak{F} = \\{ f(x; \\theta) : \\; \\theta \\in \\Theta \\}$ is a parametric model, we write\n", 53 | "\n", 54 | "$$\\mathbb{P}_\\theta(X \\in A) = \\int_A f(x; \\theta)dx$$\n", 55 | "\n", 56 | "$$\\mathbb{E}_\\theta(X) = \\int x f(x; \\theta)dx$$ \n", 57 | "\n", 58 | "The subscript $\\theta$ indicates that the probability or expectation is defined with respect to $f(x; \\theta)$; it does not mean we are averaging over $\\theta$." 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "### 7.3 Fundamental Concepts in Inference" 66 | ] 67 | }, 68 | { 69 | "cell_type": "markdown", 70 | "metadata": {}, 71 | "source": [ 72 | "#### 7.3.1 Point estimation\n", 73 | "\n", 74 | "Let $X_1, \\dots, X_n$ be $n$ iid data points from some distribution $F$. A point estimator $\\hat{\\theta_n}$ of a parameter $\\theta$ is some function of the data:\n", 75 | "\n", 76 | "$$ \\hat{\\theta_n} = g(X_1, \\dots, X_n) $$\n", 77 | "\n", 78 | "We define\n", 79 | "\n", 80 | "$$ \\text{bias}(\\hat{\\theta_n}) = \\mathbb{E}_\\theta(\\hat{\\theta_n}) - \\theta $$\n", 81 | "\n", 82 | "to be the bias of $\\hat{\\theta_n}$. 
We say that $\\hat{\\theta_n}$ is **unbiased** if $ \\mathbb{E}_\\theta(\\hat{\\theta_n}) = \\theta $, that is, if $\\text{bias}(\\hat{\\theta_n}) = 0$.\n", 83 | "\n", 84 | "A point estimator $\\hat{\\theta_n}$ of a parameter $\\theta$ is **consistent** if $\\hat{\\theta_n} \\xrightarrow{\\text{P}} \\theta$.\n", 85 | "\n", 86 | "The distribution of $\\hat{\\theta_n}$ is called the **sampling distribution**.\n", 87 | "\n", 88 | "The standard deviation of $\\hat{\\theta_n}$ is called the **standard error**, denoted by $\\text{se}$:\n", 89 | "\n", 90 | "$$ \\text{se} = \\text{se}(\\hat{\\theta_n}) = \\sqrt{\\mathbb{V}(\\hat{\\theta_n})} $$\n", 91 | "\n", 92 | "Often it is not possible to compute the standard error, but usually we can estimate it. The estimated standard error is denoted by $\\hat{\\text{se}}$." 93 | ] 94 | }, 95 | { 96 | "cell_type": "markdown", 97 | "metadata": {}, 98 | "source": [ 99 | "**Example**. Let $X_1, \\dots, X_n \\sim \\text{Bernoulli}(p)$ and let $\\hat{p_n} = n^{-1} \\sum_i X_i$. Then $\\mathbb{E}(\\hat{p_n}) = n^{-1} \\sum_i \\mathbb{E}(X_i) = p$ so $\\hat{p_n}$ is unbiased. The standard error is $\\text{se} = \\sqrt{\\mathbb{V}(\\hat{p_n})} = \\sqrt{p(1-p)/n}$. The estimated standard error is $\\hat{\\text{se}} = \\sqrt{\\hat{p_n}(1 - \\hat{p_n})/n}$." 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "metadata": {}, 105 | "source": [ 106 | "The quality of a point estimate is sometimes assessed by the **mean squared error**, or MSE, defined by:\n", 107 | "\n", 108 | "$$ \\text{MSE} = \\mathbb{E}_\\theta \\left( \\hat{\\theta_n} - \\theta \\right)^2 $$" 109 | ] 110 | }, 111 | { 112 | "cell_type": "markdown", 113 | "metadata": {}, 114 | "source": [ 115 | "**Theorem 7.8**. The MSE can be rewritten as:\n", 116 | "\n", 117 | "$$ \\text{MSE} = \\text{bias}(\\hat{\\theta_n})^2 + \\mathbb{V}_\\theta(\\hat{\\theta_n}) $$\n", 118 | "\n", 119 | "**Proof**. Let $\\bar{\\theta_n} = \\mathbb{E}_\\theta(\\hat{\\theta_n})$. 
Then\n", 120 | "\n", 121 | "$$\n", 122 | "\\begin{align}\n", 123 | "\\mathbb{E}_\\theta(\\hat{\\theta_n} - \\theta)^2 & = \\mathbb{E}_\\theta(\\hat{\\theta_n} - \\bar{\\theta_n} + \\bar{\\theta_n} - \\theta)^2 \\\\\n", 124 | "&= \\mathbb{E}_\\theta(\\hat{\\theta_n} - \\bar{\\theta_n})^2\n", 125 | " + 2 (\\bar{\\theta_n} - \\theta) \\mathbb{E}_\\theta(\\hat{\\theta_n} - \\bar{\\theta_n})\n", 126 | " + (\\bar{\\theta_n} - \\theta)^2 \\\\\n", 127 | "&= (\\bar{\\theta_n} - \\theta)^2 + \\mathbb{E}_\\theta(\\hat{\\theta_n} - \\bar{\\theta_n})^2 \\qquad \\text{(since } \\mathbb{E}_\\theta(\\hat{\\theta_n} - \\bar{\\theta_n}) = 0 \\text{)} \\\\\n", 128 | "&= \\text{bias}^2 + \\mathbb{V}_\\theta(\\hat{\\theta_n})\n", 129 | "\\end{align}\n", 130 | "$$" 131 | ] 132 | }, 133 | { 134 | "cell_type": "markdown", 135 | "metadata": {}, 136 | "source": [ 137 | "**Theorem 7.9**. If $\\text{bias} \\rightarrow 0$ and $\\text{se} \\rightarrow 0$ as $n \\rightarrow \\infty$ then $\\hat{\\theta_n}$ is consistent, that is, $\\hat{\\theta_n} \\xrightarrow{\\text{P}} \\theta$.\n", 138 | "\n", 139 | "**Proof**. If $\\text{bias} \\rightarrow 0$ and $\\text{se} \\rightarrow 0$ then, by Theorem 7.8, $\\text{MSE} \\rightarrow 0$. It follows that $\\hat{\\theta_n} \\xrightarrow{\\text{qm}} \\theta$ -- and convergence in quadratic mean implies convergence in probability."
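Theorem 7.8 is easy to verify by simulation with the Bernoulli estimator $\hat{p_n}$ from the example above. A sketch (not part of the original notebook; the choices of $n$, $p$, and the number of repetitions are arbitrary):

```python
import random

random.seed(0)

def simulate_phat(n, p, reps=20_000):
    """Draws from the sampling distribution of p_hat = mean of n Bernoulli(p) variables."""
    return [sum(random.random() < p for _ in range(n)) / n for _ in range(reps)]

n, p = 30, 0.3
draws = simulate_phat(n, p)

mean = sum(draws) / len(draws)
bias = mean - p                                          # estimate of bias(p_hat)
var = sum((d - mean) ** 2 for d in draws) / len(draws)   # estimate of V(p_hat)
mse = sum((d - p) ** 2 for d in draws) / len(draws)      # estimate of MSE

# Theorem 7.8: MSE = bias^2 + variance; for these empirical averages the
# identity holds exactly, and var should be close to p(1-p)/n.
print(round(mse, 5), round(bias ** 2 + var, 5))
```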
140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "metadata": {}, 145 | "source": [ 146 | "An estimator is **asymptotically Normal** if\n", 147 | "\n", 148 | "$$ \\frac{\\hat{\\theta_n} - \\theta}{\\text{se}} \\leadsto N(0, 1) $$" 149 | ] 150 | }, 151 | { 152 | "cell_type": "markdown", 153 | "metadata": {}, 154 | "source": [ 155 | "#### 7.3.2 Confidence sets\n", 156 | "\n", 157 | "A $1 - \\alpha$ **confidence interval** for a parameter $\\theta$ is an interval $C_n = (a, b)$ where $a = a(X_1, \\dots, X_n)$ and $b = b(X_1, \\dots, X_n)$ are functions of the data such that\n", 158 | "\n", 159 | "$$\\mathbb{P}_\\theta(\\theta \\in C_n) \\geq 1 - \\alpha, \\;\\; \\text{for all } \\theta \\in \\Theta$$\n", 160 | "\n", 161 | "In words, $(a, b)$ traps $\\theta$ with probability $1 - \\alpha$. We call $1 - \\alpha$ the **coverage** of the confidence interval.\n", 162 | "\n", 163 | "Note: **$C_n$ is random and $\\theta$ is fixed!**\n", 164 | "\n", 165 | "If $\\theta$ is a vector then we use a confidence set (such as a sphere or ellipse) instead of an interval." 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "metadata": {}, 171 | "source": [ 172 | "Point estimators often have a limiting Normal distribution, meaning $\\hat{\\theta_n} \\approx N(\\theta, \\hat{\\text{se}}^2)$. In this case we can construct (approximate) confidence intervals as follows:" 173 | ] 174 | }, 175 | { 176 | "cell_type": "markdown", 177 | "metadata": {}, 178 | "source": [ 179 | "**Theorem 7.14 (Normal-based Confidence Interval)**. Suppose that $\\hat{\\theta_n} \\approx N(\\theta, \\hat{\\text{se}}^2)$. Let $\\Phi$ be the CDF of a standard Normal and let $z_{\\alpha/2} = \\Phi^{-1}\\left(1 - (\\alpha / 2)\\right)$, that is, $\\mathbb{P}(Z > z_{\\alpha/2}) = \\alpha/2$ and $\\mathbb{P}(-z_{\\alpha/2} < Z < z_{\\alpha/2}) = 1 - \\alpha$ where $Z \\sim N(0, 1)$. 
Let\n", 180 | "\n", 181 | "$$ C_n = \\left(\\hat{\\theta_n} - z_{\\alpha/2} \\hat{\\text{se}}, \\; \\hat{\\theta_n} + z_{\\alpha/2} \\hat{\\text{se}}\\right) $$\n", 182 | "\n", 183 | "Then\n", 184 | "\n", 185 | "$$ \\mathbb{P}_\\theta(\\theta \\in C_n) \\rightarrow 1 - \\alpha $$\n", 186 | "\n", 187 | "**Proof**.\n", 188 | "\n", 189 | "Let $Z_n = (\\hat{\\theta_n} - \\theta) / \\hat{\\text{se}}$. By assumption $Z_n \\leadsto Z \\sim N(0, 1)$. Hence,\n", 190 | "\n", 191 | "$$\n", 192 | "\\begin{align}\n", 193 | "\\mathbb{P}_\\theta(\\theta \\in C_n) \n", 194 | "& = \\mathbb{P}_\\theta \\left(\\hat{\\theta_n} - z_{\\alpha/2} \\hat{\\text{se}} < \\theta < \\hat{\\theta_n} + z_{\\alpha/2} \\hat{\\text{se}} \\right) \\\\\n", 195 | "& = \\mathbb{P}_\\theta \\left(-z_{\\alpha/2} < \\frac{\\hat{\\theta_n} - \\theta}{\\hat{\\text{se}}} < z_{\\alpha/2} \\right) \\\\\n", 196 | "& \\rightarrow \\mathbb{P}\\left(-z_{\\alpha/2} < Z < z_{\\alpha/2}\\right) \\\\\n", 197 | "&= 1 - \\alpha\n", 198 | "\\end{align}\n", 199 | "$$" 200 | ] 201 | }, 202 | { 203 | "cell_type": "markdown", 204 | "metadata": {}, 205 | "source": [ 206 | "#### 7.3.3 Hypothesis Testing\n", 207 | "\n", 208 | "In **hypothesis testing**, we start with some default theory -- called a **null hypothesis** -- and we ask if the data provide sufficient evidence to reject the theory. If not, we retain the null hypothesis."
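The coverage statement in Theorem 7.14 can be illustrated by simulation for Bernoulli data. A sketch assuming $\alpha = 0.05$ (the choices of $n$, $p$, and the number of repetitions are arbitrary):

```python
import math
import random

random.seed(1)

z = 1.959963984540054  # z_{alpha/2} for alpha = 0.05, i.e. Phi^{-1}(0.975)

def normal_ci(n, p):
    """One simulated Normal-based 95% interval for a Bernoulli p, per Theorem 7.14."""
    x = sum(random.random() < p for _ in range(n))
    phat = x / n
    se_hat = math.sqrt(phat * (1 - phat) / n)
    return phat - z * se_hat, phat + z * se_hat

n, p, reps = 200, 0.4, 5_000
covered = sum(lo < p < hi for lo, hi in (normal_ci(n, p) for _ in range(reps)))
coverage = covered / reps
print(round(coverage, 3))  # should be close to the nominal 0.95
```

The coverage is only asymptotic, so for small $n$ (or $p$ near 0 or 1) the realized coverage can fall noticeably below $1 - \alpha$.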
209 | ] 210 | }, 211 | { 212 | "cell_type": "markdown", 213 | "metadata": {}, 214 | "source": [ 215 | "### 7.5 Technical Appendix" 216 | ] 217 | }, 218 | { 219 | "cell_type": "markdown", 220 | "metadata": {}, 221 | "source": [ 222 | "- Our definition of confidence interval requires that $\\mathbb{P}_\\theta(\\theta \\in C_n) \\geq 1 - \\alpha$ for all $\\theta \\in \\Theta$.\n", 223 | "- A **pointwise asymptotic** confidence interval requires that $\\liminf_{n \\rightarrow \\infty} \\mathbb{P}_\\theta(\\theta \\in C_n) \\geq 1 - \\alpha$ for all $\\theta \\in \\Theta$.\n", 224 | "- A **uniform asymptotic** confidence interval requires that $\\liminf_{n \\rightarrow \\infty} \\inf_{\\theta \\in \\Theta} \\mathbb{P}_\\theta(\\theta \\in C_n) \\geq 1 - \\alpha$.\n", 225 | "\n", 226 | "The approximate Normal-based interval is a pointwise asymptotic confidence interval. In general, it might not be a uniform asymptotic confidence interval." 227 | ] 228 | } 229 | ], 230 | "metadata": { 231 | "kernelspec": { 232 | "display_name": "Python 3", 233 | "language": "python", 234 | "name": "python3" 235 | }, 236 | "language_info": { 237 | "codemirror_mode": { 238 | "name": "ipython", 239 | "version": 3 240 | }, 241 | "file_extension": ".py", 242 | "mimetype": "text/x-python", 243 | "name": "python", 244 | "nbconvert_exporter": "python", 245 | "pygments_lexer": "ipython3", 246 | "version": "3.7.3" 247 | } 248 | }, 249 | "nbformat": 4, 250 | "nbformat_minor": 2 251 | } 252 | -------------------------------------------------------------------------------- /Chapter 11 - Hypothesis Testing and p-values.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## 11. 
Hypothesis Testing and p-values" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "Suppose we partition the parameter space $\\Theta$ into two disjoint sets $\\Theta_0$ and $\\Theta_1$ and we wish to test\n", 15 | "\n", 16 | "$$\n", 17 | "H_0: \\theta \\in \\Theta_0\n", 18 | "\\quad \\text{versus} \\quad\n", 19 | "H_1: \\theta \\in \\Theta_1\n", 20 | "$$\n", 21 | "\n", 22 | "We call $H_0$ the **null hypothesis** and $H_1$ the **alternative hypothesis**." 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "Let $X$ be a random variable and let $\\mathcal{X}$ be the range of $X$. We test a hypothesis by finding an appropriate subset of outcomes $R \\subset \\mathcal{X}$ called the **rejection region**.\n", 30 | "\n", 31 | "$$\n", 32 | "\\begin{align}\n", 33 | "X \\in R & \\Longrightarrow \\text{reject } H_0 \\\\\n", 34 | "X \\notin R & \\Longrightarrow \\text{retain (do not reject) } H_0\n", 35 | "\\end{align}\n", 36 | "$$\n", 37 | "\n", 38 | "Usually the rejection region is of the form\n", 39 | "\n", 40 | "$$ R = \\bigg\\{x: T(x) > c \\bigg\\}$$\n", 41 | "\n", 42 | "where $T$ is a **test statistic** and $c$ is a **critical value**. The problem in hypothesis testing is to find an appropriate test statistic $T$ and an appropriate cutoff value $c$."
43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "| | Retain Null | Reject Null |\n", 50 | "|------------|---------------|--------------| \n", 51 | "|$H_0$ true | $\\checkmark$ | Type I error |\n", 52 | "|$H_1$ true | Type II error | $\\checkmark$ |" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": {}, 58 | "source": [ 59 | "The **power function** of a test with rejection region $R$ is defined by\n", 60 | "\n", 61 | "$$\\beta(\\theta) = \\mathbb{P}_\\theta(X \\in R)$$\n", 62 | "\n", 63 | "The **size** of a test is defined to be\n", 64 | "\n", 65 | "$$\\alpha = \\sup_{\\theta \\in \\Theta_0} \\beta(\\theta)$$\n", 66 | "\n", 67 | "A test is said to have **level $\\alpha$** if its size is less than or equal to $\\alpha$." 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "metadata": {}, 73 | "source": [ 74 | "- A hypothesis of the form $\\theta = \\theta_0$ is called a **simple hypothesis**.\n", 75 | "- A hypothesis of the form $\\theta > \\theta_0$ or $\\theta < \\theta_0$ is called a **composite hypothesis**.\n", 76 | "- A test of the form $H_0 : \\theta = \\theta_0$ versus $H_1 : \\theta \\neq \\theta_0$ is called a **two-sided test**.\n", 77 | "- A test of the form $H_0 : \\theta \\leq \\theta_0$ versus $H_1: \\theta > \\theta_0$ or $H_0: \\theta \\geq \\theta_0$ versus $H_1: \\theta < \\theta_0$ is called a **one-sided test**.\n", 78 | "\n", 79 | "The most common tests are two-sided." 80 | ] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "metadata": {}, 85 | "source": [ 86 | "Finding most powerful tests is hard and, in many cases, most powerful tests don't even exist. We will just consider three widely used tests: the Wald test, the $\\chi^2$ test, and the permutation test. A fourth test, the likelihood ratio test, is discussed in the appendix." 
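To make the power and size definitions concrete, here is a small sketch (not part of the original notebook) for a one-sided test of a Normal mean with known unit variance; the particular test $H_0: \theta \leq 0$, the sample size, and the grid over $\Theta_0$ are illustrative assumptions:

```python
import math

def Phi(x):
    """Standard Normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(theta, n, z_alpha):
    """Power function beta(theta) of the test that rejects H0: theta <= 0
    when sqrt(n) * Xbar > z_alpha, for X_1..X_n ~ N(theta, 1):
    beta(theta) = P_theta(X in R) = 1 - Phi(z_alpha - sqrt(n) * theta)."""
    return 1 - Phi(z_alpha - math.sqrt(n) * theta)

n, z_alpha = 25, 1.6448536269514722  # z_alpha for alpha = 0.05

# Size = sup over Theta_0 = (-inf, 0] of beta(theta); beta is increasing in
# theta, so the sup is attained at the boundary theta = 0 and equals alpha.
size = max(power(t, n, z_alpha) for t in [-1 + 0.01 * i for i in range(101)])
print(round(size, 4))                     # 0.05
print(round(power(0.5, n, z_alpha), 4))   # power well above alpha at theta = 0.5
```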
87 | ] 88 | }, 89 | { 90 | "cell_type": "markdown", 91 | "metadata": {}, 92 | "source": [ 93 | "### 11.1 The Wald Test" 94 | ] 95 | }, 96 | { 97 | "cell_type": "markdown", 98 | "metadata": {}, 99 | "source": [ 100 | "Consider testing\n", 101 | "\n", 102 | "$$ H_0: \\theta = \\theta_0\n", 103 | "\\quad \\text{versus} \\quad\n", 104 | "H_1: \\theta \\neq \\theta_0$$\n", 105 | "\n", 106 | "Assume that, under $H_0$, $\\hat{\\theta}$ is asymptotically Normal:\n", 107 | "\n", 108 | "$$ \\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} \\leadsto N(0, 1) $$\n", 109 | "\n", 110 | "The size $\\alpha$ **Wald test** is: reject $H_0$ when $|W| > z_{\\alpha/2}$, where\n", 111 | "\n", 112 | "$$ W = \\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}}$$" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "**Theorem 11.4**. Asymptotically, the Wald test has size $\\alpha$, that is,\n", 120 | "\n", 121 | "$$ \\mathbb{P}_{\\theta_0} \\left(|W| > z_{\\alpha/2} \\right) \\rightarrow \\alpha$$\n", 122 | "\n", 123 | "as $n \\rightarrow \\infty$." 124 | ] 125 | }, 126 | { 127 | "cell_type": "markdown", 128 | "metadata": {}, 129 | "source": [ 130 | "**Proof**. 
Under $\\theta = \\theta_0$, $(\\hat{\\theta} - \\theta_0) / \\hat{\\text{se}} \\leadsto N(0, 1)$, so the probability of rejecting when the null hypothesis $\\theta = \\theta_0$ is true is\n", 131 | "\n", 132 | "$$\n", 133 | "\\begin{align}\n", 134 | "\\mathbb{P}_{\\theta_0}(|W| > z_{\\alpha / 2}) &= \\mathbb{P}_{\\theta_0} \\left(\\frac{|\\hat{\\theta} - \\theta_0|}{\\hat{\\text{se}}} > z_{\\alpha/2} \\right) \\\\\n", 135 | "& \\rightarrow \\mathbb{P}_{\\theta_0}(| N(0, 1) | > z_{\\alpha/2}) \\\\\n", 136 | "& = \\alpha\n", 137 | "\\end{align}\n", 138 | "$$" 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": {}, 144 | "source": [ 145 | "Most texts define the Wald test with the standard error computed at $\\theta = \\theta_0$ rather than at the estimated value $\\hat{\\theta}$. Both versions are valid." 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "**Theorem 11.6**. Suppose that the true value of $\\theta$ is $\\theta_* \\neq \\theta_0$. The power $\\beta(\\theta_*)$, the probability of correctly rejecting the null hypothesis, is given (approximately) by\n", 153 | "\n", 154 | "$$ 1 - \\Phi \\left(\\frac{\\theta_0 - \\theta_*}{\\hat{\\text{se}}} + z_{\\alpha/2} \\right) + \\Phi \\left(\\frac{\\theta_0 - \\theta_*}{\\hat{\\text{se}}} - z_{\\alpha/2} \\right) $$" 155 | ] 156 | }, 157 | { 158 | "cell_type": "markdown", 159 | "metadata": {}, 160 | "source": [ 161 | "**Theorem 11.10**. The size $\\alpha$ Wald test rejects $H_0: \\theta = \\theta_0$ versus $H_1: \\theta \\neq \\theta_0$ if and only if $\\theta_0 \\notin C$ where\n", 162 | "\n", 163 | "$$ C = \\left(\\hat{\\theta} - \\hat{\\text{se}} z_{\\alpha/2}, \\; \\hat{\\theta} + \\hat{\\text{se}} z_{\\alpha / 2} \\right) $$\n", 164 | "\n", 165 | "Thus, testing the hypothesis is equivalent to checking whether the null value is in the confidence interval."
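The duality in Theorem 11.10 between the Wald test and the Normal-based interval can be checked directly. A sketch (the Bernoulli numbers below are made up for illustration):

```python
import math

z = 1.959963984540054  # z_{alpha/2} for alpha = 0.05

def wald_test(theta_hat, se_hat, theta0, z=z):
    """Size-alpha Wald test: reject H0: theta = theta0 when |W| > z_{alpha/2}."""
    W = (theta_hat - theta0) / se_hat
    return abs(W) > z

def wald_ci(theta_hat, se_hat, z=z):
    """The matching Normal-based 1 - alpha confidence interval."""
    return theta_hat - z * se_hat, theta_hat + z * se_hat

# Example: X_1..X_n ~ Bernoulli(p), n = 100 with 62 successes, testing p0 = 0.5.
n, x, p0 = 100, 62, 0.5
phat = x / n
se_hat = math.sqrt(phat * (1 - phat) / n)

reject = wald_test(phat, se_hat, p0)
lo, hi = wald_ci(phat, se_hat)
# Theorem 11.10: rejecting is the same event as p0 falling outside (lo, hi).
print(reject, p0 < lo or p0 > hi)
```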
166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "metadata": {}, 171 | "source": [ 172 | "### 11.2 p-values" 173 | ] 174 | }, 175 | { 176 | "cell_type": "markdown", 177 | "metadata": {}, 178 | "source": [ 179 | "Suppose that for every $\\alpha \\in (0, 1)$ we have a size $\\alpha$ test with rejection region $R_\\alpha$. Then,\n", 180 | "\n", 181 | "$$ \\text{p-value} = \\inf \\Big\\{ \\alpha : T(X^n) \\in R_\\alpha \\Big\\} $$\n", 182 | "\n", 183 | "That is, the p-value is the smallest level at which we can reject $H_0$." 184 | ] 185 | }, 186 | { 187 | "cell_type": "markdown", 188 | "metadata": {}, 189 | "source": [ 190 | "Informally, the p-value is a measure of the evidence against $H_0$: the smaller the p-value, the stronger the evidence against $H_0$. Typically, researchers use the following evidence scale:\n", 191 | "\n", 192 | "| p-value | evidence |\n", 193 | "|------------|-------------------------------------|\n", 194 | "| under 1% | very strong evidence against $H_0$ |\n", 195 | "| 1% to 5% | strong evidence against $H_0$ |\n", 196 | "| 5% to 10% | weak evidence against $H_0$ |\n", 197 | "| over 10% | little or no evidence against $H_0$ |" 198 | ] 199 | }, 200 | { 201 | "cell_type": "markdown", 202 | "metadata": {}, 203 | "source": [ 204 | "**Warning**: a large p-value is not strong evidence in favor of $H_0$. A large p-value can occur for two reasons:\n", 205 | "- $H_0$ is true, or\n", 206 | "- $H_0$ is false but the test has low power" 207 | ] 208 | }, 209 | { 210 | "cell_type": "markdown", 211 | "metadata": {}, 212 | "source": [ 213 | "**The p-value is not the probability that the null hypothesis is true**. We discuss quantities like $\\mathbb{P}(H_0 | \\text{Data})$ in the chapter on Bayesian inference." 214 | ] 215 | }, 216 | { 217 | "cell_type": "markdown", 218 | "metadata": {}, 219 | "source": [ 220 | "**Theorem 11.12**. 
Suppose that the size $\\alpha$ test is of the form\n", 221 | "\n", 222 | "$$ \\text{reject } H_0 \\text{ if and only if } T(X^n) \\geq c_\\alpha$$\n", 223 | "\n", 224 | "Then,\n", 225 | "\n", 226 | "$$ \\text{p-value} = \\sup_{\\theta \\in \\Theta_0} \\mathbb{P}_\\theta \\left(T(X^n) \\geq T(x^n) \\right)$$\n", 227 | "\n", 228 | "In words, the p-value is the probability (under $H_0$) of observing a value of the test statistic as or more extreme than what was actually observed." 229 | ] 230 | }, 231 | { 232 | "cell_type": "markdown", 233 | "metadata": {}, 234 | "source": [ 235 | "For a Wald test, $W$ has an approximate $N(0, 1)$ distribution under $H_0$. Hence, the p-value is\n", 236 | "\n", 237 | "$$ \\text{p-value} \\approx \\mathbb{P}(|Z| > |w|) = 2\\mathbb{P}(Z < -|w|) = 2 \\Phi(-|w|) $$\n", 238 | "\n", 239 | "where $Z \\sim N(0, 1)$ and $w = (\\hat{\\theta} - \\theta_0) / \\hat{\\text{se}}$ is the observed value of the test statistic." 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "metadata": {}, 245 | "source": [ 246 | "**Theorem 11.13**. If the test statistic has a continuous distribution, then under $H_0: \\theta = \\theta_0$ the p-value has a $\\text{Uniform}(0, 1)$ distribution." 247 | ] 248 | }, 249 | { 250 | "cell_type": "markdown", 251 | "metadata": {}, 252 | "source": [ 253 | "### 11.3 The $\\chi^2$ distribution" 254 | ] 255 | }, 256 | { 257 | "cell_type": "markdown", 258 | "metadata": {}, 259 | "source": [ 260 | "Let $Z_1, \\dots, Z_k$ be independent, standard normals. Let $V = \\sum_{i=1}^k Z_i^2$. Then we say that $V$ has a $\\chi^2$ distribution with $k$ degrees of freedom, written $V \\sim \\chi^2_k$. The probability density of $V$ is\n", 261 | "\n", 262 | "$$ f(v) = \\frac{v^{(k/2) - 1}e^{-v/2}}{2^{k/2} \\Gamma(k / 2)} $$\n", 263 | "\n", 264 | "for $v > 0$. It can be shown that $\\mathbb{E}(V) = k$ and $\\mathbb{V}(V) = 2k$. 
We define the upper $\\alpha$ quantile $\\chi^2_{k, \\alpha} = F^{-1}(1 - \\alpha)$ where $F$ is the CDF. That is, $\\mathbb{P}(\\chi^2_k > \\chi^2_{k, \\alpha}) = \\alpha$." 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "### 11.4 Pearson's $\\chi^2$ Test for Multinomial Data" 272 | ] 273 | }, 274 | { 275 | "cell_type": "markdown", 276 | "metadata": {}, 277 | "source": [ 278 | "Recall that $X = (X_1, \\dots, X_k)$ has a multinomial distribution if\n", 279 | "\n", 280 | "$$ f(x_1, \\dots, x_k; p) = \\begin{pmatrix}\n", 281 | "n \\\\\n", 282 | "x_1 \\dots x_k\n", 283 | "\\end{pmatrix} p_1^{x_1} \\dots p_k^{x_k}$$\n", 284 | "\n", 285 | "where\n", 286 | "\n", 287 | "$$\n", 288 | "\\begin{pmatrix}\n", 289 | "n \\\\\n", 290 | "x_1 \\dots x_k\n", 291 | "\\end{pmatrix} = \n", 292 | "\\frac{n!}{x_1! \\dots x_k!}\n", 293 | "$$\n", 294 | "\n", 295 | "The MLE of $p$ is $\\hat{p} = (\\hat{p_1}, \\dots, \\hat{p_k}) = \\left(X_1 / n, \\dots, X_k / n\\right)$." 296 | ] 297 | }, 298 | { 299 | "cell_type": "markdown", 300 | "metadata": {}, 301 | "source": [ 302 | "Let $(p_{01}, \\dots, p_{0k})$ be some fixed set of probabilities and suppose we want to test\n", 303 | "\n", 304 | "$$ H_0: (p_1, \\dots, p_k) = (p_{01}, \\dots, p_{0k})\n", 305 | "\\quad \\text{versus} \\quad\n", 306 | "H_1: (p_1, \\dots, p_k) \\neq (p_{01}, \\dots, p_{0k})$$\n", 307 | "\n", 308 | "**Pearson's $\\chi^2$ statistic** is\n", 309 | "\n", 310 | "$$ T = \\sum_{j=1}^k \\frac{(X_j - np_{0j})^2}{np_{0j}} = \\sum_{j=1}^k \\frac{(O_j - E_j)^2}{E_j}$$\n", 311 | "\n", 312 | "where $O_j = X_j$ is the observed data and $E_j = \\mathbb{E}(X_j) = np_{0j}$ is the expected value of $X_j$ under $H_0$." 313 | ] 314 | }, 315 | { 316 | "cell_type": "markdown", 317 | "metadata": {}, 318 | "source": [ 319 | "**Theorem 11.15**. Under $H_0$, $T \\leadsto \\chi^2_{k - 1}$. Hence the test: reject $H_0$ if $T > \\chi^2_{k - 1, \\alpha}$ has asymptotic level $\\alpha$. 
The p-value is $\\mathbb{P}(\\chi^2_{k - 1} > t)$ where $t$ is the observed value of the test statistic." 320 | ] 321 | }, 322 | { 323 | "cell_type": "markdown", 324 | "metadata": {}, 325 | "source": [ 326 | "### 11.5 The Permutation Test" 327 | ] 328 | }, 329 | { 330 | "cell_type": "markdown", 331 | "metadata": {}, 332 | "source": [ 333 | "Suppose that $X_1, \\dots, X_m \\sim F_X$ and $Y_1, \\dots, Y_n \\sim F_Y$ are two independent samples and $H_0$ is the hypothesis that both samples are identically distributed. More precisely, we are testing\n", 334 | "\n", 335 | "$$ H_0: F_X = F_Y \\quad \\text{versus} \\quad H_1: F_X \\neq F_Y$$\n", 336 | "\n", 337 | "Let $T(x_1, \\dots, x_m, y_1, \\dots, y_n)$ be some test statistic, for example\n", 338 | "\n", 339 | "$$ T(X_1, \\dots, X_m, Y_1, \\dots, Y_n) = | \\overline{X}_m - \\overline{Y}_n |$$\n", 340 | "\n", 341 | "Let $N = m + n$ and consider forming all $N!$ permutations of the data $X_1, \\dots, X_m, Y_1, \\dots, Y_n$. For each permutation, compute the test statistic $T$. Denote these values by $T_1, \\dots, T_{N!}$. Under the null hypothesis, each of these values is equally likely. The distribution $P_0$ that puts mass $1 / N!$ on each $T_j$ is called the **permutation distribution** of $T$. Let $t_\\text{obs}$ be the observed value of the test statistic. Assuming we reject when $T$ is large, the p-value is\n", 342 | "\n", 343 | "$$ \\text{p-value} = \\mathbb{P}_0(T > t_\\text{obs}) = \\frac{1}{N!} \\sum_{j=1}^{N!} I(T_j > t_\\text{obs}) $$" 344 | ] 345 | }, 346 | { 347 | "cell_type": "markdown", 348 | "metadata": {}, 349 | "source": [ 350 | "Usually it is not practical to evaluate all $N!$ permutations. We can approximate the p-value by sampling randomly from the set of permutations. The fraction of times $T_j > t_\\text{obs}$ among these samples approximates the p-value."
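A sketch of this random-sampling approximation for the statistic $T = |\overline{X}_m - \overline{Y}_n|$ (the two samples and the value of $B$ below are made-up illustrative choices):

```python
import random

random.seed(2)

def permutation_pvalue(x, y, B=5_000):
    """Approximate permutation p-value for T = |mean(X) - mean(Y)|,
    using B random permutations of the pooled data instead of all N! of them."""
    t_obs = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled, m = x + y, len(x)
    count = 0
    for _ in range(B):
        random.shuffle(pooled)  # one random relabeling of the pooled data
        t = abs(sum(pooled[:m]) / m - sum(pooled[m:]) / (len(pooled) - m))
        count += t > t_obs
    return count / B

# Two small samples; under H0: F_X = F_Y the group labels are exchangeable.
x = [1.2, 0.9, 1.5, 2.1, 1.7]
y = [0.4, 0.2, 0.8, 0.5, 0.9, 0.3]
p = permutation_pvalue(x, y)
print(round(p, 3))  # small p-value: the group means differ markedly
```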
351 | ] 352 | }, 353 | { 354 | "cell_type": "markdown", 355 | "metadata": {}, 356 | "source": [ 357 | "**Algorithm for the Permutation Test**\n", 358 | "\n", 359 | "1. Compute the observed value of the test statistic $t_\\text{obs} = T(X_1, \\dots, X_m, Y_1, \\dots, Y_n)$.\n", 360 | "\n", 361 | "2. Randomly permute the data. Compute the statistic again using the permuted data.\n", 362 | "\n", 363 | "3. Repeat the previous step $B$ times and let $T_1, \\dots, T_B$ denote the resulting values.\n", 364 | "\n", 365 | "4. The approximate p-value is\n", 366 | "\n", 367 | "$$ \\frac{1}{B} \\sum_{j=1}^B I(T_j > t_\\text{obs}) $$" 368 | ] 369 | }, 370 | { 371 | "cell_type": "markdown", 372 | "metadata": {}, 373 | "source": [ 374 | "In large samples, the permutation test usually gives similar results to a test that is based on large sample theory. The permutation test is thus most useful for small samples." 375 | ] 376 | }, 377 | { 378 | "cell_type": "markdown", 379 | "metadata": {}, 380 | "source": [ 381 | "### 11.6 Multiple Testing" 382 | ] 383 | }, 384 | { 385 | "cell_type": "markdown", 386 | "metadata": {}, 387 | "source": [ 388 | "Consider $m$ hypothesis tests:\n", 389 | "\n", 390 | "$$ H_{0i} \\quad \\text{versus} \\quad H_{1i}, \\quad i = 1, \\dots, m$$\n", 391 | "\n", 392 | "and let $P_1, \\dots, P_m$ denote $m$ p-values for these tests." 393 | ] 394 | }, 395 | { 396 | "cell_type": "markdown", 397 | "metadata": {}, 398 | "source": [ 399 | "**The Bonferroni Method**: Given p-values $P_1, \\dots, P_m$, reject null hypothesis $H_{0i}$ if $P_i < \\alpha / m$." 400 | ] 401 | }, 402 | { 403 | "cell_type": "markdown", 404 | "metadata": {}, 405 | "source": [ 406 | "**Theorem 11.19**. Using the Bonferroni method, the probability of falsely rejecting any null hypothesis is less than or equal to $\\alpha$." 407 | ] 408 | }, 409 | { 410 | "cell_type": "markdown", 411 | "metadata": {}, 412 | "source": [ 413 | "**Proof**. 
Let $R$ be the event that at least one null hypothesis is falsely rejected. Let $R_i$ be the event that the $i$-th null hypothesis is falsely rejected. Recall that if $A_1, \\dots, A_k$ are events then $\\mathbb{P} \\left( \\bigcup_{i=1}^k A_i \\right) \\leq \\sum_{i=1}^k \\mathbb{P}(A_i)$. Hence,\n", 414 | "\n", 415 | "$$\\mathbb{P}(R) = \\mathbb{P}\\left( \\bigcup_{i=1}^m R_i \\right) \\leq \\sum_{i=1}^m \\mathbb{P}(R_i) = \\sum_{i=1}^m \\frac{\\alpha}{m} = \\alpha$$" 416 | ] 417 | }, 418 | { 419 | "cell_type": "markdown", 420 | "metadata": {}, 421 | "source": [ 422 | "The Bonferroni Method is very conservative because it is trying to make it unlikely that you would make even one false rejection. Sometimes, a more reasonable idea is to control the **false discovery rate** (FDR) which is defined as the mean number of false rejections divided by the number of rejections." 423 | ] 424 | }, 425 | { 426 | "cell_type": "markdown", 427 | "metadata": {}, 428 | "source": [ 429 | "| | $H_0$ not rejected | $H_0$ rejected | Total |\n", 430 | "|-----------|---------------------|-----------------|-------|\n", 431 | "|$H_0$ true | $U$ | $V$ | $m_0$ |\n", 432 | "|$H_0$ false| $T$ | $S$ | $m_1$ |\n", 433 | "|Total | $m - R$ | $R$ | $m$ |" 434 | ] 435 | }, 436 | { 437 | "cell_type": "markdown", 438 | "metadata": {}, 439 | "source": [ 440 | "Define the **false discovery proportion** (FDP):\n", 441 | "\n", 442 | "$$ \\text{FDP} = \\begin{cases}\n", 443 | "V / R & \\text{if } R > 0\\\\\n", 444 | "0 & \\text{if } R = 0\n", 445 | "\\end{cases}$$\n", 446 | "\n", 447 | "The FDP is the proportion of rejections that are incorrect. Next define $\\text{FDR} = \\mathbb{E}(\\text{FDP})$." 448 | ] 449 | }, 450 | { 451 | "cell_type": "markdown", 452 | "metadata": {}, 453 | "source": [ 454 | "**The Benjamini-Hochberg (BH) Method**\n", 455 | "\n", 456 | "1. Let $P_{(1)} < \\cdots < P_{(m)}$ denote the ordered p-values.\n", 457 | "\n", 458 | "2. 
Define\n", 459 | "\n", 460 | "$$\\ell_i = \\frac{i \\alpha}{C_m m},\n", 461 | "\\quad \\text{and} \\quad\n", 462 | "R = \\max \\bigg\\{ i: P_{(i)} < \\ell_i \\bigg\\}$$\n", 463 | "\n", 464 | "where $C_m$ is defined to be 1 if the p-values are independent and $C_m = \\sum_{i=1}^m (1/i)$ otherwise.\n", 465 | "\n", 466 | "3. Let $t = P_{(R)}$; we call $t$ the **BH rejection threshold**.\n", 467 | "\n", 468 | "4. Reject all null hypotheses $H_{0i}$ for which $P_i \\leq t$." 469 | ] 470 | }, 471 | { 472 | "cell_type": "markdown", 473 | "metadata": {}, 474 | "source": [ 475 | "**Theorem 11.21 (Benjamini and Hochberg)**. If the procedure above is applied, then regardless of how many nulls are true and regardless of the distribution of p-values when the null hypothesis is false,\n", 476 | "\n", 477 | "$$\\text{FDR} = \\mathbb{E}(\\text{FDP}) \\leq \\frac{m_0}{m} \\alpha \\leq \\alpha $$" 478 | ] 479 | }, 480 | { 481 | "cell_type": "markdown", 482 | "metadata": {}, 483 | "source": [ 484 | "### 11.7 Technical Appendix" 485 | ] 486 | }, 487 | { 488 | "cell_type": "markdown", 489 | "metadata": {}, 490 | "source": [ 491 | "#### 11.7.1 The Neyman-Pearson Lemma" 492 | ] 493 | }, 494 | { 495 | "cell_type": "markdown", 496 | "metadata": {}, 497 | "source": [ 498 | "**Theorem 11.23 (Neyman-Pearson)**. Suppose we test $H_0: \\theta = \\theta_0$ versus $H_1: \\theta = \\theta_1$. Let\n", 499 | "\n", 500 | "$$ T = \\frac{\\mathcal{L}(\\theta_1)}{\\mathcal{L}(\\theta_0)} = \\frac{\\prod_{i=1}^n f(x_i; \\theta_1)}{\\prod_{i=1}^n f(x_i; \\theta_0)}$$\n", 501 | "\n", 502 | "Suppose we reject $H_0$ when $T > k$. If we choose $k$ so that $\\mathbb{P}_{\\theta_0}(T > k) = \\alpha$ then this test is the most powerful, size $\\alpha$ test. That is, among all tests with size $\\alpha$, this test maximizes power $\\beta(\\theta_1)$."
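Returning to the BH procedure from §11.6, the four steps are straightforward to sketch in code (not part of the original notebook; the p-values below are invented for illustration, and independence is assumed so that $C_m = 1$):

```python
def benjamini_hochberg(pvals, alpha=0.05, independent=True):
    """Return the (sorted) indices of the hypotheses rejected by the BH method."""
    m = len(pvals)
    c_m = 1.0 if independent else sum(1.0 / i for i in range(1, m + 1))
    order = sorted(range(m), key=lambda i: pvals[i])  # indices of P_(1) <= ... <= P_(m)
    # Step 2: R = max { i : P_(i) < l_i } with l_i = i * alpha / (C_m * m).
    R = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] < rank * alpha / (c_m * m):
            R = rank
    if R == 0:
        return []  # no rejections
    # Step 3: the BH rejection threshold t = P_(R).
    t = pvals[order[R - 1]]
    # Step 4: reject every H_0i with P_i <= t.
    return sorted(i for i in range(m) if pvals[i] <= t)

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.9]
print(benjamini_hochberg(pvals))  # [0, 1]
```

Note that rejecting every $P_i \leq t$ (step 4) can reject more hypotheses than $R$ alone suggests when there are ties at the threshold.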
503 | ] 504 | }, 505 | { 506 | "cell_type": "markdown", 507 | "metadata": {}, 508 | "source": [ 509 | "#### 11.7.2 Power of the Wald Test" 510 | ] 511 | }, 512 | { 513 | "cell_type": "markdown", 514 | "metadata": {}, 515 | "source": [ 516 | "**Proof of Theorem 11.6**. \n", 517 | "\n", 518 | "Let $Z \\sim N(0, 1)$. Then,\n", 519 | "\n", 520 | "$$\n", 521 | "\\begin{align}\n", 522 | "\\text{Power} &= \\beta(\\theta_*) \\\\\n", 523 | "&= \\mathbb{P}_{\\theta_*}(\\text{Reject } H_0) \\\\\n", 524 | "&= \\mathbb{P}_{\\theta_*}\\left( \\frac{|\\hat{\\theta} - \\theta_0|}{\\hat{\\text{se}}} > z_{\\alpha/2} \\right) \\\\\n", 525 | "&= \\mathbb{P}_{\\theta_*}\\left( \\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} > z_{\\alpha/2} \\right) \n", 526 | "+ \\mathbb{P}_{\\theta_*}\\left( \\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} < -z_{\\alpha/2} \\right) \\\\\n", 527 | "&= \\mathbb{P}_{\\theta_*}(\\hat{\\theta} > \\theta_0 + \\hat{\\text{se}} z_{\\alpha/2})\n", 528 | "+ \\mathbb{P}_{\\theta_*}(\\hat{\\theta} < \\theta_0 - \\hat{\\text{se}} z_{\\alpha/2}) \\\\\n", 529 | "&= \\mathbb{P}_{\\theta_*}\\left( \\frac{\\hat{\\theta} - \\theta_*}{\\hat{\\text{se}}} > \\frac{\\theta_0 - \\theta_*}{\\hat{\\text{se}}} + z_{\\alpha/2} \\right) \n", 530 | "+ \\mathbb{P}_{\\theta_*}\\left( \\frac{\\hat{\\theta} - \\theta_*}{\\hat{\\text{se}}} < \\frac{\\theta_0 - \\theta_*}{\\hat{\\text{se}}} - z_{\\alpha/2} \\right) \\\\\n", 531 | "& \\approx \\mathbb{P}\\left(Z > \\frac{\\theta_0 - \\theta_*}{\\hat{\\text{se}}} + z_{\\alpha/2} \\right) \n", 532 | "+ \\mathbb{P}\\left(Z < \\frac{\\theta_0 - \\theta_*}{\\hat{\\text{se}}} - z_{\\alpha/2} \\right) \\\\\n", 533 | "&= 1 - \\Phi\\left( \\frac{\\theta_0 - \\theta_*}{\\hat{\\text{se}}} + z_{\\alpha/2} \\right) \n", 534 | "+ \\Phi\\left( \\frac{\\theta_0 - \\theta_*}{\\hat{\\text{se}}} - z_{\\alpha/2} \\right)\n", 535 | "\\end{align}\n", 536 | "$$" 537 | ] 538 | }, 539 | { 540 | "cell_type": "markdown", 
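A small numeric sketch of the power formula just derived (the parameter values below are assumptions for illustration only):

```python
# Power of the size-alpha Wald test, following the formula above:
# beta(theta*) ~= 1 - Phi(d + z_{alpha/2}) + Phi(d - z_{alpha/2}),
# where d = (theta0 - theta*) / se_hat.
from scipy.stats import norm

def wald_power(theta_star, theta0, se, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    d = (theta0 - theta_star) / se
    return 1 - norm.cdf(d + z) + norm.cdf(d - z)

# At theta* = theta0 the formula reduces to the size alpha; the power
# grows as theta* moves away from theta0 or as the standard error shrinks.
print(wald_power(theta_star=0.0, theta0=0.0, se=0.1))   # equals alpha = 0.05
print(wald_power(theta_star=0.3, theta0=0.0, se=0.1))
print(wald_power(theta_star=0.3, theta0=0.0, se=0.05))
```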
541 | "metadata": {}, 542 | "source": [ 543 | "#### 11.7.3 The t-test" 544 | ] 545 | }, 546 | { 547 | "cell_type": "markdown", 548 | "metadata": {}, 549 | "source": [ 550 | "To test $H_0: \\mu = \\mu_0$ where $\\mu$ is the mean, we can use the Wald test. When the data are assumed to be Normal and the sample size is small, it is common instead to use the **t-test**. A random variable $T$ has a *t-distribution with $k$ degrees of freedom* if it has density\n", 551 | "\n", 552 | "$$ f(t) = \\frac{\\Gamma\\left(\\frac{k+1}{2}\\right)}{\\sqrt{k \\pi} \\Gamma\\left(\\frac{k}{2}\\right) \\left(1 + \\frac{t^2}{k}\\right)^{(k+1)/2}}$$\n", 553 | "\n", 554 | "When the degrees of freedom $k \\rightarrow \\infty$, this tends to a Normal distribution. When $k = 1$ it reduces to a Cauchy distribution.\n", 555 | "\n", 556 | "Let $X_1, \\dots, X_n \\sim N(\\mu, \\sigma^2)$ where $\\theta = (\\mu, \\sigma^2)$ are both unknown. Suppose we want to test $\\mu = \\mu_0$ versus $\\mu \\neq \\mu_0$. Let\n", 557 | "\n", 558 | "$$T = \\frac{\\sqrt{n}(\\overline{X}_n - \\mu_0)}{S_n}$$\n", 559 | "\n", 560 | "where $S_n^2$ is the sample variance. For large samples $T \\approx N(0, 1)$ under $H_0$. The exact distribution of $T$ under $H_0$ is $t_{n-1}$. Hence if we reject when $|T| > t_{n-1, \\alpha/2}$ then we get a size $\\alpha$ test." 
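A short sketch of the t-test described above, checking the hand-computed statistic against `scipy.stats.ttest_1samp` (the sample itself is simulated, so all numeric values here are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.4, scale=1.0, size=12)  # small Normal sample (illustrative)
mu0 = 0.0

# T = sqrt(n) * (Xbar - mu0) / S_n, compared against the t_{n-1} distribution
n = len(x)
T = np.sqrt(n) * (x.mean() - mu0) / x.std(ddof=1)
p_value = 2 * stats.t.sf(abs(T), df=n - 1)

# The same test via scipy agrees with the manual computation
res = stats.ttest_1samp(x, popmean=mu0)
assert np.isclose(T, res.statistic) and np.isclose(p_value, res.pvalue)
print(f"T = {T:.3f}, p-value = {p_value:.4f}")
```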
561 | ] 562 | }, 563 | { 564 | "cell_type": "markdown", 565 | "metadata": {}, 566 | "source": [ 567 | "#### 11.7.4 The Likelihood Ratio Test" 568 | ] 569 | }, 570 | { 571 | "cell_type": "markdown", 572 | "metadata": {}, 573 | "source": [ 574 | "Let $\\theta = (\\theta_1, \\dots, \\theta_q, \\theta_{q+1}, \\dots, \\theta_r)$ and suppose that $\\Theta_0$ consists of all parameter values $\\theta$ such that $(\\theta_{q+1}, \\dots, \\theta_r) = (\\theta_{0, q+1}, \\dots, \\theta_{0, r})$.\n", 575 | "\n", 576 | "Define the **likelihood ratio statistic** by\n", 577 | "\n", 578 | "$$ \\lambda \n", 579 | "= 2 \\log \\left( \\frac{\\sup_{\\theta \\in \\Theta} \\mathcal{L}(\\theta)}{\\sup_{\\theta \\in \\Theta_0} \\mathcal{L}(\\theta)} \\right) \n", 580 | "= 2 \\log \\left( \\frac{\\mathcal{L}(\\hat{\\theta})}{\\mathcal{L}(\\hat{\\theta}_0)} \\right) $$\n", 581 | "\n", 582 | "where $\\hat{\\theta}$ is the MLE and $\\hat{\\theta}_0$ is the MLE when $\\theta$ is restricted to lie in $\\Theta_0$. \n", 583 | "\n", 584 | "The **likelihood ratio test** is: reject $H_0$ when $\\lambda(x^n) > \\chi^2_{r-q, \\alpha}$." 585 | ] 586 | }, 587 | { 588 | "cell_type": "markdown", 589 | "metadata": {}, 590 | "source": [ 591 | "**Theorem 11.25**. Under $H_0$,\n", 592 | "\n", 593 | "$$ \\lambda(x^n) \\leadsto \\chi^2_{r - q}$$\n", 594 | "\n", 595 | "Hence, asymptotically, the LR test has level $\\alpha$." 596 | ] 597 | }, 598 | { 599 | "cell_type": "markdown", 600 | "metadata": {}, 601 | "source": [ 602 | "### 11.9 Exercises" 603 | ] 604 | }, 605 | { 606 | "cell_type": "markdown", 607 | "metadata": {}, 608 | "source": [ 609 | "**Exercise 11.9.1**. Prove Theorem 11.13." 610 | ] 611 | }, 612 | { 613 | "cell_type": "markdown", 614 | "metadata": {}, 615 | "source": [ 616 | "If the test statistic has a continuous distribution, then under $H_0: \\theta = \\theta_0$ the p-value has a $\\text{Uniform}(0, 1)$ distribution." 
617 | ] 618 | }, 619 | { 620 | "cell_type": "markdown", 621 | "metadata": {}, 622 | "source": [ 623 | "**Solution**. Let $T$ be the test statistic and $F_T$ be its CDF under $H_0$; suppose the test rejects for large values of $T$, so that $\\text{p-value} = 1 - F_T(T)$. Since $F_T$ is continuous, the probability integral transform gives $F_T(T) \\sim \\text{Uniform}(0, 1)$. Then\n", 624 | "\n", 625 | "$$ F_\\text{p-value}(p) = \\mathbb{P}(\\text{p-value} \\leq p) = \\mathbb{P}(1 - F_T(T) \\leq p) = \\mathbb{P}(F_T(T) \\geq 1 - p) = 1 - (1 - p) = p$$\n", 626 | "\n", 627 | "so the CDF of the p-value is the same as the CDF of the $\\text{Uniform}(0, 1)$ distribution; therefore the p-value follows a $\\text{Uniform}(0, 1)$ distribution." 628 | ] 629 | }, 630 | { 631 | "cell_type": "markdown", 632 | "metadata": {}, 633 | "source": [ 634 | "**Exercise 11.9.2**. Prove Theorem 11.10." 635 | ] 636 | }, 637 | { 638 | "cell_type": "markdown", 639 | "metadata": {}, 640 | "source": [ 641 | "The size $\\alpha$ Wald test rejects $H_0: \\theta = \\theta_0$ versus $H_1: \\theta \\neq \\theta_0$ if and only if $\\theta_0 \\notin C$ where\n", 642 | "\n", 643 | "$$ C = \\left(\\hat{\\theta} - \\hat{\\text{se}} z_{\\alpha/2}, \\; \\hat{\\theta} + \\hat{\\text{se}} z_{\\alpha / 2} \\right) $$" 644 | ] 645 | }, 646 | { 647 | "cell_type": "markdown", 648 | "metadata": {}, 649 | "source": [ 650 | "**Solution**.\n", 651 | "\n", 652 | "$\\theta_0 \\in C$ if and only if\n", 653 | "\n", 654 | "$$\n", 655 | "\\hat{\\theta} - \\hat{\\text{se}} z_{\\alpha/2} < \\theta_0 < \\hat{\\theta} + \\hat{\\text{se}} z_{\\alpha/2} \\\\\n", 656 | "\\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} - z_{\\alpha/2} < 0 < \\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} + z_{\\alpha/2} \\\\\n", 657 | "- z_{\\alpha/2} < \\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} < z_{\\alpha/2} \\\\\n", 658 | "\\left| \\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} \\right| < z_{\\alpha/2}\n", 659 | "$$\n", 660 | "\n", 661 | "that is, $\\theta_0 \\in C$ exactly when the size $\\alpha$ Wald test does not reject $H_0$; equivalently, the test rejects if and only if $\\theta_0 \\notin C$." 662 | ] 663 | }, 664 | { 665 | "cell_type": "markdown", 666 | "metadata": {}, 667 | "source": [ 668 | "**Exercise 11.9.3**. 
Let $X_1, \\dots, X_n \\sim \\text{Uniform}(0, \\theta)$ and let $Y = \\max \\{ X_1, \\dots, X_n \\}$. We want to test\n", 669 | "\n", 670 | "$$ H_0: \\theta = 1/2 \\quad \\text{versus} \\quad H_1: \\theta > 1/2 $$\n", 671 | "\n", 672 | "The Wald test is not appropriate since $Y$ does not converge to a Normal. Suppose we decide to test this hypothesis by rejecting $H_0$ when $Y > c$.\n", 673 | "\n", 674 | "**(a)** Find the power function.\n", 675 | "\n", 676 | "**(b)** What choice of $c$ will make the size of the test 0.05?\n", 677 | "\n", 678 | "**(c)** In a sample of size $n = 20$ with $Y = 0.48$ what is the p-value? What conclusion about $H_0$ would you make?\n", 679 | "\n", 680 | "**(d)** In a sample of size $n = 20$ with $Y = 0.52$ what is the p-value? What conclusion about $H_0$ would you make?" 681 | ] 682 | }, 683 | { 684 | "cell_type": "markdown", 685 | "metadata": {}, 686 | "source": [ 687 | "**Solution**\n", 688 | "\n", 689 | "**(a)** The power function for this test is\n", 690 | "\n", 691 | "$$\\beta(\\theta) = \\mathbb{P}_\\theta(X \\in R) = \\mathbb{P}_\\theta(Y > c) = 1 - \\mathbb{P}_\\theta(Y \\leq c) = 1 - \\prod_{i=1}^n \\mathbb{P}_\\theta(X_i \\leq c) = 1 - \\left(\\text{clip}\\left(\\frac{c}{\\theta}\\right)\\right)^n$$\n", 692 | "\n", 693 | "where we limit $c/\\theta$ to the $[0, 1]$ interval, $$\\text{clip}(t) = \\begin{cases} 1 & \\text{if } t > 1 \\\\ t & \\text{if } 0 \\leq t \\leq 1 \\\\ 0 & \\text{otherwise} \\end{cases}$$\n", 694 | "\n", 695 | "**(b)** There is only one value in $\\Theta_0$, $\\theta = 1/2$. The size of the test then is $\\beta(1/2) = 1 - \\left(\\text{clip}\\left(2c\\right)\\right)^n$. Equating it to 0.05 and solving, we get $c = 0.5 \\cdot 0.95^{1/n}$.\n", 696 | "\n", 697 | "**(c)** The test has size $\\alpha$ when $ 1 - \\left(\\text{clip}\\left(2c\\right)\\right)^n = \\alpha$, or $c = 0.5 \\cdot (1 - \\alpha)^{1/n}$. 
The p-value is the smallest $\\alpha$ for which the observed $Y$ falls in the rejection region, that is, the infimal $\\alpha$ such that $Y > c = 0.5 \\cdot (1 - \\alpha)^{1/n}$; solving for $\\alpha$ gives $\\text{p-value} = 1 - (2Y)^n$.\n", 698 | "\n", 699 | "For $n = 20$, $Y = 0.48$, this gives $1 - (2 \\cdot 0.48)^{20} \\approx 0.558$. Such a large p-value means the test provides little or no evidence against $H_0$.\n", 700 | "\n", 701 | "**(d)** For $n = 20$, $Y = 0.52$, the clipping function is stuck at its maximum -- even for $\\alpha = 0$ we get $Y > c = 0.5$, so the p-value is 0 and the test rejects at every level. This agrees with a direct observation: under $H_0$ every $X_i \\leq 1/2$, so observing $Y = 0.52 > 1/2$ is impossible under $H_0$, and we can be certain that $\\theta_* > \\theta_0$." 702 | ] 703 | }, 704 | { 705 | "cell_type": "markdown", 706 | "metadata": {}, 707 | "source": [ 708 | "**Exercise 11.9.4**. There is a theory that people can postpone their death until after an important event. To test the theory, Phillips and King (1988) collected data on deaths around the Jewish holiday Passover. Of 1919 deaths, 922 died the week before the holiday and 997 died the week after. Think of this as a binomial and test the null hypothesis that $\\theta = 1/2$. Report and interpret the p-value. Also construct a confidence interval for $\\theta$.\n", 709 | "\n", 710 | "Reference:\n", 711 | "Phillips, D.P. and King, E.W. (1988).\n", 712 | "*Death takes a holiday: Mortality surrounding major social occasions.*\n", 713 | "The Lancet, 2, 728-732." 714 | ] 715 | }, 716 | { 717 | "cell_type": "markdown", 718 | "metadata": {}, 719 | "source": [ 720 | "**Solution**. Let the number of deaths in the week before the holiday be $X \\sim \\text{Binomial}(n, \\theta)$, with $n = 1919$. 
We have a measurement for $X = 922$, and we want to test the null hypothesis:\n", 721 | "\n", 722 | "$$H_0: \\theta = 1/2 \\quad \\text{versus} \\quad H_1: \\theta \\neq 1/2$$\n", 723 | "\n", 724 | "The MLE is $\\hat{\\theta} = X / n = 922 / 1919$; the estimated standard error is $\\sqrt{\\hat{\\theta}(1 - \\hat{\\theta})/n}$. The Wald test statistic is $W = (\\hat{\\theta} - \\theta_0) / \\hat{\\text{se}}$, and the p-value is $\\mathbb{P}(|Z| > |W|) = 2(1 - \\Phi(|W|))$. " 725 | ] 726 | }, 727 | { 728 | "cell_type": "code", 729 | "execution_count": 1, 730 | "metadata": {}, 731 | "outputs": [ 732 | { 733 | "name": "stdout", 734 | "output_type": "stream", 735 | "text": [ 736 | "Estimated theta: \t 0.480\n", 737 | "Estimated SE: \t\t 0.011\n", 738 | "95% confidence interval: (0.458, 0.503)\n", 739 | "Wald statistic: \t -1.713\n", 740 | "p-value: \t\t 0.087\n" 741 | ] 742 | } 743 | ], 744 | "source": [ 745 | "import math\n", 746 | "from scipy.stats import norm\n", 747 | "\n", 748 | "n = 1919\n", 749 | "X = 922\n", 750 | "theta_hat = 922 / 1919\n", 751 | "se_hat = math.sqrt(theta_hat * (1 - theta_hat) / n)\n", 752 | "\n", 753 | "z_05 = norm.ppf(0.975)\n", 754 | "confidence_interval = (theta_hat - z_05 * se_hat, theta_hat + z_05 * se_hat)\n", 755 | "\n", 756 | "w = (theta_hat - 0.5) / se_hat\n", 757 | "p_value = 2 * (1 - norm.cdf(abs(w)))\n", 758 | "\n", 759 | "print('Estimated theta: \\t %.3f' % theta_hat)\n", 760 | "print('Estimated SE: \\t\\t %.3f' % se_hat)\n", 761 | "print('95%% confidence interval: (%.3f, %.3f)' % confidence_interval)\n", 762 | "print('Wald statistic: \\t %.3f' % w)\n", 763 | "print('p-value: \\t\\t %.3f' % p_value)" 764 | ] 765 | }, 766 | { 767 | "cell_type": "markdown", 768 | "metadata": {}, 769 | "source": [ 770 | "A 95% confidence interval can be built based on the estimated value for $\\theta$ and its estimated standard error, from 45.8% to 50.3%.\n", 771 | "\n", 772 | "The Wald test produces a p-value of 8.7%, which represents only weak 
evidence against the null hypothesis -- inconclusive results." 773 | ] 774 | }, 775 | { 776 | "cell_type": "markdown", 777 | "metadata": {}, 778 | "source": [ 779 | "**Exercise 11.9.5**. In 1861, 10 essays appeared in the New Orleans Daily Crescent. They were signed \"Quintus Curtius Snodgrass\" and some people suspected they were actually written by Mark Twain. To investigate this, we will compare the proportion of three-letter words found in each author's work.\n", 780 | "\n", 781 | "From eight Twain essays we have:\n", 782 | "\n", 783 | "$$ \n", 784 | "0.225 \\quad 0.262 \\quad 0.217 \\quad 0.240 \\quad 0.230 \\quad 0.229 \\quad 0.235 \\quad 0.217\n", 785 | "$$\n", 786 | "\n", 787 | "From 10 Snodgrass essays we have:\n", 788 | "\n", 789 | "$$\n", 790 | "0.209 \\quad 0.205 \\quad 0.196 \\quad 0.210 \\quad 0.202 \\quad 0.207 \\quad 0.224 \\quad 0.223 \\quad 0.220 \\quad 0.201\n", 791 | "$$\n", 792 | "\n", 793 | "(source: Rice xxxx)\n", 794 | "\n", 795 | "**(a)** Perform a Wald test for equality of the means. Use the nonparametric plug-in estimator. Report the p-value and a 95% confidence interval for the difference of means. What do you conclude?\n", 796 | "\n", 797 | "**(b)** Now use a permutation test to avoid the use of large sample methods. What is your conclusion?" 
798 | ] 799 | }, 800 | { 801 | "cell_type": "code", 802 | "execution_count": 2, 803 | "metadata": {}, 804 | "outputs": [], 805 | "source": [ 806 | "X = [0.225, 0.262, 0.217, 0.240, 0.230, 0.229, 0.235, 0.217]\n", 807 | "Y = [0.209, 0.205, 0.196, 0.210, 0.202, 0.207, 0.224, 0.223, 0.220, 0.201]" 808 | ] 809 | }, 810 | { 811 | "cell_type": "markdown", 812 | "metadata": {}, 813 | "source": [ 814 | "**Solution**.\n", 815 | "\n", 816 | "**(a)**" 817 | ] 818 | }, 819 | { 820 | "cell_type": "code", 821 | "execution_count": 3, 822 | "metadata": {}, 823 | "outputs": [ 824 | { 825 | "name": "stdout", 826 | "output_type": "stream", 827 | "text": [ 828 | "Estimated difference of means:\t 0.022\n", 829 | "Estimated SE: \t\t\t 0.006\n", 830 | "95% confidence interval:\t (0.010, 0.034)\n", 831 | "Wald statistic: \t\t 3.704\n", 832 | "Wald test p-value: \t\t 0.0002\n" 833 | ] 834 | } 835 | ], 836 | "source": [ 837 | "import numpy as np\n", 838 | "from scipy.stats import norm\n", 839 | "from tqdm import tqdm_notebook\n", 840 | "\n", 841 | "X = np.array(X)\n", 842 | "Y = np.array(Y)\n", 843 | "\n", 844 | "x_hat = X.mean()\n", 845 | "y_hat = Y.mean()\n", 846 | "\n", 847 | "diff_hat = x_hat - y_hat\n", 848 | "se_hat = np.sqrt(X.var(ddof=1)/len(X) + Y.var(ddof=1)/len(Y))\n", 849 | "\n", 850 | "z_05 = norm.ppf(0.975)\n", 851 | "confidence_interval = (diff_hat - z_05 * se_hat, diff_hat + z_05 * se_hat)\n", 852 | "\n", 853 | "w = diff_hat / se_hat\n", 854 | "p_value = 2 * (1 - norm.cdf(abs(w)))\n", 855 | "\n", 856 | "print('Estimated difference of means:\\t %.3f' % diff_hat)\n", 857 | "print('Estimated SE: \\t\\t\\t %.3f' % se_hat)\n", 858 | "print('95%% confidence interval:\\t (%.3f, %.3f)' % confidence_interval)\n", 859 | "print('Wald statistic: \\t\\t %.3f' % w)\n", 860 | "print('Wald test p-value: \\t\\t %.4f' % p_value)" 861 | ] 862 | }, 863 | { 864 | "cell_type": "markdown", 865 | "metadata": {}, 866 | "source": [ 867 | "Such a small p-value (0.02%) offers very strong evidence to 
reject the null hypothesis -- that is, it strongly suggests the two samples come from distributions with different means. The writing styles are different according to this metric, even if the same writer is responsible for both." 868 | ] 869 | }, 870 | { 871 | "cell_type": "markdown", 872 | "metadata": {}, 873 | "source": [ 874 | "**(b)**" 875 | ] 876 | }, 877 | { 878 | "cell_type": "code", 879 | "execution_count": 4, 880 | "metadata": {}, 881 | "outputs": [ 882 | { 883 | "data": { 884 | "application/vnd.jupyter.widget-view+json": { 885 | "model_id": "f1152ef31b3f42ecacd6cf539829f41e", 886 | "version_major": 2, 887 | "version_minor": 0 888 | }, 889 | "text/plain": [ 890 | "HBox(children=(IntProgress(value=0, max=1000000), HTML(value='')))" 891 | ] 892 | }, 893 | "metadata": {}, 894 | "output_type": "display_data" 895 | }, 896 | { 897 | "name": "stdout", 898 | "output_type": "stream", 899 | "text": [ 900 | "\n", 901 | "Permutation test p-value: \t\t 0.0005\n" 902 | ] 903 | } 904 | ], 905 | "source": [ 906 | "# Permutation test using random shuffling; counting diff_boot > diff_hat gives a one-sided p-value\n", 907 | "\n", 908 | "B = 1000000\n", 909 | "full_series = np.concatenate([X, Y])\n", 910 | "nx = len(X)\n", 911 | "diff_boot_count = 0\n", 912 | "for i in tqdm_notebook(range(B)):\n", 913 | "    np.random.shuffle(full_series)\n", 914 | "    xx, yy = full_series[:nx], full_series[nx:]\n", 915 | "    diff_boot = xx.mean() - yy.mean()\n", 916 | "    if diff_boot > diff_hat:\n", 917 | "        diff_boot_count += 1\n", 918 | "    \n", 919 | "p_value_perm = diff_boot_count / B\n", 920 | "\n", 921 | "print('Permutation test p-value: \\t\\t %.4f' % p_value_perm)" 922 | ] 923 | }, 924 | { 925 | "cell_type": "markdown", 926 | "metadata": {}, 927 | "source": [ 928 | "This small one-sided p-value (0.05%) also offers very strong evidence that the two samples come from distributions with different means." 929 | ] 930 | }, 931 | { 932 | "cell_type": "markdown", 933 | "metadata": {}, 934 | "source": [ 935 | "**Exercise 11.9.6**. Let $X_1, \\dots, X_n \\sim N(\\theta, 1)$. 
Consider testing\n", 936 | "\n", 937 | "$$ H_0: \\theta = 0 \\quad \\text{versus} \\quad H_1: \\theta = 1$$\n", 938 | "\n", 939 | "Let the rejection region be $R = \\{ x^n : T(x^n) > c \\}$ where $T(x^n) = n^{-1} \\sum_{i=1}^n X_i$.\n", 940 | "\n", 941 | "**(a)** Find $c$ so that the test has size $\\alpha$.\n", 942 | "\n", 943 | "**(b)** Find the power under $H_1$, i.e., find $\\beta(1)$.\n", 944 | "\n", 945 | "**(c)** Show that $\\beta(1) \\rightarrow 1$ as $n \\rightarrow \\infty$." 946 | ] 947 | }, 948 | { 949 | "cell_type": "markdown", 950 | "metadata": {}, 951 | "source": [ 952 | "**Solution**.\n", 953 | "\n", 954 | "**(a)** The size of the rejection region is\n", 955 | "\n", 956 | "$$\\mathbb{P}_{\\theta = 0}(T > c) = \\mathbb{P}_{\\theta = 0}(\\sqrt{n}T / 1 > \\sqrt{n}c) = \\mathbb{P}(Z > \\sqrt{n} c) = 1 - \\Phi(\\sqrt{n}c)$$\n", 957 | "\n", 958 | "If this value is $\\alpha$, then \n", 959 | "\n", 960 | "$$\n", 961 | "\\begin{align}\n", 962 | "\\alpha &= 1 - \\Phi(\\sqrt{n}c) \\\\\n", 963 | "\\Phi(\\sqrt{n}c) &= 1 - \\alpha \\\\\n", 964 | "\\sqrt{n}c &= z_{\\alpha} \\\\\n", 965 | "c &= z_{\\alpha} / \\sqrt{n}\n", 966 | "\\end{align}\n", 967 | "$$" 968 | ] 969 | }, 970 | { 971 | "cell_type": "markdown", 972 | "metadata": {}, 973 | "source": [ 974 | "**(b)** We have $\\beta(1) = \\mathbb{P}_{\\theta = 1}(T > c) = \\mathbb{P}_{\\theta = 1}(\\sqrt{n}(T - 1)/1 > \\sqrt{n}(c - 1)) = \\mathbb{P}(Z > \\sqrt{n}(c - 1)) = 1 - \\Phi(\\sqrt{n}(c - 1)) = 1 - \\Phi(z_\\alpha - \\sqrt{n})$" 975 | ] 976 | }, 977 | { 978 | "cell_type": "markdown", 979 | "metadata": {}, 980 | "source": [ 981 | "**(c)** As $n \\rightarrow \\infty$, $\\beta(1) \\rightarrow 1 - \\Phi(-\\infty) = 1$." 982 | ] 983 | }, 984 | { 985 | "cell_type": "markdown", 986 | "metadata": {}, 987 | "source": [ 988 | "**Exercise 11.9.7**. Let $\\hat{\\theta}$ be the MLE of a parameter $\\theta$ and let $\\hat{\\text{se}} = \\{ n I(\\hat{\\theta})\\}^{-1/2}$ where $I(\\hat{\\theta})$ is the Fisher information. 
Consider testing\n", 989 | "\n", 990 | "$$H_0: \\theta = \\theta_0 \\quad \\text{versus} \\quad H_1: \\theta \\neq \\theta_0$$\n", 991 | "\n", 992 | "Consider the Wald test with rejection region $R = \\{ x^n: |Z| > z_{\\alpha/2} \\}$ where $Z = (\\hat{\\theta} - \\theta_0) / \\hat{\\text{se}}$. Let $\\theta_1 > \\theta_0$ be some alternative. Show that $\\beta(\\theta_1) \\rightarrow 1$." 993 | ] 994 | }, 995 | { 996 | "cell_type": "markdown", 997 | "metadata": {}, 998 | "source": [ 999 | "**Solution**.\n", 1000 | "\n", 1001 | "We have \n", 1002 | "\n", 1003 | "$$\n", 1004 | "\\begin{align}\n", 1005 | "\\beta(\\theta_1) &= \\mathbb{P}_{\\theta_1}(|Z| > z_{\\alpha/2}) \\\\\n", 1006 | "&= \\mathbb{P}_{\\theta_1}(Z > z_{\\alpha/2}) + \\mathbb{P}_{\\theta_1}(Z < - z_{\\alpha/2})\\\\\n", 1007 | "&= \\mathbb{P}_{\\theta_1}\\left(\\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} > z_{\\alpha/2}\\right)\n", 1008 | "+ \\mathbb{P}_{\\theta_1}\\left(\\frac{\\hat{\\theta} - \\theta_0}{\\hat{\\text{se}}} < -z_{\\alpha/2}\\right) \\\\\n", 1009 | "&= \\mathbb{P}_{\\theta_1}\\left(\\hat{\\theta} > \\hat{\\text{se}} z_{\\alpha/2} + \\theta_0 \\right)\n", 1010 | "+ \\mathbb{P}_{\\theta_1}\\left(\\hat{\\theta} < -\\hat{\\text{se}} z_{\\alpha/2} + \\theta_0 \\right) \\\\\n", 1011 | "&= \\mathbb{P}_{\\theta_1}\\left(\\frac{\\hat{\\theta} - \\theta_1}{\\hat{\\text{se}}} > z_{\\alpha/2} + \\frac{\\theta_0 - \\theta_1}{\\hat{\\text{se}}} \\right)\n", 1012 | "+ \\mathbb{P}_{\\theta_1}\\left(\\frac{\\hat{\\theta} - \\theta_1}{\\hat{\\text{se}}} < -z_{\\alpha/2} + \\frac{\\theta_0 - \\theta_1}{\\hat{\\text{se}}} \\right) \\\\\n", 1013 | "&= \\mathbb{P}_{\\theta_1}\\left(W > z_{\\alpha/2} + \\frac{\\theta_0 - \\theta_1}{\\hat{\\text{se}}} \\right)\n", 1014 | "+ \\mathbb{P}_{\\theta_1}\\left(W < -z_{\\alpha/2} + \\frac{\\theta_0 - \\theta_1}{\\hat{\\text{se}}} \\right) \\\\\n", 1015 | "& \\geq \\mathbb{P}_{\\theta_1}\\left(W > z_{\\alpha/2} + \\frac{\\theta_0 - 
\\theta_1}{\\hat{\\text{se}}} \\right)\n", 1016 | "\\end{align}\n", 1017 | "$$\n", 1018 | "\n", 1019 | "When $n \\rightarrow \\infty$, $\\hat{\\text{se}} \\rightarrow 0$. Since $\\theta_1 > \\theta_0$, the right hand side of the inequality goes to $-\\infty$, and so the lower bound probability goes to 1. Therefore $\\beta(\\theta_1) \\rightarrow 1$." 1020 | ] 1021 | }, 1022 | { 1023 | "cell_type": "markdown", 1024 | "metadata": {}, 1025 | "source": [ 1026 | "**Exercise 11.9.8**. Here are the numbers of elderly Jewish and Chinese women who died just before and after the Chinese Harvest Moon Festival.\n", 1027 | "\n", 1028 | "| Week | Chinese | Jewish |\n", 1029 | "|------|---------|--------|\n", 1030 | "| -2 | 55 | 141 |\n", 1031 | "| -1 | 33 | 145 |\n", 1032 | "| 1 | 70 | 139 |\n", 1033 | "| 2 | 49 | 161 |\n", 1034 | "\n", 1035 | "Compare the two mortality patterns." 1036 | ] 1037 | }, 1038 | { 1039 | "cell_type": "markdown", 1040 | "metadata": {}, 1041 | "source": [ 1042 | "**Solution**." 1043 | ] 1044 | }, 1045 | { 1046 | "cell_type": "code", 1047 | "execution_count": 5, 1048 | "metadata": {}, 1049 | "outputs": [], 1050 | "source": [ 1051 | "import pandas as pd\n", 1052 | "\n", 1053 | "df = pd.DataFrame({\n", 1054 | "    'Chinese': [55, 33, 70, 49],\n", 1055 | "    'Jewish': [141, 145, 139, 161]\n", 1056 | "}, index=[-2, -1, 1, 2])" 1057 | ] 1058 | }, 1059 | { 1060 | "cell_type": "markdown", 1061 | "metadata": {}, 1062 | "source": [ 1063 | "Let's use a few different approaches.\n", 1064 | "\n", 1065 | "First: $\\chi^2$ tests comparing, separately, Chinese and Jewish mortalities before and after the Chinese Harvest Moon Festival." 
1066 | ] 1067 | }, 1068 | { 1069 | "cell_type": "code", 1070 | "execution_count": 6, 1071 | "metadata": {}, 1072 | "outputs": [ 1073 | { 1074 | "name": "stdout", 1075 | "output_type": "stream", 1076 | "text": [ 1077 | "Test statistic:\t\t\t\t4.643 \n", 1078 | "95% percentile chi squared with 1 df:\t3.841\n", 1079 | "p-value for different distributions:\t0.031\n" 1080 | ] 1081 | } 1082 | ], 1083 | "source": [ 1084 | "from scipy.stats import chi2\n", 1085 | "\n", 1086 | "chinese_before = df['Chinese'].loc[-2:-1].sum()\n", 1087 | "chinese_after = df['Chinese'].loc[1:2].sum()\n", 1088 | "\n", 1089 | "mu_chinese_hat = (chinese_before + chinese_after) / 2\n", 1090 | "t_pearson_chinese = (chinese_before - mu_chinese_hat)**2/mu_chinese_hat \\\n", 1091 | " + (chinese_after - mu_chinese_hat)**2/mu_chinese_hat\n", 1092 | "\n", 1093 | "chi2_95 = chi2.ppf(0.95, 1)\n", 1094 | "p_value_chinese = 1 - chi2.cdf(t_pearson_chinese, 1)\n", 1095 | "\n", 1096 | "print('Test statistic:\\t\\t\\t\\t%.3f ' % t_pearson_chinese)\n", 1097 | "print('95%% percentile chi squared with 1 df:\\t%.3f' % chi2_95)\n", 1098 | "print('p-value for different distributions:\\t%.3f' % p_value_chinese)" 1099 | ] 1100 | }, 1101 | { 1102 | "cell_type": "code", 1103 | "execution_count": 7, 1104 | "metadata": {}, 1105 | "outputs": [ 1106 | { 1107 | "name": "stdout", 1108 | "output_type": "stream", 1109 | "text": [ 1110 | "Test statistic:\t\t\t\t0.334 \n", 1111 | "95% percentile chi squared with 1 df:\t3.841\n", 1112 | "p-value for different distributions:\t0.563\n" 1113 | ] 1114 | } 1115 | ], 1116 | "source": [ 1117 | "jewish_before = df['Jewish'].loc[-2:-1].sum()\n", 1118 | "jewish_after = df['Jewish'].loc[1:2].sum()\n", 1119 | "\n", 1120 | "mu_jewish_hat = (jewish_before + jewish_after) / 2\n", 1121 | "t_pearson_jewish = (jewish_before - mu_jewish_hat)**2/mu_jewish_hat \\\n", 1122 | " + (jewish_after - mu_jewish_hat)**2/mu_jewish_hat\n", 1123 | "\n", 1124 | "chi2_95 = chi2.ppf(0.95, 1)\n", 1125 | "p_value_jewish 
= 1 - chi2.cdf(t_pearson_jewish, 1)\n", 1126 | "\n", 1127 | "print('Test statistic:\\t\\t\\t\\t%.3f ' % t_pearson_jewish)\n", 1128 | "print('95%% percentile chi squared with 1 df:\\t%.3f' % chi2_95)\n", 1129 | "print('p-value for different distributions:\\t%.3f' % p_value_jewish)" 1130 | ] 1131 | }, 1132 | { 1133 | "cell_type": "markdown", 1134 | "metadata": {}, 1135 | "source": [ 1136 | "The $\\chi^2$ test statistics for the Chinese population *do* suggest, with a p-value of 3.1%, that the mortality rates are distinct before and after the event, while the statistics for the Jewish population do not suggest so (with a p-value of 56.3%, essentially no evidence)." 1137 | ] 1138 | }, 1139 | { 1140 | "cell_type": "markdown", 1141 | "metadata": {}, 1142 | "source": [ 1143 | "Using the likelihood ratio test to compare mortality in each group before and after the event:\n", 1144 | "\n", 1145 | "| Group | Chinese | Jewish |\n", 1146 | "|---------|---------|--------|\n", 1147 | "| Before | 88 | 286 |\n", 1148 | "| After | 119 | 300 |\n", 1149 | "\n", 1150 | "MLE under alternate hypothesis: $$\\hat{\\theta} = \\left( \\frac{88}{793}, \\frac{119}{793}, \\frac{286}{793}, \\frac{300}{793} \\right)$$\n", 1151 | "\n", 1152 | "MLE under null hypothesis: $$\\hat{\\theta}_0 = \\left( \\frac{103.5}{793}, \\frac{103.5}{793}, \\frac{293}{793}, \\frac{293}{793} \\right)$$\n", 1153 | "\n", 1154 | "Log-likelihood ratio statistic: \n", 1155 | "\n", 1156 | "$$ \\lambda = 2 \\log \\left( \\frac{\\mathcal{L}(\\hat{\\theta})}{\\mathcal{L}(\\hat{\\theta}_0)} \\right)\n", 1157 | "= 2 \\left( 88 \\log \\frac{88}{103.5} + 119 \\log \\frac{119}{103.5} + 286 \\log \\frac{286}{293} + 300 \\log \\frac{300}{293} \\right) \\approx 4.995\n", 1158 | "$$\n", 1159 | "\n", 1160 | "Under the alternate hypothesis there are $r = 3$ free parameters (the four proportions sum to 1); under the null there is $q = 1$, so the statistic should be compared against a $\\chi^2$ distribution with $r - q = 2$ degrees of freedom. The 95% percentile of the $\\chi^2$ distribution with 2 degrees of freedom is $\\approx 5.991$, so the likelihood ratio test does not reject the null hypothesis at this confidence level. The p-value for this test is $\\approx 0.082$, suggesting at best weak evidence that, within either group, the mortality rates before and after the event are distinct." 1161 | ] 1162 | }, 1163 | { 1164 | "cell_type": "markdown", 1165 | "metadata": {}, 1166 | "source": [ 1167 | "Finally, let's run four separate binomial tests, one for each week:" 1168 | ] 1169 | }, 1170 | { 1171 | "cell_type": "code", 1172 | "execution_count": 8, 1173 | "metadata": {}, 1174 | "outputs": [ 1175 | { 1176 | "data": { 1177 | "text/html": [ 1178 | "
\n", 1179 | "\n", 1192 | "\n", 1193 | " \n", 1194 | " \n", 1195 | " \n", 1196 | " \n", 1197 | " \n", 1198 | " \n", 1199 | " \n", 1200 | " \n", 1201 | " \n", 1202 | " \n", 1203 | " \n", 1204 | " \n", 1205 | " \n", 1206 | " \n", 1207 | " \n", 1208 | " \n", 1209 | " \n", 1210 | " \n", 1211 | " \n", 1212 | " \n", 1213 | " \n", 1214 | " \n", 1215 | " \n", 1216 | " \n", 1217 | "
p-value
-20.516380
-10.021061
10.017937
20.388072
\n", 1218 | "
" 1219 | ], 1220 | "text/plain": [ 1221 | " p-value\n", 1222 | "-2 0.516380\n", 1223 | "-1 0.021061\n", 1224 | " 1 0.017937\n", 1225 | " 2 0.388072" 1226 | ] 1227 | }, 1228 | "execution_count": 8, 1229 | "metadata": {}, 1230 | "output_type": "execute_result" 1231 | } 1232 | ], 1233 | "source": [ 1234 | "import pandas as pd\n", 1235 | "from scipy.stats import binom_test\n", 1236 | "\n", 1237 | "theta_week1 = 55 / (55 + 141)\n", 1238 | "theta_week2 = 33 / (33 + 145)\n", 1239 | "theta_week3 = 70 / (70 + 139)\n", 1240 | "theta_week4 = 49 / (49 + 161)\n", 1241 | "\n", 1242 | "theta_null = (55 + 33 + 70 + 49) / (55 + 33 + 70 + 49 + 141 + 145 + 139 + 161)\n", 1243 | "\n", 1244 | "p1 = binom_test(55, 55 + 141, theta_null, alternative=\"two-sided\")\n", 1245 | "p2 = binom_test(33, 33 + 145, theta_null, alternative=\"two-sided\")\n", 1246 | "p3 = binom_test(70, 70 + 139, theta_null, alternative=\"two-sided\")\n", 1247 | "p4 = binom_test(49, 49 + 161, theta_null, alternative=\"two-sided\")\n", 1248 | "\n", 1249 | "results = pd.DataFrame({'p-value': [p1, p2, p3, p4]}, index=[-2, -1, 1, 2])\n", 1250 | "results" 1251 | ] 1252 | }, 1253 | { 1254 | "cell_type": "markdown", 1255 | "metadata": {}, 1256 | "source": [ 1257 | "The tests suggest that there *is* strong evidence for distinct mortality rates between the Chinese and Jewish populations on the weeks immediately preceding and following the event (p-values of 2.1% and 1.8%), but not in the 2 weeks before or after the event (p-values of 51.6% and 38.8%)." 1258 | ] 1259 | }, 1260 | { 1261 | "cell_type": "markdown", 1262 | "metadata": {}, 1263 | "source": [ 1264 | "**Exercise 11.9.9**. A randomized, double-blind experiment was conducted to assess the effectiveness of several drugs for reducing post-operative nausea. 
The data are as follows.\n", 1265 | "\n", 1266 | "| | Number of patients | Incidents of Nausea |\n", 1267 | "|------------------------|--------------------|---------------------|\n", 1268 | "| Placebo | 80 | 45 |\n", 1269 | "| Chlorpromazine | 75 | 26 |\n", 1270 | "| Dimenhydrate | 85 | 52 |\n", 1271 | "| Pentobarbital (100 mg) | 67 | 35 |\n", 1272 | "| Pentobarbital (150 mg) | 85 | 37 |\n", 1273 | "\n", 1274 | "**(a)** Test each drug versus the placebo at the 5% level. Also, report the estimated odds-ratios. Summarize your findings.\n", 1275 | "\n", 1276 | "**(b)** Use the Bonferroni and the FDR method to adjust for multiple testing." 1277 | ] 1278 | }, 1279 | { 1280 | "cell_type": "markdown", 1281 | "metadata": {}, 1282 | "source": [ 1283 | "**Solution**.\n", 1284 | "\n", 1285 | "**(a)**" 1286 | ] 1287 | }, 1288 | { 1289 | "cell_type": "code", 1290 | "execution_count": 9, 1291 | "metadata": {}, 1292 | "outputs": [], 1293 | "source": [ 1294 | "import numpy as np\n", 1295 | "import pandas as pd\n", 1296 | "from scipy.stats import norm\n", 1297 | "\n", 1298 | "df = pd.DataFrame({\n", 1299 | " 'Treatment': ['Placebo', 'Chlorpromazine', 'Dimenhydrate', 'Pentobarbital (100 mg)', 'Pentobarbital (150 mg)'],\n", 1300 | " 'Number of patients': [80, 75, 85, 67, 85],\n", 1301 | " 'Incidents of Nausea': [45, 26, 52, 35, 37]\n", 1302 | "})" 1303 | ] 1304 | }, 1305 | { 1306 | "cell_type": "code", 1307 | "execution_count": 10, 1308 | "metadata": {}, 1309 | "outputs": [], 1310 | "source": [ 1311 | "df['p_hat'] = df['Incidents of Nausea'] / df['Number of patients']\n", 1312 | "df['variance'] = df['p_hat'] * (1 - df['p_hat']) / df['Number of patients']" 1313 | ] 1314 | }, 1315 | { 1316 | "cell_type": "code", 1317 | "execution_count": 11, 1318 | "metadata": {}, 1319 | "outputs": [], 1320 | "source": [ 1321 | "p_hat_placebo = df[df['Treatment'] == 'Placebo']['p_hat'][0]\n", 1322 | "variance_placebo = df[df['Treatment'] == 'Placebo']['variance'][0]\n", 1323 | "\n", 1324 | "# odds = 
affected / normal = p / (1 - p)\n", 1325 | "df['Odds'] = df['p_hat'] / (1 - df['p_hat'])\n", 1326 | "\n", 1327 | "# odds ratio = odds_placebo / odds_treatment\n", 1328 | "df['Odds ratio'] = df[df['Treatment'] == 'Placebo']['Odds'][0] / df['Odds']\n", 1329 | "\n", 1330 | "df['Wald statistic'] = (df['p_hat'] - p_hat_placebo) / np.sqrt(df['variance'] + variance_placebo)\n", 1331 | "df['p-value'] = 2 * (1 - norm.cdf(abs(df['Wald statistic'])))" 1332 | ] 1333 | }, 1334 | { 1335 | "cell_type": "code", 1336 | "execution_count": 12, 1337 | "metadata": {}, 1338 | "outputs": [ 1339 | { 1340 | "data": { 1341 | "text/html": [ 1342 | "
\n", 1343 | "\n", 1356 | "\n", 1357 | " \n", 1358 | " \n", 1359 | " \n", 1360 | " \n", 1361 | " \n", 1362 | " \n", 1363 | " \n", 1364 | " \n", 1365 | " \n", 1366 | " \n", 1367 | " \n", 1368 | " \n", 1369 | " \n", 1370 | " \n", 1371 | " \n", 1372 | " \n", 1373 | " \n", 1374 | " \n", 1375 | " \n", 1376 | " \n", 1377 | " \n", 1378 | " \n", 1379 | " \n", 1380 | " \n", 1381 | " \n", 1382 | " \n", 1383 | " \n", 1384 | " \n", 1385 | " \n", 1386 | " \n", 1387 | " \n", 1388 | " \n", 1389 | " \n", 1390 | " \n", 1391 | " \n", 1392 | " \n", 1393 | " \n", 1394 | " \n", 1395 | " \n", 1396 | "
TreatmentOdds ratioWald statisticp-value
1Chlorpromazine2.423077-2.7643640.005703
2Dimenhydrate0.8159340.6429870.520232
3Pentobarbital (100 mg)1.175510-0.4864280.626664
4Pentobarbital (150 mg)1.667954-1.6466050.099639
\n", 1397 | "
" 1398 | ], 1399 | "text/plain": [ 1400 | " Treatment Odds ratio Wald statistic p-value\n", 1401 | "1 Chlorpromazine 2.423077 -2.764364 0.005703\n", 1402 | "2 Dimenhydrate 0.815934 0.642987 0.520232\n", 1403 | "3 Pentobarbital (100 mg) 1.175510 -0.486428 0.626664\n", 1404 | "4 Pentobarbital (150 mg) 1.667954 -1.646605 0.099639" 1405 | ] 1406 | }, 1407 | "execution_count": 12, 1408 | "metadata": {}, 1409 | "output_type": "execute_result" 1410 | } 1411 | ], 1412 | "source": [ 1413 | "df[['Treatment', 'Odds ratio', 'Wald statistic', 'p-value']][df['Treatment'] != 'Placebo']" 1414 | ] 1415 | }, 1416 | { 1417 | "cell_type": "markdown", 1418 | "metadata": {}, 1419 | "source": [ 1420 | "At the 5% significance level, only Chlorpromazine changes the Nausea incidence, with an odds ratio of 2.42." 1421 | ] 1422 | }, 1423 | { 1424 | "cell_type": "markdown", 1425 | "metadata": {}, 1426 | "source": [ 1427 | "**(b)** There are 4 tests, so the significance ratio for the Bonferroni method will be $0.05 / 4 = 0.0125$. Again, only Chlorpromazine rejects the null hypothesis.\n", 1428 | "\n", 1429 | "For the Benjamini-Hochfield method, the ordered p-values are $0.0057 < 0.0996 < 0.5202 < 0.6266$; $\\ell_i = i \\alpha / m = 0.0125 \\cdot i$, and $R = \\max \\{ i : P_{(i)} < \\ell_i \\} = 1$, so the rejection threshold again only rejects the null hypothesis for the first treatment." 1430 | ] 1431 | }, 1432 | { 1433 | "cell_type": "markdown", 1434 | "metadata": {}, 1435 | "source": [ 1436 | "**Exercise 11.9.10**. Let $X_1, \\dots, X_n \\sim \\text{Poisson}(\\lambda)$.\n", 1437 | "\n", 1438 | "**(a)** Let $\\lambda_0 > 0$. Find the size $\\alpha$ Wald test for \n", 1439 | "\n", 1440 | "$$ H_0: \\lambda = \\lambda_0 \\quad \\text{versus} \\quad H_1: \\lambda \\neq \\lambda_0$$\n", 1441 | "\n", 1442 | "**(b)** (Computer Experiment) Let $\\lambda_0 = 1$, $n = 20$ and $\\alpha = 0.05$. Simulate $X_1, \\dots, X_n \\sim \\text{Poisson}(\\lambda_0)$ and perform the Wald test. 
Repeat many times and count how often you reject the null. How close is the type I error rate to 0.05?" 1443 | ] 1444 | }, 1445 | { 1446 | "cell_type": "markdown", 1447 | "metadata": {}, 1448 | "source": [ 1449 | "**Solution**.\n", 1450 | "\n", 1451 | "**(a)**\n", 1452 | "\n", 1453 | "$$\n", 1454 | "\\begin{align}\n", 1455 | "\\beta(\\lambda_0) &= \\mathbb{P}_{\\lambda_0}\\left( |W| > z_{\\alpha/2} \\right) \\\\\n", 1456 | "&= \\mathbb{P}_{\\lambda_0}\\left( \\Bigg| \\frac{\\hat{\\lambda} - \\lambda_0}{\\hat{\\text{se}}} \\Bigg| > z_{\\alpha/2} \\right)\n", 1457 | "\\end{align}\n", 1458 | "$$\n", 1459 | "\n", 1460 | "The MLE for the parameter is the mean of the data, $\\hat{\\lambda} = \\overline{X}$, while the estimate of the SE is $\\sqrt{1 / I_n(\\hat{\\lambda})} = \\sqrt{\\overline{X} / n}$, since the Fisher information of the Poisson is $I_n(\\lambda) = n / \\lambda$. Replacing these, we get\n", 1461 | "\n", 1462 | "$$\n", 1463 | "\\mathbb{P}_{\\lambda_0}\\left( \\big| (\\overline{X} - \\lambda_0) \\sqrt{n / \\overline{X}} \\big| > z_{\\alpha/2} \\right)\n", 1464 | "$$" 1465 | ] 1466 | }, 1467 | { 1468 | "cell_type": "markdown", 1469 | "metadata": {}, 1470 | "source": [ 1471 | "**(b)**" 1472 | ] 1473 | }, 1474 | { 1475 | "cell_type": "code", 1476 | "execution_count": 13, 1477 | "metadata": {}, 1478 | "outputs": [], 1479 | "source": [ 1480 | "import numpy as np\n", 1481 | "from scipy.stats import norm, poisson\n", 1482 | "from tqdm import tqdm_notebook\n", 1483 | "\n", 1484 | "lambda_0 = 1\n", 1485 | "n = 20\n", 1486 | "alpha = 0.05\n", 1487 | "\n", 1488 | "z = norm.ppf(1 - alpha / 2)" 1489 | ] 1490 | }, 1491 | { 1492 | "cell_type": "code", 1493 | "execution_count": 14, 1494 | "metadata": {}, 1495 | "outputs": [ 1496 | { 1497 | "name": "stdout", 1498 | "output_type": "stream", 1499 | "text": [ 1500 | "Rejection: False\n" 1501 | ] 1502 | } 1503 | ], 1504 | "source": [ 1505 | "# Perform process once\n", 1506 | "X = poisson.rvs(lambda_0, size=n)\n", 1507 | "lambda_hat = X.mean()\n", 1508 | "rejection = ((lambda_hat - lambda_0)**2) * (n / lambda_hat) > z**2\n", 1509 | "print('Rejection: ', rejection)" 1510 | ] 1511 | }, 1512 | { 1513 | "cell_type": "code", 1514 | "execution_count": 15, 1515 | "metadata": {}, 1516 | "outputs": [ 1517 | { 1518 | "data": { 1519 | "application/vnd.jupyter.widget-view+json": { 1520 | "model_id": "0dc88ab9f88c49dda002cbb42e258c66", 1521 | "version_major": 2, 1522 | "version_minor": 0 1523 | }, 1524 | "text/plain": [ 1525 | "HBox(children=(IntProgress(value=0, max=1000000), HTML(value='')))" 1526 | ] 1527 | }, 1528 | "metadata": {}, 1529 | "output_type": "display_data" 1530 | }, 1531 | { 1532 | "name": "stdout", 1533 | "output_type": "stream", 1534 | "text": [ 1535 | "\n", 1536 | "Fraction of rejections: 0.052\n" 1537 | ] 1538 | } 1539 | ], 1540 | "source": [ 1541 | "# Perform process many times\n", 1542 | "B = 1000000\n", 1543 | "num_rejections = 0\n", 1544 | "for i in tqdm_notebook(range(B)):\n", 1545 | "    X = poisson.rvs(lambda_0, size=n)\n", 1546 | "    lambda_hat = X.mean()\n", 1547 | "    if ((lambda_hat - lambda_0)**2) * (n / lambda_hat) > z**2:\n", 1548 | "        num_rejections += 1\n", 1549 | "    \n", 1550 | "fraction_rejections = num_rejections / B\n", 1551 | "\n", 1552 | "print('Fraction of rejections: %.3f' % fraction_rejections)" 1553 | ] 1554 | }, 1555 | { 1556 | "cell_type": "markdown", 1557 | "metadata": {}, 1558 | "source": [ 1559 | "The measured fraction of rejections of the null hypothesis is $0.052$, which is very close to $\\alpha = 0.05$." 
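, "\n", "\n", "As a supplementary sketch (an addition, not part of the original solution), taking $\\hat{\\text{se}} \\approx \\sqrt{\\lambda / n}$, the approximate power of this Wald test at a general $\\lambda$ is\n", "\n", "$$ \\beta(\\lambda) \\approx 1 - \\Phi\\left( \\frac{\\lambda_0 - \\lambda}{\\sqrt{\\lambda / n}} + z_{\\alpha/2} \\right) + \\Phi\\left( \\frac{\\lambda_0 - \\lambda}{\\sqrt{\\lambda / n}} - z_{\\alpha/2} \\right), $$\n", "\n", "which reduces to $1 - \\Phi(z_{\\alpha/2}) + \\Phi(-z_{\\alpha/2}) = \\alpha$ at $\\lambda = \\lambda_0$, consistent with the simulated rejection fraction above."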
1560 | ] 1561 | } 1562 | ], 1563 | "metadata": { 1564 | "kernelspec": { 1565 | "display_name": "Python 3", 1566 | "language": "python", 1567 | "name": "python3" 1568 | }, 1569 | "language_info": { 1570 | "codemirror_mode": { 1571 | "name": "ipython", 1572 | "version": 3 1573 | }, 1574 | "file_extension": ".py", 1575 | "mimetype": "text/x-python", 1576 | "name": "python", 1577 | "nbconvert_exporter": "python", 1578 | "pygments_lexer": "ipython3", 1579 | "version": "3.7.3" 1580 | } 1581 | }, 1582 | "nbformat": 4, 1583 | "nbformat_minor": 2 1584 | } 1585 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # all-of-statistics 2 | 3 | This repository contains my personal notes and complete solutions from my self-study of Larry Wasserman's "All of Statistics: A Concise Course in Statistical Inference". This follows an older edition of the book, though there is almost complete overlap with the latest edition. 4 | 5 | Main definitions, ideas, and theorems from each chapter are included in the notes, but examples are usually skipped. This repository is not intended as a replacement for the book. 6 | 7 | Each included chapter has a Jupyter notebook containing notes on the chapter and exercise solutions, written using LaTeX in markdown cells and executable Python. 
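As a quick-start sketch (an illustration, not part of the notebooks), the datasets in `data/` can be loaded with pandas. The sample rows below are copied from `data/carmileage.csv` so the snippet is self-contained; in the repository you would pass the file path instead:

```python
import io

import pandas as pd

# First rows of data/carmileage.csv, inlined so the snippet runs anywhere;
# inside the repository, pd.read_csv("data/carmileage.csv") reads the full file.
sample = io.StringIO(
    "MAKEMODEL,VOL,HP,MPG,SP,WT\n"
    "GM/GeoMetroXF1,89,49,65.4,96,17.5\n"
    "GM/GeoMetro,92,55,56.0,97,20.0\n"
    "GM/GeoMetroLSI,92,55,55.9,97,20.0\n"
)
df = pd.read_csv(sample)
print(df[["MAKEMODEL", "MPG"]])
```

The whitespace-separated files (e.g. `data/calcium.txt`) load the same way with `pd.read_csv(..., delim_whitespace=True)`.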
-------------------------------------------------------------------------------- /data/UStemperatures.txt: -------------------------------------------------------------------------------- 1 | city lat long JanTF JulyTF RelHum Rain 2 | Akron, OH 41.05 81.3 27 71 59 36 3 | Albany-Schenectady-Troy, NY 42.4 73.5 23 72 57 35 4 | Allentown, Bethlehem, PA-NJ 40.35 75.3 29 74 54 44 5 | Atlanta, GA 33.45 84.23 45 79 56 47 6 | Baltimore, MD 39.2 76.38 35 77 55 43 7 | Birmingham, AL 33.31 86.49 45 80 54 53 8 | Boston, MA 42.15 71.07 30 74 56 43 9 | Bridgeport-Milford, CT 41.12 73.12 30 73 56 45 10 | Buffalo, NY 42.54 78.51 24 70 61 36 11 | Canton, OH 40.5 81.25 27 72 59 36 12 | Chattanooga, TN-GA 35.01 85.15 42 79 56 52 13 | Chicago, IL 41.49 87.37 26 76 58 33 14 | Cincinnati, OH-KY-IN 39.08 84.3 34 77 57 40 15 | Cleveland, OH 41.3 81.42 28 71 60 35 16 | Columbus, OH 40 83 31 75 58 37 17 | Dallas, TX 32.45 96.45 46 85 54 35 18 | Dayton-Springfield, OH 39.54 84.15 30 75 58 36 19 | Denver, CO 39.44 104.59 30 73 38 15 20 | Detroit, MI 42.06 83.08 27 74 59 31 21 | Flint, MI 43 83.45 24 72 61 30 22 | Fort Worth, TX 45 85 53 31 23 | Grand Rapids, MI 43 85.45 24 72 61 31 24 | Greensboro-Winston-Salem-High Point, NC 36.04 79.45 40 77 53 42 25 | Hartford, CT 41.45 72.4 27 72 56 43 26 | Houston, TX 29.46 95.21 55 84 59 46 27 | Indianapolis, IN 39.45 86.08 29 75 60 39 28 | Kansas City, MO 39.05 94.35 31 81 55 35 29 | Lancaster, PA 40.05 76.2 32 74 54 43 30 | Los Angeles, Long Beach, CA 34 118.15 53 68 47 11 31 | Louisville, KY-IN 38.15 85.45 35 71 57 30 32 | Memphis, TN-AR-MS 35.07 90.03 42 82 59 50 33 | Miami-Hialeah, FL 25.45 80.11 67 82 60 60 34 | Milwaukee, WI 43.03 87.55 20 69 64 30 35 | Minneapolis-St. 
Paul, MN-WI 44.58 93.15 12 73 58 25 36 | Nashville, TN 36.1 86.48 40 80 56 45 37 | New Haven-Meriden, CT 41.2 72.55 30 72 58 46 38 | New Orleans, LA 30 90.05 54 81 62 54 39 | New York, NY 40.4 73.58 33 77 58 42 40 | Philadelphia, PA-NJ 40 75.13 32 76 54 42 41 | Pittsburgh, PA 40.26 80.01 29 72 56 36 42 | Portland, OR 45.31 122.41 38 67 73 37 43 | Providence, RI 41.5 71.23 29 72 56 42 44 | Reading, PA 40.2 75.55 33 77 54 41 45 | Richmond-Petersburg, VA 37.35 77.3 39 78 53 44 46 | Rochester, NY 43.15 77.35 25 72 60 32 47 | St. Louis, MO-IL 38.39 90.15 32 79 57 34 48 | San Diego, CA 32.43 117.1 55 70 61 10 49 | San Francisco, CA 37.45 122.26 48 63 71 18 50 | San Jose, CA 37.2 121.54 49 68 71 13 51 | Seattle, WA 47.36 122.2 40 64 72 35 52 | Springfield, MA 42.05 72.35 28 74 56 45 53 | Syracuse, NY 43.05 76.1 24 72 61 38 54 | Toledo, OH 41.4 83.35 26 73 59 31 55 | Utica-Rome, NY 43.05 75.1 23 71 60 40 56 | Washington, DC-MD-VA 38.5 77 37 78 52 42 57 | Wichita, KS 37.42 97.21 32 81 54 28 58 | Wilmington, DE-NJ-MD 39.45 75.33 33 76 56 65 59 | Worcester, MA 42.16 71.49 24 70 56 65 60 | York, PA 40 76.4 33 76 54 62 61 | Youngstown-Warren, OH 41.05 80.4 28 72 58 38 -------------------------------------------------------------------------------- /data/calcium.txt: -------------------------------------------------------------------------------- 1 | Treatment Begin End Decrease 2 | Calcium 107 100 7 3 | Calcium 110 114 -4 4 | Calcium 123 105 18 5 | Calcium 129 112 17 6 | Calcium 112 115 -3 7 | Calcium 111 116 -5 8 | Calcium 107 106 1 9 | Calcium 112 102 10 10 | Calcium 136 125 11 11 | Calcium 102 104 -2 12 | Placebo 123 124 -1 13 | Placebo 109 97 12 14 | Placebo 112 113 -1 15 | Placebo 102 105 -3 16 | Placebo 98 95 3 17 | Placebo 114 119 -5 18 | Placebo 119 114 5 19 | Placebo 112 114 2 20 | Placebo 110 121 -11 21 | Placebo 117 118 -1 22 | Placebo 130 133 -3 -------------------------------------------------------------------------------- /data/carmileage.csv: 
-------------------------------------------------------------------------------- 1 | MAKEMODEL,VOL,HP,MPG,SP,WT 2 | GM/GeoMetroXF1,89,49,65.4,96,17.5 3 | GM/GeoMetro,92,55,56.0,97,20.0 4 | GM/GeoMetroLSI,92,55,55.9,97,20.0 5 | SuzukiSwift,92,70,49.0,105,20.0 6 | DaihatsuCharade,92,53,46.5,96,20.0 7 | GM/GeoSprintTurbo,89,70,46.2,105,20.0 8 | GM/GeoSprint,92,55,45.4,97,20.0 9 | HondaCivicCRXHF,50,62,59.2,98,22.5 10 | HondaCivicCRXHF,50,62,53.3,98,22.5 11 | DaihatsuCharade,94,80,43.4,107,22.5 12 | SubaruJusty,89,73,41.1,103,22.5 13 | HondaCivicCRX,50,92,40.9,113,22.5 14 | HondaCivic,99,92,40.9,113,22.5 15 | SubaruJusty,89,73,40.4,103,22.5 16 | SubaruJusty,89,66,39.6,100,22.5 17 | SubaruJusty4wd,89,73,39.3,103,22.5 18 | ToyotaTercel,91,78,38.9,106,22.5 19 | HondaCivicCRX,50,92,38.8,113,22.5 20 | ToyotaTercel,91,78,38.2,106,22.5 21 | FordEscort,103,90,42.2,109,25.0 22 | HondaCivic,99,92,40.9,110,25.0 23 | PontiacLeMans,107,74,40.7,101,25.0 24 | IsuzuStylus,101,95,40.0,111,25.0 25 | DodgeColt,96,81,39.3,105,25.0 26 | GM/GeoStorm,89,95,38.8,111,25.0 27 | HondaCivicCRX,50,92,38.4,110,25.0 28 | HondaCivicWagon,117,92,38.4,110,25.0 29 | HondaCivic,99,92,38.4,110,25.0 30 | SubaruLoyale,102,90,29.5,109,25.0 31 | VolksJettaDiesel,104,52,46.9,90,27.5 32 | Mazda323Protege,107,103,36.3,112,27.5 33 | FordEscortWagon,114,84,36.1,103,27.5 34 | FordEscort,101,84,36.1,103,27.5 35 | GM/GeoPrism,97,102,35.4,111,27.5 36 | ToyotaCorolla,113,102,35.3,111,27.5 37 | EagleSummit,101,81,35.1,102,27.5 38 | NissanCentraCoupe,98,90,35.1,106,27.5 39 | NissanCentraWagon,88,90,35.0,106,27.5 40 | ToyotaCelica,86,102,33.2,109,30.0 41 | ToyotaCelica,86,102,32.9,109,30.0 42 | ToyotaCorolla,92,130,32.3,120,30.0 43 | ChevroletCorsica,113,95,32.2,106,30.0 44 | ChevroletBeretta,106,95,32.2,106,30.0 45 | ToyotaCorolla,92,102,32.2,109,30.0 46 | PontiacSunbirdConv,88,95,32.2,106,30.0 47 | DodgeShadow,102,93,31.5,105,30.0 48 | DodgeDaytona,99,100,31.5,108,30.0 49 | EagleSpirit,111,100,31.4,108,30.0 50 | 
FordTempo,103,98,31.4,107,30.0 51 | ToyotaCelica,86,130,31.2,120,30.0 52 | ToyotaCamry,101,115,33.7,109,35.0 53 | ToyotaCamry,101,115,32.6,109,35.0 54 | ToyotaCamry,101,115,31.3,109,35.0 55 | ToyotaCamryWagon,124,115,31.3,109,35.0 56 | OldsCutlassSup,113,180,30.4,133,35.0 57 | OldsCutlassSup,113,160,28.9,125,35.0 58 | Saab9000,124,130,28.0,115,35.0 59 | FordMustang,92,96,28.0,102,35.0 60 | ToyotaCamry,101,115,28.0,109,35.0 61 | ChryslerLebaronConv,94,100,28.0,104,35.0 62 | DodgeDynasty,115,100,28.0,105,35.0 63 | Volvo740,111,145,27.7,120,35.0 64 | FordThunderbird,116,120,25.6,107,40.0 65 | ChevroletCaprice,131,140,25.3,114,40.0 66 | LincolnContinental,123,140,23.9,114,40.0 67 | ChryslerNewYorker,121,150,23.6,117,40.0 68 | BuickReatta,50,165,23.6,122,40.0 69 | OldsTrof/Toronado,114,165,23.6,122,40.0 70 | Oldsmobile98,127,165,23.6,122,40.0 71 | PontiacBonneville,123,165,23.6,122,40.0 72 | LexusLS400,112,245,23.5,148,40.0 73 | Nissan300ZX,50,280,23.4,160,40.0 74 | Volvo760Wagon,135,162,23.4,121,40.0 75 | Audi200QuatroWag,132,162,23.1,121,40.0 76 | BuickElectraWagon,160,140,22.9,110,45.0 77 | CadillacBrougham,129,140,22.9,110,45.0 78 | CadillacBrougham,129,175,19.5,121,45.0 79 | Mercedes500SL,50,322,18.1,165,45.0 80 | Mercedes560SEL,115,238,17.2,140,45.0 81 | JaguarXJSConvert,50,263,17.0,147,45.0 82 | BMW750IL,119,295,16.7,157,45.0 83 | Rolls-RoyceVarious,107,236,13.2,130,55.0 -------------------------------------------------------------------------------- /data/cloud_seeding.csv: -------------------------------------------------------------------------------- 1 | Unseeded_Clouds,Seeded_Clouds 2 | 1202.6,2745.6 3 | 830.1,1697.8 4 | 372.4,1656.0 5 | 345.5,978.0 6 | 321.2,703.4 7 | 244.3,489.1 8 | 163.0,430.0 9 | 147.8,334.1 10 | 95.0,302.8 11 | 87.0,274.7 12 | 81.2,274.7 13 | 68.5,255.0 14 | 47.3,242.5 15 | 41.1,200.7 16 | 36.6,198.6 17 | 29.0,129.6 18 | 28.6,119.0 19 | 26.3,118.3 20 | 26.1,115.3 21 | 24.4,92.4 22 | 21.7,40.6 23 | 17.3,32.7 24 | 11.5,31.4 25 | 4.9,17.5 
26 | 4.9,7.7 27 | 1.0,4.1 28 | -------------------------------------------------------------------------------- /data/coris.csv: -------------------------------------------------------------------------------- 1 | row.names,sbp,tobacco,ldl,adiposity,famhist,typea,obesity,alcohol,age,chd 2 | 1,160,12.00, 5.73,23.11,1,49,25.30, 97.20,52,1 3 | 2,144, 0.01, 4.41,28.61,0,55,28.87, 2.06,63,1 4 | 3,118, 0.08, 3.48,32.28,1,52,29.14, 3.81,46,0 5 | 4,170, 7.50, 6.41,38.03,1,51,31.99, 24.26,58,1 6 | 5,134,13.60, 3.50,27.78,1,60,25.99, 57.34,49,1 7 | 6,132, 6.20, 6.47,36.21,1,62,30.77, 14.14,45,0 8 | 7,142, 4.05, 3.38,16.20,0,59,20.81, 2.62,38,0 9 | 8,114, 4.08, 4.59,14.60,1,62,23.11, 6.72,58,1 10 | 9,114, 0.00, 3.83,19.40,1,49,24.86, 2.49,29,0 11 | 10,132, 0.00, 5.80,30.96,1,69,30.11, 0.00,53,1 12 | 11,206, 6.00, 2.95,32.27,0,72,26.81, 56.06,60,1 13 | 12,134,14.10, 4.44,22.39,1,65,23.09, 0.00,40,1 14 | 13,118, 0.00, 1.88,10.05,0,59,21.57, 0.00,17,0 15 | 14,132, 0.00, 1.87,17.21,0,49,23.63, 0.97,15,0 16 | 15,112, 9.65, 2.29,17.20,1,54,23.53, 0.68,53,0 17 | 16,117, 1.53, 2.44,28.95,1,35,25.89, 30.03,46,0 18 | 17,120, 7.50,15.33,22.00,0,60,25.31, 34.49,49,0 19 | 18,146,10.50, 8.29,35.36,1,78,32.73, 13.89,53,1 20 | 19,158, 2.60, 7.46,34.07,1,61,29.30, 53.28,62,1 21 | 20,124,14.00, 6.23,35.96,1,45,30.09, 0.00,59,1 22 | 21,106, 1.61, 1.74,12.32,0,74,20.92, 13.37,20,1 23 | 22,132, 7.90, 2.85,26.50,1,51,26.16, 25.71,44,0 24 | 23,150, 0.30, 6.38,33.99,1,62,24.64, 0.00,50,0 25 | 24,138, 0.60, 3.81,28.66,0,54,28.70, 1.46,58,0 26 | 25,142,18.20, 4.34,24.38,0,61,26.19, 0.00,50,0 27 | 26,124, 4.00,12.42,31.29,1,54,23.23, 2.06,42,1 28 | 27,118, 6.00, 9.65,33.91,0,60,38.80, 0.00,48,0 29 | 28,145, 9.10, 5.24,27.55,0,59,20.96, 21.60,61,1 30 | 29,144, 4.09, 5.55,31.40,1,60,29.43, 5.55,56,0 31 | 30,146, 0.00, 6.62,25.69,0,60,28.07, 8.23,63,1 32 | 31,136, 2.52, 3.95,25.63,0,51,21.86, 0.00,45,1 33 | 32,158, 1.02, 6.33,23.88,0,66,22.13, 24.99,46,1 34 | 33,122, 6.60, 5.58,35.95,1,53,28.07, 
12.55,59,1 35 | 34,126, 8.75, 6.53,34.02,0,49,30.25, 0.00,41,1 36 | 35,148, 5.50, 7.10,25.31,0,56,29.84, 3.60,48,0 37 | 36,122, 4.26, 4.44,13.04,0,57,19.49, 48.99,28,1 38 | 37,140, 3.90, 7.32,25.05,0,47,27.36, 36.77,32,0 39 | 38,110, 4.64, 4.55,30.46,0,48,30.90, 15.22,46,0 40 | 39,130, 0.00, 2.82,19.63,1,70,24.86, 0.00,29,0 41 | 40,136,11.20, 5.81,31.85,1,75,27.68, 22.94,58,1 42 | 41,118, 0.28, 5.80,33.70,1,60,30.98, 0.00,41,1 43 | 42,144, 0.04, 3.38,23.61,0,30,23.75, 4.66,30,0 44 | 43,120, 0.00, 1.07,16.02,0,47,22.15, 0.00,15,0 45 | 44,130, 2.61, 2.72,22.99,1,51,26.29, 13.37,51,1 46 | 45,114, 0.00, 2.99, 9.74,0,54,46.58, 0.00,17,0 47 | 46,128, 4.65, 3.31,22.74,0,62,22.95, 0.51,48,0 48 | 47,162, 7.40, 8.55,24.65,1,64,25.71, 5.86,58,1 49 | 48,116, 1.91, 7.56,26.45,1,52,30.01, 3.60,33,1 50 | 49,114, 0.00, 1.94,11.02,0,54,20.17, 38.98,16,0 51 | 50,126, 3.80, 3.88,31.79,0,57,30.53, 0.00,30,0 52 | 51,122, 0.00, 5.75,30.90,1,46,29.01, 4.11,42,0 53 | 52,134, 2.50, 3.66,30.90,0,52,27.19, 23.66,49,0 54 | 53,152, 0.90, 9.12,30.23,0,56,28.64, 0.37,42,1 55 | 54,134, 8.08, 1.55,17.50,1,56,22.65, 66.65,31,1 56 | 55,156, 3.00, 1.82,27.55,0,60,23.91, 54.00,53,0 57 | 56,152, 5.99, 7.99,32.48,0,45,26.57,100.32,48,0 58 | 57,118, 0.00, 2.99,16.17,0,49,23.83, 3.22,28,0 59 | 58,126, 5.10, 2.96,26.50,0,55,25.52, 12.34,38,1 60 | 59,103, 0.03, 4.21,18.96,0,48,22.94, 2.62,18,0 61 | 60,121, 0.80, 5.29,18.95,1,47,22.51, 0.00,61,0 62 | 61,142, 0.28, 1.80,21.03,0,57,23.65, 2.93,33,0 63 | 62,138, 1.15, 5.09,27.87,1,61,25.65, 2.34,44,0 64 | 63,152,10.10, 4.71,24.65,1,65,26.21, 24.53,57,0 65 | 64,140, 0.45, 4.30,24.33,0,41,27.23, 10.08,38,0 66 | 65,130, 0.00, 1.82,10.45,0,57,22.07, 2.06,17,0 67 | 66,136, 7.36, 2.19,28.11,1,61,25.00, 61.71,54,0 68 | 67,124, 4.82, 3.24,21.10,1,48,28.49, 8.42,30,0 69 | 68,112, 0.41, 1.88,10.29,0,39,22.08, 20.98,27,0 70 | 69,118, 4.46, 7.27,29.13,1,48,29.01, 11.11,33,0 71 | 70,122, 0.00, 3.37,16.10,0,67,21.06, 0.00,32,1 72 | 71,118, 0.00, 3.67,12.13,0,51,19.15, 
0.60,15,0 73 | 72,130, 1.72, 2.66,10.38,0,68,17.81, 11.10,26,0 74 | 73,130, 5.60, 3.37,24.80,0,58,25.76, 43.20,36,0 75 | 74,126, 0.09, 5.03,13.27,1,50,17.75, 4.63,20,0 76 | 75,128, 0.40, 6.17,26.35,0,64,27.86, 11.11,34,0 77 | 76,136, 0.00, 4.12,17.42,0,52,21.66, 12.86,40,0 78 | 77,134, 0.00, 5.90,30.84,0,49,29.16, 0.00,55,0 79 | 78,140, 0.60, 5.56,33.39,1,58,27.19, 0.00,55,1 80 | 79,168, 4.50, 6.68,28.47,0,43,24.25, 24.38,56,1 81 | 80,108, 0.40, 5.91,22.92,1,57,25.72, 72.00,39,0 82 | 81,114, 3.00, 7.04,22.64,1,55,22.59, 0.00,45,1 83 | 82,140, 8.14, 4.93,42.49,0,53,45.72, 6.43,53,1 84 | 83,148, 4.80, 6.09,36.55,1,63,25.44, 0.88,55,1 85 | 84,148,12.20, 3.79,34.15,0,57,26.38, 14.40,57,1 86 | 85,128, 0.00, 2.43,13.15,1,63,20.75, 0.00,17,0 87 | 86,130, 0.56, 3.30,30.86,0,49,27.52, 33.33,45,0 88 | 87,126,10.50, 4.49,17.33,0,67,19.37, 0.00,49,1 89 | 88,140, 0.00, 5.08,27.33,1,41,27.83, 1.25,38,0 90 | 89,126, 0.90, 5.64,17.78,1,55,21.94, 0.00,41,0 91 | 90,122, 0.72, 4.04,32.38,0,34,28.34, 0.00,55,0 92 | 91,116, 1.03, 2.83,10.85,0,45,21.59, 1.75,21,0 93 | 92,120, 3.70, 4.02,39.66,0,61,30.57, 0.00,64,1 94 | 93,143, 0.46, 2.40,22.87,0,62,29.17, 15.43,29,0 95 | 94,118, 4.00, 3.95,18.96,0,54,25.15, 8.33,49,1 96 | 95,194, 1.70, 6.32,33.67,0,47,30.16, 0.19,56,0 97 | 96,134, 3.00, 4.37,23.07,0,56,20.54, 9.65,62,0 98 | 97,138, 2.16, 4.90,24.83,1,39,26.06, 28.29,29,0 99 | 98,136, 0.00, 5.00,27.58,1,49,27.59, 1.47,39,0 100 | 99,122, 3.20,11.32,35.36,1,55,27.07, 0.00,51,1 101 | 100,164,12.00, 3.91,19.59,0,51,23.44, 19.75,39,0 102 | 101,136, 8.00, 7.85,23.81,1,51,22.69, 2.78,50,0 103 | 102,166, 0.07, 4.03,29.29,0,53,28.37, 0.00,27,0 104 | 103,118, 0.00, 4.34,30.12,1,52,32.18, 3.91,46,0 105 | 104,128, 0.42, 4.60,26.68,0,41,30.97, 10.33,31,0 106 | 105,118, 1.50, 5.38,25.84,0,64,28.63, 3.89,29,0 107 | 106,158, 3.60, 2.97,30.11,0,63,26.64,108.00,64,0 108 | 107,108, 1.50, 4.33,24.99,0,66,22.29, 21.60,61,1 109 | 108,170, 7.60, 5.50,37.83,1,42,37.41, 6.17,54,1 110 | 109,118, 1.00, 
5.76,22.10,0,62,23.48, 7.71,42,0 111 | 110,124, 0.00, 3.04,17.33,0,49,22.04, 0.00,18,0 112 | 111,114, 0.00, 8.01,21.64,0,66,25.51, 2.49,16,0 113 | 112,168, 9.00, 8.53,24.48,1,69,26.18, 4.63,54,1 114 | 113,134, 2.00, 3.66,14.69,0,52,21.03, 2.06,37,0 115 | 114,174, 0.00, 8.46,35.10,1,35,25.27, 0.00,61,1 116 | 115,116,31.20, 3.17,14.99,0,47,19.40, 49.06,59,1 117 | 116,128, 0.00,10.58,31.81,1,46,28.41, 14.66,48,0 118 | 117,140, 4.50, 4.59,18.01,0,63,21.91, 22.09,32,1 119 | 118,154, 0.70, 5.91,25.00,0,13,20.60, 0.00,42,0 120 | 119,150, 3.50, 6.99,25.39,1,50,23.35, 23.48,61,1 121 | 120,130, 0.00, 3.92,25.55,0,68,28.02, 0.68,27,0 122 | 121,128, 2.00, 6.13,21.31,0,66,22.86, 11.83,60,0 123 | 122,120, 1.40, 6.25,20.47,0,60,25.85, 8.51,28,0 124 | 123,120, 0.00, 5.01,26.13,0,64,26.21, 12.24,33,0 125 | 124,138, 4.50, 2.85,30.11,0,55,24.78, 24.89,56,1 126 | 125,153, 7.80, 3.96,25.73,0,54,25.91, 27.03,45,0 127 | 126,123, 8.60,11.17,35.28,1,70,33.14, 0.00,59,1 128 | 127,148, 4.04, 3.99,20.69,0,60,27.78, 1.75,28,0 129 | 128,136, 3.96, 2.76,30.28,1,50,34.42, 18.51,38,0 130 | 129,134, 8.80, 7.41,26.84,0,35,29.44, 29.52,60,1 131 | 130,152,12.18, 4.04,37.83,1,63,34.57, 4.17,64,0 132 | 131,158,13.50, 5.04,30.79,0,54,24.79, 21.50,62,0 133 | 132,132, 2.00, 3.08,35.39,0,45,31.44, 79.82,58,1 134 | 133,134, 1.50, 3.73,21.53,0,41,24.70, 11.11,30,1 135 | 134,142, 7.44, 5.52,33.97,0,47,29.29, 24.27,54,0 136 | 135,134, 6.00, 3.30,28.45,0,65,26.09, 58.11,40,0 137 | 136,122, 4.18, 9.05,29.27,1,44,24.05, 19.34,52,1 138 | 137,116, 2.70, 3.69,13.52,0,55,21.13, 18.51,32,0 139 | 138,128, 0.50, 3.70,12.81,1,66,21.25, 22.73,28,0 140 | 139,120, 0.00, 3.68,12.24,0,51,20.52, 0.51,20,0 141 | 140,124, 0.00, 3.95,36.35,1,59,32.83, 9.59,54,0 142 | 141,160,14.00, 5.90,37.12,0,58,33.87, 3.52,54,1 143 | 142,130, 2.78, 4.89, 9.39,1,63,19.30, 17.47,25,1 144 | 143,128, 2.80, 5.53,14.29,0,64,24.97, 0.51,38,0 145 | 144,130, 4.50, 5.86,37.43,0,61,31.21, 32.30,58,0 146 | 145,109, 1.20, 6.14,29.26,0,47,24.72, 10.46,40,0 
147 | 146,144, 0.00, 3.84,18.72,0,56,22.10, 4.80,40,0 148 | 147,118, 1.05, 3.16,12.98,1,46,22.09, 16.35,31,0 149 | 148,136, 3.46, 6.38,32.25,1,43,28.73, 3.13,43,1 150 | 149,136, 1.50, 6.06,26.54,0,54,29.38, 14.50,33,1 151 | 150,124,15.50, 5.05,24.06,0,46,23.22, 0.00,61,1 152 | 151,148, 6.00, 6.49,26.47,0,48,24.70, 0.00,55,0 153 | 152,128, 6.60, 3.58,20.71,0,55,24.15, 0.00,52,0 154 | 153,122, 0.28, 4.19,19.97,0,61,25.63, 0.00,24,0 155 | 154,108, 0.00, 2.74,11.17,0,53,22.61, 0.95,20,0 156 | 155,124, 3.04, 4.80,19.52,1,60,21.78,147.19,41,1 157 | 156,138, 8.80, 3.12,22.41,1,63,23.33,120.03,55,1 158 | 157,127, 0.00, 2.81,15.70,0,42,22.03, 1.03,17,0 159 | 158,174, 9.45, 5.13,35.54,0,55,30.71, 59.79,53,0 160 | 159,122, 0.00, 3.05,23.51,0,46,25.81, 0.00,38,0 161 | 160,144, 6.75, 5.45,29.81,0,53,25.62, 26.23,43,1 162 | 161,126, 1.80, 6.22,19.71,0,65,24.81, 0.69,31,0 163 | 162,208,27.40, 3.12,26.63,0,66,27.45, 33.07,62,1 164 | 163,138, 0.00, 2.68,17.04,0,42,22.16, 0.00,16,0 165 | 164,148, 0.00, 3.84,17.26,0,70,20.00, 0.00,21,0 166 | 165,122, 0.00, 3.08,16.30,0,43,22.13, 0.00,16,0 167 | 166,132, 7.00, 3.20,23.26,0,77,23.64, 23.14,49,0 168 | 167,110,12.16, 4.99,28.56,0,44,27.14, 21.60,55,1 169 | 168,160, 1.52, 8.12,29.30,1,54,25.87, 12.86,43,1 170 | 169,126, 0.54, 4.39,21.13,1,45,25.99, 0.00,25,0 171 | 170,162, 5.30, 7.95,33.58,1,58,36.06, 8.23,48,0 172 | 171,194, 2.55, 6.89,33.88,1,69,29.33, 0.00,41,0 173 | 172,118, 0.75, 2.58,20.25,0,59,24.46, 0.00,32,0 174 | 173,124, 0.00, 4.79,34.71,0,49,26.09, 9.26,47,0 175 | 174,160, 0.00, 2.42,34.46,0,48,29.83, 1.03,61,0 176 | 175,128, 0.00, 2.51,29.35,1,53,22.05, 1.37,62,0 177 | 176,122, 4.00, 5.24,27.89,1,45,26.52, 0.00,61,1 178 | 177,132, 2.00, 2.70,21.57,1,50,27.95, 9.26,37,0 179 | 178,120, 0.00, 2.42,16.66,0,46,20.16, 0.00,17,0 180 | 179,128, 0.04, 8.22,28.17,0,65,26.24, 11.73,24,0 181 | 180,108,15.00, 4.91,34.65,0,41,27.96, 14.40,56,0 182 | 181,166, 0.00, 4.31,34.27,0,45,30.14, 13.27,56,0 183 | 182,152, 0.00, 
6.06,41.05,1,51,40.34, 0.00,51,0 184 | 183,170, 4.20, 4.67,35.45,1,50,27.14, 7.92,60,1 185 | 184,156, 4.00, 2.05,19.48,1,50,21.48, 27.77,39,1 186 | 185,116, 8.00, 6.73,28.81,1,41,26.74, 40.94,48,1 187 | 186,122, 4.40, 3.18,11.59,1,59,21.94, 0.00,33,1 188 | 187,150,20.00, 6.40,35.04,0,53,28.88, 8.33,63,0 189 | 188,129, 2.15, 5.17,27.57,0,52,25.42, 2.06,39,0 190 | 189,134, 4.80, 6.58,29.89,1,55,24.73, 23.66,63,0 191 | 190,126, 0.00, 5.98,29.06,1,56,25.39, 11.52,64,1 192 | 191,142, 0.00, 3.72,25.68,0,48,24.37, 5.25,40,1 193 | 192,128, 0.70, 4.90,37.42,1,72,35.94, 3.09,49,1 194 | 193,102, 0.40, 3.41,17.22,1,56,23.59, 2.06,39,1 195 | 194,130, 0.00, 4.89,25.98,0,72,30.42, 14.71,23,0 196 | 195,138, 0.05, 2.79,10.35,0,46,21.62, 0.00,18,0 197 | 196,138, 0.00, 1.96,11.82,1,54,22.01, 8.13,21,0 198 | 197,128, 0.00, 3.09,20.57,0,54,25.63, 0.51,17,0 199 | 198,162, 2.92, 3.63,31.33,0,62,31.59, 18.51,42,0 200 | 199,160, 3.00, 9.19,26.47,1,39,28.25, 14.40,54,1 201 | 200,148, 0.00, 4.66,24.39,0,50,25.26, 4.03,27,0 202 | 201,124, 0.16, 2.44,16.67,0,65,24.58, 74.91,23,0 203 | 202,136, 3.15, 4.37,20.22,1,59,25.12, 47.16,31,1 204 | 203,134, 2.75, 5.51,26.17,0,57,29.87, 8.33,33,0 205 | 204,128, 0.73, 3.97,23.52,0,54,23.81, 19.20,64,0 206 | 205,122, 3.20, 3.59,22.49,1,45,24.96, 36.17,58,0 207 | 206,152, 3.00, 4.64,31.29,0,41,29.34, 4.53,40,0 208 | 207,162, 0.00, 5.09,24.60,1,64,26.71, 3.81,18,0 209 | 208,124, 4.00, 6.65,30.84,1,54,28.40, 33.51,60,0 210 | 209,136, 5.80, 5.90,27.55,0,65,25.71, 14.40,59,0 211 | 210,136, 8.80, 4.26,32.03,1,52,31.44, 34.35,60,0 212 | 211,134, 0.05, 8.03,27.95,0,48,26.88, 0.00,60,0 213 | 212,122, 1.00, 5.88,34.81,1,69,31.27, 15.94,40,1 214 | 213,116, 3.00, 3.05,30.31,0,41,23.63, 0.86,44,0 215 | 214,132, 0.00, 0.98,21.39,0,62,26.75, 0.00,53,0 216 | 215,134, 0.00, 2.40,21.11,0,57,22.45, 1.37,18,0 217 | 216,160, 7.77, 8.07,34.80,0,64,31.15, 0.00,62,1 218 | 217,180, 0.52, 4.23,16.38,0,55,22.56, 14.77,45,1 219 | 218,124, 0.81, 6.16,11.61,0,35,21.47, 10.49,26,0 220 | 
219,114, 0.00, 4.97, 9.69,0,26,22.60, 0.00,25,0 221 | 220,208, 7.40, 7.41,32.03,0,50,27.62, 7.85,57,0 222 | 221,138, 0.00, 3.14,12.00,0,54,20.28, 0.00,16,0 223 | 222,164, 0.50, 6.95,39.64,1,47,41.76, 3.81,46,1 224 | 223,144, 2.40, 8.13,35.61,0,46,27.38, 13.37,60,0 225 | 224,136, 7.50, 7.39,28.04,1,50,25.01, 0.00,45,1 226 | 225,132, 7.28, 3.52,12.33,0,60,19.48, 2.06,56,0 227 | 226,143, 5.04, 4.86,23.59,0,58,24.69, 18.72,42,0 228 | 227,112, 4.46, 7.18,26.25,1,69,27.29, 0.00,32,1 229 | 228,134,10.00, 3.79,34.72,0,42,28.33, 28.80,52,1 230 | 229,138, 2.00, 5.11,31.40,1,49,27.25, 2.06,64,1 231 | 230,188, 0.00, 5.47,32.44,1,71,28.99, 7.41,50,1 232 | 231,110, 2.35, 3.36,26.72,1,54,26.08,109.80,58,1 233 | 232,136,13.20, 7.18,35.95,0,48,29.19, 0.00,62,0 234 | 233,130, 1.75, 5.46,34.34,0,53,29.42, 0.00,58,1 235 | 234,122, 0.00, 3.76,24.59,0,56,24.36, 0.00,30,0 236 | 235,138, 0.00, 3.24,27.68,0,60,25.70, 88.66,29,0 237 | 236,130,18.00, 4.13,27.43,0,54,27.44, 0.00,51,1 238 | 237,126, 5.50, 3.78,34.15,0,55,28.85, 3.18,61,0 239 | 238,176, 5.76, 4.89,26.10,1,46,27.30, 19.44,57,0 240 | 239,122, 0.00, 5.49,19.56,0,57,23.12, 14.02,27,0 241 | 240,124, 0.00, 3.23, 9.64,0,59,22.70, 0.00,16,0 242 | 241,140, 5.20, 3.58,29.26,0,70,27.29, 20.17,45,1 243 | 242,128, 6.00, 4.37,22.98,1,50,26.01, 0.00,47,0 244 | 243,190, 4.18, 5.05,24.83,0,45,26.09, 82.85,41,0 245 | 244,144, 0.76,10.53,35.66,0,63,34.35, 0.00,55,1 246 | 245,126, 4.60, 7.40,31.99,1,57,28.67, 0.37,60,1 247 | 246,128, 0.00, 2.63,23.88,0,45,21.59, 6.54,57,0 248 | 247,136, 0.40, 3.91,21.10,1,63,22.30, 0.00,56,1 249 | 248,158, 4.00, 4.18,28.61,1,42,25.11, 0.00,60,0 250 | 249,160, 0.60, 6.94,30.53,0,36,25.68, 1.42,64,0 251 | 250,124, 6.00, 5.21,33.02,1,64,29.37, 7.61,58,1 252 | 251,158, 6.17, 8.12,30.75,0,46,27.84, 92.62,48,0 253 | 252,128, 0.00, 6.34,11.87,0,57,23.14, 0.00,17,0 254 | 253,166, 3.00, 3.82,26.75,0,45,20.86, 0.00,63,1 255 | 254,146, 7.50, 7.21,25.93,1,55,22.51, 0.51,42,0 256 | 255,161, 9.00, 4.65,15.16,1,58,23.76, 
43.20,46,0 257 | 256,164,13.02, 6.26,29.38,1,47,22.75, 37.03,54,1 258 | 257,146, 5.08, 7.03,27.41,1,63,36.46, 24.48,37,1 259 | 258,142, 4.48, 3.57,19.75,1,51,23.54, 3.29,49,0 260 | 259,138,12.00, 5.13,28.34,0,59,24.49, 32.81,58,1 261 | 260,154, 1.80, 7.13,34.04,1,52,35.51, 39.36,44,0 262 | 261,118, 0.00, 2.39,12.13,0,49,18.46, 0.26,17,1 263 | 263,124, 0.61, 2.69,17.15,1,61,22.76, 11.55,20,0 264 | 264,124, 1.04, 2.84,16.42,1,46,20.17, 0.00,61,0 265 | 265,136, 5.00, 4.19,23.99,1,68,27.80, 25.86,35,0 266 | 266,132, 9.90, 4.63,27.86,1,46,23.39, 0.51,52,1 267 | 267,118, 0.12, 1.96,20.31,0,37,20.01, 2.42,18,0 268 | 268,118, 0.12, 4.16, 9.37,0,57,19.61, 0.00,17,0 269 | 269,134,12.00, 4.96,29.79,0,53,24.86, 8.23,57,0 270 | 270,114, 0.10, 3.95,15.89,1,57,20.31, 17.14,16,0 271 | 271,136, 6.80, 7.84,30.74,1,58,26.20, 23.66,45,1 272 | 272,130, 0.00, 4.16,39.43,1,46,30.01, 0.00,55,1 273 | 273,136, 2.20, 4.16,38.02,0,65,37.24, 4.11,41,1 274 | 274,136, 1.36, 3.16,14.97,1,56,24.98, 7.30,24,0 275 | 275,154, 4.20, 5.59,25.02,0,58,25.02, 1.54,43,0 276 | 276,108, 0.80, 2.47,17.53,0,47,22.18, 0.00,55,1 277 | 277,136, 8.80, 4.69,36.07,1,38,26.56, 2.78,63,1 278 | 278,174, 2.02, 6.57,31.90,1,50,28.75, 11.83,64,1 279 | 279,124, 4.25, 8.22,30.77,0,56,25.80, 0.00,43,0 280 | 280,114, 0.00, 2.63, 9.69,0,45,17.89, 0.00,16,0 281 | 281,118, 0.12, 3.26,12.26,0,55,22.65, 0.00,16,0 282 | 282,106, 1.08, 4.37,26.08,0,67,24.07, 17.74,28,1 283 | 283,146, 3.60, 3.51,22.67,0,51,22.29, 43.71,42,0 284 | 284,206, 0.00, 4.17,33.23,0,69,27.36, 6.17,50,1 285 | 285,134, 3.00, 3.17,17.91,0,35,26.37, 15.12,27,0 286 | 286,148,15.00, 4.98,36.94,1,72,31.83, 66.27,41,1 287 | 287,126, 0.21, 3.95,15.11,0,61,22.17, 2.42,17,0 288 | 288,134, 0.00, 3.69,13.92,0,43,27.66, 0.00,19,0 289 | 289,134, 0.02, 2.80,18.84,0,45,24.82, 0.00,17,0 290 | 290,123, 0.05, 4.61,13.69,0,51,23.23, 2.78,16,0 291 | 291,112, 0.60, 5.28,25.71,0,55,27.02, 27.77,38,1 292 | 292,112, 0.00, 1.71,15.96,0,42,22.03, 3.50,16,0 293 | 293,101, 0.48, 
7.26,13.00,0,50,19.82, 5.19,16,0 294 | 294,150, 0.18, 4.14,14.40,0,53,23.43, 7.71,44,0 295 | 295,170, 2.60, 7.22,28.69,1,71,27.87, 37.65,56,1 296 | 296,134, 0.00, 5.63,29.12,0,68,32.33, 2.02,34,0 297 | 297,142, 0.00, 4.19,18.04,0,56,23.65, 20.78,42,1 298 | 298,132, 0.10, 3.28,10.73,0,73,20.42, 0.00,17,0 299 | 299,136, 0.00, 2.28,18.14,0,55,22.59, 0.00,17,0 300 | 300,132,12.00, 4.51,21.93,0,61,26.07, 64.80,46,1 301 | 301,166, 4.10, 4.00,34.30,1,32,29.51, 8.23,53,0 302 | 302,138, 0.00, 3.96,24.70,1,53,23.80, 0.00,45,0 303 | 303,138, 2.27, 6.41,29.07,0,58,30.22, 2.93,32,1 304 | 304,170, 0.00, 3.12,37.15,0,47,35.42, 0.00,53,0 305 | 305,128, 0.00, 8.41,28.82,1,60,26.86, 0.00,59,1 306 | 306,136, 1.20, 2.78, 7.12,0,52,22.51, 3.41,27,0 307 | 307,128, 0.00, 3.22,26.55,1,39,26.59, 16.71,49,0 308 | 308,150,14.40, 5.04,26.52,1,60,28.84, 0.00,45,0 309 | 309,132, 8.40, 3.57,13.68,0,42,18.75, 15.43,59,1 310 | 310,142, 2.40, 2.55,23.89,0,54,26.09, 59.14,37,0 311 | 311,130, 0.05, 2.44,28.25,1,67,30.86, 40.32,34,0 312 | 312,174, 3.50, 5.26,21.97,1,36,22.04, 8.33,59,1 313 | 313,114, 9.60, 2.51,29.18,0,49,25.67, 40.63,46,0 314 | 314,162, 1.50, 2.46,19.39,1,49,24.32, 0.00,59,1 315 | 315,174, 0.00, 3.27,35.40,0,58,37.71, 24.95,44,0 316 | 316,190, 5.15, 6.03,36.59,0,42,30.31, 72.00,50,0 317 | 317,154, 1.40, 1.72,18.86,0,58,22.67, 43.20,59,0 318 | 318,124, 0.00, 2.28,24.86,1,50,22.24, 8.26,38,0 319 | 319,114, 1.20, 3.98,14.90,0,49,23.79, 25.82,26,0 320 | 320,168,11.40, 5.08,26.66,1,56,27.04, 2.61,59,1 321 | 321,142, 3.72, 4.24,32.57,0,52,24.98, 7.61,51,0 322 | 322,154, 0.00, 4.81,28.11,1,56,25.67, 75.77,59,0 323 | 323,146, 4.36, 4.31,18.44,1,47,24.72, 10.80,38,0 324 | 324,166, 6.00, 3.02,29.30,0,35,24.38, 38.06,61,0 325 | 325,140, 8.60, 3.90,32.16,1,52,28.51, 11.11,64,1 326 | 326,136, 1.70, 3.53,20.13,0,56,19.44, 14.40,55,0 327 | 327,156, 0.00, 3.47,21.10,0,73,28.40, 0.00,36,1 328 | 328,132, 0.00, 6.63,29.58,1,37,29.41, 2.57,62,0 329 | 329,128, 0.00, 2.98,12.59,0,65,20.74, 2.06,19,0 330 | 
330,106, 5.60, 3.20,12.30,0,49,20.29, 0.00,39,0 331 | 331,144, 0.40, 4.64,30.09,0,30,27.39, 0.74,55,0 332 | 332,154, 0.31, 2.33,16.48,0,33,24.00, 11.83,17,0 333 | 333,126, 3.10, 2.01,32.97,1,56,28.63, 26.74,45,0 334 | 334,134, 6.40, 8.49,37.25,1,56,28.94, 10.49,51,1 335 | 335,152,19.45, 4.22,29.81,0,28,23.95, 0.00,59,1 336 | 336,146, 1.35, 6.39,34.21,0,51,26.43, 0.00,59,1 337 | 337,162, 6.94, 4.55,33.36,1,52,27.09, 32.06,43,0 338 | 338,130, 7.28, 3.56,23.29,1,20,26.80, 51.87,58,1 339 | 339,138, 6.00, 7.24,37.05,0,38,28.69, 0.00,59,0 340 | 340,148, 0.00, 5.32,26.71,1,52,32.21, 32.78,27,0 341 | 341,124, 4.20, 2.94,27.59,0,50,30.31, 85.06,30,0 342 | 342,118, 1.62, 9.01,21.70,0,59,25.89, 21.19,40,0 343 | 343,116, 4.28, 7.02,19.99,1,68,23.31, 0.00,52,1 344 | 344,162, 6.30, 5.73,22.61,1,46,20.43, 62.54,53,1 345 | 345,138, 0.87, 1.87,15.89,0,44,26.76, 42.99,31,0 346 | 346,137, 1.20, 3.14,23.87,0,66,24.13, 45.00,37,0 347 | 347,198, 0.52,11.89,27.68,1,48,28.40, 78.99,26,1 348 | 348,154, 4.50, 4.75,23.52,1,43,25.76, 0.00,53,1 349 | 349,128, 5.40, 2.36,12.98,0,51,18.36, 6.69,61,0 350 | 350,130, 0.08, 5.59,25.42,1,50,24.98, 6.27,43,1 351 | 351,162, 5.60, 4.24,22.53,0,29,22.91, 5.66,60,0 352 | 352,120,10.50, 2.70,29.87,1,54,24.50, 16.46,49,0 353 | 353,136, 3.99, 2.58,16.38,1,53,22.41, 27.67,36,0 354 | 354,176, 1.20, 8.28,36.16,1,42,27.81, 11.60,58,1 355 | 355,134,11.79, 4.01,26.57,1,38,21.79, 38.88,61,1 356 | 356,122, 1.70, 5.28,32.23,1,51,24.08, 0.00,54,0 357 | 357,134, 0.90, 3.18,23.66,1,52,23.26, 27.36,58,1 358 | 358,134, 0.00, 2.43,22.24,0,52,26.49, 41.66,24,0 359 | 359,136, 6.60, 6.08,32.74,0,64,33.28, 2.72,49,0 360 | 360,132, 4.05, 5.15,26.51,1,31,26.67, 16.30,50,0 361 | 361,152, 1.68, 3.58,25.43,0,50,27.03, 0.00,32,0 362 | 362,132,12.30, 5.96,32.79,1,57,30.12, 21.50,62,1 363 | 363,124, 0.40, 3.67,25.76,0,43,28.08, 20.57,34,0 364 | 364,140, 4.20, 2.91,28.83,1,43,24.70, 47.52,48,0 365 | 365,166, 0.60, 2.42,34.03,1,53,26.96, 54.00,60,0 366 | 366,156, 3.02, 
5.35,25.72,1,53,25.22, 28.11,52,1 367 | 367,132, 0.72, 4.37,19.54,0,48,26.11, 49.37,28,0 368 | 368,150, 0.00, 4.99,27.73,0,57,30.92, 8.33,24,0 369 | 369,134, 0.12, 3.40,21.18,1,33,26.27, 14.21,30,0 370 | 370,126, 3.40, 4.87,15.16,1,65,22.01, 11.11,38,0 371 | 371,148, 0.50, 5.97,32.88,0,54,29.27, 6.43,42,0 372 | 372,148, 8.20, 7.75,34.46,1,46,26.53, 6.04,64,1 373 | 373,132, 6.00, 5.97,25.73,1,66,24.18,145.29,41,0 374 | 374,128, 1.60, 5.41,29.30,0,68,29.38, 23.97,32,0 375 | 375,128, 5.16, 4.90,31.35,1,57,26.42, 0.00,64,0 376 | 376,140, 0.00, 2.40,27.89,1,70,30.74,144.00,29,0 377 | 377,126, 0.00, 5.29,27.64,0,25,27.62, 2.06,45,0 378 | 378,114, 3.60, 4.16,22.58,0,60,24.49, 65.31,31,0 379 | 379,118, 1.25, 4.69,31.58,1,52,27.16, 4.11,53,0 380 | 380,126, 0.96, 4.99,29.74,0,66,33.35, 58.32,38,0 381 | 381,154, 4.50, 4.68,39.97,0,61,33.17, 1.54,64,1 382 | 382,112, 1.44, 2.71,22.92,0,59,24.81, 0.00,52,0 383 | 383,140, 8.00, 4.42,33.15,1,47,32.77, 66.86,44,0 384 | 384,140, 1.68,11.41,29.54,1,74,30.75, 2.06,38,1 385 | 385,128, 2.60, 4.94,21.36,0,61,21.30, 0.00,31,0 386 | 386,126,19.60, 6.03,34.99,0,49,26.99, 55.89,44,0 387 | 387,160, 4.20, 6.76,37.99,1,61,32.91, 3.09,54,1 388 | 388,144, 0.00, 4.17,29.63,1,52,21.83, 0.00,59,0 389 | 389,148, 4.50,10.49,33.27,0,50,25.92, 2.06,53,1 390 | 390,146, 0.00, 4.92,18.53,0,57,24.20, 34.97,26,0 391 | 391,164, 5.60, 3.17,30.98,1,44,25.99, 43.20,53,1 392 | 392,130, 0.54, 3.63,22.03,1,69,24.34, 12.86,39,1 393 | 393,154, 2.40, 5.63,42.17,1,59,35.07, 12.86,50,1 394 | 394,178, 0.95, 4.75,21.06,0,49,23.74, 24.69,61,0 395 | 395,180, 3.57, 3.57,36.10,0,36,26.70, 19.95,64,0 396 | 396,134,12.50, 2.73,39.35,0,48,35.58, 0.00,48,0 397 | 397,142, 0.00, 3.54,16.64,0,58,25.97, 8.36,27,0 398 | 398,162, 7.00, 7.67,34.34,1,33,30.77, 0.00,62,0 399 | 399,218,11.20, 2.77,30.79,0,38,24.86, 90.93,48,1 400 | 400,126, 8.75, 6.06,32.72,1,33,27.00, 62.43,55,1 401 | 401,126, 0.00, 3.57,26.01,0,61,26.30, 7.97,47,0 402 | 402,134, 6.10, 4.77,26.08,0,47,23.82, 1.03,49,0 403 
| 403,132, 0.00, 4.17,36.57,0,57,30.61, 18.00,49,0 404 | 404,178, 5.50, 3.79,23.92,1,45,21.26, 6.17,62,1 405 | 405,208, 5.04, 5.19,20.71,1,52,25.12, 24.27,58,1 406 | 406,160, 1.15,10.19,39.71,0,31,31.65, 20.52,57,0 407 | 407,116, 2.38, 5.67,29.01,1,54,27.26, 15.77,51,0 408 | 408,180,25.01, 3.70,38.11,1,57,30.54, 0.00,61,1 409 | 409,200,19.20, 4.43,40.60,1,55,32.04, 36.00,60,1 410 | 410,112, 4.20, 3.58,27.14,0,52,26.83, 2.06,40,0 411 | 411,120, 0.00, 3.10,26.97,0,41,24.80, 0.00,16,0 412 | 412,178,20.00, 9.78,33.55,0,37,27.29, 2.88,62,1 413 | 413,166, 0.80, 5.63,36.21,0,50,34.72, 28.80,60,0 414 | 414,164, 8.20,14.16,36.85,0,52,28.50, 17.02,55,1 415 | 415,216, 0.92, 2.66,19.85,1,49,20.58, 0.51,63,1 416 | 416,146, 6.40, 5.62,33.05,1,57,31.03, 0.74,46,0 417 | 417,134, 1.10, 3.54,20.41,1,58,24.54, 39.91,39,1 418 | 418,158,16.00, 5.56,29.35,0,36,25.92, 58.32,60,0 419 | 419,176, 0.00, 3.14,31.04,1,45,30.18, 4.63,45,0 420 | 420,132, 2.80, 4.79,20.47,1,50,22.15, 11.73,48,0 421 | 421,126, 0.00, 4.55,29.18,0,48,24.94, 36.00,41,0 422 | 422,120, 5.50, 3.51,23.23,0,46,22.40, 90.31,43,0 423 | 423,174, 0.00, 3.86,21.73,0,42,23.37, 0.00,63,0 424 | 424,150,13.80, 5.10,29.45,1,52,27.92, 77.76,55,1 425 | 425,176, 6.00, 3.98,17.20,1,52,21.07, 4.11,61,1 426 | 426,142, 2.20, 3.29,22.70,0,44,23.66, 5.66,42,1 427 | 427,132, 0.00, 3.30,21.61,0,42,24.92, 32.61,33,0 428 | 428,142, 1.32, 7.63,29.98,1,57,31.16, 72.93,33,0 429 | 429,146, 1.16, 2.28,34.53,0,50,28.71, 45.00,49,0 430 | 430,132, 7.20, 3.65,17.16,1,56,23.25, 0.00,34,0 431 | 431,120, 0.00, 3.57,23.22,0,58,27.20, 0.00,32,0 432 | 432,118, 0.00, 3.89,15.96,0,65,20.18, 0.00,16,0 433 | 433,108, 0.00, 1.43,26.26,0,42,19.38, 0.00,16,0 434 | 434,136, 0.00, 4.00,19.06,0,40,21.94, 2.06,16,0 435 | 435,120, 0.00, 2.46,13.39,0,47,22.01, 0.51,18,0 436 | 436,132, 0.00, 3.55, 8.66,1,61,18.50, 3.87,16,0 437 | 437,136, 0.00, 1.77,20.37,0,45,21.51, 2.06,16,0 438 | 438,138, 0.00, 1.86,18.35,1,59,25.38, 6.51,17,0 439 | 439,138, 0.06, 4.15,20.66,0,49,22.59, 
2.49,16,0 440 | 440,130, 1.22, 3.30,13.65,0,50,21.40, 3.81,31,0 441 | 441,130, 4.00, 2.40,17.42,0,60,22.05, 0.00,40,0 442 | 442,110, 0.00, 7.14,28.28,0,57,29.00, 0.00,32,0 443 | 443,120, 0.00, 3.98,13.19,1,47,21.89, 0.00,16,0 444 | 444,166, 6.00, 8.80,37.89,0,39,28.70, 43.20,52,0 445 | 445,134, 0.57, 4.75,23.07,0,67,26.33, 0.00,37,0 446 | 446,142, 3.00, 3.69,25.10,0,60,30.08, 38.88,27,0 447 | 447,136, 2.80, 2.53, 9.28,1,61,20.70, 4.55,25,0 448 | 448,142, 0.00, 4.32,25.22,0,47,28.92, 6.53,34,1 449 | 449,130, 0.00, 1.88,12.51,1,52,20.28, 0.00,17,0 450 | 450,124, 1.80, 3.74,16.64,1,42,22.26, 10.49,20,0 451 | 451,144, 4.00, 5.03,25.78,1,57,27.55, 90.00,48,1 452 | 452,136, 1.81, 3.31, 6.74,0,63,19.57, 24.94,24,0 453 | 453,120, 0.00, 2.77,13.35,0,67,23.37, 1.03,18,0 454 | 454,154, 5.53, 3.20,28.81,1,61,26.15, 42.79,42,0 455 | 455,124, 1.60, 7.22,39.68,1,36,31.50, 0.00,51,1 456 | 456,146, 0.64, 4.82,28.02,0,60,28.11, 8.23,39,1 457 | 457,128, 2.24, 2.83,26.48,0,48,23.96, 47.42,27,1 458 | 458,170, 0.40, 4.11,42.06,1,56,33.10, 2.06,57,0 459 | 459,214, 0.40, 5.98,31.72,0,64,28.45, 0.00,58,0 460 | 460,182, 4.20, 4.41,32.10,0,52,28.61, 18.72,52,1 461 | 461,108, 3.00, 1.59,15.23,0,40,20.09, 26.64,55,0 462 | 462,118, 5.40,11.61,30.79,0,64,27.35, 23.97,40,0 463 | 463,132, 0.00, 4.82,33.41,1,62,14.70, 0.00,46,1 464 | -------------------------------------------------------------------------------- /data/fijiquakes.csv: -------------------------------------------------------------------------------- 1 | obs lat long depth mag stations 2 | 1 -20.42 181.62 562 4.8 41 3 | 2 -20.62 181.03 650 4.2 15 4 | 3 -26.00 184.10 42 5.4 43 5 | 4 -17.97 181.66 626 4.1 19 6 | 5 -20.42 181.96 649 4.0 11 7 | 6 -19.68 184.31 195 4.0 12 8 | 7 -11.70 166.10 82 4.8 43 9 | 8 -28.11 181.93 194 4.4 15 10 | 9 -28.74 181.74 211 4.7 35 11 | 10 -17.47 179.59 622 4.3 19 12 | 11 -21.44 180.69 583 4.4 13 13 | 12 -12.26 167.00 249 4.6 16 14 | 13 -18.54 182.11 554 4.4 19 15 | 14 -21.00 181.66 600 4.4 10 16 | 15 -20.70 
169.92 139 6.1 94 17 | 16 -15.94 184.95 306 4.3 11 18 | 17 -13.64 165.96 50 6.0 83 19 | 18 -17.83 181.50 590 4.5 21 20 | 19 -23.50 179.78 570 4.4 13 21 | 20 -22.63 180.31 598 4.4 18 22 | 21 -20.84 181.16 576 4.5 17 23 | 22 -10.98 166.32 211 4.2 12 24 | 23 -23.30 180.16 512 4.4 18 25 | 24 -30.20 182.00 125 4.7 22 26 | 25 -19.66 180.28 431 5.4 57 27 | 26 -17.94 181.49 537 4.0 15 28 | 27 -14.72 167.51 155 4.6 18 29 | 28 -16.46 180.79 498 5.2 79 30 | 29 -20.97 181.47 582 4.5 25 31 | 30 -19.84 182.37 328 4.4 17 32 | 31 -22.58 179.24 553 4.6 21 33 | 32 -16.32 166.74 50 4.7 30 34 | 33 -15.55 185.05 292 4.8 42 35 | 34 -23.55 180.80 349 4.0 10 36 | 35 -16.30 186.00 48 4.5 10 37 | 36 -25.82 179.33 600 4.3 13 38 | 37 -18.73 169.23 206 4.5 17 39 | 38 -17.64 181.28 574 4.6 17 40 | 39 -17.66 181.40 585 4.1 17 41 | 40 -18.82 169.33 230 4.4 11 42 | 41 -37.37 176.78 263 4.7 34 43 | 42 -15.31 186.10 96 4.6 32 44 | 43 -24.97 179.82 511 4.4 23 45 | 44 -15.49 186.04 94 4.3 26 46 | 45 -19.23 169.41 246 4.6 27 47 | 46 -30.10 182.30 56 4.9 34 48 | 47 -26.40 181.70 329 4.5 24 49 | 48 -11.77 166.32 70 4.4 18 50 | 49 -24.12 180.08 493 4.3 21 51 | 50 -18.97 185.25 129 5.1 73 52 | 51 -18.75 182.35 554 4.2 13 53 | 52 -19.26 184.42 223 4.0 15 54 | 53 -22.75 173.20 46 4.6 26 55 | 54 -21.37 180.67 593 4.3 13 56 | 55 -20.10 182.16 489 4.2 16 57 | 56 -19.85 182.13 562 4.4 31 58 | 57 -22.70 181.00 445 4.5 17 59 | 58 -22.06 180.60 584 4.0 11 60 | 59 -17.80 181.35 535 4.4 23 61 | 60 -24.20 179.20 530 4.3 12 62 | 61 -20.69 181.55 582 4.7 35 63 | 62 -21.16 182.40 260 4.1 12 64 | 63 -13.82 172.38 613 5.0 61 65 | 64 -11.49 166.22 84 4.6 32 66 | 65 -20.68 181.41 593 4.9 40 67 | 66 -17.10 184.93 286 4.7 25 68 | 67 -20.14 181.60 587 4.1 13 69 | 68 -21.96 179.62 627 5.0 45 70 | 69 -20.42 181.86 530 4.5 27 71 | 70 -15.46 187.81 40 5.5 91 72 | 71 -15.31 185.80 152 4.0 11 73 | 72 -19.86 184.35 201 4.5 30 74 | 73 -11.55 166.20 96 4.3 14 75 | 74 -23.74 179.99 506 5.2 75 76 | 75 -17.70 181.23 546 4.4 35 77 | 76 
-23.54 180.04 564 4.3 15 78 | 77 -19.21 184.70 197 4.1 11 79 | 78 -12.11 167.06 265 4.5 23 80 | 79 -21.81 181.71 323 4.2 15 81 | 80 -28.98 181.11 304 5.3 60 82 | 81 -34.02 180.21 75 5.2 65 83 | 82 -23.84 180.99 367 4.5 27 84 | 83 -19.57 182.38 579 4.6 38 85 | 84 -20.12 183.40 284 4.3 15 86 | 85 -17.70 181.70 450 4.0 11 87 | 86 -19.66 184.31 170 4.3 15 88 | 87 -21.50 170.50 117 4.7 32 89 | 88 -23.64 179.96 538 4.5 26 90 | 89 -15.43 186.30 123 4.2 16 91 | 90 -15.41 186.44 69 4.3 42 92 | 91 -15.48 167.53 128 5.1 61 93 | 92 -13.36 167.06 236 4.7 22 94 | 93 -20.64 182.02 497 5.2 64 95 | 94 -19.72 169.71 271 4.2 14 96 | 95 -15.44 185.26 224 4.2 21 97 | 96 -19.73 182.40 375 4.0 18 98 | 97 -27.24 181.11 365 4.5 21 99 | 98 -18.16 183.41 306 5.2 54 100 | 99 -13.66 166.54 50 5.1 45 101 | 100 -24.57 179.92 484 4.7 33 102 | 101 -16.98 185.61 108 4.1 12 103 | 102 -26.20 178.41 583 4.6 25 104 | 103 -21.88 180.39 608 4.7 30 105 | 104 -33.00 181.60 72 4.7 22 106 | 105 -21.33 180.69 636 4.6 29 107 | 106 -19.44 183.50 293 4.2 15 108 | 107 -34.89 180.60 42 4.4 25 109 | 108 -20.24 169.49 100 4.6 22 110 | 109 -22.55 185.90 42 5.7 76 111 | 110 -36.95 177.81 146 5.0 35 112 | 111 -15.75 185.23 280 4.5 28 113 | 112 -16.85 182.31 388 4.2 14 114 | 113 -19.06 182.45 477 4.0 16 115 | 114 -26.11 178.30 617 4.8 39 116 | 115 -26.20 178.35 606 4.4 21 117 | 116 -26.13 178.31 609 4.2 25 118 | 117 -13.66 172.23 46 5.3 67 119 | 118 -13.47 172.29 64 4.7 14 120 | 119 -14.60 167.40 178 4.8 52 121 | 120 -18.96 169.48 248 4.2 13 122 | 121 -14.65 166.97 82 4.8 28 123 | 122 -19.90 178.90 81 4.3 11 124 | 123 -22.05 180.40 606 4.7 27 125 | 124 -19.22 182.43 571 4.5 23 126 | 125 -31.24 180.60 328 4.4 18 127 | 126 -17.93 167.89 49 5.1 43 128 | 127 -19.30 183.84 517 4.2 21 129 | 128 -26.53 178.57 600 5.0 69 130 | 129 -27.72 181.70 94 4.8 59 131 | 130 -19.19 183.51 307 4.3 19 132 | 131 -17.43 185.43 189 4.5 22 133 | 132 -17.05 181.22 527 4.2 24 134 | 133 -19.52 168.98 63 4.5 21 135 | 134 -23.71 180.30 510 4.6 30 
136 | 135 -21.30 180.82 624 4.3 14 137 | 136 -16.24 168.02 53 4.7 12 138 | 137 -16.14 187.32 42 5.1 68 139 | 138 -23.95 182.80 199 4.6 14 140 | 139 -25.20 182.60 149 4.9 31 141 | 140 -18.84 184.16 210 4.2 17 142 | 141 -12.66 169.46 658 4.6 43 143 | 142 -20.65 181.40 582 4.0 14 144 | 143 -13.23 167.10 220 5.0 46 145 | 144 -29.91 181.43 205 4.4 34 146 | 145 -14.31 173.50 614 4.2 23 147 | 146 -20.10 184.40 186 4.2 10 148 | 147 -17.80 185.17 97 4.4 22 149 | 148 -21.27 173.49 48 4.9 42 150 | 149 -23.58 180.17 462 5.3 63 151 | 150 -17.90 181.50 573 4.0 19 152 | 151 -23.34 184.50 56 5.7 106 153 | 152 -15.56 167.62 127 6.4 122 154 | 153 -23.83 182.56 229 4.3 24 155 | 154 -11.80 165.80 112 4.2 20 156 | 155 -15.54 167.68 140 4.7 16 157 | 156 -20.65 181.32 597 4.7 39 158 | 157 -11.75 166.07 69 4.2 14 159 | 158 -24.81 180.00 452 4.3 19 160 | 159 -20.90 169.84 93 4.9 31 161 | 160 -11.34 166.24 103 4.6 30 162 | 161 -17.98 180.50 626 4.1 19 163 | 162 -24.34 179.52 504 4.8 34 164 | 163 -13.86 167.16 202 4.6 30 165 | 164 -35.56 180.20 42 4.6 32 166 | 165 -35.48 179.90 59 4.8 35 167 | 166 -34.20 179.43 40 5.0 37 168 | 167 -26.00 182.12 205 5.6 98 169 | 168 -19.89 183.84 244 5.3 73 170 | 169 -23.43 180.00 553 4.7 41 171 | 170 -18.89 169.42 239 4.5 27 172 | 171 -17.82 181.83 640 4.3 24 173 | 172 -25.68 180.34 434 4.6 41 174 | 173 -20.20 180.90 627 4.1 11 175 | 174 -15.20 184.68 99 4.1 14 176 | 175 -15.03 182.29 399 4.1 10 177 | 176 -32.22 180.20 216 5.7 90 178 | 177 -22.64 180.64 544 5.0 50 179 | 178 -17.42 185.16 206 4.5 22 180 | 179 -17.84 181.48 542 4.1 20 181 | 180 -15.02 184.24 339 4.6 27 182 | 181 -18.04 181.75 640 4.5 47 183 | 182 -24.60 183.50 67 4.3 25 184 | 183 -19.88 184.30 161 4.4 17 185 | 184 -20.30 183.00 375 4.2 15 186 | 185 -20.45 181.85 534 4.1 14 187 | 186 -17.67 187.09 45 4.9 62 188 | 187 -22.30 181.90 309 4.3 11 189 | 188 -19.85 181.85 576 4.9 54 190 | 189 -24.27 179.88 523 4.6 24 191 | 190 -15.85 185.13 290 4.6 29 192 | 191 -20.02 184.09 234 5.3 71 193 | 192 
-18.56 169.31 223 4.7 35 194 | 193 -17.87 182.00 569 4.6 12 195 | 194 -24.08 179.50 605 4.1 21 196 | 195 -32.20 179.61 422 4.6 41 197 | 196 -20.36 181.19 637 4.2 23 198 | 197 -23.85 182.53 204 4.6 27 199 | 198 -24.00 182.75 175 4.5 14 200 | 199 -20.41 181.74 538 4.3 31 201 | 200 -17.72 180.30 595 5.2 74 202 | 201 -19.67 182.18 360 4.3 23 203 | 202 -17.70 182.20 445 4.0 12 204 | 203 -16.23 183.59 367 4.7 35 205 | 204 -26.72 183.35 190 4.5 36 206 | 205 -12.95 169.09 629 4.5 19 207 | 206 -21.97 182.32 261 4.3 13 208 | 207 -21.96 180.54 603 5.2 66 209 | 208 -20.32 181.69 508 4.5 14 210 | 209 -30.28 180.62 350 4.7 32 211 | 210 -20.20 182.30 533 4.2 11 212 | 211 -30.66 180.13 411 4.7 42 213 | 212 -16.17 184.10 338 4.3 13 214 | 213 -28.25 181.71 226 4.1 19 215 | 214 -20.47 185.68 93 5.4 85 216 | 215 -23.55 180.27 535 4.3 22 217 | 216 -20.94 181.58 573 4.3 21 218 | 217 -26.67 182.40 186 4.2 17 219 | 218 -18.13 181.52 618 4.6 41 220 | 219 -20.21 183.83 242 4.4 29 221 | 220 -18.31 182.39 342 4.2 14 222 | 221 -16.52 185.70 90 4.7 30 223 | 222 -22.36 171.65 130 4.6 39 224 | 223 -22.43 184.48 65 4.9 48 225 | 224 -20.37 182.10 397 4.2 22 226 | 225 -23.77 180.16 505 4.5 26 227 | 226 -13.65 166.66 71 4.9 52 228 | 227 -21.55 182.90 207 4.2 18 229 | 228 -16.24 185.75 154 4.5 22 230 | 229 -23.73 182.53 232 5.0 55 231 | 230 -22.34 171.52 106 5.0 43 232 | 231 -19.40 180.94 664 4.7 34 233 | 232 -24.64 180.81 397 4.3 24 234 | 233 -16.00 182.82 431 4.4 16 235 | 234 -19.62 185.35 57 4.9 31 236 | 235 -23.84 180.13 525 4.5 15 237 | 236 -23.54 179.93 574 4.0 12 238 | 237 -28.23 182.68 74 4.4 20 239 | 238 -21.68 180.63 617 5.0 63 240 | 239 -13.44 166.53 44 4.7 27 241 | 240 -24.96 180.22 470 4.8 41 242 | 241 -20.08 182.74 298 4.5 33 243 | 242 -24.36 182.84 148 4.1 16 244 | 243 -14.70 166.00 48 5.3 16 245 | 244 -18.20 183.68 107 4.8 52 246 | 245 -16.65 185.51 218 5.0 52 247 | 246 -18.11 181.67 597 4.6 28 248 | 247 -17.95 181.65 619 4.3 26 249 | 248 -15.50 186.90 46 4.7 18 250 | 249 -23.36 180.01 
553 5.3 61 251 | 250 -19.15 169.50 150 4.2 12 252 | 251 -10.97 166.26 180 4.7 26 253 | 252 -14.85 167.24 97 4.5 26 254 | 253 -17.80 181.38 587 5.1 47 255 | 254 -22.50 170.40 106 4.9 38 256 | 255 -29.10 182.10 179 4.4 19 257 | 256 -20.32 180.88 680 4.2 22 258 | 257 -16.09 184.89 304 4.6 34 259 | 258 -19.18 169.33 254 4.7 35 260 | 259 -23.81 179.36 521 4.2 23 261 | 260 -23.79 179.89 526 4.9 43 262 | 261 -19.02 184.23 270 5.1 72 263 | 262 -20.90 181.51 548 4.7 32 264 | 263 -19.06 169.01 158 4.4 10 265 | 264 -17.88 181.47 562 4.4 27 266 | 265 -19.41 183.05 300 4.2 16 267 | 266 -26.17 184.20 65 4.9 37 268 | 267 -14.95 167.24 130 4.6 16 269 | 268 -18.73 168.80 82 4.4 14 270 | 269 -20.21 182.37 482 4.6 37 271 | 270 -21.29 180.85 607 4.5 23 272 | 271 -19.76 181.41 105 4.4 15 273 | 272 -22.09 180.38 590 4.9 35 274 | 273 -23.80 179.90 498 4.1 12 275 | 274 -20.16 181.99 504 4.2 11 276 | 275 -22.13 180.38 577 5.7 104 277 | 276 -17.44 181.40 529 4.6 25 278 | 277 -23.33 180.18 528 5.0 59 279 | 278 -24.78 179.22 492 4.3 16 280 | 279 -22.00 180.52 561 4.5 19 281 | 280 -19.13 182.51 579 5.2 56 282 | 281 -30.72 180.10 413 4.4 22 283 | 282 -22.32 180.54 565 4.2 12 284 | 283 -16.45 177.77 138 4.6 17 285 | 284 -17.70 185.00 383 4.0 10 286 | 285 -17.95 184.68 260 4.4 21 287 | 286 -24.40 179.85 522 4.7 29 288 | 287 -19.30 180.60 671 4.2 16 289 | 288 -21.13 185.32 123 4.7 36 290 | 289 -18.07 181.57 572 4.5 26 291 | 290 -20.60 182.28 529 5.0 50 292 | 291 -18.48 181.49 641 5.0 49 293 | 292 -13.34 166.20 67 4.8 18 294 | 293 -20.92 181.50 546 4.6 31 295 | 294 -25.31 179.69 507 4.6 35 296 | 295 -15.24 186.21 158 5.0 57 297 | 296 -16.40 185.86 148 5.0 47 298 | 297 -24.57 178.40 562 5.6 80 299 | 298 -17.94 181.51 601 4.0 16 300 | 299 -30.64 181.20 175 4.0 16 301 | 300 -18.64 169.32 260 4.6 23 302 | 301 -13.09 169.28 654 4.4 22 303 | 302 -19.68 184.14 242 4.8 40 304 | 303 -16.44 185.74 126 4.7 30 305 | 304 -21.09 181.38 555 4.6 15 306 | 305 -14.99 171.39 637 4.3 21 307 | 306 -23.30 179.70 500 4.7 
29 308 | 307 -17.68 181.36 515 4.1 19 309 | 308 -22.00 180.53 583 4.9 20 310 | 309 -21.38 181.39 501 4.6 36 311 | 310 -32.62 181.50 55 4.8 26 312 | 311 -13.05 169.58 644 4.9 68 313 | 312 -12.93 169.63 641 5.1 57 314 | 313 -18.60 181.91 442 5.4 82 315 | 314 -21.34 181.41 464 4.5 21 316 | 315 -21.48 183.78 200 4.9 54 317 | 316 -17.40 181.02 479 4.4 14 318 | 317 -17.32 181.03 497 4.1 13 319 | 318 -18.77 169.24 218 5.3 53 320 | 319 -26.16 179.50 492 4.5 25 321 | 320 -12.59 167.10 325 4.9 26 322 | 321 -14.82 167.32 123 4.8 28 323 | 322 -21.79 183.48 210 5.2 69 324 | 323 -19.83 182.04 575 4.4 23 325 | 324 -29.50 182.31 129 4.4 14 326 | 325 -12.49 166.36 74 4.9 55 327 | 326 -26.10 182.30 49 4.4 11 328 | 327 -21.04 181.20 483 4.2 10 329 | 328 -10.78 165.77 93 4.6 20 330 | 329 -20.76 185.77 118 4.6 15 331 | 330 -11.41 166.24 83 5.3 55 332 | 331 -19.10 183.87 61 5.3 42 333 | 332 -23.91 180.00 534 4.5 11 334 | 333 -27.33 182.60 42 4.4 11 335 | 334 -12.25 166.60 219 5.0 28 336 | 335 -23.49 179.07 544 5.1 58 337 | 336 -27.18 182.18 56 4.5 14 338 | 337 -25.80 182.10 68 4.5 26 339 | 338 -27.19 182.18 69 5.4 68 340 | 339 -27.27 182.38 45 4.5 16 341 | 340 -27.10 182.18 43 4.7 17 342 | 341 -27.22 182.28 65 4.2 14 343 | 342 -27.38 181.70 80 4.8 13 344 | 343 -27.27 182.50 51 4.5 13 345 | 344 -27.54 182.50 68 4.3 12 346 | 345 -27.20 182.39 69 4.3 14 347 | 346 -27.71 182.47 103 4.3 11 348 | 347 -27.60 182.40 61 4.6 11 349 | 348 -27.38 182.39 69 4.5 12 350 | 349 -21.54 185.48 51 5.0 29 351 | 350 -27.21 182.43 55 4.6 10 352 | 351 -28.96 182.61 54 4.6 15 353 | 352 -12.01 166.29 59 4.9 27 354 | 353 -17.46 181.32 573 4.1 17 355 | 354 -30.17 182.02 56 5.5 68 356 | 355 -27.27 182.36 65 4.7 21 357 | 356 -17.79 181.32 587 5.0 49 358 | 357 -22.19 171.40 150 5.1 49 359 | 358 -17.10 182.68 403 5.5 82 360 | 359 -27.18 182.53 60 4.6 21 361 | 360 -11.64 166.47 130 4.7 19 362 | 361 -17.98 181.58 590 4.2 14 363 | 362 -16.90 185.72 135 4.0 22 364 | 363 -21.98 179.60 583 5.4 67 365 | 364 -32.14 179.90 406 
4.3 19 366 | 365 -18.80 169.21 221 4.4 16 367 | 366 -26.78 183.61 40 4.6 22 368 | 367 -20.43 182.37 502 5.1 48 369 | 368 -18.30 183.20 103 4.5 14 370 | 369 -15.83 182.51 423 4.2 21 371 | 370 -23.44 182.93 158 4.1 20 372 | 371 -23.73 179.99 527 5.1 49 373 | 372 -19.89 184.08 219 5.4 105 374 | 373 -17.59 181.09 536 5.1 61 375 | 374 -19.77 181.40 630 5.1 54 376 | 375 -20.31 184.06 249 4.4 21 377 | 376 -15.33 186.75 48 5.7 123 378 | 377 -18.20 181.60 553 4.4 14 379 | 378 -15.36 186.66 112 5.1 57 380 | 379 -15.29 186.42 153 4.6 31 381 | 380 -15.36 186.71 130 5.5 95 382 | 381 -16.24 167.95 188 5.1 68 383 | 382 -13.47 167.14 226 4.4 26 384 | 383 -25.50 182.82 124 5.0 25 385 | 384 -14.32 167.33 204 5.0 49 386 | 385 -20.04 182.01 605 5.1 49 387 | 386 -28.83 181.66 221 5.1 63 388 | 387 -17.82 181.49 573 4.2 14 389 | 388 -27.23 180.98 401 4.5 39 390 | 389 -10.72 165.99 195 4.0 14 391 | 390 -27.00 183.88 56 4.9 36 392 | 391 -20.36 186.16 102 4.3 21 393 | 392 -27.17 183.68 44 4.8 27 394 | 393 -20.94 181.26 556 4.4 21 395 | 394 -17.46 181.90 417 4.2 14 396 | 395 -21.04 181.20 591 4.9 45 397 | 396 -23.70 179.60 646 4.2 21 398 | 397 -17.72 181.42 565 5.3 89 399 | 398 -15.87 188.13 52 5.0 30 400 | 399 -17.84 181.30 535 5.7 112 401 | 400 -13.45 170.30 641 5.3 93 402 | 401 -30.80 182.16 41 4.7 24 403 | 402 -11.63 166.14 109 4.6 36 404 | 403 -30.40 181.40 40 4.3 17 405 | 404 -26.18 178.59 548 5.4 65 406 | 405 -15.70 184.50 118 4.4 30 407 | 406 -17.95 181.50 593 4.3 16 408 | 407 -20.51 182.30 492 4.3 23 409 | 408 -15.36 167.51 123 4.7 28 410 | 409 -23.61 180.23 475 4.4 26 411 | 410 -33.20 181.60 153 4.2 21 412 | 411 -17.68 186.80 112 4.5 35 413 | 412 -22.24 184.56 99 4.8 57 414 | 413 -20.07 169.14 66 4.8 37 415 | 414 -25.04 180.10 481 4.3 15 416 | 415 -21.50 185.20 139 4.4 15 417 | 416 -14.28 167.26 211 5.1 51 418 | 417 -14.43 167.26 151 4.4 17 419 | 418 -32.70 181.70 211 4.4 40 420 | 419 -34.10 181.80 246 4.3 23 421 | 420 -19.70 186.20 47 4.8 19 422 | 421 -24.19 180.38 484 4.3 27 423 
| 422 -26.60 182.77 119 4.5 29 424 | 423 -17.04 186.80 70 4.1 22 425 | 424 -22.10 179.71 579 5.1 58 426 | 425 -32.60 180.90 57 4.7 44 427 | 426 -33.00 182.40 176 4.6 28 428 | 427 -20.58 181.24 602 4.7 44 429 | 428 -20.61 182.60 488 4.6 12 430 | 429 -19.47 169.15 149 4.4 15 431 | 430 -17.47 180.96 546 4.2 23 432 | 431 -18.40 183.40 343 4.1 10 433 | 432 -23.33 180.26 530 4.7 22 434 | 433 -18.55 182.23 563 4.0 17 435 | 434 -26.16 178.47 537 4.8 33 436 | 435 -21.80 183.20 325 4.4 19 437 | 436 -27.63 182.93 80 4.3 14 438 | 437 -18.89 169.48 259 4.4 21 439 | 438 -20.30 182.30 476 4.5 10 440 | 439 -20.56 182.04 499 4.5 29 441 | 440 -16.10 185.32 257 4.7 30 442 | 441 -12.66 166.37 165 4.3 18 443 | 442 -21.05 184.68 136 4.7 29 444 | 443 -17.97 168.52 146 4.8 33 445 | 444 -19.83 182.54 524 4.6 14 446 | 445 -22.55 183.81 82 5.1 68 447 | 446 -22.28 183.52 90 4.7 19 448 | 447 -15.72 185.64 138 4.3 21 449 | 448 -20.85 181.59 499 5.1 91 450 | 449 -21.11 181.50 538 5.5 104 451 | 450 -25.31 180.15 467 4.5 25 452 | 451 -26.46 182.50 184 4.3 11 453 | 452 -24.09 179.68 538 4.3 21 454 | 453 -16.96 167.70 45 4.7 23 455 | 454 -23.19 182.80 237 4.3 18 456 | 455 -20.81 184.70 162 4.3 20 457 | 456 -15.03 167.32 136 4.6 20 458 | 457 -18.06 181.59 604 4.5 23 459 | 458 -19.00 185.60 107 4.5 15 460 | 459 -23.53 179.99 538 5.4 87 461 | 460 -18.18 180.63 639 4.6 39 462 | 461 -15.66 186.80 45 4.4 11 463 | 462 -18.00 180.62 636 5.0 100 464 | 463 -18.08 180.70 628 5.2 72 465 | 464 -18.05 180.86 632 4.4 15 466 | 465 -29.90 181.16 215 5.1 51 467 | 466 -20.90 181.90 556 4.4 17 468 | 467 -15.61 167.50 135 4.4 21 469 | 468 -16.03 185.43 297 4.8 25 470 | 469 -17.68 181.11 568 4.4 22 471 | 470 -31.94 180.57 168 4.7 39 472 | 471 -19.14 184.36 269 4.7 31 473 | 472 -18.00 185.48 143 4.4 29 474 | 473 -16.95 185.94 95 4.3 12 475 | 474 -10.79 166.06 142 5.0 40 476 | 475 -20.83 185.90 104 4.5 19 477 | 476 -32.90 181.60 169 4.6 27 478 | 477 -37.93 177.47 65 5.4 65 479 | 478 -29.09 183.20 54 4.6 23 480 | 479 -23.56 
180.23 474 4.5 13 481 | 480 -19.60 185.20 125 4.4 13 482 | 481 -21.39 180.68 617 4.5 18 483 | 482 -14.85 184.87 294 4.1 10 484 | 483 -22.70 183.30 180 4.0 13 485 | 484 -32.42 181.21 47 4.9 39 486 | 485 -17.90 181.30 593 4.1 13 487 | 486 -23.58 183.40 94 5.2 79 488 | 487 -34.40 180.50 201 4.4 41 489 | 488 -17.61 181.20 537 4.1 11 490 | 489 -21.07 181.13 594 4.9 43 491 | 490 -13.84 170.62 638 4.6 20 492 | 491 -30.24 181.63 80 4.5 17 493 | 492 -18.49 169.04 211 4.8 30 494 | 493 -23.45 180.23 520 4.2 19 495 | 494 -16.04 183.54 384 4.2 23 496 | 495 -17.14 185.31 223 4.1 15 497 | 496 -22.54 172.91 54 5.5 71 498 | 497 -15.90 185.30 57 4.4 19 499 | 498 -30.04 181.20 49 4.8 20 500 | 499 -24.03 180.22 508 4.2 23 501 | 500 -18.89 184.46 242 4.8 36 502 | 501 -16.51 187.10 62 4.9 46 503 | 502 -20.10 186.30 63 4.6 19 504 | 503 -21.06 183.81 203 4.5 34 505 | 504 -13.07 166.87 132 4.4 24 506 | 505 -23.46 180.09 543 4.6 28 507 | 506 -19.41 182.30 589 4.2 19 508 | 507 -11.81 165.98 51 4.7 28 509 | 508 -11.76 165.96 45 4.4 51 510 | 509 -12.08 165.76 63 4.5 51 511 | 510 -25.59 180.02 485 4.9 48 512 | 511 -26.54 183.63 66 4.7 34 513 | 512 -20.90 184.28 58 5.5 92 514 | 513 -16.99 187.00 70 4.7 30 515 | 514 -23.46 180.17 541 4.6 32 516 | 515 -17.81 181.82 598 4.1 14 517 | 516 -15.17 187.20 50 4.7 28 518 | 517 -11.67 166.02 102 4.6 21 519 | 518 -20.75 184.52 144 4.3 25 520 | 519 -19.50 186.90 58 4.4 20 521 | 520 -26.18 179.79 460 4.7 44 522 | 521 -20.66 185.77 69 4.3 25 523 | 522 -19.22 182.54 570 4.1 22 524 | 523 -24.68 183.33 70 4.7 30 525 | 524 -15.43 167.38 137 4.5 16 526 | 525 -32.45 181.15 41 5.5 81 527 | 526 -21.31 180.84 586 4.5 17 528 | 527 -15.44 167.18 140 4.6 44 529 | 528 -13.26 167.01 213 5.1 70 530 | 529 -15.26 183.13 393 4.4 28 531 | 530 -33.57 180.80 51 4.7 35 532 | 531 -15.77 167.01 64 5.5 73 533 | 532 -15.79 166.83 45 4.6 39 534 | 533 -21.00 183.20 296 4.0 16 535 | 534 -16.28 166.94 50 4.6 24 536 | 535 -23.28 184.60 44 4.8 34 537 | 536 -16.10 167.25 68 4.7 36 538 | 537 
-17.70 181.31 549 4.7 33 539 | 538 -15.96 166.69 150 4.2 20 540 | 539 -15.95 167.34 47 5.4 87 541 | 540 -17.56 181.59 543 4.6 34 542 | 541 -15.90 167.42 40 5.5 86 543 | 542 -15.29 166.90 100 4.2 15 544 | 543 -15.86 166.85 85 4.5 22 545 | 544 -16.20 166.80 98 4.5 21 546 | 545 -15.71 166.91 58 4.8 20 547 | 546 -16.45 167.54 125 4.6 18 548 | 547 -11.54 166.18 89 5.4 80 549 | 548 -19.61 181.91 590 4.6 34 550 | 549 -15.61 187.15 49 5.0 30 551 | 550 -21.16 181.41 543 4.3 17 552 | 551 -20.65 182.22 506 4.3 24 553 | 552 -20.33 168.71 40 4.8 38 554 | 553 -15.08 166.62 42 4.7 23 555 | 554 -23.28 184.61 76 4.7 36 556 | 555 -23.44 184.60 63 4.8 27 557 | 556 -23.12 184.42 104 4.2 17 558 | 557 -23.65 184.46 93 4.2 16 559 | 558 -22.91 183.95 64 5.9 118 560 | 559 -22.06 180.47 587 4.6 28 561 | 560 -13.56 166.49 83 4.5 25 562 | 561 -17.99 181.57 579 4.9 49 563 | 562 -23.92 184.47 40 4.7 17 564 | 563 -30.69 182.10 62 4.9 25 565 | 564 -21.92 182.80 273 5.3 78 566 | 565 -25.04 180.97 393 4.2 21 567 | 566 -19.92 183.91 264 4.2 23 568 | 567 -27.75 182.26 174 4.5 18 569 | 568 -17.71 181.18 574 5.2 67 570 | 569 -19.60 183.84 309 4.5 23 571 | 570 -34.68 179.82 75 5.6 79 572 | 571 -14.46 167.26 195 5.2 87 573 | 572 -18.85 187.55 44 4.8 35 574 | 573 -17.02 182.41 420 4.5 29 575 | 574 -20.41 186.51 63 5.0 28 576 | 575 -18.18 182.04 609 4.4 26 577 | 576 -16.49 187.80 40 4.5 18 578 | 577 -17.74 181.31 575 4.6 42 579 | 578 -20.49 181.69 559 4.5 24 580 | 579 -18.51 182.64 405 5.2 74 581 | 580 -27.28 183.40 70 5.1 54 582 | 581 -15.90 167.16 41 4.8 42 583 | 582 -20.57 181.33 605 4.3 18 584 | 583 -11.25 166.36 130 5.1 55 585 | 584 -20.04 181.87 577 4.7 19 586 | 585 -20.89 181.25 599 4.6 20 587 | 586 -16.62 186.74 82 4.8 51 588 | 587 -20.09 168.75 50 4.6 23 589 | 588 -24.96 179.87 480 4.4 25 590 | 589 -20.95 181.42 559 4.6 27 591 | 590 -23.31 179.27 566 5.1 49 592 | 591 -20.95 181.06 611 4.3 20 593 | 592 -21.58 181.90 409 4.4 19 594 | 593 -13.62 167.15 209 4.7 30 595 | 594 -12.72 166.28 70 4.8 47 596 
| 595 -21.79 185.00 74 4.1 15 597 | 596 -20.48 169.76 134 4.6 33 598 | 597 -12.84 166.78 150 4.9 35 599 | 598 -17.02 182.93 406 4.0 17 600 | 599 -23.89 182.39 243 4.7 32 601 | 600 -23.07 184.03 89 4.7 32 602 | 601 -27.98 181.96 53 5.2 89 603 | 602 -28.10 182.25 68 4.6 18 604 | 603 -21.24 180.81 605 4.6 34 605 | 604 -21.24 180.86 615 4.9 23 606 | 605 -19.89 174.46 546 5.7 99 607 | 606 -32.82 179.80 176 4.7 26 608 | 607 -22.00 185.50 52 4.4 18 609 | 608 -21.57 185.62 66 4.9 38 610 | 609 -24.50 180.92 377 4.8 43 611 | 610 -33.03 180.20 186 4.6 27 612 | 611 -30.09 182.40 51 4.4 18 613 | 612 -22.75 170.99 67 4.8 35 614 | 613 -17.99 168.98 234 4.7 28 615 | 614 -19.60 181.87 597 4.2 18 616 | 615 -15.65 186.26 64 5.1 54 617 | 616 -17.78 181.53 511 4.8 56 618 | 617 -22.04 184.91 47 4.9 47 619 | 618 -20.06 168.69 49 5.1 49 620 | 619 -18.07 181.54 546 4.3 28 621 | 620 -12.85 165.67 75 4.4 30 622 | 621 -33.29 181.30 60 4.7 33 623 | 622 -34.63 179.10 278 4.7 24 624 | 623 -24.18 179.02 550 5.3 86 625 | 624 -23.78 180.31 518 5.1 71 626 | 625 -22.37 171.50 116 4.9 38 627 | 626 -23.97 179.91 518 4.5 23 628 | 627 -34.12 181.75 75 4.7 41 629 | 628 -25.25 179.86 491 4.2 23 630 | 629 -22.87 172.65 56 5.1 50 631 | 630 -18.48 182.37 376 4.8 57 632 | 631 -21.46 181.02 584 4.2 18 633 | 632 -28.56 183.47 48 4.8 56 634 | 633 -28.56 183.59 53 4.4 20 635 | 634 -21.30 180.92 617 4.5 26 636 | 635 -20.08 183.22 294 4.3 18 637 | 636 -18.82 182.21 417 5.6 129 638 | 637 -19.51 183.97 280 4.0 16 639 | 638 -12.05 167.39 332 5.0 36 640 | 639 -17.40 186.54 85 4.2 28 641 | 640 -23.93 180.18 525 4.6 31 642 | 641 -21.23 181.09 613 4.6 18 643 | 642 -16.23 167.91 182 4.5 28 644 | 643 -28.15 183.40 57 5.0 32 645 | 644 -20.81 185.01 79 4.7 42 646 | 645 -20.72 181.41 595 4.6 36 647 | 646 -23.29 184.00 164 4.8 50 648 | 647 -38.46 176.03 148 4.6 44 649 | 648 -15.48 186.73 82 4.4 17 650 | 649 -37.03 177.52 153 5.6 87 651 | 650 -20.48 181.38 556 4.2 13 652 | 651 -18.12 181.88 649 5.4 88 653 | 652 -18.17 181.98 651 
4.8 43 654 | 653 -11.40 166.07 93 5.6 94 655 | 654 -23.10 180.12 533 4.4 27 656 | 655 -14.28 170.34 642 4.7 29 657 | 656 -22.87 171.72 47 4.6 27 658 | 657 -17.59 180.98 548 5.1 79 659 | 658 -27.60 182.10 154 4.6 22 660 | 659 -17.94 180.60 627 4.5 29 661 | 660 -17.88 180.58 622 4.2 23 662 | 661 -30.01 180.80 286 4.8 43 663 | 662 -19.19 182.30 390 4.9 48 664 | 663 -18.14 180.87 624 5.5 105 665 | 664 -23.46 180.11 539 5.0 41 666 | 665 -18.44 181.04 624 4.2 21 667 | 666 -18.21 180.87 631 5.2 69 668 | 667 -18.26 180.98 631 4.8 36 669 | 668 -15.85 184.83 299 4.4 30 670 | 669 -23.82 180.09 498 4.8 40 671 | 670 -18.60 184.28 255 4.4 31 672 | 671 -17.80 181.32 539 4.1 12 673 | 672 -10.78 166.10 195 4.9 45 674 | 673 -18.12 181.71 594 4.6 24 675 | 674 -19.34 182.62 573 4.5 32 676 | 675 -15.34 167.10 128 5.3 18 677 | 676 -24.97 182.85 137 4.8 40 678 | 677 -15.97 186.08 143 4.6 41 679 | 678 -23.47 180.24 511 4.8 37 680 | 679 -23.11 179.15 564 4.7 17 681 | 680 -20.54 181.66 559 4.9 50 682 | 681 -18.92 169.37 248 5.3 60 683 | 682 -20.16 184.27 210 4.4 27 684 | 683 -25.48 180.94 390 4.6 33 685 | 684 -18.19 181.74 616 4.3 17 686 | 685 -15.35 186.40 98 4.4 17 687 | 686 -18.69 169.10 218 4.2 27 688 | 687 -18.89 181.24 655 4.1 14 689 | 688 -17.61 183.32 356 4.2 15 690 | 689 -20.93 181.54 564 5.0 64 691 | 690 -17.60 181.50 548 4.1 10 692 | 691 -17.96 181.40 655 4.3 20 693 | 692 -18.80 182.41 385 5.2 67 694 | 693 -20.61 182.44 518 4.2 10 695 | 694 -20.74 181.53 598 4.5 36 696 | 695 -25.23 179.86 476 4.4 29 697 | 696 -23.90 179.90 579 4.4 16 698 | 697 -18.07 181.58 603 5.0 65 699 | 698 -15.43 185.19 249 4.0 11 700 | 699 -14.30 167.32 208 4.8 25 701 | 700 -18.04 181.57 587 5.0 51 702 | 701 -13.90 167.18 221 4.2 21 703 | 702 -17.64 177.01 545 5.2 91 704 | 703 -17.98 181.51 586 5.2 68 705 | 704 -25.00 180.00 488 4.5 10 706 | 705 -19.45 184.48 246 4.3 15 707 | 706 -16.11 187.48 61 4.5 19 708 | 707 -23.73 179.98 524 4.6 11 709 | 708 -17.74 186.78 104 5.1 71 710 | 709 -21.56 183.23 271 4.4 36 
711 | 710 -20.97 181.72 487 4.3 16 712 | 711 -15.45 186.73 83 4.7 37 713 | 712 -15.93 167.91 183 5.6 109 714 | 713 -21.47 185.86 55 4.9 46 715 | 714 -21.44 170.45 166 5.1 22 716 | 715 -22.16 180.49 586 4.6 13 717 | 716 -13.36 172.76 618 4.4 18 718 | 717 -21.22 181.51 524 4.8 49 719 | 718 -26.10 182.50 133 4.2 17 720 | 719 -18.35 185.27 201 4.7 57 721 | 720 -17.20 182.90 383 4.1 11 722 | 721 -22.42 171.40 86 4.7 33 723 | 722 -17.91 181.48 555 4.0 17 724 | 723 -26.53 178.30 605 4.9 43 725 | 724 -26.50 178.29 609 5.0 50 726 | 725 -16.31 168.08 204 4.5 16 727 | 726 -18.76 169.71 287 4.4 23 728 | 727 -17.10 182.80 390 4.0 14 729 | 728 -19.28 182.78 348 4.5 30 730 | 729 -23.50 180.00 550 4.7 23 731 | 730 -21.26 181.69 487 4.4 20 732 | 731 -17.97 181.48 578 4.7 43 733 | 732 -26.02 181.20 361 4.7 32 734 | 733 -30.30 180.80 275 4.0 14 735 | 734 -24.89 179.67 498 4.2 14 736 | 735 -14.57 167.24 162 4.5 18 737 | 736 -15.40 186.87 78 4.7 44 738 | 737 -22.06 183.95 134 4.5 17 739 | 738 -25.14 178.42 554 4.1 15 740 | 739 -20.30 181.40 608 4.6 13 741 | 740 -25.28 181.17 367 4.3 25 742 | 741 -20.63 181.61 599 4.6 30 743 | 742 -19.02 186.83 45 5.2 65 744 | 743 -22.10 185.30 50 4.6 22 745 | 744 -38.59 175.70 162 4.7 36 746 | 745 -19.30 183.00 302 5.0 65 747 | 746 -31.03 181.59 57 5.2 49 748 | 747 -30.51 181.30 203 4.4 20 749 | 748 -22.55 183.34 66 4.6 18 750 | 749 -22.14 180.64 591 4.5 18 751 | 750 -25.60 180.30 440 4.0 12 752 | 751 -18.04 181.84 611 4.2 20 753 | 752 -21.29 185.77 57 5.3 69 754 | 753 -21.08 180.85 627 5.9 119 755 | 754 -20.64 169.66 89 4.9 42 756 | 755 -24.41 180.03 500 4.5 34 757 | 756 -12.16 167.03 264 4.4 14 758 | 757 -17.10 185.90 127 5.4 75 759 | 758 -21.13 185.60 85 5.3 86 760 | 759 -12.34 167.43 50 5.1 47 761 | 760 -16.43 186.73 75 4.1 20 762 | 761 -20.70 184.30 182 4.3 17 763 | 762 -21.18 180.92 619 4.5 18 764 | 763 -17.78 185.33 223 4.1 10 765 | 764 -21.57 183.86 156 5.1 70 766 | 765 -13.70 166.75 46 5.3 71 767 | 766 -12.27 167.41 50 4.5 29 768 | 767 -19.10 
184.52 230 4.1 16 769 | 768 -19.85 184.51 184 4.4 26 770 | 769 -11.37 166.55 188 4.7 24 771 | 770 -20.70 186.30 80 4.0 10 772 | 771 -20.24 185.10 86 5.1 61 773 | 772 -16.40 182.73 391 4.0 16 774 | 773 -19.60 184.53 199 4.3 21 775 | 774 -21.63 180.77 592 4.3 21 776 | 775 -21.60 180.50 595 4.0 22 777 | 776 -21.77 181.00 618 4.1 10 778 | 777 -21.80 183.60 213 4.4 17 779 | 778 -21.05 180.90 616 4.3 10 780 | 779 -10.80 165.80 175 4.2 12 781 | 780 -17.90 181.50 589 4.0 12 782 | 781 -22.26 171.44 83 4.5 25 783 | 782 -22.33 171.46 119 4.7 32 784 | 783 -24.04 184.85 70 5.0 48 785 | 784 -20.40 186.10 74 4.3 22 786 | 785 -15.00 184.62 40 5.1 54 787 | 786 -27.87 183.40 87 4.7 34 788 | 787 -14.12 166.64 63 5.3 69 789 | 788 -23.61 180.27 537 5.0 63 790 | 789 -21.56 185.50 47 4.5 29 791 | 790 -21.19 181.58 490 5.0 77 792 | 791 -18.07 181.65 593 4.1 16 793 | 792 -26.00 178.43 644 4.9 27 794 | 793 -20.21 181.90 576 4.1 16 795 | 794 -28.00 182.00 199 4.0 16 796 | 795 -20.74 180.70 589 4.4 27 797 | 796 -31.80 180.60 178 4.5 19 798 | 797 -18.91 169.46 248 4.4 33 799 | 798 -20.45 182.10 500 4.5 37 800 | 799 -22.90 183.80 71 4.3 19 801 | 800 -18.11 181.63 568 4.3 36 802 | 801 -23.80 184.70 42 5.0 36 803 | 802 -23.42 180.21 510 4.5 37 804 | 803 -23.20 184.80 97 4.5 13 805 | 804 -12.93 169.52 663 4.4 30 806 | 805 -21.14 181.06 625 4.5 35 807 | 806 -19.13 184.97 210 4.1 22 808 | 807 -21.08 181.30 557 4.9 78 809 | 808 -20.07 181.75 582 4.7 27 810 | 809 -20.90 182.02 402 4.3 18 811 | 810 -25.04 179.84 474 4.6 32 812 | 811 -21.85 180.89 577 4.6 43 813 | 812 -19.34 186.59 56 5.2 49 814 | 813 -15.83 167.10 43 4.5 19 815 | 814 -23.73 183.00 118 4.3 11 816 | 815 -18.10 181.72 544 4.6 52 817 | 816 -22.12 180.49 532 4.0 14 818 | 817 -15.39 185.10 237 4.5 39 819 | 818 -16.21 186.52 111 4.8 30 820 | 819 -21.75 180.67 595 4.6 30 821 | 820 -22.10 180.40 603 4.1 11 822 | 821 -24.97 179.54 505 4.9 50 823 | 822 -19.36 186.36 100 4.7 40 824 | 823 -22.14 179.62 587 4.1 23 825 | 824 -21.48 182.44 364 4.3 20 
826 | 825 -18.54 168.93 100 4.4 17 827 | 826 -21.62 182.40 350 4.0 12 828 | 827 -13.40 166.90 228 4.8 15 829 | 828 -15.50 185.30 93 4.4 25 830 | 829 -15.67 185.23 66 4.4 34 831 | 830 -21.78 183.11 225 4.6 21 832 | 831 -30.63 180.90 334 4.2 28 833 | 832 -15.70 185.10 70 4.1 15 834 | 833 -19.20 184.37 220 4.2 18 835 | 834 -19.70 182.44 397 4.0 12 836 | 835 -19.40 182.29 326 4.1 15 837 | 836 -15.85 185.90 121 4.1 17 838 | 837 -17.38 168.63 209 4.7 29 839 | 838 -24.33 179.97 510 4.8 44 840 | 839 -20.89 185.26 54 5.1 44 841 | 840 -18.97 169.44 242 5.0 41 842 | 841 -17.99 181.62 574 4.8 38 843 | 842 -15.80 185.25 82 4.4 39 844 | 843 -25.42 182.65 102 5.0 36 845 | 844 -21.60 169.90 43 5.2 56 846 | 845 -26.06 180.05 432 4.2 19 847 | 846 -17.56 181.23 580 4.1 16 848 | 847 -25.63 180.26 464 4.8 60 849 | 848 -25.46 179.98 479 4.5 27 850 | 849 -22.23 180.48 581 5.0 54 851 | 850 -21.55 181.39 513 5.1 81 852 | 851 -15.18 185.93 77 4.1 16 853 | 852 -13.79 166.56 68 4.7 41 854 | 853 -15.18 167.23 71 5.2 59 855 | 854 -18.78 186.72 68 4.8 48 856 | 855 -17.90 181.41 586 4.5 33 857 | 856 -18.50 185.40 243 4.0 11 858 | 857 -14.82 171.17 658 4.7 49 859 | 858 -15.65 185.17 315 4.1 15 860 | 859 -30.01 181.15 210 4.3 17 861 | 860 -13.16 167.24 278 4.3 17 862 | 861 -21.03 180.78 638 4.0 14 863 | 862 -21.40 180.78 615 4.7 51 864 | 863 -17.93 181.89 567 4.1 27 865 | 864 -20.87 181.70 560 4.2 13 866 | 865 -12.01 166.66 99 4.8 36 867 | 866 -19.10 169.63 266 4.8 31 868 | 867 -22.85 181.37 397 4.2 15 869 | 868 -17.08 185.96 180 4.2 29 870 | 869 -21.14 174.21 40 5.7 78 871 | 870 -12.23 167.02 242 6.0 132 872 | 871 -20.91 181.57 530 4.2 20 873 | 872 -11.38 167.05 133 4.5 32 874 | 873 -11.02 167.01 62 4.9 36 875 | 874 -22.09 180.58 580 4.4 22 876 | 875 -17.80 181.20 530 4.0 15 877 | 876 -18.94 182.43 566 4.3 20 878 | 877 -18.85 182.20 501 4.2 23 879 | 878 -21.91 181.28 548 4.5 30 880 | 879 -22.03 179.77 587 4.8 31 881 | 880 -18.10 181.63 592 4.4 28 882 | 881 -18.40 184.84 221 4.2 18 883 | 882 -21.20 
181.40 560 4.2 12 884 | 883 -12.00 166.20 94 5.0 31 885 | 884 -11.70 166.30 139 4.2 15 886 | 885 -26.72 182.69 162 5.2 64 887 | 886 -24.39 178.98 562 4.5 30 888 | 887 -19.64 169.50 204 4.6 35 889 | 888 -21.35 170.04 56 5.0 22 890 | 889 -22.82 184.52 49 5.0 52 891 | 890 -38.28 177.10 100 5.4 71 892 | 891 -12.57 167.11 231 4.8 28 893 | 892 -22.24 180.28 601 4.2 21 894 | 893 -13.80 166.53 42 5.5 70 895 | 894 -21.07 183.78 180 4.3 25 896 | 895 -17.74 181.25 559 4.1 16 897 | 896 -23.87 180.15 524 4.4 22 898 | 897 -21.29 185.80 69 4.9 74 899 | 898 -22.20 180.58 594 4.5 45 900 | 899 -15.24 185.11 262 4.9 56 901 | 900 -17.82 181.27 538 4.0 33 902 | 901 -32.14 180.00 331 4.5 27 903 | 902 -19.30 185.86 48 5.0 40 904 | 903 -33.09 180.94 47 4.9 47 905 | 904 -20.18 181.62 558 4.5 31 906 | 905 -17.46 181.42 524 4.2 16 907 | 906 -17.44 181.33 545 4.2 37 908 | 907 -24.71 179.85 477 4.2 34 909 | 908 -21.53 170.52 129 5.2 30 910 | 909 -19.17 169.53 268 4.3 21 911 | 910 -28.05 182.39 117 5.1 43 912 | 911 -23.39 179.97 541 4.6 50 913 | 912 -22.33 171.51 112 4.6 14 914 | 913 -15.28 185.98 162 4.4 36 915 | 914 -20.27 181.51 609 4.4 32 916 | 915 -10.96 165.97 76 4.9 64 917 | 916 -21.52 169.75 61 5.1 40 918 | 917 -19.57 184.47 202 4.2 28 919 | 918 -23.08 183.45 90 4.7 30 920 | 919 -25.06 182.80 133 4.0 14 921 | 920 -17.85 181.44 589 5.6 115 922 | 921 -15.99 167.95 190 5.3 81 923 | 922 -20.56 184.41 138 5.0 82 924 | 923 -17.98 181.61 598 4.3 27 925 | 924 -18.40 181.77 600 4.1 11 926 | 925 -27.64 182.22 162 5.1 67 927 | 926 -20.99 181.02 626 4.5 36 928 | 927 -14.86 167.32 137 4.9 22 929 | 928 -29.33 182.72 57 5.4 61 930 | 929 -25.81 182.54 201 4.7 40 931 | 930 -14.10 166.01 69 4.8 29 932 | 931 -17.63 185.13 219 4.5 28 933 | 932 -23.47 180.21 553 4.2 23 934 | 933 -23.92 180.21 524 4.6 50 935 | 934 -20.88 185.18 51 4.6 28 936 | 935 -20.25 184.75 107 5.6 121 937 | 936 -19.33 186.16 44 5.4 110 938 | 937 -18.14 181.71 574 4.0 20 939 | 938 -22.41 183.99 128 5.2 72 940 | 939 -20.77 181.16 568 4.2 
12 941 | 940 -17.95 181.73 583 4.7 57 942 | 941 -20.83 181.01 622 4.3 15 943 | 942 -27.84 182.10 193 4.8 27 944 | 943 -19.94 182.39 544 4.6 30 945 | 944 -23.60 183.99 118 5.4 88 946 | 945 -23.70 184.13 51 4.8 27 947 | 946 -30.39 182.40 63 4.6 22 948 | 947 -18.98 182.32 442 4.2 22 949 | 948 -27.89 182.92 87 5.5 67 950 | 949 -23.50 184.90 61 4.7 16 951 | 950 -23.73 184.49 60 4.7 35 952 | 951 -17.93 181.62 561 4.5 32 953 | 952 -35.94 178.52 138 5.5 78 954 | 953 -18.68 184.50 174 4.5 34 955 | 954 -23.47 179.95 543 4.1 21 956 | 955 -23.49 180.06 530 4.0 23 957 | 956 -23.85 180.26 497 4.3 32 958 | 957 -27.08 183.44 63 4.7 27 959 | 958 -20.88 184.95 82 4.9 50 960 | 959 -20.97 181.20 605 4.5 31 961 | 960 -21.71 183.58 234 4.7 55 962 | 961 -23.90 184.60 41 4.5 22 963 | 962 -15.78 167.44 40 4.8 42 964 | 963 -12.57 166.72 137 4.3 20 965 | 964 -19.69 184.23 223 4.1 23 966 | 965 -22.04 183.95 109 5.4 61 967 | 966 -17.99 181.59 595 4.1 26 968 | 967 -23.50 180.13 512 4.8 40 969 | 968 -21.40 180.74 613 4.2 20 970 | 969 -15.86 166.98 60 4.8 25 971 | 970 -23.95 184.64 43 5.4 45 972 | 971 -25.79 182.38 172 4.4 14 973 | 972 -23.75 184.50 54 5.2 74 974 | 973 -24.10 184.50 68 4.7 23 975 | 974 -18.56 169.05 217 4.9 35 976 | 975 -23.30 184.68 102 4.9 27 977 | 976 -17.03 185.74 178 4.2 32 978 | 977 -20.77 183.71 251 4.4 47 979 | 978 -28.10 183.50 42 4.4 17 980 | 979 -18.83 182.26 575 4.3 11 981 | 980 -23.00 170.70 43 4.9 20 982 | 981 -20.82 181.67 577 5.0 67 983 | 982 -22.95 170.56 42 4.7 21 984 | 983 -28.22 183.60 75 4.9 49 985 | 984 -27.99 183.50 71 4.3 22 986 | 985 -15.54 187.15 60 4.5 17 987 | 986 -12.37 166.93 291 4.2 16 988 | 987 -22.33 171.66 125 5.2 51 989 | 988 -22.70 170.30 69 4.8 27 990 | 989 -17.86 181.30 614 4.0 12 991 | 990 -16.00 184.53 108 4.7 33 992 | 991 -20.73 181.42 575 4.3 18 993 | 992 -15.45 181.42 409 4.3 27 994 | 993 -20.05 183.86 243 4.9 65 995 | 994 -17.95 181.37 642 4.0 17 996 | 995 -17.70 188.10 45 4.2 10 997 | 996 -25.93 179.54 470 4.4 22 998 | 997 -12.28 
167.06 248 4.7 35 999 | 998 -20.13 184.20 244 4.5 34 1000 | 999 -17.40 187.80 40 4.5 14 1001 | 1000 -21.59 170.56 165 6.0 119 1002 | -------------------------------------------------------------------------------- /data/geysers.csv: -------------------------------------------------------------------------------- 1 | index,eruptions,waiting 2 | 1, 3.600,79 3 | 2, 1.800,54 4 | 3, 3.333,74 5 | 4, 2.283,62 6 | 5, 4.533,85 7 | 6, 2.883,55 8 | 7, 4.700,88 9 | 8, 3.600,85 10 | 9, 1.950,51 11 | 10,4.350,85 12 | 11,1.833,54 13 | 12,3.917,84 14 | 13,4.200,78 15 | 14,1.750,47 16 | 15,4.700,83 17 | 16,2.167,52 18 | 17,1.750,62 19 | 18,4.800,84 20 | 19,1.600,52 21 | 20,4.250,79 22 | 21,1.800,51 23 | 22,1.750,47 24 | 23,3.450,78 25 | 24,3.067,69 26 | 25,4.533,74 27 | 26,3.600,83 28 | 27,1.967,55 29 | 28,4.083,76 30 | 29,3.850,78 31 | 30,4.433,79 32 | 31,4.300,73 33 | 32,4.467,77 34 | 33,3.367,66 35 | 34,4.033,80 36 | 35,3.833,74 37 | 36,2.017,52 38 | 37,1.867,48 39 | 38,4.833,80 40 | 39,1.833,59 41 | 40,4.783,90 42 | 41,4.350,80 43 | 42,1.883,58 44 | 43,4.567,84 45 | 44,1.750,58 46 | 45,4.533,73 47 | 46,3.317,83 48 | 47,3.833,64 49 | 48,2.100,53 50 | 49,4.633,82 51 | 50,2.000,59 52 | 51,4.800,75 53 | 52,4.716,90 54 | 53,1.833,54 55 | 54,4.833,80 56 | 55,1.733,54 57 | 56,4.883,83 58 | 57,3.717,71 59 | 58,1.667,64 60 | 59,4.567,77 61 | 60,4.317,81 62 | 61,2.233,59 63 | 62,4.500,84 64 | 63,1.750,48 65 | 64,4.800,82 66 | 65,1.817,60 67 | 66,4.400,92 68 | 67,4.167,78 69 | 68,4.700,78 70 | 69,2.067,65 71 | 70,4.700,73 72 | 71,4.033,82 73 | 72,1.967,56 74 | 73,4.500,79 75 | 74,4.000,71 76 | 75,1.983,62 77 | 76,5.067,76 78 | 77,2.017,60 79 | 78,4.567,78 80 | 79,3.883,76 81 | 80,3.600,83 82 | 81,4.133,75 83 | 82,4.333,82 84 | 83,4.100,70 85 | 84,2.633,65 86 | 85,4.067,73 87 | 86,4.933,88 88 | 87,3.950,76 89 | 88,4.517,80 90 | 89,2.167,48 91 | 90,4.000,86 92 | 91,2.200,60 93 | 92,4.333,90 94 | 93,1.867,50 95 | 94,4.817,78 96 | 95,1.833,63 97 | 96,4.300,72 98 | 97,4.667,84 99 | 98,3.750,75 
100 | 99,1.867,51 101 | 100,4.900,82 102 | 101,2.483,62 103 | 102,4.367,88 104 | 103,2.100,49 105 | 104,4.500,83 106 | 105,4.050,81 107 | 106,1.867,47 108 | 107,4.700,84 109 | 108,1.783,52 110 | 109,4.850,86 111 | 110,3.683,81 112 | 111,4.733,75 113 | 112,2.300,59 114 | 113,4.900,89 115 | 114,4.417,79 116 | 115,1.700,59 117 | 116,4.633,81 118 | 117,2.317,50 119 | 118,4.600,85 120 | 119,1.817,59 121 | 120,4.417,87 122 | 121,2.617,53 123 | 122,4.067,69 124 | 123,4.250,77 125 | 124,1.967,56 126 | 125,4.600,88 127 | 126,3.767,81 128 | 127,1.917,45 129 | 128,4.500,82 130 | 129,2.267,55 131 | 130,4.650,90 132 | 131,1.867,45 133 | 132,4.167,83 134 | 133,2.800,56 135 | 134,4.333,89 136 | 135,1.833,46 137 | 136,4.383,82 138 | 137,1.883,51 139 | 138,4.933,86 140 | 139,2.033,53 141 | 140,3.733,79 142 | 141,4.233,81 143 | 142,2.233,60 144 | 143,4.533,82 145 | 144,4.817,77 146 | 145,4.333,76 147 | 146,1.983,59 148 | 147,4.633,80 149 | 148,2.017,49 150 | 149,5.100,96 151 | 150,1.800,53 152 | 151,5.033,77 153 | 152,4.000,77 154 | 153,2.400,65 155 | 154,4.600,81 156 | 155,3.567,71 157 | 156,4.000,70 158 | 157,4.500,81 159 | 158,4.083,93 160 | 159,1.800,53 161 | 160,3.967,89 162 | 161,2.200,45 163 | 162,4.150,86 164 | 163,2.000,58 165 | 164,3.833,78 166 | 165,3.500,66 167 | 166,4.583,76 168 | 167,2.367,63 169 | 168,5.000,88 170 | 169,1.933,52 171 | 170,4.617,93 172 | 171,1.917,49 173 | 172,2.083,57 174 | 173,4.583,77 175 | 174,3.333,68 176 | 175,4.167,81 177 | 176,4.333,81 178 | 177,4.500,73 179 | 178,2.417,50 180 | 179,4.000,85 181 | 180,4.167,74 182 | 181,1.883,55 183 | 182,4.583,77 184 | 183,4.250,83 185 | 184,3.767,83 186 | 185,2.033,51 187 | 186,4.433,78 188 | 187,4.083,84 189 | 188,1.833,46 190 | 189,4.417,83 191 | 190,2.183,55 192 | 191,4.800,81 193 | 192,1.833,57 194 | 193,4.800,76 195 | 194,4.100,84 196 | 195,3.966,77 197 | 196,4.233,81 198 | 197,3.500,87 199 | 198,4.366,77 200 | 199,2.250,51 201 | 200,4.667,78 202 | 201,2.100,60 203 | 202,4.350,82 204 | 203,4.133,91 205 | 
204,1.867,53 206 | 205,4.600,78 207 | 206,1.783,46 208 | 207,4.367,77 209 | 208,3.850,84 210 | 209,1.933,49 211 | 210,4.500,83 212 | 211,2.383,71 213 | 212,4.700,80 214 | 213,1.867,49 215 | 214,3.833,75 216 | 215,3.417,64 217 | 216,4.233,76 218 | 217,2.400,53 219 | 218,4.800,94 220 | 219,2.000,55 221 | 220,4.150,76 222 | 221,1.867,50 223 | 222,4.267,82 224 | 223,1.750,54 225 | 224,4.483,75 226 | 225,4.000,78 227 | 226,4.117,79 228 | 227,4.083,78 229 | 228,4.267,78 230 | 229,3.917,70 231 | 230,4.550,79 232 | 231,4.083,70 233 | 232,2.417,54 234 | 233,4.183,86 235 | 234,2.217,50 236 | 235,4.450,90 237 | 236,1.883,54 238 | 237,1.850,54 239 | 238,4.283,77 240 | 239,3.950,79 241 | 240,2.333,64 242 | 241,4.150,75 243 | 242,2.350,47 244 | 243,4.933,86 245 | 244,2.900,63 246 | 245,4.583,85 247 | 246,3.833,82 248 | 247,2.083,57 249 | 248,4.367,82 250 | 249,2.133,67 251 | 250,4.350,74 252 | 251,2.200,54 253 | 252,4.450,83 254 | 253,3.567,73 255 | 254,4.500,73 256 | 255,4.150,88 257 | 256,3.817,80 258 | 257,3.917,71 259 | 258,4.450,83 260 | 259,2.000,56 261 | 260,4.283,79 262 | 261,4.767,78 263 | 262,4.533,84 264 | 263,1.850,58 265 | 264,4.250,83 266 | 265,1.983,43 267 | 266,2.250,60 268 | 267,4.750,75 269 | 268,4.117,81 270 | 269,2.150,46 271 | 270,4.417,90 272 | 271,1.817,46 273 | 272,4.467,74 274 | -------------------------------------------------------------------------------- /data/glass.txt: -------------------------------------------------------------------------------- 1 | RI Na Mg Al Si K Ca Ba Fe type 2 | 1 3.01 13.64 4.49 1.10 71.78 0.06 8.75 0.00 0.00 WinF 3 | 2 -0.39 13.89 3.60 1.36 72.73 0.48 7.83 0.00 0.00 WinF 4 | 3 -1.82 13.53 3.55 1.54 72.99 0.39 7.78 0.00 0.00 WinF 5 | 4 -0.34 13.21 3.69 1.29 72.61 0.57 8.22 0.00 0.00 WinF 6 | 5 -0.58 13.27 3.62 1.24 73.08 0.55 8.07 0.00 0.00 WinF 7 | 6 -2.04 12.79 3.61 1.62 72.97 0.64 8.07 0.00 0.26 WinF 8 | 7 -0.57 13.30 3.60 1.14 73.09 0.58 8.17 0.00 0.00 WinF 9 | 8 -0.44 13.15 3.61 1.05 73.24 0.57 8.24 0.00 0.00 WinF 10 
| 9 1.18 14.04 3.58 1.37 72.08 0.56 8.30 0.00 0.00 WinF 11 | 10 -0.45 13.00 3.60 1.36 72.99 0.57 8.40 0.00 0.11 WinF 12 | 11 -2.29 12.72 3.46 1.56 73.20 0.67 8.09 0.00 0.24 WinF 13 | 12 -0.37 12.80 3.66 1.27 73.01 0.60 8.56 0.00 0.00 WinF 14 | 13 -2.11 12.88 3.43 1.40 73.28 0.69 8.05 0.00 0.24 WinF 15 | 14 -0.52 12.86 3.56 1.27 73.21 0.54 8.38 0.00 0.17 WinF 16 | 15 -0.37 12.61 3.59 1.31 73.29 0.58 8.50 0.00 0.00 WinF 17 | 16 -0.39 12.81 3.54 1.23 73.24 0.58 8.39 0.00 0.00 WinF 18 | 17 -0.16 12.68 3.67 1.16 73.11 0.61 8.70 0.00 0.00 WinF 19 | 18 3.96 14.36 3.85 0.89 71.36 0.15 9.15 0.00 0.00 WinF 20 | 19 1.11 13.90 3.73 1.18 72.12 0.06 8.89 0.00 0.00 WinF 21 | 20 -0.65 13.02 3.54 1.69 72.73 0.54 8.44 0.00 0.07 WinF 22 | 21 -0.50 12.82 3.55 1.49 72.75 0.54 8.52 0.00 0.19 WinF 23 | 22 1.66 14.77 3.75 0.29 72.02 0.03 9.00 0.00 0.00 WinF 24 | 23 -0.64 12.78 3.62 1.29 72.79 0.59 8.70 0.00 0.00 WinF 25 | 24 -0.49 12.81 3.57 1.35 73.02 0.62 8.59 0.00 0.00 WinF 26 | 25 -0.80 13.38 3.50 1.15 72.85 0.50 8.43 0.00 0.00 WinF 27 | 26 -0.36 12.98 3.54 1.21 73.00 0.65 8.53 0.00 0.00 WinF 28 | 27 -0.07 13.21 3.48 1.41 72.64 0.59 8.43 0.00 0.00 WinF 29 | 28 -0.79 12.87 3.48 1.33 73.04 0.56 8.43 0.00 0.00 WinF 30 | 29 -0.32 12.56 3.52 1.43 73.15 0.57 8.54 0.00 0.00 WinF 31 | 30 -0.16 13.08 3.49 1.28 72.86 0.60 8.49 0.00 0.00 WinF 32 | 31 -0.32 12.65 3.56 1.30 73.08 0.61 8.69 0.00 0.14 WinF 33 | 32 -0.53 12.84 3.50 1.14 73.27 0.56 8.55 0.00 0.00 WinF 34 | 33 -0.25 12.85 3.48 1.23 72.97 0.61 8.56 0.09 0.22 WinF 35 | 34 -0.47 12.57 3.47 1.38 73.39 0.60 8.55 0.00 0.06 WinF 36 | 35 -0.17 12.69 3.54 1.34 72.95 0.57 8.75 0.00 0.00 WinF 37 | 36 -2.33 13.29 3.45 1.21 72.74 0.56 8.57 0.00 0.00 WinF 38 | 37 1.09 13.89 3.53 1.32 71.81 0.51 8.78 0.11 0.00 WinF 39 | 38 -0.03 12.74 3.48 1.35 72.96 0.64 8.68 0.00 0.00 WinF 40 | 39 4.13 14.21 3.82 0.47 71.77 0.11 9.57 0.00 0.00 WinF 41 | 40 4.13 14.21 3.82 0.47 71.77 0.11 9.57 0.00 0.00 WinF 42 | 41 -0.07 12.79 3.50 1.12 73.03 0.64 8.77 0.00 0.00 
WinF 43 | 42 -0.45 12.71 3.42 1.20 73.20 0.59 8.64 0.00 0.00 WinF 44 | 43 -0.21 13.21 3.39 1.33 72.76 0.59 8.59 0.00 0.00 WinF 45 | 44 4.10 13.73 3.84 0.72 71.76 0.17 9.74 0.00 0.00 WinF 46 | 45 -0.14 12.73 3.43 1.19 72.95 0.62 8.76 0.00 0.30 WinF 47 | 46 1.00 13.49 3.48 1.35 71.95 0.55 9.00 0.00 0.00 WinF 48 | 47 0.69 13.19 3.37 1.18 72.72 0.57 8.83 0.00 0.16 WinF 49 | 48 8.67 13.99 3.70 0.71 71.57 0.02 9.82 0.00 0.10 WinF 50 | 49 4.23 13.21 3.77 0.79 71.99 0.13 10.02 0.00 0.00 WinF 51 | 50 0.98 13.58 3.35 1.23 72.08 0.59 8.91 0.00 0.00 WinF 52 | 51 5.20 13.72 3.72 0.51 71.75 0.09 10.06 0.00 0.16 WinF 53 | 52 1.26 13.20 3.33 1.28 72.36 0.60 9.14 0.00 0.11 WinF 54 | 53 0.08 13.43 2.87 1.19 72.84 0.55 9.03 0.00 0.00 WinF 55 | 54 0.37 13.14 2.84 1.28 72.85 0.55 9.07 0.00 0.00 WinF 56 | 55 -0.22 13.21 2.81 1.29 72.98 0.51 9.02 0.00 0.09 WinF 57 | 56 -0.31 12.45 2.71 1.29 73.70 0.56 9.06 0.00 0.24 WinF 58 | 57 -5.85 12.99 3.47 1.12 72.98 0.62 8.35 0.00 0.31 WinF 59 | 58 0.24 12.87 3.48 1.29 72.95 0.60 8.43 0.00 0.00 WinF 60 | 59 -0.46 13.48 3.74 1.17 72.99 0.59 8.03 0.00 0.00 WinF 61 | 60 -0.46 13.39 3.66 1.19 72.79 0.57 8.27 0.00 0.11 WinF 62 | 61 1.05 13.60 3.62 1.11 72.64 0.14 8.76 0.00 0.00 WinF 63 | 62 1.77 13.81 3.58 1.32 71.72 0.12 8.67 0.69 0.00 WinF 64 | 63 3.72 13.51 3.86 0.88 71.79 0.23 9.54 0.00 0.11 WinF 65 | 64 4.27 14.17 3.81 0.78 71.35 0.00 9.69 0.00 0.00 WinF 66 | 65 3.72 13.48 3.74 0.90 72.01 0.18 9.61 0.00 0.07 WinF 67 | 66 2.99 13.69 3.59 1.12 71.96 0.09 9.40 0.00 0.00 WinF 68 | 67 3.52 13.05 3.65 0.87 72.22 0.19 9.85 0.00 0.17 WinF 69 | 68 3.52 13.05 3.65 0.87 72.32 0.19 9.85 0.00 0.17 WinF 70 | 69 3.52 13.12 3.58 0.90 72.20 0.23 9.82 0.00 0.16 WinF 71 | 70 5.00 13.31 3.58 0.82 71.99 0.12 10.17 0.00 0.03 WinF 72 | 71 -2.26 14.86 3.67 1.74 71.87 0.16 7.36 0.00 0.12 WinNF 73 | 72 0.48 13.64 3.87 1.27 71.96 0.54 8.32 0.00 0.32 WinNF 74 | 73 -2.07 13.09 3.59 1.52 73.10 0.67 7.83 0.00 0.00 WinNF 75 | 74 -1.69 13.34 3.57 1.57 72.87 0.61 7.89 0.00 0.00 
WinNF 76 | 75 -2.04 13.02 3.56 1.54 73.11 0.72 7.90 0.00 0.00 WinNF 77 | 76 -2.10 13.02 3.58 1.51 73.12 0.69 7.96 0.00 0.00 WinNF 78 | 77 -1.55 13.44 3.61 1.54 72.39 0.66 8.03 0.00 0.00 WinNF 79 | 78 -1.73 13.00 3.58 1.54 72.83 0.61 8.04 0.00 0.00 WinNF 80 | 79 -1.87 13.92 3.52 1.25 72.88 0.37 7.94 0.00 0.14 WinNF 81 | 80 -2.10 12.82 3.52 1.90 72.86 0.69 7.97 0.00 0.00 WinNF 82 | 81 -2.08 12.86 3.52 2.12 72.66 0.69 7.97 0.00 0.00 WinNF 83 | 82 -2.07 13.25 3.45 1.43 73.17 0.61 7.86 0.00 0.00 WinNF 84 | 83 -1.54 13.41 3.55 1.25 72.81 0.68 8.10 0.00 0.00 WinNF 85 | 84 -2.06 13.09 3.52 1.55 72.87 0.68 8.05 0.00 0.09 WinNF 86 | 85 -3.91 14.25 3.09 2.08 72.28 1.10 7.08 0.00 0.00 WinNF 87 | 86 -1.75 13.36 3.58 1.49 72.72 0.45 8.21 0.00 0.00 WinNF 88 | 87 -2.31 13.24 3.49 1.47 73.25 0.38 8.03 0.00 0.00 WinNF 89 | 88 -1.55 13.40 3.49 1.52 72.65 0.67 8.08 0.00 0.10 WinNF 90 | 89 -1.82 13.01 3.50 1.48 72.89 0.60 8.12 0.00 0.00 WinNF 91 | 90 -1.60 12.55 3.48 1.87 73.23 0.63 8.08 0.00 0.09 WinNF 92 | 91 0.41 12.93 3.74 1.11 72.28 0.64 8.96 0.00 0.22 WinNF 93 | 92 -1.95 12.90 3.44 1.45 73.06 0.44 8.27 0.00 0.00 WinNF 94 | 93 -2.12 13.12 3.41 1.58 73.26 0.07 8.39 0.00 0.19 WinNF 95 | 94 -2.10 13.24 3.34 1.47 73.10 0.39 8.22 0.00 0.00 WinNF 96 | 95 -1.71 12.71 3.33 1.49 73.28 0.67 8.24 0.00 0.00 WinNF 97 | 96 0.60 13.36 3.43 1.43 72.26 0.51 8.60 0.00 0.00 WinNF 98 | 97 0.41 13.02 3.62 1.06 72.34 0.64 9.13 0.00 0.15 WinNF 99 | 98 -0.57 12.20 3.25 1.16 73.55 0.62 8.90 0.00 0.24 WinNF 100 | 99 -1.11 12.67 2.88 1.71 73.21 0.73 8.54 0.00 0.00 WinNF 101 | 100 0.11 12.96 2.96 1.43 72.92 0.60 8.79 0.14 0.00 WinNF 102 | 101 -1.45 12.75 2.85 1.44 73.27 0.57 8.79 0.11 0.22 WinNF 103 | 102 -0.70 12.35 2.72 1.63 72.87 0.70 9.23 0.00 0.00 WinNF 104 | 103 0.20 12.62 2.76 0.83 73.81 0.35 9.42 0.00 0.20 WinNF 105 | 104 9.25 13.80 3.15 0.66 70.57 0.08 11.64 0.00 0.00 WinNF 106 | 105 6.10 13.83 2.90 1.17 71.15 0.08 10.79 0.00 0.00 WinNF 107 | 106 6.75 11.45 0.00 1.88 72.19 0.81 13.24 0.00 0.34 WinNF 
108 | 107 13.25 10.73 0.00 2.10 69.81 0.58 13.30 3.15 0.28 WinNF 109 | 108 15.93 12.30 0.00 1.00 70.16 0.12 16.19 0.00 0.24 WinNF 110 | 109 4.22 14.43 0.00 1.00 72.67 0.10 11.52 0.00 0.08 WinNF 111 | 110 0.18 13.72 0.00 0.56 74.45 0.00 10.99 0.00 0.00 WinNF 112 | 111 8.64 11.23 0.00 0.77 73.21 0.00 14.68 0.00 0.00 WinNF 113 | 112 9.39 11.02 0.00 0.75 73.08 0.00 14.96 0.00 0.00 WinNF 114 | 113 9.77 12.64 0.00 0.67 72.02 0.06 14.40 0.00 0.00 WinNF 115 | 114 0.92 13.46 3.83 1.26 72.55 0.57 8.21 0.00 0.14 WinNF 116 | 115 0.47 13.10 3.97 1.19 72.44 0.60 8.43 0.00 0.00 WinNF 117 | 116 0.46 13.41 3.89 1.33 72.38 0.51 8.28 0.00 0.00 WinNF 118 | 117 0.29 13.24 3.90 1.41 72.33 0.55 8.31 0.00 0.10 WinNF 119 | 118 -0.92 13.72 3.68 1.81 72.06 0.64 7.88 0.00 0.00 WinNF 120 | 119 -1.27 13.30 3.64 1.53 72.53 0.65 8.03 0.00 0.29 WinNF 121 | 120 -1.48 13.56 3.57 1.47 72.45 0.64 7.96 0.00 0.00 WinNF 122 | 121 0.44 13.25 3.76 1.32 72.40 0.58 8.42 0.00 0.00 WinNF 123 | 122 -1.37 12.93 3.54 1.62 72.96 0.64 8.03 0.00 0.21 WinNF 124 | 123 -1.13 13.23 3.54 1.48 72.84 0.56 8.10 0.00 0.00 WinNF 125 | 124 -0.93 13.48 3.48 1.71 72.52 0.62 7.99 0.00 0.00 WinNF 126 | 125 3.77 13.20 3.68 1.15 72.75 0.54 8.52 0.00 0.00 WinNF 127 | 126 0.72 12.93 3.66 1.56 72.51 0.58 8.55 0.00 0.12 WinNF 128 | 127 -1.33 12.94 3.61 1.26 72.75 0.56 8.60 0.00 0.00 WinNF 129 | 128 2.81 13.78 2.28 1.43 71.99 0.49 9.85 0.00 0.17 WinNF 130 | 129 2.68 13.55 2.09 1.67 72.18 0.53 9.57 0.27 0.17 WinNF 131 | 130 2.20 13.98 1.35 1.63 71.76 0.39 10.56 0.00 0.18 WinNF 132 | 131 3.77 13.75 1.01 1.36 72.19 0.33 11.14 0.00 0.00 WinNF 133 | 132 8.14 13.70 0.00 1.36 71.24 0.19 13.44 0.00 0.10 WinNF 134 | 133 0.13 13.43 3.98 1.18 72.49 0.58 8.15 0.00 0.00 WinNF 135 | 134 0.00 13.71 3.93 1.54 71.81 0.54 8.21 0.00 0.15 WinNF 136 | 135 0.11 13.33 3.85 1.25 72.78 0.52 8.12 0.00 0.00 WinNF 137 | 136 -0.11 13.19 3.90 1.30 72.33 0.55 8.44 0.00 0.28 WinNF 138 | 137 0.06 13.00 3.80 1.08 73.07 0.56 8.38 0.00 0.12 WinNF 139 | 138 -0.89 12.89 3.62 
1.57 72.96 0.61 8.11 0.00 0.00 WinNF 140 | 139 -1.26 12.79 3.52 1.54 73.36 0.66 7.90 0.00 0.00 WinNF 141 | 140 -1.26 12.87 3.56 1.64 73.14 0.65 7.99 0.00 0.00 WinNF 142 | 141 -1.10 13.33 3.54 1.61 72.54 0.68 8.11 0.00 0.00 WinNF 143 | 142 0.51 13.20 3.63 1.07 72.83 0.57 8.41 0.09 0.17 WinNF 144 | 143 -1.38 12.85 3.51 1.44 73.01 0.68 8.23 0.06 0.25 WinNF 145 | 144 -0.91 13.00 3.47 1.79 72.72 0.66 8.18 0.00 0.00 WinNF 146 | 145 -1.40 12.99 3.18 1.23 72.97 0.58 8.81 0.00 0.24 WinNF 147 | 146 0.39 12.85 3.67 1.24 72.57 0.62 8.68 0.00 0.35 WinNF 148 | 147 -0.31 13.65 3.66 1.11 72.77 0.11 8.60 0.00 0.00 Veh 149 | 148 -1.90 13.33 3.53 1.34 72.67 0.56 8.33 0.00 0.00 Veh 150 | 149 -1.30 13.24 3.57 1.38 72.70 0.56 8.44 0.00 0.10 Veh 151 | 150 -1.57 12.16 3.52 1.35 72.89 0.57 8.53 0.00 0.00 Veh 152 | 151 -1.35 13.14 3.45 1.76 72.48 0.60 8.38 0.00 0.17 Veh 153 | 152 3.27 14.32 3.90 0.83 71.50 0.00 9.49 0.00 0.00 Veh 154 | 153 -0.21 13.64 3.65 0.65 73.00 0.06 8.93 0.00 0.00 Veh 155 | 154 -1.90 13.42 3.40 1.22 72.69 0.59 8.32 0.00 0.00 Veh 156 | 155 -1.06 12.86 3.58 1.31 72.61 0.61 8.79 0.00 0.00 Veh 157 | 156 -1.54 13.04 3.40 1.26 73.01 0.52 8.58 0.00 0.00 Veh 158 | 157 -1.45 13.41 3.39 1.28 72.64 0.52 8.65 0.00 0.00 Veh 159 | 158 3.21 14.03 3.76 0.58 71.79 0.11 9.65 0.00 0.00 Veh 160 | 159 -0.24 13.53 3.41 1.52 72.04 0.58 8.79 0.00 0.00 Veh 161 | 160 -0.04 13.50 3.36 1.63 71.94 0.57 8.81 0.00 0.09 Veh 162 | 161 0.32 13.33 3.34 1.54 72.14 0.56 8.99 0.00 0.00 Veh 163 | 162 1.34 13.64 3.54 0.75 72.65 0.16 8.89 0.15 0.24 Veh 164 | 163 4.11 14.19 3.78 0.91 71.36 0.23 9.14 0.00 0.37 Veh 165 | 164 -2.86 14.01 2.68 3.50 69.89 1.68 5.87 2.20 0.00 Con 166 | 165 1.15 12.73 1.85 1.86 72.69 0.60 10.09 0.00 0.00 Con 167 | 166 3.71 11.56 1.88 1.56 72.86 0.47 11.41 0.00 0.00 Con 168 | 167 3.51 11.03 1.71 1.56 73.44 0.58 11.62 0.00 0.00 Con 169 | 168 1.69 12.64 0.00 1.65 73.75 0.38 11.53 0.00 0.00 Con 170 | 169 -1.34 12.86 0.00 1.83 73.88 0.97 10.17 0.00 0.00 Con 171 | 170 1.94 13.27 0.00 1.76 
73.03 0.47 11.32 0.00 0.00 Con 172 | 171 5.69 13.44 0.00 1.58 72.22 0.32 12.24 0.00 0.00 Con 173 | 172 -4.84 13.02 0.00 3.04 70.48 6.21 6.96 0.00 0.00 Con 174 | 173 -4.79 13.00 0.00 3.02 70.70 6.21 6.93 0.00 0.00 Con 175 | 174 2.43 13.38 0.00 1.40 72.25 0.33 12.50 0.00 0.00 Con 176 | 175 2.58 12.85 1.61 2.17 72.18 0.76 9.70 0.24 0.51 Con 177 | 176 3.19 12.97 0.33 1.51 73.39 0.13 11.27 0.00 0.28 Con 178 | 177 1.05 14.00 2.39 1.56 72.37 0.00 9.57 0.00 0.00 Tabl 179 | 178 1.37 13.79 2.41 1.19 72.76 0.00 9.77 0.00 0.00 Tabl 180 | 179 0.29 14.46 2.24 1.62 72.38 0.00 9.26 0.00 0.00 Tabl 181 | 180 0.52 14.09 2.19 1.66 72.67 0.00 9.32 0.00 0.00 Tabl 182 | 181 -5.01 14.40 1.74 1.54 74.55 0.00 7.59 0.00 0.00 Tabl 183 | 182 0.88 14.99 0.78 1.74 72.50 0.00 9.95 0.00 0.00 Tabl 184 | 183 1.16 14.15 0.00 2.09 72.74 0.00 10.88 0.00 0.00 Tabl 185 | 184 1.69 14.56 0.00 0.56 73.48 0.00 11.22 0.00 0.00 Tabl 186 | 185 -6.85 17.38 0.00 0.34 75.41 0.00 6.65 0.00 0.00 Tabl 187 | 186 -6.69 13.69 3.20 1.81 72.81 1.76 5.43 1.19 0.00 Head 188 | 187 0.38 14.32 3.26 2.22 71.25 1.46 5.79 1.63 0.00 Head 189 | 188 5.15 13.44 3.34 1.23 72.38 0.60 8.83 0.00 0.00 Head 190 | 189 4.47 14.86 2.20 2.06 70.26 0.76 9.76 0.00 0.00 Head 191 | 190 5.65 15.79 1.83 1.31 70.43 0.31 8.61 1.68 0.00 Head 192 | 191 -1.87 13.88 1.78 1.79 73.10 0.00 8.67 0.76 0.00 Head 193 | 192 -1.98 14.85 0.00 2.38 73.28 0.00 8.76 0.64 0.09 Head 194 | 193 -1.77 14.20 0.00 2.79 73.46 0.04 9.04 0.40 0.09 Head 195 | 194 -0.81 14.75 0.00 2.00 73.02 0.00 8.53 1.59 0.08 Head 196 | 195 -1.17 14.56 0.00 1.98 73.29 0.00 8.52 1.57 0.07 Head 197 | 196 -2.55 14.14 0.00 2.68 73.39 0.08 9.07 0.61 0.05 Head 198 | 197 -2.44 13.87 0.00 2.54 73.23 0.14 9.41 0.81 0.01 Head 199 | 198 -0.73 14.70 0.00 2.34 73.28 0.00 8.95 0.66 0.00 Head 200 | 199 -2.69 14.38 0.00 2.66 73.10 0.04 9.08 0.64 0.00 Head 201 | 200 -1.91 15.01 0.00 2.51 73.05 0.05 8.83 0.53 0.00 Head 202 | 201 -2.92 15.15 0.00 2.25 73.50 0.00 8.34 0.63 0.00 Head 203 | 202 -1.47 11.95 0.00 1.19 
75.18 2.70 8.93 0.00 0.00 Head 204 | 203 -2.86 14.85 0.00 2.42 73.72 0.00 8.39 0.56 0.00 Head 205 | 204 -1.42 14.80 0.00 1.99 73.11 0.00 8.28 1.71 0.00 Head 206 | 205 -1.83 14.95 0.00 2.27 73.30 0.00 8.71 0.67 0.00 Head 207 | 206 -0.68 14.95 0.00 1.80 72.99 0.00 8.61 1.55 0.00 Head 208 | 207 -1.55 14.94 0.00 1.87 73.11 0.00 8.67 1.38 0.00 Head 209 | 208 0.31 14.39 0.00 1.82 72.86 1.41 6.47 2.88 0.00 Head 210 | 209 -1.60 14.37 0.00 2.74 72.85 0.00 9.45 0.54 0.00 Head 211 | 210 -1.77 14.14 0.00 2.88 72.61 0.08 9.18 1.06 0.00 Head 212 | 211 -1.15 14.92 0.00 1.99 73.06 0.00 8.40 1.59 0.00 Head 213 | 212 2.65 14.36 0.00 2.02 73.42 0.00 8.44 1.64 0.00 Head 214 | 213 -1.49 14.38 0.00 1.94 73.61 0.00 8.48 1.57 0.00 Head 215 | 214 -0.89 14.23 0.00 2.08 73.36 0.00 8.62 1.67 0.00 Head 216 | -------------------------------------------------------------------------------- /data/montana.csv: -------------------------------------------------------------------------------- 1 | AGE,SEX,INC,POL,AREA,FIN,STAT 2 | 3,0,2,2,1,2,1 3 | 2,0,3,3,1,3,1 4 | 1,0,2,*,1,2,1 5 | 3,1,2,1,1,1,0 6 | 3,1,3,3,3,2,* 7 | 1,0,2,1,3,3,* 8 | 3,1,1,3,3,1,1 9 | 1,0,1,3,2,1,0 10 | 3,1,*,3,3,2,0 11 | 1,0,*,1,1,2,1 12 | 2,1,2,3,1,2,* 13 | 3,1,1,3,2,2,0 14 | 2,0,2,1,3,3,0 15 | 3,0,*,3,2,2,0 16 | 3,0,3,3,3,3,1 17 | 3,0,3,1,3,1,1 18 | 1,1,2,1,3,3,* 19 | 3,1,2,1,3,2,1 20 | 3,1,2,3,3,1,0 21 | 2,0,3,3,3,3,1 22 | 1,1,3,1,1,3,0 23 | 3,0,1,2,1,2,0 24 | 1,1,2,3,1,3,* 25 | 3,0,2,2,1,2,1 26 | 2,0,3,1,1,3,0 27 | 1,0,1,3,1,1,1 28 | 3,0,2,1,1,3,0 29 | 3,1,*,1,3,2,0 30 | 2,1,2,2,3,2,0 31 | 2,1,2,3,3,1,0 32 | 2,0,2,3,1,2,1 33 | 1,0,2,*,1,3,0 34 | 1,1,2,3,3,3,1 35 | 2,1,3,1,3,2,0 36 | 1,0,1,1,3,2,1 37 | 3,0,2,1,3,2,* 38 | 2,0,3,3,3,3,0 39 | 3,0,1,3,3,3,0 40 | 1,0,1,3,3,1,0 41 | 1,0,1,1,1,1,1 42 | 3,1,3,1,2,1,0 43 | 2,1,2,1,1,1,0 44 | 1,0,2,3,1,2,1 45 | 1,1,3,1,1,2,0 46 | 1,1,3,3,1,3,1 47 | 2,0,3,2,1,3,1 48 | 1,1,2,3,2,3,0 49 | 2,0,3,1,1,1,0 50 | 1,1,*,1,1,3,1 51 | 1,1,2,1,3,3,* 52 | 2,1,*,3,3,3,* 53 | 1,1,2,2,1,1,0 54 | 
2,1,1,1,1,1,0 55 | 3,1,*,1,1,2,1 56 | 1,0,3,1,1,1,0 57 | 2,1,2,1,1,3,1 58 | 1,0,3,2,1,1,1 59 | 3,0,2,1,2,1,0 60 | 3,0,2,2,2,1,0 61 | 1,1,1,1,1,1,0 62 | 2,0,3,2,2,2,0 63 | 3,0,2,1,3,2,0 64 | 3,1,1,1,3,2,1 65 | 3,1,1,1,3,1,0 66 | 3,1,*,2,3,2,* 67 | 3,1,1,3,3,3,* 68 | 1,1,3,1,1,3,1 69 | 2,0,3,3,2,3,0 70 | 3,1,2,3,3,2,1 71 | 2,0,3,3,3,1,0 72 | 3,1,2,2,2,2,0 73 | 3,1,2,1,1,3,1 74 | 3,0,2,3,3,3,1 75 | 2,0,3,2,2,1,0 76 | 3,0,3,3,2,2,0 77 | 2,1,3,3,2,3,0 78 | 1,0,1,1,2,1,0 79 | 3,1,*,*,1,1,0 80 | 1,1,1,*,1,1,0 81 | 2,1,2,3,1,3,0 82 | 2,1,3,1,1,2,0 83 | 3,0,3,3,2,1,0 84 | 3,1,1,2,3,2,0 85 | 3,1,*,2,1,2,0 86 | 1,0,2,3,3,2,* 87 | 1,1,2,1,2,1,0 88 | 1,1,*,*,1,2,* 89 | 2,1,3,1,1,2,0 90 | 2,0,3,1,3,2,1 91 | *,1,1,1,1,1,0 92 | 1,1,1,3,2,3,0 93 | 2,0,3,1,2,1,0 94 | 2,1,3,1,1,2,0 95 | 2,0,2,2,3,2,0 96 | 2,0,*,2,3,1,0 97 | 3,1,1,1,1,2,0 98 | 2,0,2,2,3,2,0 99 | 2,0,1,3,2,2,* 100 | 2,0,2,2,2,2,0 101 | 2,1,2,3,2,3,0 102 | 3,0,3,1,3,2,0 103 | 1,1,2,1,1,1,1 104 | 3,0,2,1,1,1,0 105 | 1,1,2,3,3,1,0 106 | 3,1,1,1,1,2,* 107 | 1,1,2,3,3,3,1 108 | 2,1,2,2,3,1,0 109 | 1,0,2,1,2,1,1 110 | 3,1,2,2,2,2,0 111 | 3,1,2,1,1,1,0 112 | 3,0,2,2,3,1,1 113 | 3,1,3,1,2,2,0 114 | 1,0,1,3,2,3,1 115 | 3,0,2,2,2,1,0 116 | 3,1,2,1,3,3,0 117 | 1,1,1,*,3,1,0 118 | 2,0,3,1,3,1,1 119 | 3,0,2,3,1,3,1 120 | 3,0,2,2,3,3,1 121 | 1,1,2,3,2,3,1 122 | 3,0,3,1,3,2,0 123 | 3,1,1,1,2,1,0 124 | 1,1,3,1,1,3,* 125 | 1,1,2,1,2,3,1 126 | 3,1,2,1,3,2,* 127 | 1,1,1,3,3,3,1 128 | 3,1,*,1,1,1,* 129 | 2,1,2,1,2,3,* 130 | 3,0,2,3,3,2,1 131 | 3,1,*,3,3,2,* 132 | 1,1,2,1,3,1,0 133 | 2,0,3,1,1,1,0 134 | 3,0,2,1,2,2,0 135 | 1,0,3,3,3,3,0 136 | 2,0,2,2,1,2,* 137 | 2,0,3,3,2,3,1 138 | 1,0,1,3,1,3,1 139 | 2,0,3,3,1,2,* 140 | 2,0,3,1,3,2,* 141 | 2,0,3,3,3,3,1 142 | 2,0,2,3,1,3,1 143 | 1,1,3,1,2,3,1 144 | 2,0,2,1,1,1,0 145 | 1,0,1,1,3,3,1 146 | 2,0,3,3,3,2,1 147 | 3,1,1,1,1,2,0 148 | 3,0,2,3,1,1,0 149 | 2,0,2,3,1,1,1 150 | 3,0,2,3,3,3,0 151 | 2,0,3,3,2,2,1 152 | 1,1,*,1,1,3,* 153 | 1,1,2,2,1,1,0 154 | 2,1,1,1,1,3,0 155 | 2,1,1,1,3,1,0 156 | 
2,1,1,2,2,2,0 157 | 2,0,2,2,2,1,0 158 | 1,1,3,3,2,1,0 159 | 2,0,3,3,3,3,* 160 | 3,0,3,1,1,2,0 161 | 3,1,2,3,2,1,1 162 | 1,0,1,1,3,2,1 163 | 2,1,3,2,3,3,0 164 | 1,0,2,2,1,2,0 165 | 1,0,3,3,2,2,0 166 | 3,0,2,1,3,2,0 167 | 1,1,3,3,2,3,0 168 | 2,0,3,2,3,2,0 169 | 3,1,2,2,2,3,0 170 | 3,1,1,3,2,1,0 171 | 1,1,2,2,1,2,0 172 | 1,1,2,3,2,3,* 173 | 1,1,1,2,3,2,1 174 | 2,1,1,1,2,1,0 175 | 1,0,2,2,2,2,0 176 | 1,0,3,1,1,3,0 177 | 2,0,*,2,3,2,0 178 | 3,0,*,1,3,2,1 179 | 2,0,2,1,2,3,0 180 | 1,0,2,2,1,1,0 181 | 1,0,3,3,2,2,1 182 | 2,0,3,3,2,3,0 183 | 2,0,3,1,3,3,0 184 | 1,1,1,1,3,3,1 185 | 1,1,2,3,3,3,0 186 | 3,0,3,2,3,1,0 187 | 1,0,1,3,1,3,0 188 | 2,1,3,*,2,3,0 189 | 2,0,1,2,3,2,1 190 | 3,0,2,3,2,2,0 191 | 2,0,1,1,1,1,0 192 | 1,1,1,1,1,2,0 193 | 3,0,1,1,2,1,1 194 | 3,1,*,3,2,2,* 195 | 1,1,1,2,2,3,0 196 | 1,1,2,1,3,3,1 197 | 1,0,2,3,2,3,0 198 | 1,1,1,3,1,1,0 199 | 2,0,2,3,3,3,0 200 | 1,0,3,3,2,2,1 201 | 1,1,1,1,1,1,0 202 | 2,0,2,1,1,3,1 203 | 3,0,2,3,1,2,0 204 | 2,1,*,1,3,3,* 205 | 1,1,1,2,3,3,* 206 | 1,0,3,3,2,*,1 207 | 1,1,2,3,3,3,1 208 | 3,1,1,3,2,2,0 209 | 3,0,3,3,3,1,1 210 | 2,0,3,3,2,2,0 -------------------------------------------------------------------------------- /data/nerve.csv: -------------------------------------------------------------------------------- 1 | duration 2 | 0.21 3 | 0.03 4 | 0.05 5 | 0.11 6 | 0.59 7 | 0.06 8 | 0.18 9 | 0.55 10 | 0.37 11 | 0.09 12 | 0.14 13 | 0.19 14 | 0.02 15 | 0.14 16 | 0.09 17 | 0.05 18 | 0.15 19 | 0.23 20 | 0.15 21 | 0.08 22 | 0.24 23 | 0.16 24 | 0.06 25 | 0.11 26 | 0.15 27 | 0.09 28 | 0.03 29 | 0.21 30 | 0.02 31 | 0.14 32 | 0.24 33 | 0.29 34 | 0.16 35 | 0.07 36 | 0.07 37 | 0.04 38 | 0.02 39 | 0.15 40 | 0.12 41 | 0.26 42 | 0.15 43 | 0.33 44 | 0.06 45 | 0.51 46 | 0.11 47 | 0.28 48 | 0.36 49 | 0.14 50 | 0.55 51 | 0.28 52 | 0.04 53 | 0.01 54 | 0.94 55 | 0.73 56 | 0.05 57 | 0.07 58 | 0.11 59 | 0.38 60 | 0.21 61 | 0.49 62 | 0.38 63 | 0.38 64 | 0.01 65 | 0.06 66 | 0.13 67 | 0.06 68 | 0.01 69 | 0.16 70 | 0.05 71 | 0.10 72 | 0.16 73 | 0.06 
74 | 0.06 75 | 0.06 76 | 0.06 77 | 0.11 78 | 0.44 79 | 0.05 80 | 0.09 81 | 0.04 82 | 0.27 83 | 0.50 84 | 0.25 85 | 0.25 86 | 0.08 87 | 0.01 88 | 0.70 89 | 0.04 90 | 0.08 91 | 0.16 92 | 0.38 93 | 0.08 94 | 0.32 95 | 0.39 96 | 0.58 97 | 0.56 98 | 0.74 99 | 0.15 100 | 0.07 101 | 0.26 102 | 0.25 103 | 0.01 104 | 0.17 105 | 0.64 106 | 0.61 107 | 0.15 108 | 0.26 109 | 0.03 110 | 0.05 111 | 0.34 112 | 0.07 113 | 0.10 114 | 0.09 115 | 0.02 116 | 0.30 117 | 0.07 118 | 0.12 119 | 0.01 120 | 0.16 121 | 0.14 122 | 0.49 123 | 0.07 124 | 0.11 125 | 0.35 126 | 1.21 127 | 0.17 128 | 0.01 129 | 0.35 130 | 0.45 131 | 0.07 132 | 0.93 133 | 0.04 134 | 0.96 135 | 0.14 136 | 1.38 137 | 0.15 138 | 0.01 139 | 0.05 140 | 0.23 141 | 0.31 142 | 0.05 143 | 0.05 144 | 0.29 145 | 0.01 146 | 0.74 147 | 0.30 148 | 0.09 149 | 0.02 150 | 0.19 151 | 0.47 152 | 0.01 153 | 0.51 154 | 0.12 155 | 0.12 156 | 0.43 157 | 0.32 158 | 0.09 159 | 0.20 160 | 0.03 161 | 0.05 162 | 0.13 163 | 0.15 164 | 0.05 165 | 0.08 166 | 0.04 167 | 0.09 168 | 0.10 169 | 0.10 170 | 0.26 171 | 0.07 172 | 0.68 173 | 0.15 174 | 0.01 175 | 0.27 176 | 0.05 177 | 0.03 178 | 0.40 179 | 0.04 180 | 0.21 181 | 0.29 182 | 0.24 183 | 0.08 184 | 0.23 185 | 0.10 186 | 0.19 187 | 0.20 188 | 0.26 189 | 0.06 190 | 0.40 191 | 0.51 192 | 0.15 193 | 1.10 194 | 0.16 195 | 0.78 196 | 0.04 197 | 0.27 198 | 0.35 199 | 0.71 200 | 0.15 201 | 0.29 202 | 0.04 203 | 0.01 204 | 0.28 205 | 0.21 206 | 0.09 207 | 0.17 208 | 0.09 209 | 0.17 210 | 0.15 211 | 0.62 212 | 0.50 213 | 0.07 214 | 0.39 215 | 0.28 216 | 0.20 217 | 0.34 218 | 0.16 219 | 0.65 220 | 0.04 221 | 0.67 222 | 0.10 223 | 0.51 224 | 0.26 225 | 0.07 226 | 0.71 227 | 0.11 228 | 0.47 229 | 0.02 230 | 0.38 231 | 0.04 232 | 0.43 233 | 0.11 234 | 0.23 235 | 0.14 236 | 0.08 237 | 1.12 238 | 0.50 239 | 0.25 240 | 0.18 241 | 0.12 242 | 0.02 243 | 0.15 244 | 0.12 245 | 0.08 246 | 0.38 247 | 0.22 248 | 0.16 249 | 0.04 250 | 0.58 251 | 0.05 252 | 0.07 253 | 0.28 254 | 0.27 255 | 0.24 256 | 0.07 257 | 0.02 
258 | 0.27 259 | 0.27 260 | 0.16 261 | 0.05 262 | 0.34 263 | 0.10 264 | 0.02 265 | 0.04 266 | 0.10 267 | 0.22 268 | 0.24 269 | 0.04 270 | 0.28 271 | 0.10 272 | 0.23 273 | 0.03 274 | 0.34 275 | 0.21 276 | 0.41 277 | 0.15 278 | 0.05 279 | 0.17 280 | 0.53 281 | 0.30 282 | 0.15 283 | 0.19 284 | 0.07 285 | 0.83 286 | 0.04 287 | 0.04 288 | 0.14 289 | 0.34 290 | 0.10 291 | 0.15 292 | 0.05 293 | 0.04 294 | 0.05 295 | 0.65 296 | 0.16 297 | 0.32 298 | 0.87 299 | 0.07 300 | 0.17 301 | 0.10 302 | 0.03 303 | 0.17 304 | 0.38 305 | 0.28 306 | 0.14 307 | 0.07 308 | 0.14 309 | 0.03 310 | 0.21 311 | 0.40 312 | 0.04 313 | 0.11 314 | 0.44 315 | 0.90 316 | 0.10 317 | 0.49 318 | 0.09 319 | 0.01 320 | 0.08 321 | 0.06 322 | 0.08 323 | 0.01 324 | 0.15 325 | 0.50 326 | 0.36 327 | 0.08 328 | 0.34 329 | 0.02 330 | 0.21 331 | 0.32 332 | 0.22 333 | 0.51 334 | 0.12 335 | 0.16 336 | 0.52 337 | 0.21 338 | 0.05 339 | 0.46 340 | 0.44 341 | 0.04 342 | 0.05 343 | 0.04 344 | 0.14 345 | 0.08 346 | 0.21 347 | 0.02 348 | 0.63 349 | 0.35 350 | 0.01 351 | 0.38 352 | 0.43 353 | 0.03 354 | 0.39 355 | 0.04 356 | 0.17 357 | 0.23 358 | 0.78 359 | 0.14 360 | 0.08 361 | 0.11 362 | 0.07 363 | 0.45 364 | 0.46 365 | 0.20 366 | 0.19 367 | 0.50 368 | 0.09 369 | 0.22 370 | 0.29 371 | 0.01 372 | 0.19 373 | 0.06 374 | 0.39 375 | 0.08 376 | 0.03 377 | 0.28 378 | 0.09 379 | 0.17 380 | 0.45 381 | 0.40 382 | 0.07 383 | 0.30 384 | 0.16 385 | 0.24 386 | 0.81 387 | 1.35 388 | 0.01 389 | 0.02 390 | 0.03 391 | 0.06 392 | 0.12 393 | 0.31 394 | 0.64 395 | 0.08 396 | 0.15 397 | 0.06 398 | 0.06 399 | 0.15 400 | 0.68 401 | 0.30 402 | 0.02 403 | 0.04 404 | 0.02 405 | 0.81 406 | 0.09 407 | 0.19 408 | 0.14 409 | 0.12 410 | 0.36 411 | 0.02 412 | 0.11 413 | 0.04 414 | 0.08 415 | 0.17 416 | 0.04 417 | 0.05 418 | 0.14 419 | 0.07 420 | 0.39 421 | 0.13 422 | 0.56 423 | 0.12 424 | 0.31 425 | 0.05 426 | 0.10 427 | 0.13 428 | 0.05 429 | 0.01 430 | 0.09 431 | 0.03 432 | 0.27 433 | 0.17 434 | 0.03 435 | 0.05 436 | 0.26 437 | 0.23 438 | 0.20 439 | 
0.76 440 | 0.05 441 | 0.02 442 | 0.01 443 | 0.20 444 | 0.21 445 | 0.02 446 | 0.04 447 | 0.16 448 | 0.32 449 | 0.43 450 | 0.20 451 | 0.13 452 | 0.10 453 | 0.20 454 | 0.08 455 | 0.81 456 | 0.11 457 | 0.09 458 | 0.26 459 | 0.15 460 | 0.36 461 | 0.18 462 | 0.10 463 | 0.34 464 | 0.56 465 | 0.09 466 | 0.15 467 | 0.14 468 | 0.15 469 | 0.22 470 | 0.33 471 | 0.04 472 | 0.07 473 | 0.09 474 | 0.18 475 | 0.08 476 | 0.07 477 | 0.07 478 | 0.68 479 | 0.27 480 | 0.21 481 | 0.11 482 | 0.07 483 | 0.44 484 | 0.13 485 | 0.04 486 | 0.39 487 | 0.14 488 | 0.10 489 | 0.08 490 | 0.02 491 | 0.57 492 | 0.35 493 | 0.17 494 | 0.21 495 | 0.14 496 | 0.77 497 | 0.06 498 | 0.34 499 | 0.15 500 | 0.29 501 | 0.08 502 | 0.72 503 | 0.31 504 | 0.20 505 | 0.10 506 | 0.01 507 | 0.24 508 | 0.07 509 | 0.22 510 | 0.49 511 | 0.03 512 | 0.18 513 | 0.47 514 | 0.37 515 | 0.17 516 | 0.42 517 | 0.02 518 | 0.22 519 | 0.12 520 | 0.01 521 | 0.34 522 | 0.41 523 | 0.27 524 | 0.07 525 | 0.30 526 | 0.09 527 | 0.27 528 | 0.28 529 | 0.15 530 | 0.26 531 | 0.01 532 | 0.06 533 | 0.35 534 | 0.03 535 | 0.26 536 | 0.05 537 | 0.18 538 | 0.46 539 | 0.12 540 | 0.23 541 | 0.32 542 | 0.08 543 | 0.26 544 | 0.82 545 | 0.10 546 | 0.69 547 | 0.15 548 | 0.01 549 | 0.39 550 | 0.04 551 | 0.13 552 | 0.34 553 | 0.13 554 | 0.13 555 | 0.30 556 | 0.29 557 | 0.23 558 | 0.01 559 | 0.38 560 | 0.04 561 | 0.08 562 | 0.15 563 | 0.10 564 | 0.62 565 | 0.83 566 | 0.11 567 | 0.71 568 | 0.08 569 | 0.61 570 | 0.18 571 | 0.05 572 | 0.20 573 | 0.12 574 | 0.10 575 | 0.03 576 | 0.11 577 | 0.20 578 | 0.16 579 | 0.10 580 | 0.03 581 | 0.23 582 | 0.12 583 | 0.01 584 | 0.12 585 | 0.17 586 | 0.14 587 | 0.10 588 | 0.02 589 | 0.13 590 | 0.06 591 | 0.21 592 | 0.50 593 | 0.04 594 | 0.42 595 | 0.29 596 | 0.08 597 | 0.01 598 | 0.30 599 | 0.45 600 | 0.06 601 | 0.25 602 | 0.02 603 | 0.06 604 | 0.02 605 | 0.17 606 | 0.10 607 | 0.28 608 | 0.21 609 | 0.28 610 | 0.30 611 | 0.02 612 | 0.02 613 | 0.28 614 | 0.09 615 | 0.71 616 | 0.06 617 | 0.12 618 | 0.29 619 | 0.05 620 | 0.27 621 
| 0.25 622 | 0.10 623 | 0.16 624 | 0.08 625 | 0.52 626 | 0.44 627 | 0.19 628 | 0.72 629 | 0.12 630 | 0.30 631 | 0.14 632 | 0.45 633 | 0.42 634 | 0.09 635 | 0.07 636 | 0.62 637 | 0.51 638 | 0.50 639 | 0.47 640 | 0.28 641 | 0.04 642 | 0.66 643 | 0.08 644 | 0.11 645 | 0.03 646 | 0.32 647 | 0.16 648 | 0.11 649 | 0.26 650 | 0.05 651 | 0.07 652 | 0.04 653 | 0.22 654 | 0.08 655 | 0.08 656 | 0.01 657 | 0.06 658 | 0.05 659 | 0.05 660 | 0.16 661 | 0.05 662 | 0.13 663 | 0.42 664 | 0.21 665 | 0.36 666 | 0.05 667 | 0.01 668 | 0.44 669 | 0.14 670 | 0.14 671 | 0.14 672 | 0.08 673 | 0.51 674 | 0.18 675 | 0.02 676 | 0.51 677 | 0.06 678 | 0.22 679 | 0.01 680 | 0.09 681 | 0.22 682 | 0.59 683 | 0.03 684 | 0.71 685 | 0.14 686 | 0.02 687 | 0.51 688 | 0.03 689 | 0.41 690 | 0.17 691 | 0.37 692 | 0.39 693 | 0.82 694 | 0.81 695 | 0.24 696 | 0.52 697 | 0.40 698 | 0.24 699 | 0.06 700 | 0.73 701 | 0.27 702 | 0.18 703 | 0.01 704 | 0.17 705 | 0.02 706 | 0.11 707 | 0.26 708 | 0.13 709 | 0.68 710 | 0.13 711 | 0.08 712 | 0.71 713 | 0.04 714 | 0.11 715 | 0.13 716 | 0.17 717 | 0.34 718 | 0.23 719 | 0.08 720 | 0.26 721 | 0.03 722 | 0.21 723 | 0.45 724 | 0.40 725 | 0.03 726 | 0.16 727 | 0.06 728 | 0.29 729 | 0.43 730 | 0.03 731 | 0.10 732 | 0.10 733 | 0.31 734 | 0.27 735 | 0.27 736 | 0.33 737 | 0.14 738 | 0.09 739 | 0.27 740 | 0.14 741 | 0.09 742 | 0.08 743 | 0.06 744 | 0.16 745 | 0.02 746 | 0.07 747 | 0.19 748 | 0.11 749 | 0.10 750 | 0.17 751 | 0.24 752 | 0.01 753 | 0.13 754 | 0.21 755 | 0.03 756 | 0.39 757 | 0.01 758 | 0.27 759 | 0.19 760 | 0.02 761 | 0.21 762 | 0.04 763 | 0.10 764 | 0.06 765 | 0.48 766 | 0.12 767 | 0.15 768 | 0.12 769 | 0.52 770 | 0.48 771 | 0.29 772 | 0.57 773 | 0.22 774 | 0.01 775 | 0.44 776 | 0.05 777 | 0.49 778 | 0.10 779 | 0.19 780 | 0.44 781 | 0.02 782 | 0.72 783 | 0.09 784 | 0.04 785 | 0.02 786 | 0.02 787 | 0.06 788 | 0.22 789 | 0.53 790 | 0.18 791 | 0.10 792 | 0.10 793 | 0.03 794 | 0.08 795 | 0.15 796 | 0.05 797 | 0.13 798 | 0.02 799 | 0.10 800 | 0.51 801 | 
--------------------------------------------------------------------------------