├── z-table.xlsx
├── images
│   ├── cud.png
│   ├── dud.png
│   ├── pdf.png
│   ├── pmf.png
│   ├── skew.png
│   ├── scaled.png
│   ├── union.png
│   ├── bar_chart.png
│   ├── dot_plot.png
│   ├── empirical.png
│   ├── kde_chart.png
│   ├── kurtosis.png
│   ├── shifted.png
│   ├── z-table.png
│   ├── independent.png
│   ├── line_chart.png
│   ├── ogive_chart.png
│   ├── plot_chart.png
│   ├── continuos_cdf.png
│   ├── discrete_cdf.png
│   ├── excel_chart_1.png
│   ├── intersection.png
│   ├── stat_diagram.jpg
│   ├── tree_diagram.png
│   ├── violin_chart.png
│   ├── histogram_chart.png
│   ├── homoscedasticity.png
│   ├── regression_chart.png
│   ├── box_whisker_chart.png
│   ├── heteroscedasticity.png
│   ├── normal_distribution.png
│   └── box_whisker_chart-01.png
├── datachart_template.ai
├── Statistical Workbook-Carlo Occhiena.xlsx
├── The Statistics Handbook_COcchiena_main.pdf
├── LICENSE
├── README.md
├── z_score_table_t_score_analysis.ipynb
└── main.tex
/z-table.xlsx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/z-table.xlsx
--------------------------------------------------------------------------------
/images/cud.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/cud.png
--------------------------------------------------------------------------------
/images/dud.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/dud.png
--------------------------------------------------------------------------------
/images/pdf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/pdf.png
--------------------------------------------------------------------------------
/images/pmf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/pmf.png
--------------------------------------------------------------------------------
/images/skew.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/skew.png
--------------------------------------------------------------------------------
/images/scaled.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/scaled.png
--------------------------------------------------------------------------------
/images/union.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/union.png
--------------------------------------------------------------------------------
/images/bar_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/bar_chart.png
--------------------------------------------------------------------------------
/images/dot_plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/dot_plot.png
--------------------------------------------------------------------------------
/images/empirical.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/empirical.png
--------------------------------------------------------------------------------
/images/kde_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/kde_chart.png
--------------------------------------------------------------------------------
/images/kurtosis.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/kurtosis.png
--------------------------------------------------------------------------------
/images/shifted.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/shifted.png
--------------------------------------------------------------------------------
/images/z-table.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/z-table.png
--------------------------------------------------------------------------------
/datachart_template.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/datachart_template.ai
--------------------------------------------------------------------------------
/images/independent.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/independent.png
--------------------------------------------------------------------------------
/images/line_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/line_chart.png
--------------------------------------------------------------------------------
/images/ogive_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/ogive_chart.png
--------------------------------------------------------------------------------
/images/plot_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/plot_chart.png
--------------------------------------------------------------------------------
/images/continuos_cdf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/continuos_cdf.png
--------------------------------------------------------------------------------
/images/discrete_cdf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/discrete_cdf.png
--------------------------------------------------------------------------------
/images/excel_chart_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/excel_chart_1.png
--------------------------------------------------------------------------------
/images/intersection.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/intersection.png
--------------------------------------------------------------------------------
/images/stat_diagram.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/stat_diagram.jpg
--------------------------------------------------------------------------------
/images/tree_diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/tree_diagram.png
--------------------------------------------------------------------------------
/images/violin_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/violin_chart.png
--------------------------------------------------------------------------------
/images/histogram_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/histogram_chart.png
--------------------------------------------------------------------------------
/images/homoscedasticity.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/homoscedasticity.png
--------------------------------------------------------------------------------
/images/regression_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/regression_chart.png
--------------------------------------------------------------------------------
/images/box_whisker_chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/box_whisker_chart.png
--------------------------------------------------------------------------------
/images/heteroscedasticity.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/heteroscedasticity.png
--------------------------------------------------------------------------------
/images/normal_distribution.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/normal_distribution.png
--------------------------------------------------------------------------------
/images/box_whisker_chart-01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/images/box_whisker_chart-01.png
--------------------------------------------------------------------------------
/Statistical Workbook-Carlo Occhiena.xlsx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/Statistical Workbook-Carlo Occhiena.xlsx
--------------------------------------------------------------------------------
/The Statistics Handbook_COcchiena_main.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/carloocchiena/the_statistics_handbook/HEAD/The Statistics Handbook_COcchiena_main.pdf
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Attribution 4.0 International (CC BY 4.0)
2 |
3 | Copyright (c) 2023 Carlo Occhiena
4 |
5 | [LINK TO CREATIVE COMMONS LICENSE](https://creativecommons.org/licenses/by/4.0/)
6 |
7 | This document is distributed under a Creative Common, Free Culture, License
8 |
9 | This work was created as a means of learning and diffusion for study purposes, so I hope it will be freely shared and disseminated.
10 |
11 | While producing it, I mainly observed a two-pronged approach:
12 | - Maintaining the accuracy of definitions, formulas, and descriptions.
13 | - Describing everything in my own wording that does not infringe upon the intellectual property of the sources I have drawn on.
14 |
15 | For obvious reasons, however, mathematical definitions can only be rephrased up to a point, so where I have in good faith reproduced licensed material, I am happy to take action to remove it.
16 | You can contact me at the email _____________
17 |
18 | All of this stems from the initial desire to disseminate knowledge in a streamlined and unconstrained manner.
19 |
20 | This work was originally released by the author as a freely downloadable pdf.
21 |
22 |
23 | Machine-readable license metadata:
24 | The Statistics Handbook by Carlo Occhiena is distributed under a Creative Commons Attribution 4.0 International License.
25 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # The Statistics Handbook
2 | _The statistics handbook open source repository_
3 |
4 | ### Just give me the PDF and leave me alone
5 | Ok, fine: click [here](https://github.com/carloocchiena/the_statistics_handbook/blob/main/The%20Statistics%20Handbook_COcchiena_main.pdf) (it's just the link from the above repo).
6 |
7 | ## Version
8 | Version 0.2 is out:
9 | - Revisited images.
10 | - Updated some explanations and formulas.
11 | - Fixed typos and edited the general document layout.
12 | - Updated the list of contributors.
13 |
14 | Thanks to all the proofreaders and contributors!
15 | Can't believe this repo has more than 200 stars!
16 |
17 | ## Contents of this repo
18 | - **LICENSE:** the license definition for this package.
19 | - **README.md:** the readme file.
20 | - **The Statistics Handbook:** the compiled version of the handbook.
21 | - **Statistical Workbook.xlsx:** the .xlsx file with exercises and demonstrations.
22 | - **datachart_template.ai:** the .ai file with the charts and image source file.
23 | - **main.tex:** the LaTeX source code of the handbook.
24 | - **z-table.xlsx:** the z-table generated with an Excel formula.
25 | - **z_score_table_t_score_analysis.ipynb:** interactive Jupyter Notebook to explore t-score and z-score testing.
26 |
27 | ## Scope of this handbook
28 | _"Statistical analysis is the best way to predict events we do not know using information we do know."_
29 |
30 | We are used to talking generally about mathematical skills, thinking perhaps of derivatives, integrals, theorems, and graphs of functions.
31 |
32 | Often we do so in an abstract way, as if they were purely logical constructs with only narrow applications. In doing so, we forget that not only are mathematical elements present in every single action, but that the quantitative sciences are part of everyday life.
33 |
34 | Specifically, I believe that statistics is among all the mathematical sciences the most fascinating because of the vastness and incredible opportunities for its application.
35 |
36 | Every decision we make can be traced back to statistical phenomena, either innate (such as fear of the dark, because darkness increases the likelihood of encountering dangerous animals) or conscious (today I think it is likely to rain, so I will take my umbrella).
37 |
38 | On the other hand, approaching even basic statistical calculations (e.g., the infamous probability of winning the lottery) requires nontrivial skills: the concepts and formulas involved are not always complex, but they certainly yield very different results if applied thoughtlessly. I am convinced that the misuse of mathematical thinking is worse than its absence. This work is in fact also intended to combat my own limitations through study and application.
39 |
40 | In this handbook, I wanted to create a path from the basics, including terminology (often one of the main obstacles for the layperson approaching the subject), to the formulation of hypotheses, validations, and verification of formulas.
41 |
42 | The path was constructed by consulting a large number of sources, cited in the appendix, over long months of study and in-depth verification of the results and evidence, precisely because first and foremost I wanted to verify my own understanding before, of course, writing about it.
43 |
44 | Before releasing this publication, which is distributed under a Creative Commons, Free Culture license, I asked for a review from eminent acquaintances with strong academic and professional backgrounds. I would like to thank all of them endlessly (their names can be found in the appropriate section). Nevertheless, I remain receptive to additions, insights, and corrections, taking full responsibility for any shortcomings and errors, which were certainly made in good faith.
45 |
46 | Happy reading!
47 |
48 | Carlo, 26th of May 2024.
49 |
50 | ## Featured
51 | - [Positively voted on HackerNews](https://news.ycombinator.com/item?id=36111993) (111 upvotes, 21 comments). Also made the "Buzzing on HN daily rank" for that day.
52 | - Cited in the popular newsletter "La Cultura del Dato" by Stefano Gatti, and "most read" article for that week ([episode #54](https://stefanogatti.substack.com/p/laculturadeldato-053-a48)).
53 | - LibHunt top 10 TeX project tagged "mathematics" ([url](https://www.libhunt.com/l/tex/topic/mathematics)).
54 |
--------------------------------------------------------------------------------
/z_score_table_t_score_analysis.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "provenance": []
7 | },
8 | "kernelspec": {
9 | "name": "python3",
10 | "display_name": "Python 3"
11 | },
12 | "language_info": {
13 | "name": "python"
14 | }
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "source": [
20 | "# Create a z-score table\n"
21 | ],
22 | "metadata": {
23 | "id": "Zq0NyiyXe8qr"
24 | }
25 | },
26 | {
27 | "cell_type": "code",
28 | "execution_count": null,
29 | "metadata": {
30 | "id": "SEmIiPM7fbqC"
31 | },
32 | "outputs": [],
33 | "source": [
34 | "import numpy as np\n",
35 | "import pandas as pd\n",
36 | "from scipy.integrate import quad"
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "source": [
42 | "def normal_probability_density(x):\n",
43 | " constant = 1.0 / np.sqrt(2 * np.pi)\n",
44 | " return(constant * np.exp((-x**2) / 2))"
45 | ],
46 | "metadata": {
47 | "id": "rfMB0nxHfow2"
48 | },
49 | "execution_count": null,
50 | "outputs": []
51 | },
52 | {
53 | "cell_type": "code",
54 | "source": [
55 | "standard_normal_table = pd.DataFrame(data=[],\n",
56 | " index = np.round(np.arange(0, 3.5, .1), 2),\n",
57 | " columns=np.round(np.arange(0.00, 0.1, 0.01), 2))\n",
58 | "\n",
59 | "for index in standard_normal_table.index:\n",
60 | " for column in standard_normal_table.columns:\n",
61 | " z = np.round(index + column, 2)\n",
62 | " value, _ = quad(normal_probability_density, np.NINF, z)\n",
63 | " standard_normal_table.loc[index, column] = value\n",
64 | "\n",
65 | "standard_normal_table.index = standard_normal_table.index.astype(str)\n",
66 | "standard_normal_table.columns = (str(column).ljust(4, '0') for column in standard_normal_table.columns)\n",
67 | "\n",
68 | "standard_normal_table"
69 | ],
70 | "metadata": {
71 | "colab": {
72 | "base_uri": "https://localhost:8080/",
73 | "height": 1000
74 | },
75 | "id": "mfNhwkrdfeRc",
76 | "outputId": "bf285ee7-b940-4e43-e1c6-617a7333ea3f"
77 | },
78 | "execution_count": null,
79 | "outputs": [
80 | {
81 | "output_type": "execute_result",
82 | "data": {
83 | "text/plain": [
84 | " 0.00 0.01 0.02 0.03 0.04 0.05 0.06 \\\n",
85 | "0.0 0.5 0.503989 0.507978 0.511966 0.515953 0.519939 0.523922 \n",
86 | "0.1 0.539828 0.543795 0.547758 0.551717 0.55567 0.559618 0.563559 \n",
87 | "0.2 0.57926 0.583166 0.587064 0.590954 0.594835 0.598706 0.602568 \n",
88 | "0.3 0.617911 0.62172 0.625516 0.6293 0.633072 0.636831 0.640576 \n",
89 | "0.4 0.655422 0.659097 0.662757 0.666402 0.670031 0.673645 0.677242 \n",
90 | "0.5 0.691462 0.694974 0.698468 0.701944 0.705401 0.70884 0.71226 \n",
91 | "0.6 0.725747 0.729069 0.732371 0.735653 0.738914 0.742154 0.745373 \n",
92 | "0.7 0.758036 0.761148 0.764238 0.767305 0.77035 0.773373 0.776373 \n",
93 | "0.8 0.788145 0.79103 0.793892 0.796731 0.799546 0.802337 0.805105 \n",
94 | "0.9 0.81594 0.818589 0.821214 0.823814 0.826391 0.828944 0.831472 \n",
95 | "1.0 0.841345 0.843752 0.846136 0.848495 0.85083 0.853141 0.855428 \n",
96 | "1.1 0.864334 0.8665 0.868643 0.870762 0.872857 0.874928 0.876976 \n",
97 | "1.2 0.88493 0.886861 0.888768 0.890651 0.892512 0.89435 0.896165 \n",
98 | "1.3 0.9032 0.904902 0.906582 0.908241 0.909877 0.911492 0.913085 \n",
99 | "1.4 0.919243 0.92073 0.922196 0.923641 0.925066 0.926471 0.927855 \n",
100 | "1.5 0.933193 0.934478 0.935745 0.936992 0.93822 0.939429 0.94062 \n",
101 | "1.6 0.945201 0.946301 0.947384 0.948449 0.949497 0.950529 0.951543 \n",
102 | "1.7 0.955435 0.956367 0.957284 0.958185 0.95907 0.959941 0.960796 \n",
103 | "1.8 0.96407 0.964852 0.96562 0.966375 0.967116 0.967843 0.968557 \n",
104 | "1.9 0.971283 0.971933 0.972571 0.973197 0.97381 0.974412 0.975002 \n",
105 | "2.0 0.97725 0.977784 0.978308 0.978822 0.979325 0.979818 0.980301 \n",
106 | "2.1 0.982136 0.982571 0.982997 0.983414 0.983823 0.984222 0.984614 \n",
107 | "2.2 0.986097 0.986447 0.986791 0.987126 0.987455 0.987776 0.988089 \n",
108 | "2.3 0.989276 0.989556 0.98983 0.990097 0.990358 0.990613 0.990863 \n",
109 | "2.4 0.991802 0.992024 0.99224 0.992451 0.992656 0.992857 0.993053 \n",
110 | "2.5 0.99379 0.993963 0.994132 0.994297 0.994457 0.994614 0.994766 \n",
111 | "2.6 0.995339 0.995473 0.995604 0.995731 0.995855 0.995975 0.996093 \n",
112 | "2.7 0.996533 0.996636 0.996736 0.996833 0.996928 0.99702 0.99711 \n",
113 | "2.8 0.997445 0.997523 0.997599 0.997673 0.997744 0.997814 0.997882 \n",
114 | "2.9 0.998134 0.998193 0.99825 0.998305 0.998359 0.998411 0.998462 \n",
115 | "3.0 0.99865 0.998694 0.998736 0.998777 0.998817 0.998856 0.998893 \n",
116 | "3.1 0.999032 0.999065 0.999096 0.999126 0.999155 0.999184 0.999211 \n",
117 | "3.2 0.999313 0.999336 0.999359 0.999381 0.999402 0.999423 0.999443 \n",
118 | "3.3 0.999517 0.999534 0.99955 0.999566 0.999581 0.999596 0.99961 \n",
119 | "3.4 0.999663 0.999675 0.999687 0.999698 0.999709 0.99972 0.99973 \n",
120 | "\n",
121 | " 0.07 0.08 0.09 \n",
122 | "0.0 0.527903 0.531881 0.535856 \n",
123 | "0.1 0.567495 0.571424 0.575345 \n",
124 | "0.2 0.60642 0.610261 0.614092 \n",
125 | "0.3 0.644309 0.648027 0.651732 \n",
126 | "0.4 0.680822 0.684386 0.687933 \n",
127 | "0.5 0.715661 0.719043 0.722405 \n",
128 | "0.6 0.748571 0.751748 0.754903 \n",
129 | "0.7 0.77935 0.782305 0.785236 \n",
130 | "0.8 0.80785 0.81057 0.813267 \n",
131 | "0.9 0.833977 0.836457 0.838913 \n",
132 | "1.0 0.85769 0.859929 0.862143 \n",
133 | "1.1 0.879 0.881 0.882977 \n",
134 | "1.2 0.897958 0.899727 0.901475 \n",
135 | "1.3 0.914657 0.916207 0.917736 \n",
136 | "1.4 0.929219 0.930563 0.931888 \n",
137 | "1.5 0.941792 0.942947 0.944083 \n",
138 | "1.6 0.95254 0.953521 0.954486 \n",
139 | "1.7 0.961636 0.962462 0.963273 \n",
140 | "1.8 0.969258 0.969946 0.970621 \n",
141 | "1.9 0.975581 0.976148 0.976705 \n",
142 | "2.0 0.980774 0.981237 0.981691 \n",
143 | "2.1 0.984997 0.985371 0.985738 \n",
144 | "2.2 0.988396 0.988696 0.988989 \n",
145 | "2.3 0.991106 0.991344 0.991576 \n",
146 | "2.4 0.993244 0.993431 0.993613 \n",
147 | "2.5 0.994915 0.99506 0.995201 \n",
148 | "2.6 0.996207 0.996319 0.996427 \n",
149 | "2.7 0.997197 0.997282 0.997365 \n",
150 | "2.8 0.997948 0.998012 0.998074 \n",
151 | "2.9 0.998511 0.998559 0.998605 \n",
152 | "3.0 0.99893 0.998965 0.998999 \n",
153 | "3.1 0.999238 0.999264 0.999289 \n",
154 | "3.2 0.999462 0.999481 0.999499 \n",
155 | "3.3 0.999624 0.999638 0.999651 \n",
156 | "3.4 0.99974 0.999749 0.999758 "
157 | ],
158 | "text/html": [
159 | "\n",
160 | "
\n",
161 | "
\n",
162 | "
\n",
163 | "\n",
176 | "
\n",
177 | " \n",
178 | " \n",
179 | " | \n",
180 | " 0.00 | \n",
181 | " 0.01 | \n",
182 | " 0.02 | \n",
183 | " 0.03 | \n",
184 | " 0.04 | \n",
185 | " 0.05 | \n",
186 | " 0.06 | \n",
187 | " 0.07 | \n",
188 | " 0.08 | \n",
189 | " 0.09 | \n",
190 | "
\n",
191 | " \n",
192 | " \n",
193 | " \n",
194 | " | 0.0 | \n",
195 | " 0.5 | \n",
196 | " 0.503989 | \n",
197 | " 0.507978 | \n",
198 | " 0.511966 | \n",
199 | " 0.515953 | \n",
200 | " 0.519939 | \n",
201 | " 0.523922 | \n",
202 | " 0.527903 | \n",
203 | " 0.531881 | \n",
204 | " 0.535856 | \n",
205 | "
\n",
206 | " \n",
207 | " | 0.1 | \n",
208 | " 0.539828 | \n",
209 | " 0.543795 | \n",
210 | " 0.547758 | \n",
211 | " 0.551717 | \n",
212 | " 0.55567 | \n",
213 | " 0.559618 | \n",
214 | " 0.563559 | \n",
215 | " 0.567495 | \n",
216 | " 0.571424 | \n",
217 | " 0.575345 | \n",
218 | "
\n",
219 | " \n",
220 | " | 0.2 | \n",
221 | " 0.57926 | \n",
222 | " 0.583166 | \n",
223 | " 0.587064 | \n",
224 | " 0.590954 | \n",
225 | " 0.594835 | \n",
226 | " 0.598706 | \n",
227 | " 0.602568 | \n",
228 | " 0.60642 | \n",
229 | " 0.610261 | \n",
230 | " 0.614092 | \n",
231 | "
\n",
232 | " \n",
233 | " | 0.3 | \n",
234 | " 0.617911 | \n",
235 | " 0.62172 | \n",
236 | " 0.625516 | \n",
237 | " 0.6293 | \n",
238 | " 0.633072 | \n",
239 | " 0.636831 | \n",
240 | " 0.640576 | \n",
241 | " 0.644309 | \n",
242 | " 0.648027 | \n",
243 | " 0.651732 | \n",
244 | "
\n",
245 | " \n",
246 | " | 0.4 | \n",
247 | " 0.655422 | \n",
248 | " 0.659097 | \n",
249 | " 0.662757 | \n",
250 | " 0.666402 | \n",
251 | " 0.670031 | \n",
252 | " 0.673645 | \n",
253 | " 0.677242 | \n",
254 | " 0.680822 | \n",
255 | " 0.684386 | \n",
256 | " 0.687933 | \n",
257 | "
\n",
258 | " \n",
259 | " | 0.5 | \n",
260 | " 0.691462 | \n",
261 | " 0.694974 | \n",
262 | " 0.698468 | \n",
263 | " 0.701944 | \n",
264 | " 0.705401 | \n",
265 | " 0.70884 | \n",
266 | " 0.71226 | \n",
267 | " 0.715661 | \n",
268 | " 0.719043 | \n",
269 | " 0.722405 | \n",
270 | "
\n",
271 | " \n",
272 | " | 0.6 | \n",
273 | " 0.725747 | \n",
274 | " 0.729069 | \n",
275 | " 0.732371 | \n",
276 | " 0.735653 | \n",
277 | " 0.738914 | \n",
278 | " 0.742154 | \n",
279 | " 0.745373 | \n",
280 | " 0.748571 | \n",
281 | " 0.751748 | \n",
282 | " 0.754903 | \n",
283 | "
\n",
284 | " \n",
285 | " | 0.7 | \n",
286 | " 0.758036 | \n",
287 | " 0.761148 | \n",
288 | " 0.764238 | \n",
289 | " 0.767305 | \n",
290 | " 0.77035 | \n",
291 | " 0.773373 | \n",
292 | " 0.776373 | \n",
293 | " 0.77935 | \n",
294 | " 0.782305 | \n",
295 | " 0.785236 | \n",
296 | "
\n",
297 | " \n",
298 | " | 0.8 | \n",
299 | " 0.788145 | \n",
300 | " 0.79103 | \n",
301 | " 0.793892 | \n",
302 | " 0.796731 | \n",
303 | " 0.799546 | \n",
304 | " 0.802337 | \n",
305 | " 0.805105 | \n",
306 | " 0.80785 | \n",
307 | " 0.81057 | \n",
308 | " 0.813267 | \n",
309 | "
\n",
310 | " \n",
311 | " | 0.9 | \n",
312 | " 0.81594 | \n",
313 | " 0.818589 | \n",
314 | " 0.821214 | \n",
315 | " 0.823814 | \n",
316 | " 0.826391 | \n",
317 | " 0.828944 | \n",
318 | " 0.831472 | \n",
319 | " 0.833977 | \n",
320 | " 0.836457 | \n",
321 | " 0.838913 | \n",
322 | "
\n",
323 | " \n",
324 | " | 1.0 | \n",
325 | " 0.841345 | \n",
326 | " 0.843752 | \n",
327 | " 0.846136 | \n",
328 | " 0.848495 | \n",
329 | " 0.85083 | \n",
330 | " 0.853141 | \n",
331 | " 0.855428 | \n",
332 | " 0.85769 | \n",
333 | " 0.859929 | \n",
334 | " 0.862143 | \n",
335 | "
\n",
336 | " \n",
337 | " | 1.1 | \n",
338 | " 0.864334 | \n",
339 | " 0.8665 | \n",
340 | " 0.868643 | \n",
341 | " 0.870762 | \n",
342 | " 0.872857 | \n",
343 | " 0.874928 | \n",
344 | " 0.876976 | \n",
345 | " 0.879 | \n",
346 | " 0.881 | \n",
347 | " 0.882977 | \n",
348 | "
\n",
349 | " \n",
350 | " | 1.2 | \n",
351 | " 0.88493 | \n",
352 | " 0.886861 | \n",
353 | " 0.888768 | \n",
354 | " 0.890651 | \n",
355 | " 0.892512 | \n",
356 | " 0.89435 | \n",
357 | " 0.896165 | \n",
358 | " 0.897958 | \n",
359 | " 0.899727 | \n",
360 | " 0.901475 | \n",
361 | "
\n",
362 | " \n",
363 | " | 1.3 | \n",
364 | " 0.9032 | \n",
365 | " 0.904902 | \n",
366 | " 0.906582 | \n",
367 | " 0.908241 | \n",
368 | " 0.909877 | \n",
369 | " 0.911492 | \n",
370 | " 0.913085 | \n",
371 | " 0.914657 | \n",
372 | " 0.916207 | \n",
373 | " 0.917736 | \n",
374 | "
\n",
375 | " \n",
376 | " | 1.4 | \n",
377 | " 0.919243 | \n",
378 | " 0.92073 | \n",
379 | " 0.922196 | \n",
380 | " 0.923641 | \n",
381 | " 0.925066 | \n",
382 | " 0.926471 | \n",
383 | " 0.927855 | \n",
384 | " 0.929219 | \n",
385 | " 0.930563 | \n",
386 | " 0.931888 | \n",
387 | "
\n",
388 | " \n",
389 | " | 1.5 | \n",
390 | " 0.933193 | \n",
391 | " 0.934478 | \n",
392 | " 0.935745 | \n",
393 | " 0.936992 | \n",
394 | " 0.93822 | \n",
395 | " 0.939429 | \n",
396 | " 0.94062 | \n",
397 | " 0.941792 | \n",
398 | " 0.942947 | \n",
399 | " 0.944083 | \n",
400 | "
\n",
401 | " \n",
402 | " | 1.6 | \n",
403 | " 0.945201 | \n",
404 | " 0.946301 | \n",
405 | " 0.947384 | \n",
406 | " 0.948449 | \n",
407 | " 0.949497 | \n",
408 | " 0.950529 | \n",
409 | " 0.951543 | \n",
410 | " 0.95254 | \n",
411 | " 0.953521 | \n",
412 | " 0.954486 | \n",
413 | "
\n",
414 | " \n",
415 | " | 1.7 | \n",
416 | " 0.955435 | \n",
417 | " 0.956367 | \n",
418 | " 0.957284 | \n",
419 | " 0.958185 | \n",
420 | " 0.95907 | \n",
421 | " 0.959941 | \n",
422 | " 0.960796 | \n",
423 | " 0.961636 | \n",
424 | " 0.962462 | \n",
425 | " 0.963273 | \n",
426 | "
\n",
427 | " \n",
428 | " | 1.8 | \n",
429 | " 0.96407 | \n",
430 | " 0.964852 | \n",
431 | " 0.96562 | \n",
432 | " 0.966375 | \n",
433 | " 0.967116 | \n",
434 | " 0.967843 | \n",
435 | " 0.968557 | \n",
436 | " 0.969258 | \n",
437 | " 0.969946 | \n",
438 | " 0.970621 | \n",
439 | "
\n",
440 | " \n",
441 | " | 1.9 | \n",
442 | " 0.971283 | \n",
443 | " 0.971933 | \n",
444 | " 0.972571 | \n",
445 | " 0.973197 | \n",
446 | " 0.97381 | \n",
447 | " 0.974412 | \n",
448 | " 0.975002 | \n",
449 | " 0.975581 | \n",
450 | " 0.976148 | \n",
451 | " 0.976705 | \n",
452 | "
\n",
453 | " \n",
454 | " | 2.0 | \n",
455 | " 0.97725 | \n",
456 | " 0.977784 | \n",
457 | " 0.978308 | \n",
458 | " 0.978822 | \n",
459 | " 0.979325 | \n",
460 | " 0.979818 | \n",
461 | " 0.980301 | \n",
462 | " 0.980774 | \n",
463 | " 0.981237 | \n",
464 | " 0.981691 | \n",
465 | "
\n",
466 | " \n",
467 | " | 2.1 | \n",
468 | " 0.982136 | \n",
469 | " 0.982571 | \n",
470 | " 0.982997 | \n",
471 | " 0.983414 | \n",
472 | " 0.983823 | \n",
473 | " 0.984222 | \n",
474 | " 0.984614 | \n",
475 | " 0.984997 | \n",
476 | " 0.985371 | \n",
477 | " 0.985738 | \n",
478 | "
\n",
479 | " \n",
480 | " | 2.2 | \n",
481 | " 0.986097 | \n",
482 | " 0.986447 | \n",
483 | " 0.986791 | \n",
484 | " 0.987126 | \n",
485 | " 0.987455 | \n",
486 | " 0.987776 | \n",
487 | " 0.988089 | \n",
488 | " 0.988396 | \n",
489 | " 0.988696 | \n",
490 | " 0.988989 | \n",
491 | "
\n",
492 | " \n",
493 | " | 2.3 | \n",
494 | " 0.989276 | \n",
495 | " 0.989556 | \n",
496 | " 0.98983 | \n",
497 | " 0.990097 | \n",
498 | " 0.990358 | \n",
499 | " 0.990613 | \n",
500 | " 0.990863 | \n",
501 | " 0.991106 | \n",
502 | " 0.991344 | \n",
503 | " 0.991576 | \n",
504 | "
\n",
505 | " \n",
506 | " | 2.4 | \n",
507 | " 0.991802 | \n",
508 | " 0.992024 | \n",
509 | " 0.99224 | \n",
510 | " 0.992451 | \n",
511 | " 0.992656 | \n",
512 | " 0.992857 | \n",
513 | " 0.993053 | \n",
514 | " 0.993244 | \n",
515 | " 0.993431 | \n",
516 | " 0.993613 | \n",
517 | "
\n",
518 | " \n",
519 | " | 2.5 | \n",
520 | " 0.99379 | \n",
521 | " 0.993963 | \n",
522 | " 0.994132 | \n",
523 | " 0.994297 | \n",
524 | " 0.994457 | \n",
525 | " 0.994614 | \n",
526 | " 0.994766 | \n",
527 | " 0.994915 | \n",
528 | " 0.99506 | \n",
529 | " 0.995201 | \n",
530 | "
\n",
531 | " \n",
532 | " | 2.6 | \n",
533 | " 0.995339 | \n",
534 | " 0.995473 | \n",
535 | " 0.995604 | \n",
536 | " 0.995731 | \n",
537 | " 0.995855 | \n",
538 | " 0.995975 | \n",
539 | " 0.996093 | \n",
540 | " 0.996207 | \n",
541 | " 0.996319 | \n",
542 | " 0.996427 | \n",
543 | "
\n",
544 | " \n",
545 | " | 2.7 | \n",
546 | " 0.996533 | \n",
547 | " 0.996636 | \n",
548 | " 0.996736 | \n",
549 | " 0.996833 | \n",
550 | " 0.996928 | \n",
551 | " 0.99702 | \n",
552 | " 0.99711 | \n",
553 | " 0.997197 | \n",
554 | " 0.997282 | \n",
555 | " 0.997365 | \n",
556 | "
\n",
557 | " \n",
558 | " | 2.8 | \n",
559 | " 0.997445 | \n",
560 | " 0.997523 | \n",
561 | " 0.997599 | \n",
562 | " 0.997673 | \n",
563 | " 0.997744 | \n",
564 | " 0.997814 | \n",
565 | " 0.997882 | \n",
566 | " 0.997948 | \n",
567 | " 0.998012 | \n",
568 | " 0.998074 | \n",
569 | "
\n",
570 | " \n",
571 | " | 2.9 | \n",
572 | " 0.998134 | \n",
573 | " 0.998193 | \n",
574 | " 0.99825 | \n",
575 | " 0.998305 | \n",
576 | " 0.998359 | \n",
577 | " 0.998411 | \n",
578 | " 0.998462 | \n",
579 | " 0.998511 | \n",
580 | " 0.998559 | \n",
581 | " 0.998605 | \n",
582 | "
\n",
583 | " \n",
584 | " | 3.0 | \n",
585 | " 0.99865 | \n",
586 | " 0.998694 | \n",
587 | " 0.998736 | \n",
588 | " 0.998777 | \n",
589 | " 0.998817 | \n",
590 | " 0.998856 | \n",
591 | " 0.998893 | \n",
592 | " 0.99893 | \n",
593 | " 0.998965 | \n",
594 | " 0.998999 | \n",
595 | "
\n",
596 | " \n",
597 | " | 3.1 | \n",
598 | " 0.999032 | \n",
599 | " 0.999065 | \n",
600 | " 0.999096 | \n",
601 | " 0.999126 | \n",
602 | " 0.999155 | \n",
603 | " 0.999184 | \n",
604 | " 0.999211 | \n",
605 | " 0.999238 | \n",
606 | " 0.999264 | \n",
607 | " 0.999289 | \n",
608 | "
\n",
609 | " \n",
610 | " | 3.2 | \n",
611 | " 0.999313 | \n",
612 | " 0.999336 | \n",
613 | " 0.999359 | \n",
614 | " 0.999381 | \n",
615 | " 0.999402 | \n",
616 | " 0.999423 | \n",
617 | " 0.999443 | \n",
618 | " 0.999462 | \n",
619 | " 0.999481 | \n",
620 | " 0.999499 | \n",
621 | "
\n",
622 | " \n",
623 | " | 3.3 | \n",
624 | " 0.999517 | \n",
625 | " 0.999534 | \n",
626 | " 0.99955 | \n",
627 | " 0.999566 | \n",
628 | " 0.999581 | \n",
629 | " 0.999596 | \n",
630 | " 0.99961 | \n",
631 | " 0.999624 | \n",
632 | " 0.999638 | \n",
633 | " 0.999651 | \n",
634 | "
\n",
635 | " \n",
636 | " | 3.4 | \n",
637 | " 0.999663 | \n",
638 | " 0.999675 | \n",
639 | " 0.999687 | \n",
640 | " 0.999698 | \n",
641 | " 0.999709 | \n",
642 | " 0.99972 | \n",
643 | " 0.99973 | \n",
644 | " 0.99974 | \n",
645 | " 0.999749 | \n",
646 | " 0.999758 | \n",
647 | "
\n",
648 | " \n",
649 | "
\n",
650 | "
\n",
651 | "
\n",
661 | " \n",
662 | " \n",
699 | "\n",
700 | " \n",
724 | "
\n",
725 | "
\n",
726 | " "
727 | ]
728 | },
729 | "metadata": {},
730 | "execution_count": 11
731 | }
732 | ]
733 | },
734 | {
735 | "cell_type": "markdown",
736 | "source": [
737 | "# T-student test from two datasets"
738 | ],
739 | "metadata": {
740 | "id": "VOAoSyO-fTTT"
741 | }
742 | },
743 | {
744 | "cell_type": "code",
745 | "source": [
746 | "# packages\n",
747 | "\n",
748 | "from math import sqrt\n",
749 | "from numpy.random import seed\n",
750 | "from numpy.random import randn\n",
751 | "from numpy import mean\n",
752 | "from scipy.stats import sem\n",
753 | "from scipy.stats import t"
754 | ],
755 | "metadata": {
756 | "id": "7tHTg7cOfUOs"
757 | },
758 | "execution_count": null,
759 | "outputs": []
760 | },
761 | {
762 | "cell_type": "code",
763 | "source": [
764 | "# input data\n",
765 | "\n",
766 | "seed(1)\n",
767 | "dataset_1 = 6 * randn(300) + 30\n",
768 | "dataset_2 = 6 * randn(300) + 40\n",
769 | "alpha = 0.05"
770 | ],
771 | "metadata": {
772 | "id": "QUu8FZrpfXJD"
773 | },
774 | "execution_count": null,
775 | "outputs": []
776 | },
777 | {
778 | "cell_type": "code",
779 | "source": [
780 | "# t_test function calculator\n",
781 | "\n",
782 | "def t_test(dataset_1: list, dataset_2: list, alpha: float) -> float:\n",
783 | " \"\"\"Calculate T test from two dataset\n",
784 | " dataset_1: [list] the first dataset\n",
785 | " dataset_2: [list] the second dataset\n",
786 | "\n",
787 | " return: [int] the p-value for the distributions\n",
788 | " \"\"\"\n",
789 | " mean_1, mean_2 = mean(dataset_1), mean(dataset_2)\n",
790 | "\n",
791 | " stand_err_1, stand_err_2 = sem(dataset_1), sem(dataset_2)\n",
792 | "\n",
793 | " stand_err_diff = sqrt(stand_err_1**2 + stand_err_2**2)\n",
794 | "\n",
795 | " t_stat = (mean_1 - mean_2) / stand_err_diff\n",
796 | "\n",
797 | " df = (len(dataset_1) + len(dataset_2)) - 2\n",
798 | "\n",
799 | " critical = t.ppf(1 - alpha, df)\n",
800 | "\n",
801 | " p_value = (1 - t.cdf(abs(t_stat), df)) * 2\n",
802 | "\n",
803 | " return t_stat, df, critical, p_value\n"
804 | ],
805 | "metadata": {
806 | "id": "yv1PbSICff1r"
807 | },
808 | "execution_count": null,
809 | "outputs": []
810 | },
811 | {
812 | "cell_type": "code",
813 | "source": [
814 | "# solve for input values\n",
815 | "\n",
816 | "t_stat, df, critical, p_value = t_test(dataset_1, dataset_2, alpha)\n",
817 | "\n",
818 | "print(f\"t_stat = {t_stat:.2f}, df = {df}, critical = {critical:.2f}, p-value = {p_value:.4f}\")\n"
819 | ],
820 | "metadata": {
821 | "colab": {
822 | "base_uri": "https://localhost:8080/"
823 | },
824 | "id": "6zpgF9R7hH2s",
825 | "outputId": "9a95ee75-d033-49bb-ae66-889e61d132ea"
826 | },
827 | "execution_count": null,
828 | "outputs": [
829 | {
830 | "output_type": "stream",
831 | "name": "stdout",
832 | "text": [
833 | "t_stat = -20.20, df = 598, critical = 1.65, p-value = 0.0000\n"
834 | ]
835 | }
836 | ]
837 | },
838 | {
839 | "cell_type": "code",
840 | "source": [
841 | "# critical value test\n",
842 | "\n",
843 | "if abs(t_stat) <= critical:\n",
844 | " print(\"accept H0\")\n",
845 | "else:\n",
846 | " print(\"reject H0\")"
847 | ],
848 | "metadata": {
849 | "id": "g7LwijEKiVhq",
850 | "colab": {
851 | "base_uri": "https://localhost:8080/"
852 | },
853 | "outputId": "144e4d48-275f-42a0-dbef-cef02b9d3430"
854 | },
855 | "execution_count": null,
856 | "outputs": [
857 | {
858 | "output_type": "stream",
859 | "name": "stdout",
860 | "text": [
861 | "reject H0\n"
862 | ]
863 | }
864 | ]
865 | },
866 | {
867 | "cell_type": "code",
868 | "source": [
869 | "# p-value test\n",
870 | "if p_value > alpha:\n",
871 | " print(\"accept H0\")\n",
872 | "else:\n",
873 | " print(\"reject H0\")"
874 | ],
875 | "metadata": {
876 | "colab": {
877 | "base_uri": "https://localhost:8080/"
878 | },
879 | "id": "DkKTE1qYpMvA",
880 | "outputId": "16d5ee55-443d-4f1d-8cc6-52b0a68a7c94"
881 | },
882 | "execution_count": null,
883 | "outputs": [
884 | {
885 | "output_type": "stream",
886 | "name": "stdout",
887 | "text": [
888 | "reject H0\n"
889 | ]
890 | }
891 | ]
892 | }
893 | ]
894 | }
--------------------------------------------------------------------------------
/main.tex:
--------------------------------------------------------------------------------
1 | % Created by Carlo Occhiena, 2023
2 | % Engine: XELATEX
3 | % Version: TeX 2019
4 |
5 | \documentclass{article}
6 |
7 | % PACKAGES
8 | \usepackage[utf8]{inputenc}
9 | \usepackage[english]{babel}
10 | \usepackage{amsmath} % provides many mathematical environments & tools
11 | \usepackage{amssymb} % to display \varnothing symbol
12 | \usepackage{array} % to manage table alignment
13 | \usepackage{changepage} % dynamically change a page layout
14 | \usepackage{geometry} % to manage dynamic page layout
15 | \usepackage{graphicx} % to handle images
16 | \usepackage{makecell} % to manage multiline cells
17 | \usepackage[document]{ragged2e} %% to manage text alignment
18 | \usepackage{parskip} % to manage paragraph spacing
19 | \usepackage{fancyhdr} % to manage page header and footer
20 |
21 | % this has to be the last package ever
22 | \usepackage{hyperref} % clickable URLs
23 |
24 | \graphicspath{ {./images/} } % tells the program what folder to find the images in
25 |
26 | % USER CUSTOMIZATION
27 | \renewcommand{\arraystretch}{1.5} % add height to tables
28 |
29 | % FUNCTIONS
30 |
31 | % DOCUMENT CUSTOMIZATION
32 | \title{The Statistics Handbook \\
33 | \normalsize Version 0.2}
34 | \author{ Carlo Occhiena }
35 | \date{ Sep 2023 }
36 |
37 |
38 | \begin{document}
39 |
40 | % CUSTOM HEADER
41 | \fancyhead{}
42 | \setlength{\headheight}{13.6pt}
43 | \pagestyle{fancy}
44 | \rhead[]{\rightmark} % section name
45 |
46 | \maketitle
47 | % \centerline \LaTeX{}
48 | \tableofcontents
49 | \clearpage
50 |
51 | \section{Scope of this handbook}
52 | \emph{"Statistical analysis is the best way to predict events we do not know using information we do know."}
53 |
54 |
55 | We are used to talking generally about mathematical skills, thinking perhaps of derivatives, integrals, theorems, and graphs of functions.
56 |
57 | Often we do so in an abstract way, as if they were purely logical constructs with only narrow applications. In doing so, we forget that not only are mathematical elements present in every single action, but that the quantitative sciences are part of everyday life.
58 |
59 | Specifically, I believe that statistics is among all the mathematical sciences the most fascinating because of the vastness and incredible opportunities for its application.
60 |
61 | Every decision we make can be traced back to statistical phenomena, either innate (such as fear of the dark, because darkness increases the likelihood of encountering dangerous animals) or conscious (today I think it is likely to rain, so I will take my umbrella).
62 |
63 | On the other hand, approaching even basic statistical calculations (e.g., the infamous probability of winning the lottery) requires nontrivial skills: the concepts and formulas involved are not always complex, but they certainly yield very different results if applied thoughtlessly. I am convinced that the misuse of mathematical thinking is worse than its absence. This work is in fact also intended to combat my own limitations through study and application.
64 |
65 | In this handbook, I wanted to create a path from the basics, including terminology (often one of the main obstacles for the layperson approaching the subject), to the formulation of hypotheses, validations, and verification of formulas.
66 |
67 | The path was constructed by consulting a large number of sources, cited in the appendix, over long months of study and in-depth verification of the results and evidence, precisely because first and foremost I wanted to verify my own understanding before, of course, writing about it.
68 |
69 | Before releasing this publication, which is distributed under a Creative Commons, Free Culture license, I asked for a review from eminent acquaintances with strong academic and professional backgrounds. I would like to thank all of them endlessly (their names can be found in the appropriate section). Nevertheless, I remain receptive to additions, insights, and corrections, taking full responsibility for any shortcomings and errors, which were certainly made in good faith.
70 |
71 | Happy reading!
72 |
73 | Carlo, 25th of January 2023.
74 |
75 | \subsection{Versioning \& Contributions}
76 | \begin{itemize}
77 | \item Version 0.1 is the first release ever published and distributed online. It's the version written and verified personally by me but does not include any third-party contributions or revisions.
78 | \item I plan to submit the handbook to several SME (Subject Matter Experts).
79 | \item Each contribution will be indicated in the Acknowledgments section.
80 | \item The feedback from each SME will help raise the version by 1/10, so that with 9 revisions it will progress to version 1.0 of the document.
81 | \item Contributions are free and welcome; you can contact me via \href{https://www.linkedin.com/in/carloocchiena/}{LinkedIn}.
82 | \end{itemize}
83 |
84 |
85 | \subsection{\LaTeX{} \& Open Source Repository}
86 | In addition to being distributed under a Free Culture CC BY 4.0 license, all materials related to this handbook are available in the GitHub repository at the link: \url{https://github.com/carloocchiena/the_statistics_handbook}.
87 |
88 | This also includes the \LaTeX{} source of this handbook and an Excel workbook with several exercises and applied formulas. This could therefore also be helpful to students and those who want to use this handbook for practical purposes.
89 |
90 | \subsection{Version History}
91 | \begin{itemize}
92 | \item 0.1 first version ever distributed; written, checked, implemented by the Author, under his liability.
93 | \end{itemize}
94 |
95 | \clearpage
96 |
97 | \section{Core Concepts}
98 | \subsection{Let's start from a question}
99 | “What is data?”
100 |
101 | Data are collected observations and information about a given phenomenon.
102 |
103 | “What is statistics?”
104 |
105 | Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data.
106 |
107 | “What is a statistical variable?”
108 |
109 | It is the specific characteristic, observed on the statistical units, on which the statistical analysis is focused, such as “age” among all the data that may be related to the object “person”. Classifying variables and their scale of measurement is paramount to setting up the analytical process of statistical analysis.
110 |
111 | \subsection{Property and type of data}
112 |
113 | We should not think of data solely as numerical values. There is a multitude of data types, each with specific characteristics.
114 |
115 | \subsubsection{Continuous vs Discrete}
116 | \textbf{Discrete} means data can only take certain values. There are no “in-between” values.
117 |
118 | An example of discrete data is the number of people: there can be 1 person or 2 people, but not 1.5 or 0.99 people.
119 | Another example is the set of possible values when rolling a die: 1, 2, 3, 4, 5, 6, and not 6.5 or 1.5.
120 |
121 | \textbf{Continuous} means there is an infinite number of possible values between any two data points.
122 |
123 | Examples of continuous data are the heights or weights of a group of people. \\
124 | Temperature records are also continuous data.
125 |
126 | \subsubsection{Nominal vs Ordinal}
127 | \textbf{Nominal} data is classified without a natural order or rank: it cannot be meaningfully sorted.
128 | Nominal data cannot be “ordered” (unlike ordinal data, whose name comes precisely from “order”).
129 |
130 | Examples of nominal data are animal species (lizard, dog, cat) or the list of ingredients in a recipe.
131 |
132 | \textbf{Ordinal} data is data that has a natural order or rank.
133 |
134 | Ordinal data can be sorted and ordered.
135 | Ordinal data does not have to be numeric. For example, hot, mild, cold, or even top, low, bottom, are attributes that can be ordered and are therefore considered ordinal.
136 |
137 | Ordinal data are the seat numbers on a train.
138 |
139 | \subsubsection{Structured vs Unstructured}
140 | \textbf{Structured} data is highly specific and stored in a predefined format. It has its own structure.
141 |
142 | Examples are JSON or Excel files, SQL databases.
143 |
144 | \textbf{Unstructured} data is data that does not have a specific or well defined format.
145 |
146 | Unstructured data are audio data, text data, video data.
147 |
148 | Do not confuse “file format” with “formatted data”.
149 | Just because text is in a PDF format doesn't make it structured data.
150 |
151 | \subsubsection{Statistical Variables and their properties}
152 |
153 | \paragraph{Qualitative Statistical Variables}\mbox{} \\
154 | \mbox{} \\
155 |
156 | Qualitative statistical variables are variables whose values are not numbers but modes, or categories.
157 |
158 | Examples are: “male” or “female”, “education”, “marital status”, “ethnicity” and such.
159 |
160 | These categories have to be exhaustive and mutually exclusive: a data point cannot be both “male” and “female”, or both “Asian” and “European”. This is a specific problem that may occur in the data preparation and data gathering phases.
161 |
162 | Qualitative statistical variables can be classified further in:
163 |
164 | \textbf{Dichotomic:} variables that have only two kinds of mutually exclusive categories, such as “male” or “female” or “alive” or “dead”.
165 |
166 | \textbf{Nominal:} variables that have no logical order, are not comparable and not exclusive to each other. Examples of nominal variables are “transportation used for work” or “sport played”.
167 |
168 | \textbf{Ordinal:} variables that have a logical predefined order, yet cannot be classified as quantitative.
169 | An example is “education”: high school is surely lower than university, but by how much?
170 | And how far is an MSc from a PhD? They are clearly different, but this difference cannot be clearly measured.
171 |
172 | \begin{itemize}
173 | \item \textbf{Linear ordinal:} they have a clear start and end, such as size “S M L XL”.
174 | \item \textbf{Cyclical ordinal:} they have no clear start and end, and their order is based on convention (such as weekdays, since a week can start either on Monday or on Sunday, or the seasons).
175 | \end{itemize}
176 |
177 | \paragraph{Quantitative statistical variables}\mbox{} \\
178 | \mbox{} \\
179 |
180 | Quantitative statistical variables are expressed by a numerical quantity.
181 | Quantitative data is naturally orderable and comparable.
182 |
183 | Quantitative data can be further classified in:
184 |
185 | \begin{itemize}
186 | \item \textbf{Interval data:} numeric data measured on a scale with equal intervals but no true zero point (such as the result of a test, IQ, or temperature in Celsius).
187 | \item \textbf{Ratio scale data:} numeric data with a true zero point, so that ratios between values are meaningful, such as age and weight.
188 | \end{itemize}
189 |
190 | \paragraph{Parametric vs Nonparametric}\mbox{} \\
191 | \mbox{} \\
192 |
193 | \textbf{Parametric}
194 | \begin{itemize}
195 | \item Parametric methods assume approximately normal distributions.
196 | \item They involve continuous or interval-type variables and a fairly large sample size.
197 | \item They assume homogeneity of variance of the errors (homoskedasticity).
198 | \item They rely on the estimation of parameters such as the mean, variance, and standard deviation.
199 | \end{itemize}
200 |
201 | Parametric tests have higher statistical power: they provide a higher probability of correctly rejecting a false null hypothesis.
202 |
203 | \textbf{Nonparametric}\mbox{} \\
204 | Nonparametric methods do not assume any particular distribution and do not rely on the estimation of parameters such as the mean, variance, and standard deviation (because, for example, such measures may not be estimable).
205 |
206 | Nonparametric tests should be preferred whenever the dataset is not normally (Gaussian) distributed, or, in any case, whenever normality has not been demonstrated. A typical example is when the dataset is too small to verify a parametric distribution.
207 |
208 | \paragraph{Homoskedasticity vs Heteroskedasticity}\mbox{} \\
209 | \mbox{} \\
210 |
211 | \textbf{Homoskedasticity} means that all the residuals of the regression have the same variance.
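Formally, writing $\varepsilon_i$ for the regression errors (standard notation, introduced here only for this statement), homoskedasticity means $\mathrm{Var}(\varepsilon_i) = \sigma^2$, a single constant variance for every observation $i$; under heteroskedasticity the variance $\sigma_i^2$ changes from observation to observation.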
212 |
213 | \includegraphics[width=3cm, height=3cm]{homoscedasticity}
214 |
215 | \textbf{Heteroskedasticity} is the violation of the assumption that all the errors of the regression have the same variance.
216 |
217 | \includegraphics[width=3cm, height=3cm]{heteroscedasticity}
218 |
219 | \paragraph{Deterministic vs Stochastic}\mbox{} \\
220 | \mbox{} \\
221 | A \textbf{deterministic} model produces, for a specific set of inputs, the same exact results. Given the inputs, the result can be predicted accurately.
222 |
223 | A \textbf{stochastic} model does not produce, for a specific set of inputs, a completely predictable result. The result accounts for a certain level of unpredictability or randomness.
224 |
225 | Stochastic models can be analyzed statistically but may not be predicted precisely (such as Monte Carlo simulations).
226 |
227 | \paragraph{Expected Value}\mbox{} \\
228 | \mbox{} \\
229 | The expected value (also called expectation, expectancy, mathematical expectation, mean, average) is a generalization of the weighted average.
230 |
231 | Informally, the expected value is the arithmetic mean of a large number of independently selected outcomes of a random variable.
232 |
233 | The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration.
234 |
235 | The expected value of a random variable $X$ is often denoted by $E(X)$, $E[X]$, or $EX$, with $E$ also often stylized as \emph{E} or $\mathbb{E}$.
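In symbols (standard definitions, stated here for completeness): for a discrete random variable with outcomes $x_i$ and probabilities $p_i$, $E[X] = \sum_i x_i \, p_i$; for a continuous random variable with density $f(x)$, $E[X] = \int_{-\infty}^{+\infty} x \, f(x)\, dx$.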
236 |
237 | \paragraph{Linear, Nonlinear, and Monotonic Relationships}\mbox{} \\
238 | \mbox{} \\
239 |
240 | \textbf{Linear}: \\
241 | When variables increase or decrease concurrently and at a constant rate, a positive linear relationship exists.
242 | When one variable increases while the other variable decreases, a negative linear relationship exists.
243 |
244 | \textbf{Nonlinear}: \\
245 | If a relationship between two variables is not linear, the rate of increase or decrease can change as one variable changes, causing a “curved pattern” in the data.
246 |
247 | \textbf{Monotonic}: \\
248 | In a monotonic relationship, the variables tend to move in the same relative direction, but not necessarily at a constant rate.
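A few illustrative functions, chosen here purely as examples: $y = 2x + 1$ is linear, $y = x^{2}$ is nonlinear and not monotonic over the whole real line, and $y = x^{3}$ is nonlinear but monotonic.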
249 |
250 | \subsection{Population vs Sample}
251 |
252 | \textbf{Population} consists of the representation of every member of a given group or of the entire available data set.
253 | Examples are all the students of a class or all the animals of a specific national park.
254 |
255 | \textbf{Sample} refers to a subset of the entire data set.
256 | For example, the first 10 students of a class or the top 3 predators from a specific national park.
257 |
258 | Population and Sample are data definitions that are heavily dependent on the context.
259 |
260 | When analyzing data related to a population, it is necessary to work with a statistically relevant, representative sample.
261 |
262 | In particular, identifying the sample size, knowing the size of a specific population, is critical to the significance of statistical analysis.
263 |
264 | A numerical example of this calculation is provided in the following section: “Calculate the Sample Size from a Population”.
265 |
266 | The calculation has also been exemplified on the spreadsheet made available in the GitHub repository of this handbook.
267 |
268 | Use “population” when:
269 | \begin{itemize}
270 | \item It is known that the dataset covers the entire population.
271 | \item A generalization to a wider, larger population is not interesting.
272 | \end{itemize}
273 |
274 | Use “sample” when:
275 | \begin{itemize}
276 | \item It is known that the dataset is a subset of the whole population.
277 | \item A generalization to a wider, larger sample or population is interesting.
278 | \end{itemize}
279 |
280 | Rule of thumb: statisticians primarily work with samples. Real-world data can be overwhelmingly large.
281 |
282 | \subsection{Parameters vs Statistics vs Hyperparameters}
283 |
284 | \textbf{Parameters} describe the properties of the entire population.
285 |
286 | \textbf{Statistics} describe the properties of a sample.
287 |
288 | \textbf{Hyperparameters}\footnote{Although slightly out of context, this is added for clarity and completeness.} (used in modeling and machine learning processes) are instead tuning values. Hyperparameters are set before the model is trained and are not derived from the dataset.
289 |
290 | \paragraph{Hat symbols over variables ($\hat{}$)}\mbox{} \\
291 | \mbox{} \\
292 | The estimated or predicted values in a regression or other predictive model in statistics are referred to as “hat values”.
293 |
294 | $\hat{y}$: $y$ is the outcome or dependent variable in the model equation; the “hat” symbol ($\hat{}$) placed over the variable name is the statistical designation of an estimated value.
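As a minimal illustration (standard regression notation, not defined elsewhere in this section): in a simple linear regression $y = \beta_0 + \beta_1 x + \varepsilon$, the fitted value for an observation is $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$, where $\hat{\beta}_0$ and $\hat{\beta}_1$ are the estimated coefficients.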
295 |
296 | \paragraph{Outliers}\mbox{} \\
297 | \mbox{} \\
298 | An outlier is a data point that differs significantly from other observations.
299 | In regression analysis, outliers are the points farthest from the regression line.
300 |
301 | \subsection{Descriptive and Inferential statistics}
302 |
303 | \textbf{Descriptive statistics} is the part of statistics that aims to describe data. It is used to summarize the attributes of a dataset, using measures such as Measures of Central Tendency or Measures of Dispersion.
304 |
305 | \textbf{Inferential statistics} is the part of statistics used to test and validate assumptions about a dataset by analyzing a sample, using methods such as Hypothesis Testing or Regression Analysis.
306 |
307 | \subsection{Binomial Distribution}
308 | The binomial distribution with parameters $n$ and $p$ is the discrete probability distribution of the number of successes in a sequence of $n$ independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability $p$) or failure (with probability $q=1-p$).
309 | A single success/failure experiment is also called a Bernoulli trial.
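In formula form (standard notation), the probability of observing exactly $k$ successes in $n$ trials is $P(X = k) = \binom{n}{k} p^{k} (1-p)^{\,n-k}$, for $k = 0, 1, \dots, n$.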
310 |
311 | A sequence of outcomes is called a Bernoulli process; for a single trial, i.e., $n = 1$, the binomial distribution is a Bernoulli distribution.
312 |
313 | The binomial distribution is the basis for the popular binomial test of statistical significance.
314 |
315 | \subsubsection{Binomial Coefficient}
316 | The binomial coefficient is a natural number defined from a pair of natural numbers, usually named $n$ and $k$.
317 | It represents the number of sub-groups of $k$ elements that can be formed out of a set of $n$ objects.
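
In symbols, for $0 \leq k \leq n$:

$\displaystyle \binom{n}{k} = \frac{n!}{k!(n - k)!}$

For example, $\binom{4}{2} = \frac{4!}{2! \, 2!} = 6$: six distinct pairs can be formed out of four objects.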
318 |
319 | \subsection{Measurement of Central Tendency}
320 | \textbf{Central tendency} is defined as “the statistical measure that identifies a single value as representative of an entire distribution.” It aims to provide an accurate description of the entire data. It is the single value that is most typical, or representative, of the collected data.
321 |
322 | \subsubsection{Mean}
323 | Mean is generically expressed as:
324 |
325 | $\frac{\text{sum of all data points}}{\text{number of data points}}$
326 |
327 | And, more specifically, with the formula:\\
328 | \mbox{} \\
329 |
330 | ${\displaystyle {\bar{x}}={\frac{1}{n}}\left(\sum _{i=1}^{n}{x_{i}}\right)={\frac{x_{1}+x_{2}+\cdots +x_{n}}{n}}}$
331 |
332 | \mbox{} \\
333 |
334 | Mean has the same meaning as “average”, but “average” is generally used in arithmetic, while “mean” expressly refers to the central point of a dataset in statistics. The Arithmetic Mean is equal to the average, while the Harmonic or Geometric Mean have different meanings.
335 |
336 | The mean can also be expressed with symbols:\\
337 | $\mu$ (mu) or $\bar{x}$ (x bar).
338 |
339 | In the specific context of statistical studies:
340 | \begin{itemize}
341 | \item $\bar{x}$ is used for mean of a sample.
342 | \item $\mu$ is used for mean of the entire population.
343 | \end{itemize}
344 |
345 | \textbf{Arithmetic Mean} \\
346 | It's the simplest and most common type of average, expressed as the sum of all data points over the count of data points.
347 |
348 | \textbf{Weighted Mean} \\
349 | It's similar to the arithmetic mean, except that each data point contributes to the computation with its own weight factor.
350 |
351 | $ \displaystyle \mu = \frac{\sum \limits _{i=1} ^{k} x_i \cdot n_i}{N}$
352 |
353 | For example, let's calculate the average weight of an apple, given that you have many apples with different weight clusters.
354 |
355 | \begin{center}
356 | \begin{tabular}{|c|c|}
357 | \hline
358 | Apple (n) & Weight (g) \\ \hline
359 | 8 & 200 \\
360 | 3 & 250 \\
361 | 8 & 100 \\
362 | \hline
363 | \end{tabular}
364 | \end{center}
365 |
366 | The weighted mean would then be: $\frac{(8 \cdot 200)+(3 \cdot 250)+(8 \cdot 100)}{8+3+8} = \frac{3150}{19} \approx 165.79$ grams.
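
The same calculation can be sketched in a few lines of Python (standard library only; the variable names are illustrative):

\begin{verbatim}
# Weighted mean of the apple weights above.
counts  = [8, 3, 8]        # number of apples in each cluster
weights = [200, 250, 100]  # weight (g) of the apples in each cluster

weighted_mean = sum(n * w for n, w in zip(counts, weights)) / sum(counts)
print(round(weighted_mean, 2))  # 165.79
\end{verbatim}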
367 |
368 | \textbf{Truncated Mean} \\
369 | A truncated mean or trimmed mean is a statistical measure of central tendency, much like the mean and median. It involves the calculation of the mean after discarding given parts of a probability distribution or sample at the high and low end, and typically discarding an equal amount of both.
370 |
371 | This number of points to be discarded is usually given as a percentage of the total number of points, but may also be given as a fixed number of points.
372 |
373 | High and low end data points are called “outliers” (a data point that differs significantly from other observations).
374 |
375 | \subsubsection{Mode}
376 | The mode is the value occurring most often in a dataset.
377 |
378 | $dataset = 8, 5, 4, 27, 35, 8, 29$ \\
379 | $mode = 8$
380 |
381 | $dataset = 8, 5, 4, 27, 35, 8, 29, 35$ \\
382 | It's a bi-modal dataset, the modes being 8 and 35.
383 |
384 | $dataset = 5, 4, 27, 35, 8, 29$ \\
385 | $mode = \varnothing $
386 |
387 | \subsubsection{Median}
388 | The median is the central value of an ordered dataset.
389 |
390 | Odd number of items dataset: \\
391 | 16, 18, 21, 27, 32, 33, 91 \\
392 | $median$ = 27
393 |
394 | Even number of items dataset: \\
395 | 16, 18, 21, 27, 32, 32, 33, 91 \\
396 | $median$ = $\frac{(27 + 32)}{2} = 29.5$
397 |
398 | \textbf{When to use mean, median and mode}
399 |
400 | \begin{center}
401 | \begin{tabular}{|l|c|c|c|}
402 | \hline
403 | DATASET & MEAN & MEDIAN & MODE \\ \hline
404 | \textbf{Continuous} & YES & YES & YES \\
405 | \textbf{Discrete} & YES & YES & YES \\
406 | \textbf{Nominal} & MAYBE & NO & YES \\
407 | \textbf{Ordinal} & MAYBE & YES & YES \\
408 | \textbf{Numeric} & YES & YES & YES \\
409 | \textbf{Non-numeric} & NO & YES & YES \\
410 | \hline
411 | \end{tabular}
412 | \end{center}
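
As a minimal sketch, Python's standard \texttt{statistics} module computes these measures directly (the datasets reuse the mode examples above):

\begin{verbatim}
import statistics

data = [8, 5, 4, 27, 35, 8, 29]

print(statistics.mean(data))    # arithmetic mean, ~16.57
print(statistics.median(data))  # middle value of the sorted data: 8
print(statistics.mode(data))    # most frequent value: 8

# A bi-modal dataset: multimode() returns every mode.
print(statistics.multimode([8, 5, 4, 27, 35, 8, 29, 35]))  # [8, 35]
\end{verbatim}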
413 |
414 | \subsection{Measurement of Dispersion}
415 | Measures of dispersion can be defined as non-negative real numbers that measure how homogeneous or heterogeneous the given data is.
416 |
417 | The most common measures of dispersion are Variance and Standard Deviation.
418 |
419 | \subsubsection{Variance}
420 | Variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value.
421 |
422 | Variance is always non-negative because the deviation of each data point from the mean is squared.
423 | Squaring also magnifies deviations that are farther from the mean relative to smaller ones, giving a better understanding of their impact on the dataset.
424 |
425 | Variance is represented by: $\sigma ^{2}$ (sigma squared, when referring to a population), $s^{2}$ (when referring to a sample), $\operatorname{Var}(X)$, $V(X)$, or $\mathbb{V}(X)$.
426 |
427 | \subsubsection{Standard Deviation}
428 | Standard deviation is a measure of the amount of variation or dispersion of a set of values. Standard deviation is equal to the square root of variance and it’s represented with the Greek letter $\sigma$ (sigma) or the letter $s$.
429 |
430 | Being the square root of the variance, the standard deviation returns a value on the same scale as the initial dataset, hence allowing for easier comparisons and interpretation of the statistics.
431 |
432 | Mean, Variance, and Standard Deviation are closely linked together.
433 |
434 | \begin{center}
435 | \begin{tabular}{|m{2cm}|c|c|}
436 | \hline
437 | & POPULATION (N) & SAMPLE (n) \\ \hline
438 | &&\\[-1em]
439 | Mean & $\displaystyle \mu = \frac{\sum\limits _{i=1}^{N} x_{i}}{N}$ & $\displaystyle \bar{x} = \frac{\sum\limits _{i=1}^{n} x_{i}}{n}$ \\[25pt]
440 | Variance & $\displaystyle \sigma^2 = \frac{\sum\limits _{i=1}^{N} (x_{i} - \mu)^2}{N}$ & $\displaystyle s^2 = \frac{\sum\limits _{i=1}^{n} (x_{i} - \bar{x})^2}{n-1}$ \\[25pt]
441 | Standard Deviation & $\displaystyle \sigma = \sqrt{\sigma^2}$ & $\displaystyle s = \sqrt{s^2}$ \\[25pt]
442 | \hline
443 | \end{tabular}
444 | \end{center}
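
A short Python sketch of the table above (the dataset is illustrative): \texttt{pvariance}/\texttt{pstdev} use the population formulas, while \texttt{variance}/\texttt{stdev} use the sample formulas with the $n-1$ denominator discussed below.

\begin{verbatim}
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

# Population formulas (divide by N)
print(statistics.pvariance(data))  # 4.0
print(statistics.pstdev(data))     # 2.0

# Sample formulas (divide by n - 1, Bessel's correction)
print(statistics.variance(data))   # ~4.57
print(statistics.stdev(data))      # ~2.14
\end{verbatim}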
445 |
446 | \paragraph{Bessel's Correction}\mbox{} \\
447 | \mbox{} \\
448 |
449 | Why does sample variance have $n-1$ as denominator?
450 |
451 | That's a good question, and it leads to a non-trivial answer.
452 |
453 | From a mathematical point of view, the $-1$ correction factor is called Bessel's correction and it's used to correct the tendency (which can be demonstrated mathematically, or even empirically with a relatively small number of experiments over a dataset) of the uncorrected, biased estimator to underestimate, on average, the parameter being estimated.
454 |
455 | It is possible to think of the Bessel's correction as the degrees of freedom of the vector of residuals. When the sample standard deviation is calculated from a sample of $n$ values, sample mean is used which has already been calculated from that same sample of $n$ values. The calculated sample mean has already taken into account one of the degrees of freedom of variability (which is the mean itself) that is available in the sample.
456 |
457 | Let's approach the topic with an example: we have a table with 10 dice rolls; we know the result of each roll and the overall average of the dataset.
458 | How many elements can we make unknown in our dataset, without altering the goodness of the information we have?
459 | Only one. By eliminating the result of one roll, we are still able to reconstruct it through the mean of the experiment and the remaining values.
460 | But by eliminating more than one value, we are forced to introduce approximations, thus degrading the information we possess.
461 |
462 | This is why we can link Bessel's correction to degrees of freedom.
463 |
464 | \subsection{Quartiles and IQR}
465 | A quartile is a type of quantile (quantiles are values that split sorted data or a probability distribution into equal parts) which divides the number of data points into four parts, or quarters, of more-or-less equal size. The data must be ordered from smallest to largest to compute quartiles; as such, quartiles are a form of order statistic.
466 |
467 | \textbf{Quartiles:}
468 | \begin{itemize}
469 | \item Quartile zero (Q0) corresponds to the first value of the ordered dataset. Usually it is not considered in the computation of quartiles.
470 | \item The first quartile (Q1) is defined as the middle number between the smallest number (minimum) and the median of the data set. It is also known as the lower or 25th empirical quartile, as 25\% of the data is below this point.
471 | \item The second quartile (Q2) is the median of a data set; thus 50\% of the data lies below this point.
472 | \item The third quartile (Q3) is the middle value between the median and the highest value (maximum) of the data set. It is known as the upper or 75th empirical quartile, as 75\% of the data lies below this point.
473 | \item Quartile four (Q4) corresponds to the last value of the ordered dataset.
474 | \end{itemize}
475 |
476 | \paragraph{IQR - Interquartile Range}\mbox{} \\
477 | \mbox{} \\
478 | IQR is a measure of statistical dispersion and it is defined as the difference between Q3 and Q1.
479 |
480 | As an example, having an ordered dataset as following:\\
481 | Dataset = 1, 2, 3, 5, 8, 8, 9, 10, 15\\
482 | Q0: 1\\
483 | Q1: (2 + 3) / 2 = 2.5 (median of first half; 25th percentile).\\
484 | Q2: 8 (median; 50th percentile).\\
485 | Q3: (9 + 10) / 2 = 9.5 (median of second half; 75th percentile).\\
486 | Q4: 15\\
487 |
488 | Range = Q4 - Q0 = 15 - 1 = 14
489 |
490 | IQR = Q3 - Q1 = 9.5 - 2.5 = 7
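
The same quartiles can be obtained with Python's \texttt{statistics.quantiles} (a sketch; different quantile conventions may return slightly different cut points, and the default \texttt{exclusive} method matches the hand calculation above):

\begin{verbatim}
import statistics

data = [1, 2, 3, 5, 8, 8, 9, 10, 15]

q1, q2, q3 = statistics.quantiles(data, n=4)  # method='exclusive' by default
print(q1, q2, q3)             # 2.5 8.0 9.5
print(q3 - q1)                # IQR = 7.0
print(max(data) - min(data))  # range = 14
\end{verbatim}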
491 |
492 | \subsection{Linear Regression}
493 | Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. The dependent variable $y$ is also called the response variable. The independent variable $X$ is also called the explanatory or predictor variable. \\
494 | The result is a straight line in the Cartesian plane that minimizes the distance (in the least-squares sense) between the observed values and the fitted line, so that predicted values can be estimated.
495 |
496 | Linear regression is a mathematical function based on the equation of the line:
497 |
498 | $\displaystyle {Y_{i}=\beta _{0}+\beta _{1}X_{i}+u_{i}}$
499 |
500 | Where:\\
501 | \begin{itemize}
502 | \item $\displaystyle i$ ranges between observations, $\displaystyle i=1,\ldots,n.$
503 | \item $\displaystyle Y_{i}$ is the dependent (response) variable.
504 | \item $\displaystyle X_i$ is the independent (explanatory) variable.
505 | \item $\displaystyle \beta _{0}+\beta _{1}X$ is the regression function.
506 | \item $\displaystyle \beta _{0}$ is the line intercept (the value of $y$ when $x = 0$).
507 | \item $\displaystyle \beta _{1}$ is the slope (angular coefficient) of the line.
508 | \item $\displaystyle u_{i}$ is the statistical error.
509 | \end{itemize}
510 |
511 | Linear regression is a fundamental statistical analysis, because of its simplicity, its immediate interpretability, and its breadth of application.
512 |
513 | Before attempting to fit a linear model to observed data, a modeler should first determine whether or not there is a relationship between the variables of interest. This does not necessarily imply that one variable causes the other, but that there is some significant association between the two variables. \\
514 | A scatterplot can be a helpful tool in determining the strength of the relationship between two variables. \\
515 | If there appears to be no association between the proposed explanatory and dependent variables (i.e., the scatterplot does not indicate any increasing or decreasing trends), then fitting a linear regression model to the data probably will not provide a useful model. \\
516 | A valuable numerical measure of association between two variables is the correlation coefficient, which is a value between -1 and 1 indicating the strength of the association of the observed data for the two variables.
517 |
518 | \includegraphics[width=3cm, height=3cm]{regression_chart}
519 |
520 | \paragraph{Parameter estimations in the bivariate case}\mbox{} \\
521 | \mbox{} \\
522 | Generalizing the regression line equation, one can, in the case of the two-variable problem, start from:
523 |
524 | $\hat{y} = mx + b +\varepsilon_{i}$,
525 |
526 | Where:
527 | \begin{itemize}
528 | \item $\hat{y}$ is the dependent (response) variable.
529 | \item $m$ is the slope (angular coefficient) of the line.
530 | \item $b$ is the line intercept.
531 | \item $\varepsilon_{i}$ is the statistical error.
532 | \end{itemize}
533 |
534 | At this point, the regression problem reduces to determining $m$ and $b$ so as to express the functional relationship between $y$ and $x$ as well as possible.
535 |
536 | $\displaystyle m = \frac{n \sum{xy} - \sum{x}\sum{y}}{n\sum{x^2} - (\sum{x})^2}$ \\ \mbox{} \\
537 | \mbox{} \\
538 | $\displaystyle b = \frac{\sum{y} - m\sum{x}}{n}$
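
A minimal Python sketch of these closed-form estimates (the data points are illustrative, not taken from the handbook's spreadsheet):

\begin{verbatim}
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)

# Slope and intercept from the bivariate least-squares formulas above
m = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b = (sum_y - m * sum_x) / n
print(m, b)  # 0.6 2.2
\end{verbatim}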
539 |
540 | \clearpage
541 | \section{Data Visualization}
542 | Data visualization (data viz) is the graphical representation of data.
543 | The main goals of data visualization are to make the phenomena within the dataset more evident, convey the embedded information in the analysis more efficiently, and reinforce cognitive aspects of the provided study (e.g., ease of reporting, memorability).
544 |
545 | While data visualization pertains to the field of science and statistics, it has also taken on cross-cutting significance in purely artistic or design-related contexts.
546 |
547 | Data visualization is so relevant that it could be considered a discipline within a discipline, with a deep vertical of study and insight that spans mathematical, scientific, statistical, cognitive, and humanistic domains.
548 |
549 | The recent spread of data science has made data viz even more important.
550 |
551 | However, this paper will be limited to exploring some of the best-known forms of graphical representation in the field of statistics, and some of their properties.
552 |
553 | \subsection{Scatter Plot}
554 | A scatter plot is a type of plot or mathematical diagram using Cartesian coordinates to display values for typically two variables for a set of data. Every data point is displayed as a dot.
555 |
556 | Scatter plots are most meaningful with continuous distributions.
557 |
558 | \includegraphics[width=3cm, height=3cm]{plot_chart}
559 |
560 | \subsection{Line Chart}
561 | Line charts show the evolution of a continuous variable (often over a time horizon).
562 | A line chart is a way of plotting data points connected by a line.
563 | It is used to show trend data, or the comparison of two data sets.
564 |
565 | \includegraphics[width=3cm, height=3cm]{line_chart}
566 |
567 | \subsection{Dot Plot}
568 | Dot plot\footnote{In some texts, also called Line Plot.} is a way to display data frequency piled over data points and along a number line.
569 |
570 | \includegraphics[width=3cm, height=3cm]{dot_plot}
571 |
572 | \subsection{Histograms}
573 | A histogram is a bar chart that groups continuous data into ranges. The ranges are at the discretion of the creator of the chart. For example, user ages (a continuous dataset) can be grouped into clusters such as 0-10, 11-20, and so on.
574 |
575 | Histogram bars are adjacent (no spaces between bars).
576 |
577 | Histograms should not be confused with bar charts:
578 | \begin{itemize}
579 | \item Histograms visualize quantitative data or numerical data. Usually, histograms display continuous variables.
580 | \item Bar charts display categorical (discrete) variables.
581 | \end{itemize}
582 |
583 | Correctly labeling the horizontal ($X$) axis of a histogram is important in order to make it readable.
584 |
585 | \includegraphics[width=3cm, height=3cm]{histogram_chart}
586 |
587 | \subsection{Bar Plot}
588 | Bar plots are usually used to display categorical data along the horizontal axis. That is, discrete data such as products, countries, car types and such.
589 |
590 | Bars within a bar chart are not adjacent. Data on bar plots are often ordered to enhance chart comprehension.
591 |
592 | \includegraphics[width=3cm, height=3cm]{bar_chart}
593 |
594 | \subsection{Ogive}
595 | An ogive, sometimes called a cumulative frequency chart, is a type of frequency chart that shows cumulative frequencies. In other words, the cumulative percentages are added on the graph from left to right.
596 |
597 | An ogive graph plots cumulative frequency on the y-axis and class boundaries along the x-axis. It’s very similar to a histogram, only instead of rectangles, an ogive has a single point marking where the top right of the rectangle would be.
598 |
599 | It is usually easier to create this kind of graph from a frequency table.
600 |
601 | \includegraphics[width=3cm, height=3cm]{ogive_chart}
602 |
603 | \subsection{Box and Whisker Plot}
604 | A box and whisker plot is defined as a graphical method of displaying variation in a set of data. It is usually used to display data according to quartile intervals.
605 |
606 | BWPs are also called: box plot, box and whisker diagram, or box and whisker plot with outliers\footnote{In Italian: “diagramma a scatola e baffi”.}.
607 |
608 | \includegraphics[width=4cm, height=4cm]{box_whisker_chart}
609 |
610 | \paragraph{Box and whisker vs candlestick chart}\mbox{} \\
611 | \mbox{} \\
612 | Mathematically speaking there is no difference. Both show an upper and lower boundary and points outside these boundaries.
613 |
614 | However, a candlestick chart is mainly used in the finance industry. Its most popular application is to show stock prices. It is mainly used in the vertical position.
615 |
616 | A box and whisker chart tends to be used in non-finance industries, for example for the level of sales of various stores, inventory levels, etc. Box and whisker charts can be shown horizontally as well as vertically. They are often accompanied by labels showing various statistical information.
617 |
618 | \subsection{Violin Plot}
619 | A violin plot is a method of plotting numeric data. It is similar to a box plot, with the addition of a rotated kernel density plot on each side.
620 |
621 | \includegraphics[width=5cm, height=3cm]{violin_chart}
622 |
623 | \subsection{KDE Plot}
624 |
625 | A KDE plot (Kernel Density Estimate plot) is used for visualizing the probability density of a continuous variable. It depicts the probability density at different values of the variable. Multiple samples can also be plotted on a single graph, which makes the visualization more efficient.
626 | Kernel density estimates are closely related to histograms, but can be endowed with properties such as smoothness or continuity by using a suitable kernel.
627 |
628 | \includegraphics[width=5cm, height=3cm]{kde_chart}
629 | \clearpage
630 |
631 | \section{Combinatorics}
632 | Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and with certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
633 |
634 | \subsection{Factorials}
635 | In mathematics, the factorial of a non-negative integer $n$, denoted by $n!$, is the product of all positive integers less than or equal to $n$. The factorial of $n$ also equals the product of $n$ with the next smaller factorial.
636 |
637 | 5! = 5 * 4 * 3 * 2 * 1 = 120
638 |
639 | An interesting property is also:
640 |
641 | $n! = n * (n-1)!$
642 |
643 | Example:
644 | 5! = 5 * 4! = 120
645 |
646 | This leads to:
647 | $\frac{n!}{(n-1)!} = \frac{n(n-1)!}{(n-1)!} = n$
648 |
649 | \paragraph{Factorials and 0}\mbox{} \\
650 | \mbox{} \\
651 | Factorials deal only with natural numbers; 0 is omitted from the product (otherwise $n!$ would always be 0).
652 |
653 | But why is $0! = 1$?
654 |
655 | It’s proven that:
656 |
657 | $(n-1)! = \frac{n!}{n}$
658 |
659 | This means that: \\
660 | 4! = 24 \\
661 | 3! = 24 / 4 = 6 \\
662 | 2! = 6 / 3 = 2 \\
663 | 1! = 2 / 2 = 1 \\
664 | 0! = 1 / 1 = 1 \\
665 |
666 | And, following the same logic: \\
667 | $(-1)! = 1 / 0$ = undefined \\
668 |
669 | That's why $n!$ is defined only for $n \in \mathbb{N}$.
670 |
671 | \subsection{Permutations}
672 | A permutation of a set of objects is an arrangement of the objects \textbf{in a certain order}.
673 | Permutations differ from combinations, which are selections of some members of a set regardless of order.
674 |
675 | Usually permutations refer to all the possible arrangements (all the possible permutations of a set of objects).
676 |
677 | The number of permutations of $n$ distinct objects is calculated as a factorial $(n!)$.
678 |
679 | Permutations are relevant when working with numbers, since “575” is not equal to “755” nor “557”.
680 |
681 | \subsection{Combinations}
682 | A combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations).
683 |
684 | For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set:
685 | \begin{itemize}
686 | \item an apple and a pear;
687 | \item an apple and an orange;
688 | \item a pear and an orange.
689 | \end{itemize}
690 | Combination \textbf{is an unordered selection} of objects from a set of objects.
691 |
692 | More formally, a $k-combination$ of a set $S$ is a subset of $k$ distinct elements of $S$.
693 |
694 | Combinations are relevant when working with products, or people: apple and orange is equal to orange and apple. A team with Mark and Tom is equal to a team with Tom and Mark.
695 |
696 | The number of combinations from a set of $n$ objects taken $k$ at a time is:
697 |
698 | $\displaystyle C _n ^k = \binom{n}{k} = \frac{n!}{k!(n - k)!}$
699 |
700 | \subsubsection{Permutations, Combinations and Dispositions}
701 | \begin{center}
702 | \begin{tabular}{|m{2cm}|c|c|}
703 | \hline
704 | & REPETITION & NO REPETITION (simple) \\ \hline
705 | &&\\[-1em]
706 | Permutations & $\displaystyle n^k$ & \makecell{$\displaystyle \frac{n!}{(n - k)!}$ \\[15pt] where $k = \text{cluster size}$ \\[15pt] $\displaystyle \frac{n!}{k_1! \, k_2! \cdots k_m!}$ \\[15pt] where $k_i = \text{repetitions of item } i$} \\[50pt] \hline
707 | &&\\[-1em]
708 | Combinations & $\displaystyle \frac{(n + k - 1)!}{k!(n - 1)!}$ & $\displaystyle \frac{n!}{k!(n - k)!}$ \\[25pt] \hline
709 | &&\\[-1em]
710 | Dispositions & $\displaystyle n^k$ & $\displaystyle \frac{n!}{(n - k)!}$ \\[25pt]
711 | \hline
712 | \end{tabular}
713 | \end{center}
714 |
715 | \paragraph{Examples}\mbox{} \\
716 | \mbox{} \\
717 | \textbf{Permutations with repetitions} \\
718 | How many phone numbers of 7 digits can we generate using all the numbers from 0 to 9, allowing every specific case (such as “all zeros” being a valid number)?
719 |
720 | $n = \text{[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]} = 10$ \\
721 | $k = \_, \_, \_, \_, \_, \_, \_ = 7$ \\
722 | Each of the $k$ slots allows 10 choices, so it's $n^k = 10^7$
723 |
724 | \textbf{Permutations with no repetitions} \\
725 | Find all the ways the word MAMA can be arranged.
726 |
727 | n = 4 (total letters) \\
728 | k1 = 2 (the letter M is repeated 2 times) \\
729 | k2 = 2 (the letter A is repeated 2 times) \\
730 | 4! / (2! * 2!) = 24 / 4 = 6 \mbox{} \\
731 | \mbox{} \\
732 | How can we arrange 5 students in 3 chairs? \\
733 |
734 | n = 5 (all the students we have to pick from). \\
735 | k = 3 (seats available). \\
736 | 5! / (5 - 3)! = 120 / 2 = 60 \mbox{} \\
737 | \mbox{} \\
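
The examples above can be checked with Python's \texttt{math} module (a sketch; \texttt{math.perm} and \texttt{math.comb} require Python 3.8+):

\begin{verbatim}
import math
from itertools import permutations

# Arrangements of MAMA: 4! / (2! * 2!) = 6
print(math.factorial(4) // (math.factorial(2) * math.factorial(2)))  # 6
print(len(set(permutations("MAMA"))))  # 6, by brute-force enumeration

# 5 students in 3 chairs (order matters, no repetition): 5! / (5 - 3)! = 60
print(math.perm(5, 3))  # 60

# Teams of 2 people out of 5 (order does not matter): C(5, 2) = 10
print(math.comb(5, 2))  # 10
\end{verbatim}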
738 |
739 | \paragraph{When to use permutations, combinations or dispositions? A diagram}\mbox{} \\
740 | \mbox{} \\
741 | Key insight: \\
742 | Is the order relevant?
743 | \begin{itemize}
744 | \item YES = PERMUTATIONS (ex. numbers)
745 | \item NO = COMBINATIONS (ex. people in teams)
746 | \end{itemize}
747 |
748 | \begin{adjustwidth}{-2.0cm}{-3.0cm}
749 |
750 | \includegraphics[width=20cm, height=20cm]{stat_diagram}
751 |
752 | \end{adjustwidth}
753 | \clearpage
754 |
755 | \section{Probability}
756 | Probability is the branch of mathematics that deals with how likely an event is to occur, or how likely it is that a given proposition is true.
757 |
758 | \paragraph{Probability Notation}\mbox{} \\
759 |
760 | \begin{center}
761 | \begin{tabular}{|c|c|c|}
762 | \hline
763 | $P(A)$ & Individual probability & The probability of event $A$ happening \\ \hline
764 | &&\\[-1em]
765 | $P(A')$ & Complement & The probability of event $A$ not happening \\ \hline
766 | &&\\[-1em]
767 | $P(A \cup B)$ & Union & \makecell{The probability that $A$ or $B$ (or both) occurs \\ (all elements of $A$ together with all elements of $B$).} \\ \hline
768 | &&\\[-1em]
769 | $P(A \cap B)$ & Intersection & \makecell{The probability that both $A$ and $B$ occur \\ (the elements shared by $A$ and $B$).} \\ \hline
770 | &&\\[-1em]
771 | $P(A | B)$ & Conditional & The probability of $A$ given that $B$ has occurred. \\
772 | \hline
773 | \end{tabular}
774 | \end{center}
775 |
776 | \mbox{} \\
777 |
778 | \textbf{Example}\\
779 | If $P = \{1,3,5,7,9\}$ and $Q = \{2,3,5,7\}$ \\
780 | What are $P \cup Q$, and $P \cap Q$? \\
781 | \mbox{} \\
782 | $P \cup Q = \{1,2,3,5,7,9\}$ \\
783 | $P \cap Q = \{3,5,7\}$ \\
784 |
785 | \subsection{Simple Probability}
786 | Simple probability defines how likely a specific event $A$ is to happen in the given scenario.
787 |
788 | $\displaystyle P(A) = \frac{\text{target events}}{\text{total events}}$ \\
789 | \mbox{} \\
790 | And, consequently: \\
791 | \mbox{} \\
792 | $P(A') = 1 - P(A)$
793 |
794 | \subsubsection{Experimental and Expected probability}
795 |
796 | \begin{itemize}
797 | \item Experimental probability is the probability resulting from empirical experimentation, such as flipping a coin 100 times and recording the results in a datasheet.
798 | \item Expected probability is the theoretical probability obtained by applying the probability formula to the scenario.
799 | \end{itemize}
800 |
801 | The expected probability of getting heads on a coin toss is 50\%. However, over 100 tosses, the experimental probability may vary (ex. resulting in 30\% heads).
802 |
803 | \subsubsection{Law of Large Numbers}
804 | The law of large numbers, or Bernoulli's theorem (since its first formulation is due to Jakob Bernoulli), describes the behavior of the mean of a sequence of $n$ trials of a random variable, independent and characterized by the same probability distribution ($n$ measurements of the same quantity, $n$ tosses of the same coin, etc.), as the size $n$ of the sequence tends to infinity.
805 |
806 | \paragraph{Regression toward the mean}\mbox{} \\
807 | \mbox{} \\
808 | Regression toward the mean (also called reversion to the mean, and reversion to mediocrity) is the phenomenon where, if one sample of a random variable is extreme, the next sampling of the same random variable is likely to be closer to its mean.
809 |
810 | This is linked to the law of large numbers: increasing the size of the sample and the length of the observations, the outcomes will tend toward the population mean.
811 | The law of large numbers explains the overall phenomenon, while regression toward the mean is useful to understand the expected behavior of a single observation.
812 |
813 | However, in no sense does the future event “compensate for” or “even out” the previous event.
814 |
815 | \subsubsection{Probability Addition Rule}
816 | If A and B are two events in a probability experiment, then the probability that either one of the events will occur is: \\
817 | \mbox{} \\
818 | P(A or B) = P(A)+P(B) - P(A and B) \\
819 | \mbox{} \\
820 | Or, with sets notation as: \\
821 | \mbox{} \\
822 | $P(A \cup B) = P(A)+P(B) - P(A \cap B)$
823 |
824 | \includegraphics[width=3.5cm, height=3cm]{union}
825 |
826 | If A and B are two mutually exclusive events, \\
827 | $P(A \cap B) = 0$. Then the probability that either one of the events will occur is: \\
828 | \mbox{} \\
829 | P(A or B)=P(A)+P(B) \\
830 | \mbox{} \\
831 | Or, with sets notation as: \\
832 | \mbox{} \\
833 | $P(A \cup B)=P(A) + P(B)$ \\
834 |
835 | \includegraphics[width=3.5cm, height=3cm]{independent}
836 |
837 | \textbf{Fundamental rule for addition or product in probability calculation} \\
838 |
839 | \begin{itemize}
840 | \item Given two \textbf{independent} events, the probability of \textbf{both occurring} is given by the \textbf{product} of the individual probabilities.
841 | \begin{itemize}
842 | \item Example: getting heads on both of two coin flips.
843 | \end{itemize}
844 | \item The probability of two or more \textbf{alternative} events occurring is equal to the \textbf{sum} of the individual probabilities.
845 | \begin{itemize}
846 | \item Example: rolling a 1 or a 2 on a single die roll.
847 | \end{itemize}
848 | \end{itemize}
849 |
850 | \subsubsection{Conditional Probability for Independent and Dependent Events}
851 | \textbf{Independent event probability} \\
852 | The probability of $A$ and $B$ happening. \\
853 | \mbox{} \\
854 | $P(A \cap B) = P(A) * P(B)$ \\
855 | \mbox{} \\
856 | Tossing two coins $A$ and $B$, what is the probability of getting two heads? \\
857 | The coins are independent of each other, so: \\
858 | \mbox{} \\
859 | $P(A \cap B) = 1 / 2 * 1 / 2 = 1 / 4 $ \\
860 | \mbox{} \\
861 | The defective rate in a production line is 2\%. \\
862 | What is the probability of having 3 defective products in a row?\\
863 | \mbox{} \\
864 | $P(A \cap B \cap C) = 2 / 100 * 2 / 100 * 2 / 100 = \frac{8}{100^3} = 1 / \text{125'000}$ \\
865 | \mbox{} \\
866 | \textbf{Dependent event probability}
867 | \mbox{} \\
868 | The probability of $A$ and $B$, given that $A$ has already occurred. \\
869 | \mbox{} \\
870 | $P(A \cap B) = P(A) * P(B | A)$ \\
871 | \mbox{} \\
872 | What's the probability of drawing two Kings in a row from a standard deck of cards? \\
873 |
874 | $P(A \cap B) = 4 / 52 * 3 / 51 = 1 / 13 * 1 / 17 = 1 / 221$
875 |
876 | \textbf{Reminder}
877 |
878 | If $P(B) = P(B | A)$, then the events must be independent.
879 |
880 | \subsection{Bayes Theorem}
881 | Bayes' theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event.
882 |
883 | $\displaystyle P(A\mid B)={\frac {P(B\mid A)P(A)}{P(B)}}$ \\
884 |
885 | where:
886 | \begin{itemize}
887 | \item $A$ and $B$ are events and $P(B) \neq 0 $.
888 | \item $P(A)$ is the probability of event $A$.
889 | \item $P(B)$ is the probability of event $B$.
890 | \item $P(A | B)$ is the probability of observing event $A$ if $B$ is true.
891 | \item $P(B | A)$ is the probability of observing event $B$ if $A$ is true.
892 | \end{itemize}
893 |
894 | \paragraph{Example 1}\mbox{} \\
895 | \mbox{} \\
896 | We have two assembly lines, 1 and 2. \\
897 | Line 1 has a defective rate of 3\%, Line 2 of 1\%. \\
898 | Given a defective part, what is the probability that it came from line 1? \\
899 | \mbox{} \\
900 | Let’s call:
901 | \begin{itemize}
902 | \item $P(B)$ the probability of a product being defective.
903 | \item $P(A)$ the probability of a product coming from line 1.
904 | \end{itemize}
905 | Hence: \\
906 | \mbox{} \\
907 | $P(A) = 1 / 2$ \\
908 | (with the available data, we must assume a part is equally likely, 50\%, to come from either line).
909 |
910 | $P(B | A)$ = probability of $B$ (defect) given that $A$ (the product comes from line 1) has occurred = $3 / 100$ (this is the info already provided by the context).
911 |
912 | $P(B)$ = overall probability of having a defective product = $[(1 / 2) * (3 / 100)] + [(1 / 2) * (1 / 100)] = 3 / 200 + 1/ 200 = 4 / 200 = 1 / 50$
913 |
914 | Applying Bayes Theorem, then:
915 |
916 | $P(A | B) = [(3 / 100) * (1 / 2)] / (1 / 50) = (3 / 200) / (1 / 50) = (3 / 200) * (50 / 1) = 150 / 200 = 3 / 4 = 75\% $
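
A small Python sketch of this computation (the probabilities are those of the example; the variable names are mine):

\begin{verbatim}
p_a = 0.5               # P(A): part comes from line 1 (assumed 50/50)
p_b_given_a = 0.03      # P(B|A): defect rate of line 1
p_b_given_not_a = 0.01  # defect rate of line 2

# Total probability of a defect, P(B)
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a  # 0.02

# Bayes' theorem
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # 0.75 (up to floating-point rounding)
\end{verbatim}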
917 |
918 | \paragraph{Example 2}\mbox{} \\
919 | \mbox{} \\
920 | You're tested for a disease that occurs in 1 out of 1’000 people. \\
921 | Test accuracy is 99\%. \\
922 | You test positive. \\
923 | What is the chance you actually have the disease? \\
924 | \mbox{} \\
925 | \begin{itemize}
926 | \item Population: $\text{1'000}$
927 | \item Incidence: $(1/\text{1’000}) = \text{0.001}$
928 | \item Accuracy: 99\%
929 | \item False positive / false negative rate: $(100\% - 99\%) = 1\%$
930 | \end{itemize}
931 |
932 | \begin{center}
933 | \begin{tabular}{|c|c|c|c|}
934 | \hline
935 | & SICK & NOT SICK & TOTAL \\ \hline
936 | &&&\\[-1em]
937 | TESTED POS & 0.99[1] & 9.99[3] & 10.98 \\ \hline
938 | &&&\\[-1em]
939 | TESTED NEG & 0.01[2] & 989.01[4] & 989.02 \\ \hline
940 | &&&\\[-1em]
941 | TOTAL & 1 & 999 & 1'000 \\
942 | \hline
943 | \end{tabular}
944 | \end{center}
945 |
946 | \mbox{} \\
947 |
948 | {[1]} = $\text{1’000} * 0.001 * 99\%$ \\
949 | {[2]} = $\text{1’000} * 0.001 * 1\%$ \\
950 | {[3]} = $(\text{1’000} * (1 - 0.001)) * 1\%$ \\
951 | {[4]} = $(\text{1’000} * (1 - 0.001)) * 99\%$ \\
952 |
953 | \mbox{} \\
954 |
955 | $P(A)$ = probability of being sick = $0.001$ \\
956 | $P(B)$ = probability of having a positive test = $10.98 / \text{1’000}$ \\
957 | $P(B|A)$ = probability of having a positive test given that you are sick = $0.99$ \\
958 | $P(A|B)$ = probability of being sick given a positive test = $0.99 / 10.98$ (or applying the Bayes formula) $\approx 9\%$ \\
959 |
960 | \subsubsection{Tree Diagrams}
961 | A tree diagram is a type of diagram that can be useful as an aid in computing probabilities. \\
962 | For example, consider an experiment of tossing a six-sided die. Each
963 | time the experiment is repeated, the probability of obtaining a 1 (event $A$) is $P(A) = 1 / 6$.
964 | If you are only concerned with whether the number is 1 or not 1, and the experiment is repeated three times, then eight different sequences of events are possible. \\
965 | The tree diagram below shows the probabilities of these eight sequences of events.
966 |
967 | \includegraphics[width=5cm, height=7cm]{tree_diagram}
968 |
969 | \subsection{Discrete Probability}
970 | Discrete probability deals with events with a finite or countable number of occurrences. \\
971 | This is in contrast to a continuous distribution, where outcomes can fall anywhere on a continuum.
972 |
973 | Common examples of discrete distribution include the binomial, Poisson, and Bernoulli distributions.
974 |
975 | \textbf{Example of discrete probability} \\
976 | What is the probability distribution of the number of heads in 3 coin flips?
977 | \begin{itemize}
978 | \item Number of outcomes per flip: 2 (heads or tails).
979 | \item Number of events: 3 flips.
980 | \item Total number of combinations: $2^3 = 8$
981 | \end{itemize}
982 |
983 | Possible outcomes: \\
984 | \begin{center}
985 | \begin{tabular}{|c|c|}
986 | \hline
987 | EVENT & N. OF HEADS \\ \hline
988 | &\\[-1em]
989 | HHH & 3 \\ \hline
990 | &\\[-1em]
991 | THH & 2 \\ \hline
992 | &\\[-1em]
993 | HTH & 2 \\ \hline
994 | &\\[-1em]
995 | TTH & 1 \\ \hline
996 | &\\[-1em]
997 | HHT & 1 \\ \hline
998 | &\\[-1em]
999 | THT & 1 \\ \hline
1000 | &\\[-1em]
1001 | HTT & 1 \\ \hline
1002 | &\\[-1em]
1003 | TTT & 0 \\
1004 | \hline
1005 | \end{tabular}
1006 | \end{center}
1007 |
1008 | \begin{center}
1009 | \begin{tabular}{|c|c|}
1010 | \hline
1011 | \makecell {HEADS IN 3 \\ COIN FLIPS (X)} & P(X)\\ \hline
1012 | &\\[-1em]
1013 | 0 & 1/8 \\ \hline
1014 | &\\[-1em]
1015 | 1 & 3/8 \\ \hline
1016 | &\\[-1em]
1017 | 2 & 3/8 \\ \hline
1018 | &\\[-1em]
1019 | 3 & 1/8 \\
1020 | \hline
1021 | \end{tabular}
1022 | \end{center}
1023 |
1024 | $E(X) = 0 \cdot 1/8 + 1 \cdot 3/8 + 2 \cdot 3/8 + 3 \cdot 1/8 = 12 / 8 = 3 / 2 = 1.5$ \\
1025 | \mbox{}\\
1026 | Mean = $3 / 2$ \\
1027 | \mbox{}\\
1028 | Variance = $\sigma^2 = (0 - 3/2)^2 \cdot 1/8 + (1 - 3/2)^2 \cdot 3/8 + (2 - 3/2)^2 \cdot 3/8 + (3 - 3/2)^2 \cdot 1/8 = 0.75$ \\
1029 | \mbox{}\\
1030 | Standard Deviation = $\sigma = \sqrt{\sigma^2} = 0.866$ \\
1031 | \mbox{}\\
1032 | Note that, in practical terms, a fractional value does not make sense as a single outcome of a discrete experiment (we can't observe half a coin flip or 1.5 heads as a result; this is a case of theoretical probability vs experimental probability).
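
The same expectation, variance and standard deviation can be reproduced with a short Python sketch:

\begin{verbatim}
# Number of heads in 3 coin flips: X takes values 0..3
values = [0, 1, 2, 3]
probs  = [1/8, 3/8, 3/8, 1/8]

mean = sum(x * p for x, p in zip(values, probs))                    # 1.5
variance = sum((x - mean) ** 2 * p for x, p in zip(values, probs))  # 0.75
std_dev = variance ** 0.5                                           # ~0.866
print(mean, variance, std_dev)
\end{verbatim}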
1033 |
1034 | \subsubsection{Transforming Random Variables}
1035 | How does the distribution of a random variable change when the variable is transformed in a deterministic way?
1036 |
1037 | \paragraph{Shifting Data}\mbox{} \\
1038 | \mbox{} \\
1039 | Shifting data means adding a constant $k \in \mathbb{R}$ to each member of a dataset.
1040 | \begin{center}
1041 | \begin{tabular}{|c|c|}
1042 | \hline
1043 | X & Y\\ \hline
1044 | &\\[-1em]
1045 | 3 & 3 + K \\ \hline
1046 | &\\[-1em]
1047 | 3 & 3 + K \\ \hline
1048 | &\\[-1em]
1049 | 7 & 7 + K \\ \hline
1050 | &\\[-1em]
1051 | 10 & 10 + K \\ \hline
1052 | &\\[-1em]
1053 | 12 & 12 + K \\
1054 | \hline
1055 | \end{tabular}
1056 | \end{center}
1057 |
1058 | \begin{center}
1059 | \begin{tabular}{|c|c|}
1060 | \hline
1061 | Dataset & Shifted, K\\ \hline
1062 | &\\[-1em]
1063 | Mean: 6 & Mean: 6 + K \\ \hline
1064 | &\\[-1em]
1065 | Median: 3 & Median: 3 + K \\ \hline
1066 | &\\[-1em]
1067 | Mode: 3 & Mode: 3 + K \\ \hline
1068 | &\\[-1em]
1069 | Range: 10 & Range: 10 \\ \hline
1070 | &\\[-1em]
1071 | IQR: 8 & IQR: 8 \\ \hline
1072 | &\\[-1em]
1073 | St. Dev: $\sigma$ & St. Dev: $\sigma$ \\
1074 | \hline
1075 | \end{tabular}
1076 | \end{center}
1077 | \textbf{Impact of K-shifting:} while mean, median and mode change by the additive constant $K$, range, IQR and standard deviation stay the same, since the shape of the data (and, more precisely, the distance between data points) does not change.
1078 |
1079 | \includegraphics[width=3cm, height=3cm]{shifted}
1080 |
1081 | \paragraph{Scaling Data}\mbox{} \\
1082 | \mbox{} \\
1083 | Scaling data means multiplying each member of a dataset by a constant $k$.
1084 | \begin{center}
1085 | \begin{tabular}{|c|c|}
1086 | \hline
1087 | Dataset & Scaled, K\\ \hline
1088 | &\\[-1em]
1089 | Mean: 6 & Mean: 6K \\ \hline
1090 | &\\[-1em]
1091 | Median: 3 & Median: 3K \\ \hline
1092 | &\\[-1em]
1093 | Mode: 3 & Mode: 3K \\ \hline
1094 | &\\[-1em]
1095 | Range: 10 & Range: 10K \\ \hline
1096 | &\\[-1em]
1097 | IQR: 8 & IQR: 8K \\ \hline
1098 | &\\[-1em]
1099 | St. Dev: $\sigma$ & St. Dev: $\sigma$ K \\
1100 | \hline
1101 | \end{tabular}
1102 | \end{center}
1103 | \textbf{Impact of K-scaling:} all the measures are multiplied by the factor $K$.
1104 | Both rules apply together in a mixed case such as
1105 | $Y = 10X - 2$: the full transformation ($\times 10$, then $-2$) applies to mean, median and mode, while only the scale factor ($\times 10$) applies to standard deviation, IQR and range.
1106 |
1107 | \includegraphics[width=3cm, height=3cm]{scaled}
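
A minimal Python sketch of both effects (the dataset and the constant $k$ are illustrative):

\begin{verbatim}
import statistics

data = [3, 3, 7, 10, 12]
k = 5

shifted = [x + k for x in data]
scaled  = [x * k for x in data]

# Shifting moves the centre but leaves the spread unchanged.
print(statistics.mean(data), statistics.mean(shifted))      # mean vs mean + k
print(statistics.pstdev(data), statistics.pstdev(shifted))  # identical spread

# Scaling multiplies both the centre and the spread by k.
print(statistics.mean(scaled))    # mean * k
print(statistics.pstdev(scaled))  # st. dev. * k
\end{verbatim}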
1108 |
1109 | \subsubsection{Linear Combinations of Random Variables}
1110 | Let $X_1$ and $X_2$ be two independent random variables. Let $a$ and $b$ be scalars. Then a linear combination of the variables $X_1$ and $X_2$ is defined to be any other random variable of the form $Y= aX_1 + bX_2$.
1111 |
1112 | Base assumptions:
1113 | \begin{itemize}
1114 | \item Variables must be independent.
1115 | \item Variables must have matching units of measurement.
1116 | \end{itemize}
1117 |
1118 | \begin{center}
1119 | \begin{tabular}{|c|c|c|}
1120 | \hline
1121 | & $\sum$ & $\Delta$ \\ \hline
1122 | &&\\[-1em]
1123 | Combination & $\sum = X + Y $ & $\Delta = X - Y$ \\ \hline
1124 | &&\\[-1em]
1125 | Mean & $\mu = \mu_X + \mu_Y $ & $\mu = \mu_X - \mu_Y$ \\ \hline
1126 | &&\\[-1em]
1127 | Variance & $\sigma^2 = \sigma^2_X + \sigma^2_Y$ & $\sigma^2 = \sigma^2_X + \sigma^2_Y$ \\
1128 | \hline
1129 | \end{tabular}
1130 | \end{center}
1131 |
1132 | \textbf{Example:}
1133 | Time in hours that 4 managers spend managing timesheets: \\
1134 | $X$ = [1, 2, 2, 3] \\
1135 | \mbox{} \\
1136 | Time in hours that 4 HR staff spend managing payrolls on those timesheets: \\
1137 | $Y$ = [2, 3, 5, 6] \\
1138 | \mbox{} \\
1139 |
1140 | $\mu_X$ = (1 + 2 + 2 + 3) / 4 = 2 \\
1141 | \mbox{} \\
1142 | $\mu_Y$ = (2 + 3 + 5 + 6) / 4 = 4 \\
1143 | \mbox{} \\
1144 | $\sigma^2_X$ = $[(1 - 2)^2 + (2 - 2)^2 + (2 - 2)^2 + (3 - 2)^2 ]$ / 4 = 0.5 \\
1145 | \mbox{} \\
1146 | $\sigma_X$ = $\sqrt{0.5}$ = 0.71 \\
1147 | \mbox{} \\
1148 | $\sigma^2_Y$ = $[(2 - 4)^2 + (3 - 4)^2 + (5 - 4)^2 + (6 - 4)^2]$ / 4 = 2.5 \\
1149 | \mbox{} \\
1150 | $\sigma_Y$ = $\sqrt{2.5}$ = 1.58 \\
1151 | \mbox{} \\
1152 | $X + Y$ \\
1153 | $\mu$ = 2 + 4 = 6 \\
1154 | \mbox{} \\
1155 | $\sigma^2$ = 0.5 + 2.5 = 3 \\
1156 | \mbox{} \\
1157 | $\sigma$ = $\sqrt{3}$ = 1.73 \\
1158 | \mbox{} \\
1159 | $X - Y$ \\
1160 | $\mu$ = 2 - 4 = -2 \\
1161 | \mbox{} \\
1162 | $\sigma^2$ = 0.5 + 2.5 = 3 \\
1163 | \mbox{} \\
1164 | $\sigma$ = $\sqrt{3}$ = 1.73 \\
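
The same example, verified with a short Python sketch:

\begin{verbatim}
import statistics

x = [1, 2, 2, 3]  # managers' hours
y = [2, 3, 5, 6]  # HR hours

mu_x, mu_y = statistics.mean(x), statistics.mean(y)              # 2, 4
var_x, var_y = statistics.pvariance(x), statistics.pvariance(y)  # 0.5, 2.5

# For independent variables, variances add for both the sum and the difference.
print(mu_x + mu_y, var_x + var_y)  # 6 3.0
print(mu_x - mu_y, var_x + var_y)  # -2 3.0
print((var_x + var_y) ** 0.5)      # ~1.73
\end{verbatim}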
1165 |
1166 | \subsubsection{Fair Game}
1167 | A fair game\footnote{Italian translation: gioco equo} is a game (a bet, for example, or a lottery) in which the cost of playing the game equals the expected winnings of the game, so that the net value of the game equals zero.
1168 |
1169 | $W = B/p$, or equivalently $W \cdot p / B = 1$
1170 | \mbox{} \\
1171 | where:
1172 | \begin{itemize}
1173 | \item $W$ = win.
1174 | \item $B$ = Bet.
1175 | \item $p$ = probability
1176 | \end{itemize}
1177 |
1178 |
1179 |
1180 |
1181 | \textbf{Is the state lottery a fair game?}\\
1182 | The Italian national lottery (“Lotto”) is a draw game where 5 numbers are drawn out of a pool of 90. Repetition is not allowed (extracted numbers are discarded)\footnote{\url{https://www.lotto-italia.it/lotto/come-dove-giocare/il-gioco/premi-del-lotto}}.
1183 |
1184 | Players win if they have all the 5 numbers on the card, in no specific order (there are also some minor prizes such as “ambo” for two numbers, “terno” for three numbers, and so on, but the logic is the same).
1185 |
1186 | A five (matching all 5 numbers) pays 6’000’000 times the bet.
1187 |
1188 | Is this fair? Intuitively, it's not.
1189 |
1190 | But we are here not to assume but to investigate.
1191 |
1192 | All the combinations\footnote{see Combinatorics chapter for details} of 5 numbers are $ \displaystyle C _n ^k = \binom{n}{k}$ hence:
1193 |
1194 | $\frac{90!}{5!(90 - 5)!}$ = 43’949’268
1195 |
1196 | This matches the official lottery documentation.
1197 |
1198 | The probability of winning is then 1/43’949’268 = 0.00000228\%
1199 |
1200 | Fair game will assume:
1201 |
1202 | $W = B/p$
1203 |
1204 | But we have:
1205 | \begin{itemize}
1206 | \item W = 6’000’000
1207 | \item B = 1
1208 | \item p = 0.00000228\%
1209 | \end{itemize}
1210 |
1211 | $B/p$ = 1 / 0.00000228\% = 43’949’268 (the payout that would make the game fair)
1212 |
1213 | How unfair is that game?
1214 |
1215 | 43’949’268 / 6’000’000 = 7.32
1216 |
1217 | The Italian Lotto falls short of a fair game by a factor of more than 7, to the player's disadvantage.
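
The whole argument fits in a few lines of Python (a sketch; the payout figure is the one quoted above):

\begin{verbatim}
import math

combos = math.comb(90, 5)      # 43949268 possible five-number draws
p_win = 1 / combos             # probability of matching all 5 numbers

fair_payout = 1 / p_win        # payout that would make a 1-unit bet fair
actual_payout = 6_000_000

print(combos)                       # 43949268
print(fair_payout / actual_payout)  # ~7.32: how far the game is from fair
\end{verbatim}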
1218 | \clearpage
1219 |
1220 | \section{Joint Distributions}
1221 | Joint distributions allow us to mathematically quantify the relationship between two distributions of data.
1222 |
1223 | Given two random variables that are defined on the same probability space, the joint probability distribution is the corresponding probability distribution on all possible pairs of outputs.
1224 |
1225 | The joint distribution can just as well be considered for any given number of random variables.
1226 |
1227 | The joint distribution encodes the marginal distributions, i.e. the distributions of each of the individual random variables. It also encodes the conditional probability distributions, which deal with how the outputs of one random variable are distributed when given information on the outputs of the other random variable(s).
1228 |
1229 | \subsection{Covariance}
1230 | Covariance is a numerical value that provides a measure of how much two variables vary together.
1231 |
1232 | It evaluates how the variables change together, providing the \textbf{direction} of the variation.
1233 | It's a measure of the joint variability of two variables. However, the metric does not assess the strength of the relationship between them.
1234 |
1235 | A \textbf{positive covariance} means that it’s expected both variables have a concordant behavior ($X$ grows, $Y$ grows, $X$ decreases, $Y$ decreases).
1236 |
1237 | A \textbf{negative covariance} means that it’s expected both variables have a discordant behavior
1238 | ($X$ grows, $Y$ decreases, $X$ decreases, $Y$ grows).
1239 |
1240 | A \textbf{neutral covariance} (close to zero) means that the variables show no linear relation to each other.
1241 |
1242 | Expressed in mathematical notation, covariance formulas are:
1243 |
1244 | \textbf{Population Covariance}: \\
1245 | $\displaystyle \operatorname{cov}(X, Y) = \frac{1}{N}\sum \limits_{i=1}^N (X_i - \mu_X)(Y_i - \mu_Y)$
1246 |
1247 | \textbf{Sample Covariance}: \\
1248 | $\displaystyle \operatorname{cov}(x, y) = \frac{1}{n-1}\sum \limits_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})$
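
A minimal Python sketch (illustrative data; \texttt{statistics.covariance} requires Python 3.10+):

\begin{verbatim}
import statistics

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

# Sample covariance from the standard library (divides by n - 1)
print(statistics.covariance(x, y))  # 1.5

# The same value computed explicitly from the formula above
mx, my = statistics.mean(x), statistics.mean(y)
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
print(cov)                          # 1.5
\end{verbatim}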
1249 |
1250 | \subsection{Correlation}
1251 | Correlation is a metric used to measure the \textbf{strength} of a statistical relationship between two random variables. It’s also called a \textbf{measure} of relationship.
1252 |
1253 | The correlation coefficient is a dimensionless metric and its value ranges from -1 to +1.
1254 |
1255 | The closer it is to +1 or -1, the more closely the two variables are related.
1256 | If there is no relationship at all between two variables, then the correlation coefficient will be close to 0.
1257 |
1258 | However, if it is 0 then we can only say that there is no linear relationship. There could exist other functional relationships between the variables.
1259 |
1260 | \begin{itemize}
1261 | \item +1: positive correlation ($X$ grows, $Y$ grows, $X$ decreases, $Y$ decreases).
1262 | \item -1: negative correlation ($X$ grows, $Y$ decreases, $X$ decreases, $Y$ grows).
1263 | \end{itemize}
1264 |
1265 | While Covariance measures just the direction of variation between two variables, Correlation explores the strength and relation of the variation, in a standardized and comparable format.
1266 |
1267 | In statistical applications, three kinds of correlation are commonly applied:
1268 |
1269 | \begin{itemize}
1270 | \item Pearson (Parametric method).
1271 | \item Spearman (Nonparametric method).
1272 | \item Kendall (Nonparametric method).
1273 | \end{itemize}
1274 |
1275 | \subsubsection{Pearson Correlation}
1276 | Pearson correlation is the most widely used correlation statistic to measure the degree of the relationship between linearly related variables.
1277 |
1278 | For the Pearson correlation, both variables should be normally distributed (normally distributed variables have a bell-shaped curve).
1279 |
1280 | Pearson correlation assumptions:
1281 | \begin{itemize}
1282 | \item Each observation should have a pair of values.
1283 | \item Each variable should be continuous.
1284 | \item The data have no outliers.
1285 | \item The variables are linearly related.
1286 | \item The variables are homoscedastic (homogeneity of variance).
1287 | \end{itemize}
1288 |
1289 | \textbf{For a population}\\
1290 | Pearson correlation is expressed with the Greek letter $\rho$ (rho) when referred to population.
1291 |
1292 | Given a pair of random variables $(X,Y)$, the formula for $\rho$ is:
1293 |
1294 | $\displaystyle \rho _{X,Y}={\frac{cov(X,Y)}{\sigma _{X}\sigma _{Y}}}$
1295 |
1296 | where:
1297 | \begin{itemize}
1298 | \item $\displaystyle {cov}$ is the covariance.
1299 | \item $\displaystyle \sigma _{X}$ is the standard deviation of $X$.
1300 | \item $\displaystyle \sigma _{Y}$ is the standard deviation of $Y$.
1301 | \end{itemize}
1302 |
1303 | \textbf{For a sample}\\
1304 | Pearson's correlation coefficient, when applied to a sample, is commonly represented by $r_{xy}$ and may be referred to as the sample correlation coefficient or the sample Pearson correlation coefficient.
1305 | We can obtain a formula for $r_{xy}$ by substituting estimates of the covariances and variances based on a sample into the formula above.
1306 |
1307 | Given paired data $\displaystyle \{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\}$ consisting of $n$ pairs, $r_{xy}$ is defined as:
1308 |
1309 | $\displaystyle r_{xy}={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{{\sqrt {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}{\sqrt {\sum _{i=1}^{n}(y_{i}-{\bar {y}})^{2}}}}}$
1310 |
1311 | where:
1312 | \begin{itemize}
1313 | \item $n$ is sample size.
1314 | \item $x_{i},y_{i}$ are the individual sample points indexed with $i$.
1315 | \item ${\textstyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}}$ (the sample mean); and analogously for $\bar {y}$.
1316 | \end{itemize}
1317 |
1318 | Rearranging gives us this formula for $\displaystyle r_{xy}$:
1319 |
1320 | ${\displaystyle r_{xy}={\frac {n\sum x_{i}y_{i}-\sum x_{i}\sum y_{i}}{{\sqrt {n\sum x_{i}^{2}-\left(\sum x_{i}\right)^{2}}}~{\sqrt {n\sum y_{i}^{2}-\left(\sum y_{i}\right)^{2}}}}}}$
1321 |
1322 | where $n,x_{i},y_{i}$ are defined as above.
1323 |
1324 | This formula suggests a convenient single-pass algorithm for calculating sample correlations.
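
Both formulas can be checked with a short Python sketch (illustrative data; \texttt{statistics.correlation} requires Python 3.10+):

\begin{verbatim}
import statistics

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

# Pearson's r from the standard library
print(statistics.correlation(x, y))  # ~0.7746

# The same value from the rearranged single-pass formula above
n = len(x)
num = n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)
den_x = (n * sum(a * a for a in x) - sum(x) ** 2) ** 0.5
den_y = (n * sum(b * b for b in y) - sum(y) ** 2) ** 0.5
print(num / (den_x * den_y))         # ~0.7746
\end{verbatim}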
1325 |
1326 | \subsubsection{Kendall Rank Correlation}
1327 | Kendall rank correlation is a non-parametric test that measures the strength of dependence between two quantitative or qualitative ordinal statistical variables.
1328 |
1329 | Kendall rank correlation is expressed with the Greek letter $\tau$ (tau):
1330 |
1331 | $\displaystyle \tau = \frac{n_c - n_d}{n_c + n_d} = \frac{n_c - n_d}{n(n - 1)/2}$
1332 |
1333 | where:
1334 | \begin{itemize}
1335 | \item $n_c$ is the number of concordant pairs.
1336 | \item $n_d$ is the number of discordant pairs.
1337 | \item $n$ is the number of pairs.
1338 | \end{itemize}
1339 |
1340 | Kendall rank assumptions:
1341 | \begin{itemize}
1342 | \item Pairs of observations are independent.
1343 | \item Two variables should be measured on an ordinal, interval or ratio scale.
1344 | \item Monotonic relationship between the two variables.
1345 | \end{itemize}
1346 |
1347 | \subsubsection{Spearman Rank Correlation}
1348 | Spearman rank is a non-parametric measure of rank correlation (a measure of statistical dependence between the rankings of two variables).
1349 | Spearman rank is also defined as the Pearson correlation between the rank variables.
1350 |
1351 | Spearman rank correlation is denoted with the same symbols as Pearson's, $\rho$ or $r_s$.
1352 |
1353 | For a sample of size $n$, the $n$ raw scores $ X_{i},Y_{i}$ are converted to ranks ${R} ({X_{i}}), {R} ({Y_{i}})$, and $ r_{s}$ is computed as:
1354 |
1355 | $\displaystyle r_{s}=\rho _{{R} (X), {R} (Y)}={\frac { {cov} ( {R} (X),{R} (Y))}{\sigma _{ {R} (X)}\sigma _{{R} (Y)}}}$
1356 |
1357 | where:
1358 | \begin{itemize}
1359 | \item $\rho$ denotes the usual Pearson correlation coefficient, but applied to the rank variables.
1360 | \item ${cov} ( {R} (X),{R} (Y))$ is the covariance of the rank variables.
1361 | \item $\sigma _{ {R} (X)}, \sigma _{{R} (Y)}$ are the standard deviations of the rank variables.
1362 | \end{itemize}
1363 |
1364 | \mbox{}\\
1365 |
1366 | Spearman rank assumptions:
1367 | \begin{itemize}
1368 | \item Pairs of observations are independent.
1369 | \item Two variables should be measured on an ordinal, interval or ratio scale.
1370 | \item Monotonic relationship between the two variables.
1371 | \end{itemize}
1372 |
1373 | \subsubsection{Point-biserial Correlation coefficient}
1374 | The Point-Biserial correlation coefficient is used when one of the given variables is dichotomous (such as “heads or tails” on a coin flip).
1375 |
1376 | Point-biserial correlation coefficient is expressed with $r_{pb}$.
1377 |
1378 | \clearpage
1379 |
1380 | \section{Data Distributions}
1381 | In statistics, and specifically in the field of descriptive statistics, a distribution is a representation of how the different values of a variable are distributed across the statistical units that make up the collective under study.
1382 |
1383 | \subsection{Probability Mass Function (PMF)}
1384 | Probability mass function is a function that gives the probability that a discrete random variable is exactly equal to some value.
1385 |
1386 | $f(x) = P[X = x]$
1387 |
1388 | where $X$ is the \textbf{discrete random variable} and $x$ is the \textbf{target value}.
1389 |
1390 | \textbf{Example}:\\
1391 |
1392 | What is the probability of picking a specific ball out of a jar with 100 balls, all the balls being equal?
1393 | It’s 1/100, or 1\%.
1394 |
1395 | \paragraph{PMF Visualization}\mbox{} \\
1396 | \mbox{} \\
1397 |
1398 | \includegraphics[width=3cm, height=3cm]{pmf}
1399 |
1400 | \textbf{Discrete Uniform Distribution} \\
1401 | Discrete uniform distribution refers to discrete events where all the events have an equal chance of occurring (such as dice rolls).
1402 |
1403 | \includegraphics[width=3cm, height=3cm]{dud}
1404 |
1405 | \paragraph{PMF Overview}\mbox{} \\
1406 |
1407 | \begin{itemize}
1408 | \item Notation: $\displaystyle {\mathcal{U}}\{a,b\}$ or $\displaystyle \text{unif} \{a,b\}$
1409 | \item Parameters: $\displaystyle a,b$ integers with $\displaystyle b\geq a$, $\displaystyle n=b-a+1$
1410 | \item Support: $\displaystyle k\in \{a,a+1,\dots ,b-1,b\}$
1411 | \item PMF: $\displaystyle {\frac {1}{n}}$
1412 | \item CDF: $\displaystyle {\frac {\lfloor k\rfloor -a+1}{n}}$
1413 | \item Mean: $\displaystyle {\frac {a+b}{2}}$
1414 | \item Median: $\displaystyle {\frac {a+b}{2}}$
1415 | \item Mode: N/A
1416 | \item Variance: $\displaystyle {\frac {n^{2}-1}{12}}$
1417 | \end{itemize}
1418 |
1419 | \subsection{Probability Density Function (PDF)}
1420 |
1421 | A probability density function (PDF), or density of a \textbf{continuous random variable}, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be close to that sample.
1422 |
1423 | Density functions require switching from exact outcomes (as with discrete variables) to approximations, i.e. interval ranges, since within a continuous interval the variable can take uncountably many values and the probability of any single exact value is zero.
1424 |
1425 | $\displaystyle Pr[a\leq X\leq b]=\int _{a}^{b}f_{X}(x)dx$
1426 |
1427 | with:
1428 | \begin{itemize}
1429 | \item $P(x=c)=0$ (the probability $P$ of $x$ to be equal to a constant $c$ is zero).
1430 | \end{itemize}
1431 |
1432 | More technically, in a continuous distribution (e.g. continuous uniform, normal, and others), the probability is calculated by integration, as an area under the probability density function.
1433 |
1434 | For $f(x)$ to be a legitimate PDF, it must satisfy the following two conditions:
1435 | \begin{itemize}
1436 | \item $f(x) \geq 0$, $\forall x \in \mathbb{R}$
1437 | \item $\displaystyle \int _{-\infty }^{\infty } f(x)dx=1$
1438 | \end{itemize}
1439 | If a random variable $X$ is given and its distribution admits a probability density function $f$, then the expected value of $X$ (if the expected value exists) can be calculated as:
1440 |
1441 | $\displaystyle {E} [X]=\int _{-\infty }^{\infty }xf(x)dx$
1442 |
1443 | Not every probability distribution has a density function: the distributions of discrete random variables do not, for example.
1444 |
1445 | Note also that within a PDF, the probability of having a specific, exact value (such as in PMF) is always equal to zero. The expectation is for an approximation $(a < X < b)$ and not for an equality $(X = a)$.
1446 |
1447 | There are many kinds of probability density functions (actually more than 100, going from the very common normal, Pareto, and uniform distributions to less common ones such as the Pólya-Gamma).
1448 |
1449 | \paragraph{PDF Visualization}\mbox{} \\
1450 | \mbox{} \\
1451 |
1452 | \includegraphics[width=3cm, height=3cm]{pdf}
1453 |
1454 | \textbf{Continuous Uniform Distribution} \\
1455 | In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions. The distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds.
1456 |
1457 | The bounds are defined by the parameters, $a$ and $b$, which are the minimum and maximum values. The interval can either be closed (ex. [$a$, $b$]) or open (ex. ($a$, $b$)).
1458 | Therefore, the distribution is often abbreviated $U (a, b)$, where $U$ stands for uniform distribution.
1459 |
1460 | \includegraphics[width=3cm, height=3cm]{cud}
1461 |
1462 | \paragraph{CUD Overview}\mbox{} \\
1463 |
1464 | \begin{itemize}
1465 | \item Notation: $\displaystyle {\mathcal {U}}_{[a,b]}$
1466 | \item Parameters: $\displaystyle -\infty <a<b<\infty $
1467 | \item Support: $\displaystyle [a,b]$
1468 | \item PDF: $\displaystyle {\begin{cases}{\frac {1}{b-a}}&{\text{for }}x\in [a,b]\\0&{\text{otherwise}}\end{cases}}$
1469 | \item CDF: $\displaystyle {\begin{cases}0&{\text{for }}x<a\\{\frac {x-a}{b-a}}&{\text{for }}x\in [a,b]\\1&{\text{for }}x>b\end{cases}}$
1470 | \item Mean: $\displaystyle {\tfrac {1}{2}}(a+b)$
1471 | \item Median: $\displaystyle {\tfrac {1}{2}}(a+b)$
1472 | \item Mode: any value in $\displaystyle (a,b)$
1473 | \item Variance: $\displaystyle {\tfrac {1}{12}}(b-a)^{2}$
1474 | \end{itemize}
1475 |
1476 | \subsection{Cumulative Distribution Functions (CDF)}
1477 | The cumulative distribution function (CDF) is the probability that the variable $X$ takes a value less than or equal to $x$.
1478 |
1479 | $F(x) = P(X \leq x)$, $\forall x \in \mathbb{R}$
1480 |
1481 | CDF expresses the cumulative probability of a given event.
1482 |
1483 | \textbf{Discrete Cumulative Distribution Function} \\
1484 | A cumulative distribution function defined over a discrete set of outcomes.
1485 |
1486 | As an example, rolling a standard die:
1487 | \begin{itemize}
1488 | \item the CDF evaluated below 1 (e.g. $P(X \leq 0)$) is 0.
1489 | \item the CDF of a number less than or equal to 6 is 1 (6/6).
1490 | \item the CDF of a number less than or equal to 3 is 3/6.
1491 | \end{itemize}
1492 |
1493 | \includegraphics[width=3cm, height=3cm]{discrete_cdf}
1494 |
1495 | \textbf{Continuous Cumulative Distribution Function} \\
1496 | The cumulative distribution function (CDF) of a continuous random variable is derived from its probability density function. It gives the probability of finding the random variable at a value less than or equal to a given cutoff. Many questions and computations about probability distributions are conveniently rephrased or performed in terms of CDFs, e.g. computing the PDF of a function of a random variable.
1497 |
1498 | \includegraphics[width=3cm, height=3cm]{continuos_cdf}
1499 |
1500 | \subsection{Hypergeometric Distribution}
1501 | The hypergeometric distribution is a discrete probability distribution that describes the probability of $k$ successes (random draws for which the object drawn has a specified feature) in $n$ draws, without replacement, from a finite population of size $N$ that contains exactly
1502 | $K$ objects with that feature, wherein each draw is either a success or a failure.
1503 |
1504 | In contrast, the binomial distribution describes the probability of $k$ successes in $n$ draws with replacement.
1505 |
1506 | The following conditions characterize the hypergeometric distribution:
1507 | \begin{itemize}
1508 | \item The result of each draw (the elements of the population being sampled) can be classified into one of two mutually exclusive categories (e.g. Pass/Fail or Employed/Unemployed).
1509 | \item The probability of a success changes on each draw, as each draw decreases the population (sampling without replacement from a finite population).
1510 | \end{itemize}
1513 |
1514 | A random variable $X$ follows the hypergeometric distribution if its probability mass function (pmf) is given by:
1515 |
1516 | $ \displaystyle p_{X}(k)=\Pr(X=k)={\frac {{\binom {K}{k}}{\binom {N-K}{n-k}}}{\binom {N}{n}}} $
1517 |
1518 | where
1519 | \begin{itemize}
1520 | \item $N$ is the population size.
1521 | \item $K$ is the number of success states in the population.
1522 | \item $n$ is the number of draws (i.e. quantity drawn in each trial).
1523 | \item $k$ is the number of observed successes.
1524 | \item $\genfrac{(}{)}{0pt}{}{a}{b}$ is a binomial coefficient.
1525 | \end{itemize}
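
As a minimal sketch of the formula, using only Python's standard library (the deck-of-cards numbers below are an illustrative assumption, not taken from the text):
\begin{verbatim}
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k): k successes in n draws without replacement from a
    population of size N containing K success states."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# e.g. probability of exactly 2 aces in a 5-card hand from a 52-card deck
print(round(hypergeom_pmf(k=2, N=52, K=4, n=5), 4))  # ~0.0399
\end{verbatim}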
1526 |
1527 | \subsection{Binomial Distribution}
1528 | The binomial distribution expresses the discrete probability distribution of the number of successes in an experiment that is repeated multiple times, each repetition having only two possible outcomes: positive or negative.
1529 |
1530 | The name binomial refers to there being only two possible outcomes for each trial, positive or negative.
1531 |
1532 | Binomial distribution is expressed as $B(n,p)$, with $n$ being trials and $p$ being the probability of success.
1533 |
1534 | $ \displaystyle f(k,n,p)=\Pr(k;n,p)=\Pr(X=k)={\binom {n}{k}}p^{k}(1-p)^{n-k}$
1535 |
1536 | for $k = 0, 1, 2, \ldots, n$ where, as already explained:
1537 |
1538 | $ \displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}}$
1539 |
1540 | Binomial distribution should respect the following criteria:
1541 | \begin{itemize}
1542 | \item Each outcome is binary (positive or negative, 1 or 0, + or -, etc.).
1543 | \item Each event is independent.
1544 | \item The number of trials $n$ is fixed.
1545 | \item The success/failure probability $p$ is constant across trials.
1546 | \end{itemize}
1547 |
1548 | \paragraph{Binomial Distribution Overview}\mbox{} \\
1549 |
1550 | \begin{itemize}
1551 | \item Notation: $B(n,p)$
1552 | \item Parameters:
1553 | \begin{itemize}
1554 | \item $ \displaystyle n\in \{0,1,2,\ldots \} $ – number of trials
1555 | \item $ \displaystyle p\in [0,1] $ – success probability for each trial
1556 | \item $ {\displaystyle q=1-p} $
1557 | \end{itemize}
1558 | \item Support: $ \displaystyle k\in \{0,1,\ldots ,n\}$ – number of successes
1559 | \item PMF: $ \displaystyle {\binom {n}{k}}p^{k}q^{n-k} $
1560 | \item CDF: $ \displaystyle I_{q}(n-k,1+k) $ (the regularized incomplete beta function)
1561 | \item Mean: $np$
1562 | \item Median: $\lfloor np \rfloor$ or $\lceil np \rceil $
1563 | \item Mode: $\lfloor (n+1)p \rfloor$ or $\lceil (n+1)p \rceil-1 $
1564 | \item Variance: $npq$
1565 | \end{itemize}
1566 |
1567 | \textbf{Example of a binomial distribution:} \\
1568 |
1569 | According to a report, 80\% of prospects at company $\alpha$ will result in a signed contract. Each prospect is independent of the others.
1570 |
1571 | We are asked to calculate the probability of closing exactly one deal out of a round of 3 prospects the company is working on.
1572 |
1573 | Hence:
1574 | \begin{itemize}
1575 | \item $n$ = Number of trials = 3
1576 | \item Number of outcomes = binary (positive or negative) (contract or no contract).
1577 | \item $p$ = Probability of success = 0.8
1578 | \item Trials are independent = yes.
1579 | \item $k$ = Target result = 1 contract closed = 1
1580 | \end{itemize}
1581 |
1582 | $ \displaystyle {\binom {n}{k}} = {\binom {3}{1}} = \frac{3!}{1!(3 - 1)!} = 3$
1583 |
1584 | and then $P(X = 1) = 3 \cdot 0.8^{1}(1 - 0.8)^{3-1} = 0.096$
1585 |
1586 | Calculating the value for each $k \in \{0, 1, 2, 3\}$, that is $P(X=0), P(X=1), P(X=2), P(X=3)$, we can plot a chart of the distribution.
1587 |
1588 |
1589 | \includegraphics[width=3cm, height=3cm]{excel_chart_1}
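
The same calculation can be sketched in Python with the standard library, reproducing the example above ($n = 3$, $p = 0.8$) for every value of $k$:
\begin{verbatim}
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ B(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 3, 0.8
for k in range(n + 1):                       # P(3,0), P(3,1), P(3,2), P(3,3)
    print(k, round(binom_pmf(k, n, p), 3))   # k = 1 gives 0.096, as above
\end{verbatim}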
1590 |
1591 | \subsection{Bernoulli Distribution}
1592 | The Bernoulli distribution is the discrete probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 - p$.
1593 |
1594 | Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes-no (boolean) question.
1595 |
1596 | Such questions lead to outcomes that are boolean-valued: a single bit whose value is success, yes, true, 1 with probability $p$ and failure, no, false, 0 with probability $q = 1 - p$.
1597 |
1598 | The Bernoulli distribution can be seen as a specific case of the binomial distribution, where:
1599 | \begin{itemize}
1600 | \item Binomial distribution: $n$ trials.
1601 | \item Bernoulli distribution: one trial.
1602 | \end{itemize}
1603 |
1604 | It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent “heads” and “tails”, respectively, and $p$ would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and $p$ would be the probability of tails). In particular, unfair coins would have $p \neq 1/2$.
1605 |
1606 | \paragraph{Bernoulli Distribution Overview}\mbox{} \\
1607 |
1608 | \begin{itemize}
1609 | \item Notation: $X \sim B(1,p)$
1610 | \item Parameters:
1611 | \begin{itemize}
1612 | \item $ 0 \leq p \leq 1 $
1613 | \item $ q=1 - p $
1614 | \end{itemize}
1615 | \item Support: $ k\in \{0,1\} $
1616 | \item PMF: $ \displaystyle {\begin{cases}q=1-p&{\text{if }}k=0\\p&{\text{if }}k=1\end{cases}} $
1617 | \item CDF: $ \displaystyle {\begin{cases}0&{\text{if }}k<0\\1-p&{\text{if }}0\leq k<1\\1&{\text{if }}k\geq 1\end{cases}} $
1618 | \item Mean: $p$
1619 | \item Median: $ \displaystyle {\begin{cases}0&{\text{if }}p<1/2\\\left[0,1\right]&{\text{if }}p=1/2\\1&{\text{if }}p>1/2\end{cases}} $
1620 | \item Mode: $ \displaystyle {\begin{cases}0&{\text{if }}p<1/2\\0,1&{\text{if }}p=1/2\\1&{\text{if }}p>1/2\end{cases}} $
1621 | \item Variance: $ \displaystyle p(1-p)=pq $
1622 | \end{itemize}
1623 |
1624 | \subsection{Poisson Distribution}
1625 | Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event.
1626 |
1627 | A discrete random variable $X$ is said to have a Poisson distribution, with parameter $\lambda > 0$, if it has a probability mass function given by:
1628 |
1629 | $ \displaystyle f(k;\lambda )=\Pr(X{=}k)={\frac {\lambda ^{k}e^{-\lambda }}{k!}} $
1630 |
1631 | where:
1632 | \begin{itemize}
1633 | \item $k$ is the number of occurrences ($k=0,1,2,\ldots$).
1634 | \item $e$ is Euler's number ($ e=2.71828\ldots$).
1635 | \item $!$ is the factorial function.
1636 | \end{itemize}
1637 |
1638 | Poisson distribution should respect the following criteria:
1639 | \begin{itemize}
1640 | \item The mean number of events occurring within a given interval of time or space lambda ($\lambda$) is known and assumed to be constant.
1641 | \item Occurrences occur in an interval and are discrete and countable.
1642 | \item Events occur independently of one another.
1643 | \item The average rate at which events occur is independent of any occurrences (assumed to be constant).
1644 | \item Two events cannot occur at exactly the same instant: in each sufficiently small sub-interval, either exactly one event occurs or no event occurs.
1645 | \item Probability is proportional to interval size.
1646 | \end{itemize}
1647 |
1648 | Some examples of Poisson distribution are:
1649 | \begin{itemize}
1650 | \item The number of pieces of chewing gum stuck on a single tile of a sidewalk.
1651 | \item The number of planes that fly over a specific house in an hour.
1652 | \end{itemize}
1653 |
1654 | One of the first applications of the Poisson distribution was the investigation of deaths by horse kick among soldiers in the late 1800s.
1655 |
1656 | Researchers were interested in estimating the number of deaths by horse kick in a year.
1657 | \begin{itemize}
1658 | \item Event = death by kick
1659 | \item Time interval = 1 year
1660 | \item $\lambda$ = 0.61
1661 | \end{itemize}
1662 |
1663 | The number of occurrences being investigated is $k$, here taken as $k = 2$ (we want to calculate the probability that in a year 2 soldiers will die from a horse kick).
1664 |
1665 | Hence:
1666 |
1667 | P($X = k$ where $k = 2$) = $ \displaystyle \frac {0.61^{2}e^{-0.61 }}{2!} $ = 0.101
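
The horse-kick calculation can be sketched in Python with the standard library:
\begin{verbatim}
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Pois(lam)."""
    return lam**k * exp(-lam) / factorial(k)

print(round(poisson_pmf(k=2, lam=0.61), 3))  # 0.101, as computed above
\end{verbatim}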
1668 |
1669 | \paragraph{Poisson Distribution Overview}\mbox{} \\
1670 |
1671 | \begin{itemize}
1672 | \item Notation: $Pois(\lambda)$
1673 | \item Parameters: $ \displaystyle \lambda \in (0,\infty)$ (rate)
1674 | \item Support: $ \displaystyle k\in \mathbb {N} _{0}$ (Natural numbers starting from 0)
1675 | \item PMF: $ \displaystyle {\frac {\lambda ^{k}e^{-\lambda }}{k!}} $
1676 | \item CDF: $ \displaystyle {\frac {\Gamma (\lfloor k+1\rfloor ,\lambda )}{\lfloor k\rfloor !}}$, or $\displaystyle e^{-\lambda }\sum _{j=0}^{\lfloor k\rfloor }{\frac {\lambda ^{j}}{j!}}$, \\ or $\displaystyle Q(\lfloor k+1\rfloor ,\lambda )$ \\ (for $\displaystyle k\geq 0$, where $\Gamma (x,y)$ is the upper incomplete gamma function, $ \displaystyle \lfloor k\rfloor $ is the floor function, and $Q$ is the regularized gamma function).
1677 | \item Mean: $\lambda$
1678 | \item Median: $ \displaystyle \approx \left\lfloor \lambda +{\frac {1}{3}}-{\frac {1}{50\lambda }}\right\rfloor $
1679 | \item Mode: $ \displaystyle \left\lceil \lambda \right\rceil -1,\left\lfloor \lambda \right\rfloor $
1680 | \item Variance: $ \lambda $
1681 | \end{itemize}
1682 |
1683 | \clearpage
1684 |
1685 | \section{Normal Distribution}
1686 | In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable.
1687 |
1688 | The general form of its probability density function is:
1689 |
1690 | $\displaystyle f(x)={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x-\mu }{\sigma }}\right)^{2}}$
1691 |
1692 | where:
1693 | \begin{itemize}
1694 | \item $\sigma$ is the standard deviation of the distribution.
1695 | \item $\pi$ is the mathematical constant (3.14159...).
1696 | \item $e$ is Euler's number ($ e=2.71828\ldots$).
1697 | \item $\mu$ is the mean of the distribution.
1698 | \end{itemize}
1699 |
1700 | A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
1701 |
1702 | The normal distribution is one of the most common distributions used in business, statistics, biology, etc., since so many real-life datasets end up resembling a normal distribution (see also: Central Limit Theorem).
1703 |
1704 | Normal distributions are referred to with the capital letter $N$; for example, $N(5, 9)$ denotes a normal distribution with mean 5 and variance 9.
1705 |
1706 | A normal distribution is completely described by its mean and standard deviation.
1707 |
1708 | \paragraph{The Empirical Rule}\mbox{} \\
1709 | \mbox{} \\
1710 |
1711 | The chart below helps in understanding the empirical rule.
1712 |
1713 | The empirical rule (also known as three-sigma rule or 68-95-99.7 rule) states that for a normal distribution, almost all observed data will fall within the range of $3 \sigma $ from the mean $ \mu $.
1714 |
1715 | \includegraphics[width=5cm, height=3cm]{empirical}
1716 |
1717 | In particular, it can be stated that:
1718 | \begin{itemize}
1719 | \item 68\% of observations will fall within one standard deviation of the mean ($\mu \pm \sigma $).
1720 | \item 95\% of observations will fall within two standard deviations of the mean ($\mu \pm 2\sigma $).
1721 | \item 99.7\% of observations will fall within three standard deviations of the mean ($\mu \pm 3\sigma $).
1722 | \end{itemize}
1723 |
1724 | \subsection{Z-Score}
1725 | Z-score (also known as standard score) is the number of standard deviations by which the value of an observed value or data point is above or below the mean of the distribution.
1726 |
1727 | Less technically, a z-score represents how far a given data point is from the mean.
1728 |
1729 | Negative z-scores fall on the left of the distribution, positive z-scores on the right (with values around $\pm 4$ being the farthest typically charted).
1730 |
1731 | \includegraphics[width=8cm, height=6cm]{normal_distribution}
1732 |
1733 | Z-score standardization formula is:
1734 |
1735 | $\displaystyle Z={\frac {X-\mu }{\sigma }}$
1736 |
1737 | \subsubsection{Z-Tables}
1738 | A z-table, or standard normal table, reveals what percentage of values fall below a certain z-score in a normal distribution. It allows one to translate specific z-scores into the corresponding cumulative probability of the standard normal distribution.
1739 |
1740 | The table has the z value, to one decimal place, in the row headers, and the hundredths digit in the column headers.
1741 | To read a value from a z-table:
1742 | \begin{itemize}
1743 | \item Standardize the data point into a z-score.
1744 | \item Find the matching z-score to the left of the table and align it with the z-score at the top of the table.
1745 | \item The cell at the intersection gives you the probability.
1746 | \end{itemize}
1747 |
1748 | The image below shows the probability for a z-score of 1.2 + 0.05 = 1.25, that is 0.89435.
1749 |
1750 | \includegraphics{z-table}
1751 |
1752 | A z-table can also be created from the ground up, for example:
1753 | \begin{enumerate}
1754 | \item with Excel using the formula:
1755 | \begin{enumerate}
1756 | \item =+NORM.S.DIST(\$A2+B\$1;TRUE), having column ``A'' = [0, 0.1, 0.2, …, 3.4] and row 1 = [0, 0.01, 0.02, …, 0.09]
1757 | \end{enumerate}
1758 | \item with a programming language, such as R or Python (here is a script for generating a z-table in Python: \href{https://gist.github.com/carloocchiena/f65e1381a30004352561a606ee0ba51a}{gist.github.com/carloocchiena/})
1759 | \end{enumerate}
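
As a minimal sketch (conceptually similar to the linked gist, though not identical to it), the whole table can be generated from the standard normal CDF with SciPy:
\begin{verbatim}
# Minimal sketch: building a z-table from the standard normal CDF
import numpy as np
from scipy.stats import norm

rows = np.arange(0.0, 3.5, 0.1)      # z value to one decimal place
cols = np.arange(0.00, 0.10, 0.01)   # hundredths digit
table = {round(r, 1): [round(norm.cdf(r + c), 5) for c in cols] for r in rows}

print(table[1.2][5])                 # z = 1.2 + 0.05 = 1.25 -> 0.89435
\end{verbatim}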
1760 |
1761 | \subsection{Normality Test}
1762 | In theory, a normal distribution follows a Gaussian curve precisely. But real-world data is rarely so precise, and it may be the case that we want to test whether the distribution we are working with is effectively normal.
1763 |
1764 | Normality tests are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.
1765 |
1766 | Normality tests return a measure of how compatible the data is with normality. More specifically, these tests operate within a hypothesis-testing paradigm, where the null hypothesis is that your particular data sample is normally distributed.
1767 |
1768 | Common normality tests include:
1769 | \begin{itemize}
1770 | \item D'Agostino's K-squared test.
1771 | \item Jarque–Bera test.
1772 | \item Anderson–Darling test.
1773 | \item Kolmogorov–Smirnov test.
1774 | \item Shapiro–Wilk test.
1775 | \end{itemize}
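
As a minimal sketch, two of these tests can be run with SciPy (the sample below is randomly generated, purely for illustration):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=5, scale=2, size=200)    # illustrative sample

w_stat, p_shapiro = stats.shapiro(sample)        # Shapiro-Wilk test
k2_stat, p_dagostino = stats.normaltest(sample)  # D'Agostino's K-squared test
print(p_shapiro, p_dagostino)
# p-values above the chosen alpha (e.g. 0.05) give no evidence against normality
\end{verbatim}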
1776 |
1777 | \subsection{Skewed Distribution (Skewness)}
1778 | Skewness is a measure of the asymmetry of the probability distribution of a random variable with respect to the mean of the distribution.
1779 |
1780 | It is useful to remember that for a symmetric distribution, and specifically for a normal distribution, median = mean = mode.
1781 |
1782 | \textbf{Negative Skew} \\
1783 | The left tail is longer; the mass of the distribution is concentrated on the right of the figure.
1784 |
1785 | The distribution is said to be left-skewed, left-tailed, or skewed to the left, despite the fact that the curve itself appears to be skewed or leaning to the right; left instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. A left-skewed distribution usually appears as a right-leaning curve.
1786 |
1787 | \textbf{Positive Skew} \\
1788 | The right tail is longer; the mass of the distribution is concentrated on the left of the figure.
1789 |
1790 | The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve.
1791 |
1792 | \includegraphics[width=5cm, height=5cm]{skew}
1793 |
1794 | \subsection{Kurtosis}
1795 | While skewness refers to the tendency of a distribution to spread asymmetrically with respect to its mean, \textbf{kurtosis} measures the sharpness of the peak of the curve.
1796 |
1797 | It can also be stated more simply that skewness describes the horizontal distortion of the curve with respect to a normal distribution, while kurtosis describes the vertical one (how peaked or flat the curve is).
1798 |
1799 | \begin{itemize}
1800 | \item \textbf{Skewness}: lack of symmetry in the distribution.
1801 | \item \textbf{Kurtosis}: height and sharpness of the central peak.
1802 | \end{itemize}
1803 |
1804 | \includegraphics[width=5cm, height=5cm]{kurtosis}
1805 |
1806 | \subsection{Standard Normal Distribution}
1807 | The standard normal distribution, also known as the z-distribution (Z), is a particular case of the normal distribution, with mean = 0 and standard deviation = 1.
1808 |
1809 | $Z = N(0,1)$
1810 |
1811 | Any normal distribution can be standardized by converting its values into z scores. Z scores tell you how many standard deviations from the mean each value lies.
1812 |
1813 | \clearpage
1814 |
1815 | \section{Sampling}
1816 | Sampling is the selection of a subset (a \textbf{sample}) of individuals from within a statistical \textbf{population} to estimate characteristics of the whole population. Statisticians attempt to collect samples that are representative of the population in question. Sampling has lower costs and faster data collection than measuring the entire population and can provide insights in cases where it is infeasible to measure an entire population.
1817 |
1818 | Sampling allows one to test a hypothesis about general characteristics of a population.
1819 | Samples are used to make inferences about a population.
1820 |
1821 | One can think of sampling as grabbing data instances from a larger data distribution.
1822 |
1823 | \subsection{Sampling Methodologies}
1824 | The representativeness of the selected sample is a crucial element of good sampling. Therefore, there are different types of sampling that can be chosen appropriately according to the characteristics of the population.
1825 |
1826 | Some of the discriminating criteria in choosing a sampling method may be:
1827 | \begin{itemize}
1828 | \item Nature and quality of the frame.
1829 | \item Availability of auxiliary information about units on the frame.
1830 | \item Accuracy requirements, and the need to measure accuracy.
1831 | \item Whether detailed analysis of the sample is expected.
1832 | \item Cost/operational concerns.
1833 | \end{itemize}
1834 |
1835 | In general terms, it can be stated that a good sample is:
1836 | \begin{itemize}
1837 | \item Representative of the population.
1838 | \item Unbiased.
1839 | \end{itemize}
1840 |
1841 | \subsubsection{Simple Random Sample (SRS)}
1842 | A simple random sample (or SRS) is a subset of individuals (a sample) chosen from a larger set (a population) in which each individual is chosen randomly, all with the same probability. It is a process of selecting a sample in a purely random way.
1843 |
1844 | \subsubsection{Systematic Random Sample}
1845 | Systematic random sampling is a subclass of SRS. Every member of the population is labeled with a number, and the sample is picked using a fixed interval (e.g. one out of every 10 members).
1846 |
1847 | \subsubsection{Stratified Random Sample}
1848 | Stratified random sampling is a subclass of SRS, where the population can be further classified into subpopulations (strata). Subpopulations can't overlap, and each should be sampled in proportion to its size for the sample to be representative.
1849 |
1850 | \subsubsection{Clustered Random Sample}
1851 | Clustered random sample is a sampling methodology that can be used whenever the population can be aggregated into mutually homogeneous yet internally heterogeneous groupings.
1852 | After the population is divided into clusters, then a SRS is performed on each cluster.
1853 | It’s a sampling method often used in marketing research.
1854 |
1855 | \subsection{Central Limit Theorem}
1856 | The central limit theorem (CLT) establishes that, in many situations, when independent random variables are summed up, their properly normalized sum tends toward a normal distribution around the mean of the original dataset even if the original variables themselves are not normally distributed.
1857 |
1858 | It implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
1859 |
1860 | The CLT establishes a relationship between the data from the whole population (parameters) and the data from the sample (statistics).
1861 |
1862 | \begin{itemize}
1863 | \item The mean of the distribution of sample means is the \textbf{expected value of $M$} and is always equal to the population mean $\mu$.
1864 | \item The standard deviation of the distribution of sample means is \textbf{the standard error of $M$} (SE) and is computed by: $\sigma_{M} = \frac{\sigma}{\sqrt{n}}$. As the sample size increases, the error decreases. As the sample size decreases, the error increases. At the extreme, when $n = 1$, the error is equal to the standard deviation.
1865 | \item The shape of the distribution of sample means tends to be normal. It is guaranteed to be normal if either:
1866 | \begin{itemize}
1867 | \item The population from which the samples are obtained is normal.
1868 | \item The sample size ($n$) is $n \geq 30$.
1869 | \end{itemize}
1870 | \end{itemize}
1871 |
1872 | \subsubsection{Sampling Distribution of the Sample Mean}
1873 | The distribution of sample means is defined as the set of means from all the possible random samples of a specific size ($n$) selected from a specific population.
1874 |
1875 | If repeated random samples of a given size $n$ are taken from a population of values of a quantitative variable, the mean of all sample means is the population mean.
1876 |
1877 | $\bar{x} = \mu $
1878 |
1879 | \paragraph{More on the mean of the sample means being equal to the mean of the population}\mbox{} \\
1880 | \mbox{} \\
1881 | This statement can lead to wrong assumptions, since the mean of a single sample typically is not equal to the mean of the population \textbf{unless the average is taken over all the possible samples of the population (combinations)}.
1882 |
1883 | The idea is that when we think of taking a sample, there are a large number of possible samples, from which we will choose just one. However, we need to imagine having all the samples available, and knowing the mean of each one. This list of all the possible sample means makes up the distribution of the sampling means (“the sampling distribution of the means”). Now, this distribution itself has a mean, which is how we get the somewhat confusing phrase “mean of means.”
1884 |
1885 | So, while the mean of one sample is an estimate of the population mean and rarely equal to it, the mean of (all the means of all the possible samples), is exactly equal to the population mean. This is the basis of an important result in statistical theory, that the sample mean is an unbiased estimator of the population mean. There are other candidates for an estimator of the population mean, such as the median, the mode, or the geometric average. While they might be unbiased in certain situations, none of them are “in general” an unbiased estimator of the mean, like the sample mean is.\footnote{This explanation has been provided in an online debate from Dwight Galster, Statistics PhD from NDSU.}
1886 |
1887 | \paragraph{Finite Population Correction Factor (FPC)}\mbox{} \\
1888 | \mbox{} \\
1889 | The finite population correction factor (FPC) is used to adjust the standard error whenever sampling is done without replacement and over more than 5\% of a finite population (both frequent real-case scenarios).
1890 |
1891 | \textbf{Example}: you have to apply the FPC if picking 600 people (>5\%) from a city telephone address book of 10’000 members (the population is finite, and whenever a person is picked from the list, they can't be picked again).
1892 |
1893 | $FPC = \sqrt{\frac{N -n}{N - 1}}$
1894 |
1895 | To apply the finite population correction, multiply the standard error that you would have originally used by the FPC.
1896 |
1897 | For example, the standard error of a mean is calculated as:
1898 |
1899 | $\sigma_{M} = \frac{\sigma}{\sqrt{n}}$
1900 |
1901 | By applying the finite population correction, the formula becomes:
1902 |
1903 | $\sigma_{M} = \frac{\sigma}{\sqrt{n}}\sqrt{\frac{N -n}{N - 1}}$
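
A minimal sketch of the correction, reusing the telephone-book example above (the population standard deviation is an illustrative assumption):
\begin{verbatim}
from math import sqrt

N, n = 10_000, 600   # finite population and sample size (600 > 5% of N)
sigma = 15           # assumed population standard deviation (illustrative)

se = sigma / sqrt(n)              # uncorrected standard error
fpc = sqrt((N - n) / (N - 1))     # finite population correction factor
print(se, fpc, se * fpc)          # corrected standard error is se * fpc
\end{verbatim}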
1904 |
1905 | \subsection{The Student's T-Distribution}
1906 | Student's t-distribution is a family of continuous probability distributions that arise when estimating the mean of a normally distributed population in situations where the sample size is small and the population's standard deviation is unknown.
1907 |
1908 | The t-distribution was developed by English statistician William Sealy Gosset.
1909 | At the time (1912) he published more than 20 academic papers, mostly using the pseudonym “Student”.
1910 | The t-distribution was originally called “Test of statistical significance” or “Student’s z” (for its similarity to the Z distribution).
1911 | In the end he could have called it the “Gosset distribution”, but he went down in history as “Student”.
1912 |
1913 | The formula to calculate the T-distribution is:
1914 |
1915 | $t= \frac{(\bar{x} - \mu)}{s/\sqrt{n}}$
1916 |
1917 | where:
1918 | \begin{itemize}
1919 | \item $\bar{x}$ is the sample mean.
1920 | \item $\mu$ is the population mean.
1921 | \item $s$ is the sample standard deviation.
1922 | \item $n$ is the size of the given sample.
1923 | \end{itemize}
1924 |
1925 | \subsubsection{Degrees of freedom (DF)}
1926 | The number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary (independent, known, available data).
1927 |
1928 | The number of degrees of freedom (DF) is equal to the size of the sample $n$ minus 1.
1929 |
1930 | $DF = n - 1$
1931 |
1932 | The explanation lies in the fact that individual values of a dataset can be computed backward from the global features of the dataset.
1933 | For example, given a dataset of 15 elements whose mean is known, we are free to delete one of the values, since knowing the mean we can recover it.
1934 | But what happens if, for example, we delete two values? We can recover their sum, but not assign each of them its own value independently.
1935 | That is why DF is calculated as $n - 1$.
1936 |
1937 | \subsubsection{T-Score}
1938 | The T-distribution (and the associated T-score values) is used in hypothesis testing
1939 | when determining if one should reject or accept the null hypothesis.
1940 |
1941 | T-score values have to be looked up in a T-table\footnote{https://www.sjsu.edu/faculty/gerstman/StatPrimer/t-table.pdf} with a process similar to that used for Z-scores.
1942 |
1943 | \paragraph{When to use Z-distribution and when T-distribution?}\mbox{} \\
1944 | \begin{itemize}
1945 | \item If the sample size is below 30, the t-distribution should be used.
1946 | \item If the sample size is 30 or above, the z-distribution is a good approximation.
1947 | \end{itemize}
1948 |
1949 | \subsection{Mean Estimation and Confidence Intervals}
1950 | Estimation, in statistics, is concerned with inference about the numerical value of unknown population values from incomplete data such as a sample from a population.
1951 |
1952 | \begin{itemize}
1953 | \item Estimates about a population can be made by sampling from the population.
1954 | \item If the estimate is made from a single value, it is called a “point estimate”.
1955 | \item If the estimate is made from a range of values where the parameter is expected to lie, it’s called an “interval estimate”.
1956 | \end{itemize}
1957 |
1958 | \subsubsection{Point Estimation}
1959 | Point estimation involves the use of sample data to calculate a single value (called point estimate since it identifies a point in some parameter space) which is to serve as a “best guess” or “best estimate” of an unknown population parameter (for example, the population mean).
1960 |
1961 | More informally, point estimation is the process of finding an appropriate value of a population parameter, such as the mean of the population, from random samples of the population.
1962 |
1963 |
1964 | Hence, at the time of estimation ($t_0$), the accuracy of a particular point estimate is not known precisely.
1965 |
1966 | \subsubsection{Interval Estimation}
1967 | Interval estimation is the use of sample data to estimate an interval of plausible values of a parameter of interest. This is in contrast to point estimation, which gives a single value.
1968 |
1969 | Interval estimation involves the evaluation of a parameter of a population (for example, the population mean) by computing an interval, or range of values, within which the parameter is most likely to be located.
1970 |
1971 | Intervals are commonly chosen within \textbf{confidence intervals (CI)}, such that the parameter falls within a \textbf{confidence level (CL)} of 95\% or 99\% (confidence coefficient). For example, out of all intervals computed at the 95\% level, 95\% of them should contain the parameter's true value.
1972 |
1973 | The end points of such an interval are called upper and lower confidence limits.
1974 |
1975 | \subsubsection{Confidence Interval (CI)}
1976 | A confidence interval is a range of values within which a population parameter is expected to fall, with a certain probability. That probability is known as the confidence level.
1977 |
1978 | Thus, if a statistical model generates a point estimate of 10.00 with a 95\% confidence interval of 9.50 - 10.50, it can be inferred with 95\% confidence that the true value falls within that range.
1979 |
1980 | The percentage falling outside the confidence level is called the $\alpha$ (alpha) value or level of significance (LOS). So the $\alpha$ value associated with a 90\% confidence level is 10\%.
1981 | Mathematically speaking, the confidence interval can be defined as:
1982 |
1983 | $CI = \bar{x} \pm MOE$
1984 |
1985 | where:
1986 | \begin{itemize}
1987 | \item $\bar{x}$ is the sample mean.
1988 | \item $MOE = z_y SE = z_y \frac{\sigma}{\sqrt{n}}$ is the margin of error, where $SE = \frac{\sigma}{\sqrt{n}}$ is the standard error and $z_y$ is the quantile or z-score for the chosen confidence level.
1989 | \end{itemize}
1990 |
1991 | If the population standard deviation is not known, then we need to use a t-score instead of a z-score.
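
A minimal sketch of both paths in Python (the sample statistics are illustrative assumptions; the z-score path treats $s$ as a known $\sigma$, while the t-score path uses it as the sample standard deviation):
\begin{verbatim}
from math import sqrt
from scipy import stats

x_bar, s, n = 10.0, 1.3, 50     # illustrative sample mean, std dev, size
cl = 0.95                       # confidence level

z = stats.norm.ppf(1 - (1 - cl) / 2)         # ~1.96 for a 95% CL
t = stats.t.ppf(1 - (1 - cl) / 2, df=n - 1)

moe_z = z * s / sqrt(n)                      # MOE if sigma were known
moe_t = t * s / sqrt(n)                      # MOE with sigma unknown (t-score)
print(x_bar - moe_z, x_bar + moe_z)
print(x_bar - moe_t, x_bar + moe_t)
\end{verbatim}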
1992 |
1993 | \subsubsection{Confidence Level (CL)}
1994 | The confidence level measures the level of trust\footnote{a note for Italian readers: “Confidence” translates to “Fiducia” and not to “Confidenza” as we’re used to saying} in the accuracy of the provided interval.
1995 |
1996 | \begin{itemize}
1997 | \item Higher confidence levels mean a wider range of values.
1998 | \item Lower confidence levels mean a tighter range of values.
1999 | \end{itemize}
2000 |
2001 | \subsubsection{Confidence Intervals and Z-scores}
2002 | Since we already know that the most common confidence levels are 90\%, 95\% and 99\%, we can already get familiar with the Z-values associated with such levels.
2003 | \begin{itemize}
2004 | \item 90\% CL = Z +/- 1.65
2005 | \item 95\% CL = Z +/- 1.96
2006 | \item 99\% CL = Z +/- 2.58
2007 | \end{itemize}
2008 |
2009 | These values are particularly useful in estimating the sample size from a population, assuming it is following a normal distribution.
2010 |
2011 | \subsubsection{Calculate the Sample Size from a Population}
2012 | In identifying the sample size, one must make sure that it is adequate for the population size, the desired margin of error, and the confidence level.
2013 |
2014 | This dimension can be obtained by applying the properties of the normal distribution, with the following formula:
2015 |
2016 | $\displaystyle {n = N \frac{\frac{z^2p(1 - p)}{MOE^2}}{N - 1 + \frac{z^2p(1 - p)}{MOE^2}}}$
2017 |
2018 | where:
2019 | \begin{itemize}
2020 | \item $N$ is the population size.
2021 | \item $n$ is the sample size.
2022 | \item $p$ is the estimated population proportion (0.5 is the most conservative choice).
2023 | \item $z$ is the z-score for the given confidence level.
2024 | \item $MOE$ is the margin of error.
2025 | \end{itemize}
2026 |
2027 | \textbf{Calculating Sample Size: an Example}
2028 | \begin{itemize}
2029 | \item $N$ = 500
2030 | \item $CI$ = 95\%
2031 | \item $MOE$ = 2\%
2032 | \item $p$ = 0.5 (conservative value for unknown or uncertain scenarios).
2033 | \item $z$ = 1.96 (z-value associated with such CI, see previous section).
2034 | \end{itemize}
2035 |
2036 | $n$ = 414
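
The same calculation, as a minimal Python sketch of the formula above:
\begin{verbatim}
from math import ceil

def sample_size(N, z, p, moe):
    """Required sample size for a finite population of size N."""
    x = z**2 * p * (1 - p) / moe**2
    return ceil(N * x / (N - 1 + x))

print(sample_size(N=500, z=1.96, p=0.5, moe=0.02))  # 414
\end{verbatim}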
2037 |
2038 | There are myriad tools and online calculators for computing the sample size (or verifying the calculation).
2039 |
2040 | Among the many, a good one is: \url{https://www.checkmarket.com/sample-size-calculator/}.
2041 |
2042 | \clearpage
2043 |
2044 | \section{Hypothesis Testing}
2045 | Hypothesis testing is a method used in statistical inference to validate specific assumptions made about population parameters starting from a data sample.
2046 |
2047 | The process of testing a hypothesis is the following:
2048 | \begin{itemize}
2049 | \item State null and alternative hypotheses.
2050 | \item Determine the level of significance.
2051 | \item Calculate the statistics.
2052 | \item Find critical values.
2053 | \item Determine the regions of acceptance and rejection.
2054 | \item State the conclusion.
2055 | \end{itemize}
2056 |
2057 | \subsection{Null \& Alternative Hypotheses}
2058 | \textbf{Null hypothesis (H0)} states that no difference or relationship exists between two sets of data or variables being analyzed.
2059 |
2060 | \textbf{Alternative hypothesis (H1, Ha)} states that there exists a relationship between two sets of data or variables being analyzed.
2061 |
2062 | The two hypotheses are tested together.
2063 |
2064 | \subsubsection{Type I and Type II Errors}
2065 | \textbf{Type I error} is a false positive conclusion (rejecting an actually true null hypothesis).
2066 |
2067 | The probability of a Type I error is equal to $\alpha$ (the level of significance, i.e. the percentage falling outside the confidence level).
2068 | These errors mean that the results are assumed to be statistically significant while they are actually not.
2069 |
2070 | \textbf{Type II error} is a false negative conclusion (accepting an actually false null hypothesis).
2071 | This is not to be confused with accepting the null hypothesis.
2072 | It may happen whenever the analysis does not have enough statistical power to detect an effect of that size.
2073 |
2074 | Statistical power is determined by:
2075 | \begin{itemize}
2076 | \item Size of the effect.
2077 | \item Measurement errors.
2078 | \item Sample size.
2079 | \item Significance level.
2080 | \end{itemize}
2081 |
2082 | \begin{center}
2083 | \begin{tabular}{|c|c|c|}
2084 | \hline
2085 | Null hypothesis is: & True & False \\ \hline
2086 | &&\\[-1em]
2087 | Rejected & \makecell {Type I Error \\ False positive \\ Probability = LOS = $\alpha$} & \makecell {Correct decision \\ True positive \\ Probability = $1-\beta$} \\ \hline
2088 | &&\\[-1em]
2089 | Accepted & \makecell {Correct decision \\ True negative \\ Probability = 1 - $\alpha$ } & \makecell{Type II error \\ False negative \\ Probability = $\beta$} \\
2090 | \hline
2091 | \end{tabular}
2092 | \end{center}
2093 |
2094 | \paragraph{Tradeoff between Type I and Type II errors}\mbox{} \\
2095 | \mbox{} \\
2096 | Type I and Type II errors influence each other.
2097 | \begin{itemize}
2098 | \item Setting a lower LOS decreases Type I errors but increases the incidence of Type II errors.
2099 | \item Increasing the power of a test decreases Type II errors but increases the incidence of Type I errors.
2100 | \end{itemize}
2101 |
2102 | What’s worse?
2103 |
2104 | The example of an innocent person convicted and sent to jail is often used to explain why - usually - Type I errors (rejecting $H0$ when it is in fact true) may lead to worse consequences than Type II errors\footnote{Neyman, J.; Pearson, E.S. (1967) [1933]. “The testing of statistical hypotheses in relation to probabilities a priori”. Joint Statistical Papers. Cambridge University Press. pp. 186–202.}.
2105 |
2106 | In more general terms, we may say that:
2107 | \begin{itemize}
2108 | \item Type I errors may lead to a change in a given process, hence, to an active, wrong, action.
2109 | \item Type II errors may lead to overlooking an action that would have been positive. A passive point of view.
2110 | \end{itemize}
2111 |
2112 | But, obviously, it depends on the context and on the scope of the analysis.
2113 |
2114 | \subsection{Test Statistics}
2115 | A test statistic is a statistical measure (hence, a quantity derived from a sample taken from a population), specifically used in hypothesis testing.
2116 |
2117 | Test statistics can be computed via different methods, such as the t-value, z-value, F-value, or $\chi^2$-value.
2118 |
2119 | Here we focus on z-test and t-test.
2120 |
2121 | The choice criterion is the same as for z-scores and t-scores, with t-scores being used where the sample size is small and the population's standard deviation is unknown.
2122 |
2123 | \paragraph{One-sample z-test}\mbox{} \\
2124 | \mbox{} \\
2125 |
2126 | $ \displaystyle z={\frac {{\overline {x}}-\mu _{0}}{({\sigma }/{\sqrt {n}})}} $
2127 |
2128 | \mbox{} \\
2129 |
2130 | With:
2131 | \begin{itemize}
2132 | \item Normal population (or $n$ large) and $\sigma$ known.
2133 | \item $z$ being the distance from the mean in relation to the standard deviation of the mean.
2134 | \end{itemize}
2135 |
2136 | \paragraph{One-sample t-test}\mbox{} \\
2137 | \mbox{} \\
2138 |
2139 | $ t=\frac{\overline{x}-\mu_0} {( s / \sqrt{n} )} $
2140 |
2141 | \mbox{} \\
2142 |
2143 | With:
2144 | \begin{itemize}
2145 | \item Normal population (or $n$ large) and $\sigma$ unknown.
2146 | \item $df = n-1$
2147 | \end{itemize}
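
As a minimal sketch, a one-sample t-test can be run directly with SciPy (the data and the reference mean below are illustrative assumptions):
\begin{verbatim}
from scipy import stats

sample = [12.1, 11.8, 12.4, 12.0, 11.7, 12.3, 12.2, 11.9]  # illustrative data
mu_0 = 12.0                                                # hypothesized mean

t_stat, p_value = stats.ttest_1samp(sample, popmean=mu_0)
print(t_stat, p_value)
# reject H0 at level alpha if p_value <= alpha
\end{verbatim}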
2148 |
2149 | \subsubsection{Two-Tailed and One-Tailed Tests}
2150 | \textbf{Two-tailed test} is used whenever the estimated value can be greater or smaller than a certain range of values (for example, a target score from an exercise).
2151 | If the estimated value exists in the critical areas, the alternative hypothesis is accepted over the null hypothesis.
2152 |
2153 | Noting that in a two-tailed test the rejection region on each side is equal to $\alpha/2$ (both sides of the distribution, right and left), more extreme values are needed in order to reject the null hypothesis. Hence, two-tailed tests are more conservative, and one should have strong evidence before stating a specific direction in the hypothesis being tested.
2154 |
2155 | \textbf{One-tailed test} is used whenever the estimated value can differ from the reference value only in one direction, greater or smaller, but not both.
2156 | For example, the defective rate of a machine.
2157 | If the estimated value exists in one of the one-sided critical areas, depending on the direction of interest, the alternative hypothesis is accepted over the null hypothesis.
2158 |
2159 | The rejection region is equal to $\alpha$ (one side of the distribution, right or left).
2160 |
2161 | \subsection{P-Value and Critical Value}
2162 | The $p$-value and the critical value are meant to do the same thing: support or reject the null hypothesis. Both are measures of evidence used to differentiate between randomness and a real effect.
2163 | The same result is then obtained following different approaches.
2164 |
2165 | The test process will follow this roadmap:
2166 | \begin{itemize}
2167 | \item State Null ($H0$) and Alternate ($Ha$) hypotheses.
2168 | \item Choose LOS (level of significance, $\alpha$).
2169 | \item Calculate test statistics from the sample.
2170 | \end{itemize}
2171 |
2172 | Then, for \textbf{p-value} approach:
2173 | \begin{itemize}
2174 | \item Compute $p$-value.
2175 | \item Compare $p$-value to $\alpha$ level.
2176 | \item Reject null if $p \leq \alpha$ level.
2177 | \end{itemize}
2178 |
2179 | And for \textbf{Critical Value} approach:
2180 | \begin{itemize}
2181 | \item Find critical value.
2182 | \item Compare test statistics to critical value.
2183 | \item Reject the null if the test value falls in the rejection region ($|t| >$ critical value).
2184 | \end{itemize}
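
Both approaches can be sketched from the same test statistic; the numbers below (a two-tailed t-test with an illustrative statistic and degrees of freedom) are assumptions chosen only to show the two decision rules agreeing:
\begin{verbatim}
from scipy import stats

t_stat, df, alpha = 2.31, 24, 0.05          # illustrative values

# p-value approach: compare areas
p_value = 2 * stats.t.sf(abs(t_stat), df)   # two-tailed p-value
print(p_value <= alpha)                     # True -> reject H0

# critical value approach: compare scores
t_crit = stats.t.ppf(1 - alpha / 2, df)     # two-tailed critical value
print(abs(t_stat) > t_crit)                 # True -> reject H0
\end{verbatim}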
2185 |
2186 | \textbf{Try it with Python}: \\
2187 | I made an interactive Jupyter Notebook to support the understanding of both approaches; it can be found in the GitHub repository of the handbook.
2188 |
2189 | \paragraph{P-value and Critical Value: a comparison}\mbox{} \\
2190 | \mbox{} \\
2191 | \textbf{P-value: compare areas}\\
2192 | For the $p$-value approach, the likelihood ($p$-value) of the numerical value of the test statistic is compared to the specified significance level $\alpha$ of the hypothesis test.
2193 |
2194 | The $p$-value corresponds to the probability of observing sample data at least as extreme as the actually obtained test statistic. It is a measure of the statistical significance of the observations.
2195 |
2196 | Small $p$-values provide evidence against the null hypothesis. The smaller (closer to 0) the $p$-value, the stronger is the evidence against the null hypothesis.\footnote{Hartmann, K., Krois, J., Waske, B. (2018): E-Learning Project SOGA: Statistics and Geospatial Data Analysis. Department of Earth Sciences, Freie Universitaet Berlin.}
2197 |
2198 | \textbf{Common errors regarding the use of $p$-value}:
2199 | \begin{itemize}
2200 | \item The $p$-value is not the probability that the null hypothesis is true or the probability that the null hypothesis is false. It is not related to either.
2201 | \item The $p$-value is not the probability that an observation is a random event. The calculation of the $p$-value is based on the assumption that every observation is a random event, a random result. The phrase ``the result is due to random chance'' usually means that the null hypothesis is probably correct, but remember that the $p$-value cannot be used to represent the probability that a hypothesis is true.
2202 | \item The $p$-value is not the probability of rejecting the null hypothesis when it is true.
2203 | \item The $p$-value is not the probability that replicating the experiment would yield the same conclusion. To quantify the replicability of an experiment, the concept of $p$-rep was introduced.
2204 | \item The significance level $\alpha$ is not determined by the $p$-value. The significance level is decided by the person conducting the experiment before seeing the data.
2205 | \end{itemize}
2206 |
2207 | \textbf{Critical value: compare scores}\\
2208 | It is determined whether or not the observed test statistic is more extreme than a defined critical value. Therefore, the observed test statistic (calculated on the basis of sample data) is compared to the critical value, which acts as a cutoff.
2209 | \begin{itemize}
2210 | \item If the test statistic is more extreme than the critical value, the null hypothesis is rejected.
2211 | \item If the test statistic is not as extreme as the critical value, the null hypothesis is not rejected.
2212 | \end{itemize}
2213 |
2214 | \subsection{A/B Testing}
2215 | A/B testing has become particularly well known due to its extensive use in marketing and its variations (product validation, UX, CTAs).
2216 | From a statistical point of view, it can be defined as a two-sample hypothesis test (the two samples being independent of each other).
2217 |
2218 | From a practical point of view, implementing an A/B testing strategy requires exercising most of the skill set of a statistician:
2219 | \begin{itemize}
2220 | \item Identify the target population.
2221 | \item Identify a sample.
2222 | \item Formulate $H0$ null and $Ha$ alternative hypotheses.
2223 | \item Define a confidence interval.
2224 | \item Perform the experiment.
2225 | \item Collect the results.
2226 | \item Assess the statistical significance of observations.
2227 | \item Validate or reject the hypothesis made.
2228 | \end{itemize}
2229 |
2230 | Common use-cases requiring A/B testing are\footnote{You may want to check this \href{https://github.com/FrancescoCasalegno/AB-Testing/blob/main/AB_Testing.ipynb}{Jupyter Notebook (link)} created by F.Casalegno, with a neat real world example.}:
2231 | \begin{itemize}
2232 | \item Email marketing (open rates, click-thru rates).
2233 | \item User Interfaces (buttons, image size, background colors).
2234 | \item Programming routines (API performance, HTTP routing).
2235 | \item Advertising campaigns.
2236 | \end{itemize}
2237 |
2238 | \clearpage
2239 |
2240 | \section{Regression Analysis}
2241 | Regression analysis is a set of statistical processes for estimating the relationships between a \textbf{dependent variable} (often called the “outcome” or “response” variable, or a “label” in machine learning jargon) and one or more \textbf{independent variables} (often called “predictors”, “covariates”, “explanatory variables” or “features”).
2242 |
2243 | Regression analysis is primarily used for two conceptually distinct purposes:
2244 | \begin{itemize}
2245 | \item For prediction and forecasting, where its use has substantial overlap with the field of machine learning.
2246 | \item To infer causal relationships between the independent and dependent variables. Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset.
2247 | \end{itemize}
2248 |
2249 | Linear regression line is expressed as:
2250 |
2251 | $y = mx + b$
2252 |
2253 | where:
2254 | \begin{itemize}
2255 | \item $y$ is the dependent variable.
2256 | \item $m$ is the slope.
2257 | \item $x$ is the independent variable.
2258 | \item $b$ is the intercept (the expected mean value of $y$ when $x=0$, where the line crosses the y-axis).
2259 | \end{itemize}
2260 |
2261 | To solve the linear regression function in a context where there are many possible features $x$, such as in a prediction under uncertainty, a specific methodology is needed.
2262 |
2263 | Ordinary least squares (OLS) is one of the possible approaches.
2264 |
2265 | \subsection{Ordinary Least Squares (OLS)}
2266 | Ordinary least squares (OLS)\footnote{Minimi Quadrati Ordinari in Italian translation}, also known as Multiple Regression, is a type of linear least squares method for choosing the unknown parameters in a linear regression model by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable (values of the variable being observed) in the given dataset and those predicted by the linear regression function.
2267 |
2268 | Assuming that in the case scenario the dependent and independent variables are linearly related, and impact of different variables are additive, the equation of a typical linear regression can be written as below:
2269 |
2270 | $\hat{y} = \beta_0 x_0 + \beta_1 x_1 + \ldots + \beta_n x_n$
2271 |
2272 | or, expressed as a summation:
2273 |
2274 | $ \displaystyle \hat{y} = \sum \limits^{n}_{i=0}{\beta_i x_i}$
2275 |
2276 | OLS allows us to directly solve the $y = mx + b$ equation for the slope $m$ and the intercept $b$ in a context where there are many possible features of $x$, such as in a prediction under uncertainty, hence:
2277 |
2278 | $ \displaystyle {y = f(x_1, x_2, x_3, \ldots, x_n)}$
2279 |
2280 | Thus the linear regression coefficients are computed as:
2281 |
2282 | $b = \bar{y} - m\bar{x}$
2283 |
2284 | $m = \displaystyle \frac{\sum(x_i - \bar{x})(y_i - \bar{y})}{\sum(x_i - \bar{x})^2}$
2285 |
2286 | where:
2287 | \begin{itemize}
2288 | \item $x_i$ are the values of the independent variable.
2289 | \item $\bar{x}$ is the average of the independent variable.
2290 | \item $y_i$ are the values of the dependent variable.
2291 | \item $\bar{y}$ is the average of the dependent variable.
2292 | \end{itemize}
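
A minimal sketch of these formulas with NumPy (the data points are illustrative); the closed-form slope and intercept match NumPy's own least-squares fit:
\begin{verbatim}
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # illustrative independent variable
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # illustrative dependent variable

x_bar, y_bar = x.mean(), y.mean()
m = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)  # slope
b = y_bar - m * x_bar                                             # intercept
print(m, b)

print(np.polyfit(x, y, deg=1))  # same slope and intercept via least squares
\end{verbatim}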
2293 |
2294 |
2295 | OLS is the default regression method for continuous dependent variables, but not the only one.
2296 | Other methodologies include, for example:
2297 | \begin{itemize}
2298 | \item Quantile Regression.
2299 | \item LAD (Least Absolute Deviation).
2300 | \item GLS (Generalized Least Squares).
2301 | \item SGD (Stochastic Gradient Descent) (with some caveats for linear regression).
2302 | \end{itemize}
2303 |
2304 | \subsection{Correlation Coefficient}
2305 | The sample correlation coefficient $r$ is a measure of the closeness of association of the points in a scatter plot to a linear regression line based on those points.
2306 |
2307 | Possible values of the correlation coefficient range from -1 to +1, with -1 indicating a perfectly linear negative, i.e., inverse, correlation (sloping downward) and +1 indicating a perfectly linear positive correlation (sloping upward).
2308 |
2309 | $r$ can be calculated with a correlation formula such as Pearson's.\footnote{see: 6.2 Correlation}
2310 |
2311 | \begin{center}
2312 | \begin{tabular}{|c|c|}
2313 | \hline
2314 | Correlation Coefficient $r$ & Association Strength \\ \hline
2315 | +1.0 & Perfect positive \\ \hline
2316 | +0.8 to 1.0 & Very strong positive \\ \hline
2317 | +0.6 to 0.8 & Strong positive \\ \hline
2318 | +0.4 to 0.6 & Moderate positive \\ \hline
2319 | +0.2 to 0.4 & Weak positive \\ \hline
2320 | 0.0 to 0.2 & Very weak or neutral \\ \hline
2321 | 0.0 to -0.2 & Very weak or neutral \\ \hline
2322 | -0.2 to -0.4 & Weak negative \\ \hline
2323 | -0.4 to -0.6 & Moderate negative \\ \hline
2324 | -0.6 to -0.8 & Strong negative \\ \hline
2325 | -0.8 to -1.0 & Very strong negative \\ \hline
2326 | -1.0 & Perfect negative \\
2327 | \hline
2328 | \end{tabular}
2329 | \end{center}
2330 |
2331 | \subsection{Line Fitting, Residuals and Errors}
2332 | \textbf{Line fitting} is the process of constructing a straight line that has the best fit to a series of data points.
2333 |
2334 | The definition of best fit itself varies depending on the methodology used, for example:
2335 | \begin{itemize}
2336 | \item Linear regression minimizes the vertical distance.
2337 | \item Orthogonal regression minimizes the perpendicular distance.
2338 | \end{itemize}
2339 |
2340 | \textbf{Residuals} are the differences between each data point and the estimated value, given by the regression line equation.
2341 |
2342 | Residuals have an overall sum and mean of zero.
2343 |
2344 | Residuals are not to be confused with errors.
2345 |
2346 | An \textbf{error} is the (often not observable) difference between the observed value and the true value (generated by the data generating process).
2347 |
2348 | A \textbf{residual} is the difference between the observed value and the predicted value (by the model).
2349 |
2350 | \subsection{Linear Regression Trendlines}
2351 | There are several different ways in which you can describe a statistical trend when analyzing trendlines (linear regression lines); the main ones are form, direction, and strength.
2352 |
2353 | \textbf{Form}: \\
2354 | The shape that the trend is following.
2355 |
2356 | It could be:
2357 | \begin{itemize}
2358 | \item Linear: a straight line.
2359 | \item Exponential: a curve that grows (or decays) at an increasing rate.
2360 | \item Sinusoidal: a wave-like curve oscillating up and down.
2361 | \item Logarithmic: following a logarithmic distribution.
2362 | \item No correlation: for scattered data with no evident trend and correlation.
2363 | \end{itemize}
2364 |
2365 | \textbf{Direction}: \\
2366 | The orientation of the trend.
2367 |
2368 | It could be:
2369 | \begin{itemize}
2370 | \item Positive: the trendline is pointing upward.
2371 | \item Negative: the trendline is pointing downward.
2372 | \end{itemize}
2373 |
2374 | \textbf{Strength}: \\
2375 | How tightly clustered the data points are around the trendline (i.e. the size of the residuals, the distances of the data points from the values predicted by the regression equation).
2376 |
2377 | It could be:
2378 | \begin{itemize}
2379 | \item Strong: tightly clustered, minimal distance.
2380 | \item Weak: sparse, mid-high distance from the trendline.
2381 | \end{itemize}
2382 |
2383 | \subsection{Regression Model Evaluation Metrics}
2384 | Fitting a regression is just the first part of a statistical analysis. Evaluating the model is a paramount step in order to appreciate the accuracy of our analysis and how representative it is of the dataset examined.
2385 |
2386 | The main metrics used to evaluate the accuracy of a regression analysis are MAE, MSE, RMSE, and R-squared.
2387 |
2388 | \textbf{MAE - Mean Absolute Error} \\
2389 | MAE represents the average of the absolute distances between the initial dataset and the prediction.
2390 |
2391 | MAE is a rather simple metrics that returns a value in the same unit as the output variable.
2392 |
2393 | It is more robust to outliers than MSE (it is less impacted by values that are very far from the rest).
2394 |
2395 | $ \displaystyle MAE = \frac{1}{N} \sum^{N}_{i=1}{|y_i - \hat{y}_i|} $
2396 |
2397 | \textbf{MSE - Mean Squared Error} \\
2398 | MSE represents the average of the squared distances between the initial dataset and the prediction.
2399 |
2400 | $ \displaystyle MSE = \frac{1}{N} \sum^{N}_{i=1}{(y_i - \hat{y}_i)^2} $
2401 |
2402 | \textbf{RMSE - Root Mean Squared Error} \\
2403 | RMSE is the square root of the MSE.
2404 |
2405 | $ \displaystyle RMSE = \sqrt{MSE} = \sqrt{\frac{1}{N} \sum^{N}_{i=1}{(y_i - \hat{y}_i)^2}} $
2406 |
2407 | \textbf{$R^2$ - R-Squared} \\
2408 | R-Squared, also known as \textbf{coefficient of determination}, is a ratio, the value of which ranges between 0 and 1, and which represents the accuracy of the prediction model with respect to the original values.
2409 |
2410 | It is obtained as one minus the ratio between the squared distances of the original points from the predicted values and the squared distances of the original points from the dataset mean.
2411 |
2412 | The capitalized $R^2$ notation suggests that it refers to a multiple linear regression model (a model with more than one independent variable).
2413 |
2414 | If, on the other hand, a simple linear regression model has been constructed, i.e., with only one independent variable, $r^2$ is usually preferred.
2415 |
2416 | The coefficient of determination is a scale-free score; this means that the value will always be within the range of 0 to 1\footnote{There are cases where $R^2$ can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data}.
2417 |
2418 | $ \displaystyle R^2 = 1 - \frac{\sum(y_i - \hat{y}_i)^2}{\sum(y_i - \bar{y})^2}$
2419 |
2420 | Usually, the higher the $R^2$ value (the closer to 1), the greater the predictive value of the model. However, the significance of the $R^2$ ratio cannot be judged without considering the reference context. In some fields, such as the behavioral sciences, it is common to observe $R^2$ values of less than 50\%. This does not mean that the regression model is performing poorly, but rather that, by its very nature, the dependent variable being analyzed depends on many different factors, many of which have not been measured.
2421 | On the other hand, a high $R^2$ is a necessary but not sufficient condition for accurate predictions.
2422 |
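To make the formula concrete, the following minimal sketch (reusing the made-up values from the previous example, and assuming NumPy is available) computes $R^2$ as one minus the ratio between the residual sum of squares and the total sum of squares:

\begin{verbatim}
# Minimal sketch: coefficient of determination from its definition
# (made-up values; needs numpy).
import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0, 4.5])   # observed values
y_pred = np.array([2.8, 5.4, 2.9, 6.2, 4.9])   # model predictions

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.3f}")   # closer to 1 = better fit
\end{verbatim}
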
2423 | \subsection{Chi-Square Test}
2424 | Chi-square is among the most common nonparametric tests used in statistics.
2425 |
2426 | It is one of the hypothesis tests used to decide whether or not to reject the null hypothesis.
2427 | Depending on the assumptions they make about the underlying distribution, such tests are classified as parametric or nonparametric.
2428 |
2429 | The chi-square test is widely used to test whether the frequencies of observed values fit the theoretical frequencies of a fixed probability distribution.
2430 |
2431 | For example, it is well known that for 100 tosses of a fair coin each outcome (heads or tails) is equally likely, so it is unlikely to obtain a result that differs greatly from 50 heads and 50 tails. The chi-square test makes it possible to determine, after fixing the maximum permissible error, whether the discrepancies between the observed and theoretical frequencies can be attributed entirely to chance or whether it is safe to assume that the coin is rigged.
2432 |
2433 | $\displaystyle \chi^2 = \sum \frac{(O - E)^2}{E}$
2434 |
2435 | where:
2436 | \begin{itemize}
2437 | \item $\chi^2$ is the chi-square test statistic.
2438 | \item $O$ is the observed frequency.
2439 | \item $E$ is the expected frequency.
2440 | \end{itemize}
2441 |
2442 | The value of the chi-square statistic has to be checked against a chi-square table\footnote{\url{https://cdn.scribbr.com/wp-content/uploads/2022/05/Chi-square-table.pdf}} in order to draw the test conclusion.
2443 |
2444 | $\chi^2 > \chi^2_{\text{critical}} \implies$ the null hypothesis $H_0$ can be rejected, where $\chi^2_{\text{critical}}$ is the critical value read from the table for the chosen significance level and degrees of freedom.
2445 |
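As an illustration of the coin example above, the following minimal Python sketch (with hypothetical counts, assuming SciPy is available) computes the chi-square statistic for 62 heads and 38 tails against the expected 50/50 split and compares it with the critical value at $\alpha = 0.05$:

\begin{verbatim}
# Minimal sketch: chi-square goodness-of-fit test for the coin example
# (hypothetical counts; needs scipy).
from scipy.stats import chi2

observed = [62, 38]   # hypothetical result of 100 tosses
expected = [50, 50]   # frequencies expected for a fair coin

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

df = len(observed) - 1           # degrees of freedom
critical = chi2.ppf(0.95, df)    # critical value at alpha = 0.05

print(f"chi-square = {chi_sq:.2f}, critical value = {critical:.2f}")
if chi_sq > critical:
    print("H0 rejected: the coin does not look fair.")
else:
    print("H0 not rejected: the discrepancy may be due to chance.")
\end{verbatim}

With these hypothetical counts the statistic is $5.76$, which exceeds the tabulated critical value of about $3.84$ for one degree of freedom at $\alpha = 0.05$, so the hypothesis of a fair coin would be rejected.
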
2446 | \subsection{Analysis of Variance (ANOVA)}
2447 | ANOVA is a set of statistical techniques that allows comparing the means of several groups by analyzing their variances.
2448 |
2449 | ANOVA can be calculated as follows:
2450 | \begin{itemize}
2451 | \item Calculate the \textbf{SST (SUM OF SQUARES TOTAL)}: the sum of the squared deviations of each data point from the Grand Mean (the Grand Mean being the mean of the whole dataset).
2452 | \item Calculate the \textbf{SSW (SUM OF SQUARES WITHIN)}: the sum of the squared deviations of each data point from the mean of its group.
2453 | \item Calculate the \textbf{SSB (SUM OF SQUARES BETWEEN)}: the sum, over the groups, of the squared deviation of each group mean from the Grand Mean, weighted by the group size.
2454 | \end{itemize}
2455 |
2456 | At this point, the F-test value can be calculated as a ratio of the two variance estimates.
2457 |
2458 | $ \displaystyle F = \frac{\frac{SSB}{m-1}}{\frac{SSW}{m(n-1)}} $
2459 |
2460 | where:
2461 | \begin{itemize}
2462 | \item $F$ is the F-test value.
2463 | \item $SSB$ is the sum of the squared deviations of each group mean from the global mean (Grand Mean), weighted by the group size.
2464 | \item $SSW$ is the sum of the squared deviations of each data point from the mean of its group.
2465 | \item $m$ is the number of clusters (groups).
2466 | \item $n$ is the number of items of each cluster.
2467 | \end{itemize}
2468 |
2469 | The F-test value then has to be compared with the critical value from an F-table\footnote{\url{https://statisticsbyjim.com/wp-content/uploads/2022/02/F-table_Alpha10.png}}. Note that there is an F-table for each value of $\alpha$ (for each level of confidence), and that both degrees of freedom (DF), for the groups and for the sample, are needed in order to look up the value.
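
As a worked illustration with made-up data (assuming NumPy and SciPy are available), the F value can be computed from SSB and SSW exactly as in the formula above, and the critical value for a chosen $\alpha$ can be obtained programmatically instead of being read from a printed F-table:

\begin{verbatim}
# Minimal sketch: one-way ANOVA F value from SSB and SSW
# (made-up data, equal group sizes; needs numpy and scipy).
import numpy as np
from scipy.stats import f

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 10.0, 11.0, 10.0])]

m = len(groups)         # number of groups (clusters)
n = len(groups[0])      # items per group (equal sizes assumed here)
grand_mean = np.mean(np.concatenate(groups))

# SSB: group size times the squared deviation of each group mean
# from the Grand Mean, summed over the groups
ssb = sum(n * (g.mean() - grand_mean) ** 2 for g in groups)
# SSW: squared deviations of each data point from its group mean
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)

F = (ssb / (m - 1)) / (ssw / (m * (n - 1)))
critical = f.ppf(0.95, m - 1, m * (n - 1))   # alpha = 0.05

print(f"F = {F:.2f}, critical value = {critical:.2f}")
# scipy.stats.f_oneway(*groups) returns the same F plus a p-value.
\end{verbatim}

With these made-up numbers $F$ is about $38$, far above the critical value of roughly $4.26$ for $(2, 9)$ degrees of freedom, so the hypothesis of equal group means would be rejected.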
2470 |
2471 | \clearpage
2472 |
2473 | \section{License}
2474 | Attribution 4.0 International (CC BY 4.0)\footnote{\url{https://creativecommons.org/licenses/by/4.0/}}
2475 |
2476 | This document is distributed under a Creative Commons, Free Culture, license.
2477 |
2478 | This work was created as a learning aid for study purposes, so I hope for its free circulation and dissemination.
2479 |
2480 | While producing it, I mainly followed a two-pronged approach:
2481 | \begin{itemize}
2482 | \item Maintaining the accuracy of definitions, formulas, and descriptions.
2483 | \item Describing everything in my own wording, so as not to infringe upon the intellectual property of the sources I have drawn on.
2484 | \end{itemize}
2485 |
2486 | For obvious reasons, however, mathematical definitions can only be rephrased up to a point (and one of the goals of this work was to refer back to the canonical definitions, as further detailed in the following section), so wherever I have in good faith traced the work of licensed material, I am happy to take action to amend it, starting from the premise that the purpose of this work is not commercial but informative and educational. See the \textbf{Contacts} section for any needs.
2487 |
2488 | This work was originally released by the author as a free downloadable PDF.
2489 |
2490 | \textbf{Machine-readable license metadata}:\\
2491 | \begin{sloppypar}
2492 | The Statistics Handbook by Carlo Occhiena is distributed under a Creative Commons Attribution 4.0 International License (\url{https://creativecommons.org/licenses/by/4.0/}).
2493 | \end{sloppypar}
2494 |
2495 | \section{Source Code \& Additional Materials}
2496 | The source of this handbook and some additional materials can be found in an open repository on my GitHub, at the following link:
2497 | \begin{itemize}
2498 | \item \url{https://github.com/carloocchiena/the_statistics_handbook}
2499 | \end{itemize}
2500 |
2501 | The repository includes:
2502 | \begin{itemize}
2503 | \item LICENSE: the license definition for this handbook.
2504 | \item README: the readme file.
2505 | \item The Statistics Handbook - main.pdf: this handbook.
2506 | \item Statistical\_Workbook.xlsx: the .xlsx file with exercises and demonstrations.
2507 | \item datachart\_template.ai: the .ai file with the charts and image source file.
2508 | \item main.tex: the LaTeX source code of the handbook.
2509 | \item z-table.xlsx: the z-table generated with an Excel formula.
2510 | \item z\_score\_table\_t\_score\_analysis.ipynb: interactive notebook to explore z-score and t-score testing.
2511 | \end{itemize}
2512 |
2513 | \section{Bibliography \& Sources}
2514 | This handbook was made possible only through the research, consultation, and study of numerous sources.
2515 |
2516 | Some of them were institutional, such as university papers and websites.
2517 | Others were devoted to the dissemination of collective knowledge, such as Wikipedia, Britannica, Khan Academy, Youmath, and Statology.
2518 |
2519 | Some others were related to the work of specific science communicators, such as P. Pozzolo and K. King.
2520 |
2521 | I have not neglected to cite even minor sources, such as 20-year-old forum threads, that have been helpful in finding meaningful exercises or checking the quality of the solutions I have devised.
2522 |
2523 | The takeaway is to keep exploring and DYOR (do your own research).
2524 |
2525 | Happy reading!
2526 |
2527 | Links checked in December 2022.
2528 |
2529 | \textbf{Books}:
2530 | \begin{itemize}
2531 | \item Gambini A., Argomenti di statistica descrittiva.
2532 | \item Perisco L., Di Bella E., Mosto L., Applicazioni di probabilità e statistica.
2533 | \item C. Gosio, Matematica Finanziaria.
2534 | \item A. Pascucci, W. J. Runggaldier, Finanza Matematica.
2535 | \item S. Fiorenzani, E. Edoli, S. Ravelli, The Handbook of Energy Trading.
2536 | \end{itemize}
2537 | \textbf{Online sources}:
2538 | \begin{sloppypar}
2539 | \begin{itemize}
2540 | \item \url{http://www.stat.yale.edu/ }
2541 | \item \url{https://corporatefinanceinstitute.com/ }
2542 | \item \url{https://datascience.eu/it/matematica-e-statistica }
2543 | \item \url{https://imstat.org/ }
2544 | \item \url{https://it.scienza.matematica.narkive.com/XNiiE9wo/probabilita-di-uscita-di-un-numero-su-una-ruota-del-lotto}
2545 | \item \url{https://it.wikipedia.org/wiki/Statistica }
2546 | \item \url{https://matematica.unibocconi.it/search/node/statistica }
2547 | \item \url{https://math.stackexchange.com/}
2548 | \item \url{https://meetheskilled.com/test-di-ipotesi-one-sample-t-test/ }
2549 | \item \url{https://ocw.mit.edu/courses/1-151-probability-and-statistics-in-engineering-spring-2005/pages/lecture-notes/ }
2550 | \item \url{https://paolapozzolo.it }
2551 | \item \url{https://sphweb.bumc.bu.edu/otlt/MPH-Modules/PH717-QuantCore/PH717-Module9-Correlation-Regression/PH717-Module9-Correlation-Regression4.html }
2552 | \item \url{https://statisticsbyjim.com/hypothesis-testing/hypothesis-tests-significance-levels-alpha-p-values/}
2553 | \item \url{https://statology.org }
2554 | \item \url{https://thestatsgeek.com/ }
2555 | \item \url{https://www.britannica.com/browse/Mathematics }
2556 | \item \url{https://www.datasciencecentral.com/ }
2557 | \item \url{https://www.datatechnotes.com/ }
2558 | \item \url{https://www.geo.fu-berlin.de/ }
2559 | \item \url{https://www.hwupgrade.it/forum/archive/index.php/t-1294440.html }
2560 | \item \url{https://www.investopedia.com/math-and-statistics-4689831 }
2561 | \item \url{https://www.khanacademy.org/}
2562 | \item \url{https://www.kristakingmath.com/}
2563 | \item \url{https://www.matematicamente.it/forum/viewtopic.php?t=16622}
2564 | \item \url{https://www.scribbr.com/statistics/type-i-and-type-ii-errors/ }
2565 | \item \url{https://www.statisticshowto.com/ }
2566 | \item \url{https://www.wallstreetmojo.com/t-distribution-formula/ }
2567 | \item \url{https://www.westga.edu/academics/research/vrc/assets/docs/confidence_intervals_notes.pdf }
2568 | \item \url{https://www.youmath.it/ }
2569 | \end{itemize}
2570 | \end{sloppypar}
2571 |
2572 | \subsection{Images}
2573 | All images used in this book were created by the author, either with a spreadsheet or with vector graphics software.
2574 | In any case, the source files are included in the GitHub repository.
2575 |
2576 | The only images drawn from the web are:
2577 | \begin{itemize}
2578 | \item The violin plot.
2579 | \item The Gaussian curve with z-scores.
2580 | \end{itemize}
2581 |
2582 | These images appear to be in the public domain; however, I have no problem removing them if that assessment is incorrect.
2583 |
2584 | \subsection{Canonical Formulas and Definitions}
2585 | Although I wrote every single line of this manual in my own hand, I found it counterproductive, if not misleading, to paraphrase mathematical formulas and definitions.
2586 |
2587 | I was therefore very careful to resort to canonical definitions and formulas.
2588 |
2589 | This work was done by comparing different sources, especially comparing university papers with what could be found on the Web.
2590 |
2591 | I think the final result is worthwhile; nevertheless, I consider it open to discussion and improvement, and I gladly welcome feedback on it.
2592 |
2593 | \subsection{Datasets}
2594 | A very good source of ready-to-use datasets, with a fair amount of metadata, is Kaggle: \url{https://www.kaggle.com/datasets}.
2595 |
2596 | This has been my number one source of datasets for this handbook.
2597 |
2598 | Other relevant sources are:
2599 | \begin{itemize}
2600 | \item \url{ https://data.gov/ }
2601 | \item \url{https://archive.ics.uci.edu/ml/datasets.php }
2602 | \item \url{https://datahub.io/collections }
2603 | \item \url{https://datasetsearch.research.google.com/ }
2604 | \end{itemize}
2605 |
2606 | \section{Acknowledgement}
2607 | This work was inspired by all the people who taught me the love of studying and of the sciences. I was welcomed into their circles without prejudice, even though I often felt like an ant in their presence.
2608 |
2609 | To all of them goes my deepest gratitude and appreciation!
2610 |
2611 | \subsection{Proof-readers}
2612 | \begin{itemize}
2613 | \item Eng. M. Rezzaro
2614 | \item Braccinocorto (GitHub contributor)
2615 | \item Hongy19 (GitHub contributor)
2616 | \end{itemize}
2617 |
2618 |
2619 | \section{Contacts}
2620 | \begin{itemize}
2621 | \item LinkedIn: \url{https://www.linkedin.com/in/carloocchiena/}
2622 | \item Twitter: \url{https://twitter.com/carloocchiena}
2623 | \end{itemize}
2624 |
2625 | \end{document}
2626 |
2627 |
2628 |
2629 |
2630 |
--------------------------------------------------------------------------------