"
481 | ],
482 | "text/plain": [
483 | " Obs X1 X2 X3 Y EuclideanDist\n",
484 | "4 5 -1 0 1 Green 1.414214"
485 | ]
486 | },
487 | "execution_count": 78,
488 | "metadata": {},
489 | "output_type": "execute_result"
490 | }
491 | ],
492 | "source": [
493 | "# (b) What is our prediction with K = 1? Why? \n",
494 | "\n",
495 | "K = 1\n",
496 | "df_euc.nsmallest(K, 'EuclideanDist')\n",
497 | "\n",
498 | "# Our prediction is Y = Green because that is the response value of the\n",
499 | "# single nearest neighbour to the test point X1 = X2 = X3 = 0"
500 | ]
501 | },
502 | {
503 | "cell_type": "code",
504 | "execution_count": 80,
505 | "metadata": {},
506 | "outputs": [
507 | {
508 | "data": {
568 | "text/plain": [
569 | " Obs X1 X2 X3 Y EuclideanDist\n",
570 | "4 5 -1 0 1 Green 1.414214\n",
571 | "5 6 1 -1 1 Red 1.732051\n",
572 | "1 2 2 0 0 Red 2.000000"
573 | ]
574 | },
575 | "execution_count": 80,
576 | "metadata": {},
577 | "output_type": "execute_result"
578 | }
579 | ],
580 | "source": [
581 | "# (c) What is our prediction with K = 3? Why? \n",
582 | "\n",
583 | "K = 3\n",
584 | "df_euc.nsmallest(K, 'EuclideanDist')\n",
585 | "\n",
586 | "# Red, because 2 of the 3 nearest neighbours are Red."
587 | ]
588 | },
589 | {
590 | "cell_type": "markdown",
591 | "metadata": {},
592 | "source": [
593 | "(d) If the Bayes decision boundary in this problem is highly non-linear, then would we expect the best value for K to be large or small? Why? \n",
594 | "\n",
595 | "Small. A smaller value of K results in a more flexible classification model because each prediction is based on a smaller, more local subset of the observations, allowing the decision boundary to follow a highly non-linear Bayes boundary."
596 | ]
597 | }
598 | ],
599 | "metadata": {
600 | "kernelspec": {
601 | "display_name": "Python 3",
602 | "language": "python",
603 | "name": "python3"
604 | },
605 | "language_info": {
606 | "codemirror_mode": {
607 | "name": "ipython",
608 | "version": 3
609 | },
610 | "file_extension": ".py",
611 | "mimetype": "text/x-python",
612 | "name": "python",
613 | "nbconvert_exporter": "python",
614 | "pygments_lexer": "ipython3",
615 | "version": "3.6.5"
616 | }
617 | },
618 | "nbformat": 4,
619 | "nbformat_minor": 2
620 | }
621 |
--------------------------------------------------------------------------------
/Notebooks/ch4_classification_conceptual.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# 4. Classification – Conceptual\n",
8 | "\n",
9 | "Exercises from **Chapter 4** of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani.\n",
10 | "\n",
11 | "I've elected to use Python instead of R."
12 | ]
13 | },
14 | {
15 | "cell_type": "code",
16 | "execution_count": 71,
17 | "metadata": {},
18 | "outputs": [],
19 | "source": [
20 | "import numpy as np"
21 | ]
22 | },
23 | {
24 | "cell_type": "markdown",
25 | "metadata": {},
26 | "source": [
27 | "## 1. Using a little bit of algebra, prove that (4.2) is equivalent to (4.3). In other words, the logistic function representation and logit representation for the logistic regression model are equivalent.\n",
28 | "\n",
29 | "\n",
30 | "\n",
31 | "## 2. It was stated in the text that classifying an observation to the class for which (4.12) is largest is equivalent to classifying an observation to the class for which (4.13) is largest. Prove that this is the case. In other words, under the assumption that the observations in the kth class are drawn from a $N(μ_k, σ^2)$ distribution, the Bayes’ classifier assigns an observation to the class for which the discriminant function is maximized.\n",
32 | "\n",
33 | "\n",
34 | "\n",
35 | "## 3. This problem relates to the QDA model, in which the observations within each class are drawn from a normal distribution with a class-specific mean vector and a class-specific covariance matrix. We consider the simple case where p = 1; i.e. there is only one feature.\n",
36 | "\n",
37 | "### Suppose that we have K classes, and that if an observation belongs to the kth class then X comes from a one-dimensional normal distribution, $X ∼ N(μ_k, σ_k^2)$. Recall that the density function for the one-dimensional normal distribution is given in (4.11). Prove that in this case, the Bayes’ classifier is not linear. Argue that it is in fact quadratic.\n",
38 | "\n",
39 | "Hint: For this problem, you should follow the arguments laid out in Section 4.4.2, but without making the assumption that $σ_1^2 = \\ldots = σ_K^2$.\n",
40 | "\n",
41 | "\n",
42 | "\n",
43 | "\n",
44 | "## 4. When the number of features p is large, there tends to be a deterioration in the performance of KNN and other local approaches that perform prediction using only observations that are near the test observation for which a prediction must be made. This phenomenon is known as the curse of dimensionality, and it ties into the fact that non-parametric approaches often perform poorly when p is large. We will now investigate this curse.\n",
45 | "\n",
46 | "### (a) Suppose that we have a set of observations, each with measurements on p = 1 feature, X. We assume that X is uniformly (evenly) distributed on [0,1]. Associated with each observation is a response value. Suppose that we wish to predict a test observation’s response using only observations that are within 10 % of the range of X closest to that test observation. For instance, in order to predict the response for a test observation with X = 0.6, we will use observations in the range [0.55,0.65]. On average, what fraction of the available observations will we use to make the prediction?\n",
47 | "\n",
48 | "### (b) Now suppose that we have a set of observations, each with measurements on p = 2 features, X1 and X2. We assume that (X1,X2) are uniformly distributed on [0,1]×[0,1]. We wish to predict a test observation’s response using only observations that are within 10 % of the range of X1 and within 10 % of the range of X2 closest to that test observation. For instance, in order to predict the response for a test observation with X1 = 0.6 and X2 = 0.35, we will use observations in the range [0.55, 0.65] for X1 and in the range [0.3, 0.4] for X2. On average, what fraction of the available observations will we use to make the prediction?\n",
49 | "\n",
50 | "### (c) Now suppose that we have a set of observations on p = 100 features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation’s response using observations within the 10 % of each feature’s range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?\n",
51 | "\n",
52 | "### (d) Using your answers to parts (a)–(c), argue that a drawback of KNN when p is large is that there are very few training observations “near” any given test observation.\n",
53 | "\n",
54 | "\n",
55 | "\n",
56 | "### (e) Now suppose that we wish to make a prediction for a test observation by creating a p-dimensional hypercube centered around the test observation that contains, on average, 10 % of the training observations. For p = 1, 2, and 100, what is the length of each side of the hypercube? Comment on your answer.\n",
57 | "\n",
58 | "Note: A hypercube is a generalization of a cube to an arbitrary number of dimensions. When p = 1, a hypercube is simply a line segment, when p = 2 it is a square, and when p = 100 it is a 100-dimensional cube.\n",
59 | "\n",
60 | ""
61 | ]
62 | },
63 | {
64 | "cell_type": "markdown",
65 | "metadata": {},
66 | "source": [
67 | "## 5. We now examine the differences between LDA and QDA.\n",
68 | "\n",
69 | "### (a) If the Bayes decision boundary is linear, do we expect LDA or QDA to perform better on the training set? On the test set?\n",
70 | "\n",
71 | "- Training: QDA should perform best, as the higher-variance model has the flexibility to fit noise in the training data.\n",
72 | "- Test: LDA should perform best, as its additional bias comes at no cost when the Bayes decision boundary really is linear.\n",
73 | "\n",
74 | "\n",
75 | "### (b) If the Bayes decision boundary is non-linear, do we expect LDA or QDA to perform better on the training set? On the test set?\n",
76 | "\n",
77 | "- Training: QDA should perform best, as the higher-variance model has the flexibility to fit both the non-linear relationship and the noise in the training data.\n",
78 | "- Test: QDA should perform best, as its extra flexibility is needed to capture the non-linear relationship in the data.\n",
79 | "\n",
80 | "\n",
81 | "### (c) In general, as the sample size n increases, do we expect the test prediction accuracy of QDA relative to LDA to improve, decline, or be unchanged? Why?\n",
82 | "\n",
83 | "Improve, as an increased sample size reduces a more flexible model's tendency to overfit the training data.\n",
84 | "\n",
85 | "### (d) True or False: Even if the Bayes decision boundary for a given problem is linear, we will probably achieve a superior test error rate using QDA rather than LDA because QDA is flexible enough to model a linear decision boundary. Justify your answer.\n",
86 | "\n",
87 | "False. If the Bayes decision boundary is linear, then the more flexible model is prone to overfitting: it will fit noise in the training data, which reduces its accuracy when making predictions on the test data.\n",
88 | "\n",
89 | "## 6. Suppose we collect data for a group of students in a statistics class with variables X1 = hours studied, X2 = undergrad GPA, and Y = receive an A. We fit a logistic regression and produce estimated coefficients $\\hat{β}_0 = -6$, $\\hat{β}_1 = 0.05$, $\\hat{β}_2 = 1$.\n",
90 | "\n",
91 | "### (a) Estimate the probability that a student who studies for 40 h and has an undergrad GPA of 3.5 gets an A in the class.\n",
92 | "\n",
93 | "For multiple logistic regression a prediction p(X) is given by \n",
94 | "\n",
95 | "$$p(X) = \\frac{\\exp{(β_0+β_1 X_1 + β_2 X_2)}}{1 + \\exp{(β_0+β_1 X_1 + β_2 X_2)}}$$"
96 | ]
97 | },
98 | {
99 | "cell_type": "code",
100 | "execution_count": 43,
101 | "metadata": {},
102 | "outputs": [
103 | {
104 | "name": "stdout",
105 | "output_type": "stream",
106 | "text": [
107 | "p(X) = 0.3775\n"
108 | ]
109 | }
110 | ],
111 | "source": [
112 | "beta = np.array([-6, 0.05, 1])\n",
113 | "X = np.array([1, 40, 3.5])\n",
114 | "\n",
115 | "pX = np.exp(beta.T@X) / (1 + np.exp(beta.T@X))\n",
116 | "\n",
117 | "print('p(X) = ' + str(np.around(pX, 4)))"
118 | ]
119 | },
120 | {
121 | "cell_type": "markdown",
122 | "metadata": {},
123 | "source": [
124 | "### (b) How many hours would the student in part (a) need to study to have a 50 % chance of getting an A in the class?\n",
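"\n",
"Setting $p(X) = 0.5$ makes the log-odds zero, so with the coefficients from part (a): $-6 + 0.05 X_1 + 1 \\times 3.5 = 0$, which gives $X_1 = 2.5 / 0.05 = 50$.\n",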
125 | "\n",
126 | "\n",
127 | "\n",
128 | "50 hrs"
129 | ]
130 | },
131 | {
132 | "cell_type": "markdown",
133 | "metadata": {},
134 | "source": [
135 | "## 7. Suppose that we wish to predict whether a given stock will issue a dividend this year (“Yes” or “No”) based on X, last year’s percent profit. We examine a large number of companies and discover that the mean value of X for companies that issued a dividend was $\\bar{X} = 10$, while the mean for those that didn’t was $\\bar{X} = 0$. In addition, the variance of X for these two sets of companies was $\\hat{σ}^2 = 36$. Finally, 80 % of companies issued dividends. Assuming that X follows a normal distribution, predict the probability that a company will issue a dividend this year given that its percentage profit was X = 4 last year.\n",
136 | "\n",
137 | "### Hint: Recall that the density function for a normal random variable is $f(x) = \\frac{1}{\\sqrt{2πσ^2}} e^{-(x-μ)^2/2σ^2}$. You will need to use Bayes’ theorem.\n",
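"\n",
"A sketch of the calculation, plugging the given values ($π_{Yes} = 0.8$, $μ_{Yes} = 10$, $π_{No} = 0.2$, $μ_{No} = 0$, $σ^2 = 36$) into Bayes’ theorem with class 1 denoting companies that issue a dividend; the constant $\\frac{1}{\\sqrt{2πσ^2}}$ cancels:\n",
"\n",
"$$p_1(4) = \\frac{0.8 \\exp(-(4-10)^2/72)}{0.8 \\exp(-(4-10)^2/72) + 0.2 \\exp(-(4-0)^2/72)} \\approx 0.752$$\n",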
138 | "\n",
139 | "$p_1(4) = 0.752$\n"
140 | ]
141 | },
142 | {
143 | "cell_type": "markdown",
144 | "metadata": {},
145 | "source": [
146 | "## 8. Suppose that we take a data set, divide it into equally-sized training and test sets, and then try out two different classification procedures. First we use logistic regression and get an error rate of 20 % on the training data and 30 % on the test data. Next we use 1-nearest neighbors (i.e. K = 1) and get an average error rate (averaged over both test and training data sets) of 18%. Based on these results, which method should we prefer to use for classification of new observations? Why?\n",
147 | "\n",
148 | "KNN with K = 1 is a highly flexible non-parametric model; it is prone to overfitting, in which case we would observe a low training error and a high test error.\n",
149 | "\n",
150 | "The test error will be most indicative of the model's performance on new observations.\n",
151 | "\n",
152 | "We know that the average error rate for KNN is 18%. We expect the test error to be at least as high as the training error, so the best possible test error is 18% (assuming 18% error in training).\n",
153 | "\n",
154 | "The worst possible test error is 36% (assuming 0% error in training). \n",
155 | "\n",
156 | "Therefore the KNN test error is somewhere in the range 18% to 36%.\n",
157 | "\n",
158 | "The logistic regression achieves a test error of 30%. This inflexible model is failing to account for some variance in the data, but we do not know if this variance is noise (an irreducible error) or variance in the true relationship that could be accounted for by a more flexible model.\n",
159 | "\n",
160 | "Without any further information, and assuming any test error in this range is equally likely, the probability that KNN produces a test error below 30% is:\n",
161 | "\n",
162 | "$p = \\frac{30-18}{36-18} = \\frac{2}{3}$\n",
163 | "\n",
164 | "Therefore we should prefer the KNN method.\n",
165 | "\n",
166 | "**INCORRECT: with K = 1 the training error is 0%, so the KNN test error must be 36%, and logistic regression (30% test error) should be preferred.**\n"
167 | ]
168 | },
169 | {
170 | "cell_type": "markdown",
171 | "metadata": {},
172 | "source": [
173 | "## 9. This problem has to do with odds.\n",
174 | "### (a) On average, what fraction of people with an odds of 0.37 of defaulting on their credit card payment will in fact default?\n",
175 | "\n",
176 | "$odds = \\frac{p(X)}{1 - p(X)} = 0.37$ \n",
177 | "\n",
178 | "Rearranging for p(X)\n",
179 | " \n",
180 | "$p(X) = 0.37 - 0.37p(X)$ \n",
181 | " \n",
182 | "$p(X) + 0.37p(X) = 0.37$ \n",
183 | " \n",
184 | "$p(X) = \\frac{0.37}{1 + 0.37}$ \n",
185 | " \n",
186 | "$p(X) = 0.27$\n",
187 | "\n",
188 | "### (b) Suppose that an individual has a 16% chance of defaulting on her credit card payment. What are the odds that she will default?\n",
189 | "\n",
190 | "$odds = \\frac{p(X)}{1 - p(X)} = \\frac{0.16}{1 - 0.16} = 0.19$"
191 | ]
192 | },
193 | {
194 | "cell_type": "code",
195 | "execution_count": null,
196 | "metadata": {},
197 | "outputs": [],
198 | "source": []
199 | }
200 | ],
201 | "metadata": {
202 | "kernelspec": {
203 | "display_name": "Python 3",
204 | "language": "python",
205 | "name": "python3"
206 | },
207 | "language_info": {
208 | "codemirror_mode": {
209 | "name": "ipython",
210 | "version": 3
211 | },
212 | "file_extension": ".py",
213 | "mimetype": "text/x-python",
214 | "name": "python",
215 | "nbconvert_exporter": "python",
216 | "pygments_lexer": "ipython3",
217 | "version": "3.6.5"
218 | }
219 | },
220 | "nbformat": 4,
221 | "nbformat_minor": 2
222 | }
223 |
--------------------------------------------------------------------------------
/Notebooks/ch5_resampling_methods_conceptual.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# 5. Resampling Methods – Conceptual\n",
8 | "\n",
9 | "Exercises from **Chapter 5** of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani."
10 | ]
11 | },
12 | {
13 | "cell_type": "code",
14 | "execution_count": 1,
15 | "metadata": {},
16 | "outputs": [],
17 | "source": [
18 | "import numpy as np\n",
19 | "import matplotlib.pyplot as plt\n",
20 | "import seaborn as sns"
21 | ]
22 | },
23 | {
24 | "cell_type": "markdown",
25 | "metadata": {},
26 | "source": [
27 | "## 1. Using basic statistical properties of the variance, as well as single-variable calculus, derive (5.6). In other words, prove that α given by (5.6) does indeed minimize Var(αX + (1 − α)Y).\n",
28 | "\n",
29 | "\n",
30 | "\n",
31 | "\n",
32 | "## 2. We will now derive the probability that a given observation is part of a bootstrap sample. Suppose that we obtain a bootstrap sample from a set of n observations.\n",
33 | "\n",
34 | "- (a) What is the probability that the first bootstrap observation is not the jth observation from the original sample? Justify your answer.\n",
35 | "- (b) What is the probability that the second bootstrap observation is not the jth observation from the original sample?\n",
36 | "- (c) Argue that the probability that the jth observation is not in the bootstrap sample is $(1 − 1/n)^n$.\n",
37 | "- (d) When n = 5, what is the probability that the jth observation is in the bootstrap sample?\n",
38 | "- (e) When n = 100, what is the probability that the jth observation is in the bootstrap sample?\n",
39 | "- (f) When n = 10,000, what is the probability that the jth observation is in the bootstrap sample?\n",
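"\n",
"For (d)–(f), evaluating $1 - (1 - 1/n)^n$ directly: n = 5 gives 0.672, n = 100 gives 0.634, and n = 10,000 gives 0.632, approaching the limit $1 - 1/e ≈ 0.632$ as n grows.\n",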
40 | "\n",
41 | "\n",
42 | "\n",
43 | "### 2(g) Create a plot that displays, for each integer value of n from 1 to 100,000, the probability that the jth observation is in the bootstrap sample. Comment on what you observe."
44 | ]
45 | },
46 | {
47 | "cell_type": "code",
48 | "execution_count": 2,
49 | "metadata": {},
50 | "outputs": [
51 | {
52 | "data": {
53 | "text/plain": [
54 | "Text(0,0.5,'probability')"
55 | ]
56 | },
57 | "execution_count": 2,
58 | "metadata": {},
59 | "output_type": "execute_result"
60 | },
61 | {
62 | "data": {
63 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYwAAAEKCAYAAAAB0GKPAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAGMhJREFUeJzt3X+QJ3V95/Hni8UFTUQXWQ1hF3fJrR4bNYJTiGL8ecDKpcREy9vNJaLxjsopemq8HJQWmM1ZehdjLAylrLmNaEUIh7lky1qLcPwwGn/tcPyS1YVhjTAuCeOBQEUUl33fH9+e0Awz8+1dtmdmd56Pqm99uz/d/f2+e3v4vujuT3enqpAkaZhD5rsASdKBwcCQJHViYEiSOjEwJEmdGBiSpE4MDElSJwaGJKkTA0OS1ImBIUnq5ND5LmB/Oeqoo2rVqlXzXYYkHVCuv/76H1bV8i7zHjSBsWrVKkZHR+e7DEk6oCT5ftd5PSQlSerEwJAkdWJgSJI6MTAkSZ0YGJKkTnoLjCSbk9yT5NszTE+SC5OMJbk5yYmtaWclub15ndVXjZKk7vrcw/gMsG6W6a8F1jSvs4FPAiQ5ErgAeDFwEnBBkmU91ilJ6qC3wKiqvwPunWWWM4HP1sA3gKcnORo4Hbiqqu6tqvuAq5g9eJ6QHz+8m4/97Q5uuPO+vr5Ckg4K83kO4xjgrtb4eNM2U/vjJDk7yWiS0YmJiX0q4qGHH+HCa8a45Qf379PykrRYzGdgZJq2mqX98Y1Vm6pqpKpGli/vdGW7JGkfzWdgjAMrW+MrgF2ztEuS5tF8BsYW4M1Nb6mTgfur6m7gSuC0JMuak92nNW2SpHnU280Hk1wKvBI4Ksk4g55PTwKoqk8BW4EzgDHgx8Bbm2n3JvlDYFvzURuraraT55KkOdBbYFTVhiHTC3jHDNM2A5v7qEuStG+80rtR055WlyRNWvSBkUzXKUuSNNWiDwxJUjcGhiSpEwNDktSJgSFJ6sTAaJTdpCRpVos+MOwjJUndLPrAkCR1Y2BIkjoxMCRJnRgYkqRODAxJUicGRsNOtZI0u0UfGN57UJK6WfSBIUnqxsCQJHXSa2AkWZdkR5KxJOdOM/3ZSa5OcnOS65KsaE17JMmNzWtLn3VKkobr85neS4CLgFOBcWBbki1Vtb0120eBz1bVJUleDXwY+O1m2kNV9cK+6pMk7Z0+9zBOAsaqamdVPQxcBpw5ZZ61wNXN8LXTTJckLRB9BsYxwF2t8fGmre0m4A3N8K8DT03yjGb88CSjSb6R5PU91gn4TG9JGqbPwJiuw+rUn+X3Aa9IcgPwCuAHwO5m2rFVNQL8JvDxJL/0uC9Izm5CZXRiYmIfi7RfrSR10WdgjAMrW+MrgF3tGapqV1X9RlWdALy/abt/clrzvhO4Djhh6hdU1aaqGqmqkeXLl/eyEpKkgT4DYxuwJsnqJEuB9cBjejslOSrJZA3nAZub9mVJDpucBzgFaJ8slyTNsd4Co6p2A+cAVwLfAS6vqluTbEzyuma2VwI7ktwGPAv4UNN+PDCa5CYGJ8M/MqV3lSRpjvXWrRagqrYCW6e0nd8avgK4YprlvgY8v8/aJEl7xyu9JUmdGBgNe9VK0uwMDHvVSlInBoYkqRMDQ5LUiYEhSerEwJAkdWJgNMq7D0rSrAwMSVIniz4wYrdaSepk0QeGJKkbA0OS1ImBIUnqxMCQJHViYEiSOln0gWEnKUnqZtEHhiSpGwNDktRJr4GRZF2SHUnGkpw7zfRnJ7k6yc1JrkuyojXtrCS3N6+z+qxTkjRcb4GRZAlwEfBaYC2wIcnaKbN9FPhsVb0A2Ah8uFn2SOAC4MXAScAFSZb1Vaskabg+9zBOAsaqamdVPQxcBpw5ZZ61wNXN8LWt6acDV1XVvVV1H3AVsK7HWiVJQ/QZGMcAd7XGx5u2tpuANzTDvw48NckzOi67X3mzWkmaXZ+BMV2P1ak/y+8DXpHkBuAVwA+A3R2XJcnZSUaTjE5MTOxbkd59UJI66TMwxoGVrfEVwK72DFW1q6p+o6pOAN7ftN3fZdlm3k1VNVJVI8uXL9/f9UuSWvoMjG3AmiSrkywF1gNb2jMkOSrJZA3nAZub4SuB05Isa052n9a0SZLmSW+BUVW7gXMY/NB/B7i8qm5NsjHJ65rZXgnsSHIb8CzgQ82y9wJ/yCB0tgEbmzZJ0jw5tM8Pr6qtwNYpbee3hq8Arphh2c08uschSZpnXuktSerEwGjU4zthSZJaFn1g2KlWkrpZ9IEhSerGwJAkdWJgSJI6MTAkSZ0YGA1vPihJszMwJEmdLPrA8Ga1ktTNog8MSVI3BoYkqRMDQ5LUiYEhSerEwGjYq1aSZtcpMJIs6buQ+RJvPyhJnXTdwxhL8kdJ1vZajSRpweoaGC8AbgP+LMk3kpyd5Ige65IkLTCdAqOqHqyqT1fVS4HfBy4A7k5ySZJ/NdNySdYl2ZFkLMm500w/Nsm1SW5IcnOSM5r2VUkeSnJj8/rUPq6fJGk/6fRM7+Ycxr8F3gqsAv4Y+AvgVxk8s/s5MyxzEXAqMA5sS7Klqra3ZvsAcHlVfbI53LW1+XyAO6rqhfuwTpKkHnQKDOB24Frgj6rqa632K5K8fIZlTgLGqmonQJLLgDOBdmAUMHlo62nArq6FS5LmVtdzGG+uqre1wyLJKQBV9a4ZljkGuKs1Pt60tX0Q+K0k4wz2Lt7Zmra6OVT15SS/2rHOfebdaiVpdl0D48Jp2j4xZJnp+qtO/VneAHymqlYAZwCfS3IIcDdwbFWdALwX+Px0J9mbk++jSUYnJiaGrsS0RdqrVpI6mfWQVJKXAC8Flid5b2vSEcCwazPGgZWt8RU8/pDT24B1AFX19SSHA0dV1T3AT5v265PcweA8yWh74araBGwCGBkZcR9Bkno0bA9jKfDzDILlqa3XA8Abhyy7DViTZHWSpcB6YMuUee4EXgOQ5HjgcGAiyfLJiwWTHAesAXZ2XSlJ0v436x5GVX0Z+HKSz1TV9/fmg6tqd5JzgCsZ7I1srqpbk2wERqtqC/B7wKeTvIfB4aq3VFU1J9I3JtkNPAL8blXdu/erJ0naX4Ydkvp4Vb0b+NMkjzvkU1Wvm235qtrK4GR2u+381vB24JRplvsC8IXZS5ckzaVh3Wo/17x/tO9CJEkL27BDUtc371+em3LmT3m/Wkma1bBDUrcwy52/q+oF+70iSdKCNOyQ1K/NSRWSpAVv2CGpveoZJUk6eM16HUaSrzbvDyZ5YOr73JQoSVoIhu1hvKx5f+rclCNJWqi63q2WJCcCL2NwEvyrVXVDb1XNA28+KEmz6/pM7/OBS4BnAEcBn0nygT4LkyQtLF33MDYAJ1TVTwCSfAT4v8B/66uwueLdaiWpm663N/8HBjcGnHQYcMd+r0aStGANu3DvEwzOWfwUuDXJVc34qcBX+y9PkrRQDDskNfn8ieuB/91qv66XaiRJC9awbrWXzFUhkqSFrdNJ7yRrgA8Da2md
y6iq43qqS5K0wHQ96f3nwCeB3cCrgM/y6K3PD2iZ9tHjkqSpugbGk6vqaiBV9f2q+iDw6v7KkiQtNF2vw/hJkkOA25vHrv4AeGZ/ZUmSFpquexjvBp4CvAt4EfDbwFnDFkqyLsmOJGNJzp1m+rFJrk1yQ5Kbk5zRmnZes9yOJKd3rFOS1JNOexhVtQ2g2ct4V1U9OGyZJEuAixhcszEObEuypXmO96QPAJdX1SeTrGXw/O9VzfB64JeBXwT+T5LnVNUje7FukqT9qOu9pEaap+/dDNyS5KYkLxqy2EnAWFXtrKqHgcuAM6fMU8ARzfDTgF3N8JnAZVX106r6HjDWfJ4kaZ50PSS1GXh7Va2qqlXAOxj0nJrNMcBdrfHxpq3tg8BvJRlnsHfxzr1Ydr8qb1crSbPqGhgPVtVXJkeq6qvAsMNS0/VXnfqrvAH4TFWtAM4APtcc9uqyLEnOTjKaZHRiYmJIOTMUaa9aSepk2L2kTmwGv5XkYuBSBj/c/47htwcZB1a2xlfw6CGnSW8D1gFU1deTHM7g9uldlqWqNgGbAEZGRtxFkKQeDTvp/cdTxi9oDQ/7gd4GrEmymkE33PXAb06Z507gNQyer3E8g6vIJ4AtwOeTfIzBSe81wLeGfJ8kqUfD7iX1qn394Kra3VyzcSWwBNhcVbcm2QiMVtUW4PeATyd5D4MAeksNTibcmuRyYDuDq8vfYQ8pSZpfXe8l9TQGexcvb5q+DGysqvtnW66qtjI4md1uO781vB04ZYZlPwR8qEt9kqT+7U0vqQeBNzWvBxjeS0qSdBDpemuQX6qqN7TG/yDJjX0UNF/sVStJs+u6h/FQkpdNjiQ5BXion5Lmlr1qJambrnsYvwt8tjmXAXAfHe4lJUk6eAwNjOZCuudW1a8kOQKgqh7ovTJJ0oIy9JBUVe0BzmmGHzAsJGlx6noO46ok70uyMsmRk69eK5MkLShdz2H8DoML694+pf2geaa3naQkaXZdA2Mtg7B4GYPf1q8An+qrKEnSwtM1MC5hcLHehc34hqbtTX0UNZfi7WolqZOugfHcqvqV1vi1SW7qoyBJ0sLU9aT3DUlOnhxJ8mLg7/spSZK0EHXdw3gx8OYkdzbjxwLfaR7bWlX1gl6qkyQtGF0DY12vVUiSFrxOgVFV3++7kPnmzQclaXZdz2EctOwjJUndLPrAkCR1Y2BIkjrpNTCSrEuyI8lYknOnmf4nSW5sXrcl+VFr2iOtaVv6rFOSNFzXXlJ7LckS4CLgVGAc2JZkS/McbwCq6j2t+d8JnND6iIeq6oV91SdJ2jt97mGcBIxV1c6qehi4DDhzlvk3AJf2WI8k6QnoMzCOAe5qjY83bY+T5NnAauCaVvPhSUaTfCPJ6/src6C8X60kzaq3Q1JM32N1pl/l9cAVVfVIq+3YqtqV5DjgmiS3VNUdj/mC5GzgbIBjjz1234q0X60kddLnHsY4sLI1vgLYNcO865lyOKqqdjXvO4HreOz5jcl5NlXVSFWNLF++fH/ULEmaQZ+BsQ1Yk2R1kqUMQuFxvZ2SPBdYBny91bYsyWHN8FHAKcD2qctKkuZOb4ekqmp3knOAK4ElwOaqujXJRmC0qibDYwNwWdVjbs5xPHBxkj0MQu0j7d5VkqS51+c5DKpqK7B1Stv5U8Y/OM1yXwOe32dtkqS945XekqRODIyGd6uVpNkt+sDwmd6S1M2iDwxJUjcGhiSpEwNDktSJgSFJ6sTAaNhJSpJmZ2BIkjoxMCRJnRgYkqRODAxJUicGhiSpEwNDktSJgTHJuw9K0qwMDHyutyR1YWBIkjoxMCRJnfQaGEnWJdmRZCzJudNM/5MkNzav25L8qDXtrCS3N6+z+qxTkjRcb8/0TrIEuAg4FRgHtiXZUlXbJ+epqve05n8ncEIzfCRwATDC4DZP1zfL3tdXvZKk2fW5h3ESMFZVO6vqYeAy4MxZ5t8AXNoMnw5cVVX3NiFxFbCux1olSUP0GRjHAHe1xsebtsdJ8mxgNXDN3iyb5Owko0lGJyYmnlCxdqqVpNn1GRjTdVad6Xd5PXBFVT2yN8tW1aaqGqmqkeXLl+9jmdN/mSTpsfoMjHFgZWt8BbBrhnnX8+jhqL1dVpI0B/oMjG3AmiSrkyxlEApbps6U5LnAMuDrreYrgdOSLEuyDDitaZMkzZPeeklV1e4k5zD4oV8CbK6qW5NsBEarajI8NgCXVT16b46qujfJHzIIHYCNVXVvX7VKkobrLTAAqmorsHVK2/lTxj84w7Kbgc29FSdJ2ite6S1J6sTAaHizWkmanYEBxNvVStJQBoYkqRMDQ5LUiYEhSerEwJAkdWJgNMrbD0rSrAwMSVInBgberVaSujAwJEmdGBiSpE4MDElSJwaGJKkTA6PhzQclaXYGBuC9ByVpOANDktSJgSFJ6qTXwEiyLsmOJGNJzp1hnjcl2Z7k1iSfb7U/kuTG5rVlumUlSXOnt2d6J1kCXAScCowD25JsqartrXnWAOcBp1TVfUme2fqIh6rqhX3VJ0naO33uYZwEjFXVzqp6GLgMOHPKPP8RuKiq7gOoqnt6rEeS9AT0GRjHAHe1xsebtrbnAM9J8vdJvpFkXWva4UlGm/bXT/cFSc5u5hmdmJh4QsXaq1aSZtfbISmmv6ff1N/lQ4E1wCuBFcBXkjyvqn4EHFtVu5IcB1yT5JaquuMxH1a1CdgEMDIyss+/+fH2g5I0VJ97GOPAytb4CmDXNPP8TVX9rKq+B+xgECBU1a7mfSdwHXBCj7VKkoboMzC2AWuSrE6yFFgPTO3t9NfAqwCSHMXgENXOJMuSHNZqPwXYjiRp3vR2SKqqdic5B7gSWAJsrqpbk2wERqtqSzPttCTbgUeA/1JV/y/JS4GLk+xhEGofafeukiTNvT7PYVBVW4GtU9rObw0X8N7m1Z7na8Dz+6xNkrR3vNJbktSJgQEsPfQQfvKzR+a7DEla0AwM4FlHHMZt//QgD+/eM9+lSNKC1es5jAPFa45/Fpv+bifP+cCXOPxJh/CUpYdySEIChwQOSVrj6XQ79K5XdqTDh3X6rP1UU5d6JC0sxx99BJ/Y0P+VBwYG8F/X/WtOPHYZO/7xQf754d38+OHd7CmoKvbsgaLYU7Cnij17hl8f2PUKwi4PberyWdXhgzrV5OXu0gFp5bInz8n3GBjAkkPCuuf9Auue9wvzXYokLView5AkdWJgSJI6MTAkSZ0YGJKkTgwMSVInBoYkqRMDQ5LUiYEhSeokXa4SPhAkmQC+/wQ+4ijgh/upnAPFYlvnxba+4DovFk9knZ9dVcu7zHjQBMYTlWS0qkbmu465tNjWebGtL7jOi8VcrbOHpCRJnRgYkqRODIxHbZrvAubBYlvnxba+4DovFnOyzp7DkCR14h6GJKmTRR8YSdYl2ZFkLMm5813P3kqyMsm1Sb6T5NYk/7lpPzLJVUlub96XNe1JcmGzvjcnObH1WWc189+e5KxW+4uS3NIsc2EWwGP5kixJckOSLzbjq5N8s6n9L5MsbdoPa8bHmumrWp9xXtO+I8n
prfYF9zeR5OlJrkjy3WZbv2QRbOP3NH/T305yaZLDD7btnGRzknuSfLvV1vt2nek7hqqqRfsClgB3AMcBS4GbgLXzXddersPRwInN8FOB24C1wP8Azm3azwX+ezN8BvAlBk9sPRn4ZtN+JLCzeV/WDC9rpn0LeEmzzJeA1y6A9X4v8Hngi8345cD6ZvhTwH9qht8OfKoZXg/8ZTO8ttnehwGrm7+DJQv1bwK4BPgPzfBS4OkH8zYGjgG+Bzy5tX3fcrBtZ+DlwInAt1ttvW/Xmb5jaL3z/R/CPP9RvgS4sjV+HnDefNf1BNfpb4BTgR3A0U3b0cCOZvhiYENr/h3N9A3Axa32i5u2o4HvttofM988reMK4Grg1cAXm/8YfggcOnW7AlcCL2mGD23my9RtPTnfQvybAI5ofjwzpf1g3sbHAHc1P4KHNtv59INxOwOreGxg9L5dZ/qOYa/Ffkhq8o9y0njTdkBqdsNPAL4JPKuq7gZo3p/ZzDbTOs/WPj5N+3z6OPD7wJ5m/BnAj6pqdzPervFf1quZfn8z/97+O8yn44AJ4M+bw3B/luTnOIi3cVX9APgocCdwN4Ptdj0H93aeNBfbdabvmNViD4zpjtMekN3Gkvw88AXg3VX1wGyzTtNW+9A+L5L8GnBPVV3fbp5m1hoy7YBY38ahDA5bfLKqTgD+mcFhhJkc8OvcHFM/k8FhpF8Efg547TSzHkzbeZh5X8fFHhjjwMrW+Apg1zzVss+SPIlBWPxFVf1V0/xPSY5uph8N3NO0z7TOs7WvmKZ9vpwCvC7JPwCXMTgs9XHg6UkObeZp1/gv69VMfxpwL3v/7zCfxoHxqvpmM34FgwA5WLcxwL8BvldVE1X1M+CvgJdycG/nSXOxXWf6jlkt9sDYBqxpel4sZXCybMs817RXml4P/xP4TlV9rDVpCzDZW+IsBuc2Jtvf3PS4OBm4v9klvRI4Lcmy5v/uTmNwjPdu4MEkJzff9ebWZ825qjqvqlZU1SoG2+uaqvr3wLXAG5vZpq7v5L/DG5v5q2lf3/SuWQ2sYXCCcMH9TVTVPwJ3JXlu0/QaYDsH6TZu3AmcnOQpTU2T63zQbueWudiuM33H7ObzxNZCeDHoeXAbgx4T75/vevah/pcx2M28GbixeZ3B4Pjt1cDtzfuRzfwBLmrW9xZgpPVZvwOMNa+3ttpHgG83y/wpU06+zuO6v5JHe0kdx+CHYAz4X8BhTfvhzfhYM/241vLvb9ZpB61eQQvxbwJ4ITDabOe/ZtAb5qDexsAfAN9t6vocg55OB9V2Bi5lcI7mZwz2CN42F9t1pu8Y9vJKb0lSJ4v9kJQkqSMDQ5LUiYEhSerEwJAkdWJgSJI6MTAkSZ0YGJKkTgwMqUdJVmXw/IpPN892+NskT57vuqR9YWBI/VsDXFRVvwz8CHjDPNcj7RMDQ+rf96rqxmb4egbPP5AOOAaG1L+ftoYfYXC7cumAY2BIkjoxMCRJnXi3WklSJ+5hSJI6MTAkSZ0YGJKkTgwMSVInBoYkqRMDQ5LUiYEhSerEwJAkdfL/Ab5Bi83pB09DAAAAAElFTkSuQmCC\n",
64 | "text/plain": [
65 | ""
66 | ]
67 | },
68 | "metadata": {},
69 | "output_type": "display_data"
70 | }
71 | ],
72 | "source": [
73 | "def prob_j_in_sample(n):\n",
74 | " return 1 - (1 - 1/n)**n\n",
75 | "\n",
76 | "x = np.arange(1, 100001)\n",
77 | "y = np.array([prob_j_in_sample(n) for n in x])\n",
78 | "\n",
79 | "ax = sns.lineplot(x=x, y=y)\n",
80 | "plt.xlabel('n')\n",
81 | "plt.ylabel('probability')"
82 | ]
83 | },
84 | {
85 | "cell_type": "markdown",
86 | "metadata": {},
87 | "source": [
88 | "### 2(h) We will now investigate numerically the probability that a bootstrap sample of size n = 100 contains the jth observation. Here j = 4. We repeatedly create bootstrap samples, and each time we record whether or not the fourth observation is contained in the bootstrap sample."
89 | ]
90 | },
91 | {
92 | "cell_type": "code",
93 | "execution_count": 15,
94 | "metadata": {},
95 | "outputs": [
96 | {
97 | "data": {
98 | "text/plain": [
99 | "0.6353635363536354"
100 | ]
101 | },
102 | "execution_count": 15,
103 | "metadata": {},
104 | "output_type": "execute_result"
105 | }
106 | ],
107 | "source": [
108 | "store = []\n",
109 | "for i in range(10000):\n",
110 | "    store += [np.sum(np.random.randint(low=1, high=101, size=100) == 4) > 0]\n",
111 | "\n",
112 | "np.mean(store)"
113 | ]
114 | },
115 | {
116 | "cell_type": "markdown",
117 | "metadata": {},
118 | "source": [
119 | "**Comment**\n",
120 | "\n",
121 | "The result observed from the numerical approach above is similar to our probabilistic estimate for a sample size of 100, which was P = 0.634.\n",
122 | "\n",
123 | "It is interesting to note that there is a surprisingly high level of variability between results, given that each result is averaged over 10,000 trials. This can be observed by running the above cell multiple times (note that no random seed is set)."
124 | ]
125 | },
126 | {
127 | "cell_type": "markdown",
128 | "metadata": {},
129 | "source": [
130 | "## 3. We now review k-fold cross-validation.\n",
131 | "\n",
132 | "### (a) Explain how k-fold cross-validation is implemented.\n",
133 | "\n",
134 | "In k-fold cross-validation the available observations are randomly divided into k non-overlapping folds of roughly equal size, $\\frac{n}{k}$.\n",
135 | "\n",
136 | "For each fold in turn, the model is fitted to the remaining $k-1$ folds (a training set of size $n(1 - \\frac{1}{k})$) and then tested on the held-out fold. This produces k error scores which are then averaged to produce the final cross-validation score.\n",
137 | "\n",
138 | "Note that the proportion of observations included in each training set increases with k.\n",
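"\n",
"As a minimal sketch of the procedure (assuming scikit-learn, which this repository already uses elsewhere, with a toy dataset and logistic regression standing in for the model of interest):\n",
"\n",
"```python\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import KFold\n",
"import numpy as np\n",
"\n",
"X = np.random.randn(100, 2)                    # toy feature matrix\n",
"y = (X[:, 0] + X[:, 1] > 0).astype(int)        # toy binary response\n",
"\n",
"kf = KFold(n_splits=5, shuffle=True, random_state=0)\n",
"fold_errors = []\n",
"for train_idx, test_idx in kf.split(X):\n",
"    model = LogisticRegression().fit(X[train_idx], y[train_idx])   # fit on the k-1 training folds\n",
"    fold_errors.append(1 - model.score(X[test_idx], y[test_idx]))  # error on the held-out fold\n",
"\n",
"cv_error = np.mean(fold_errors)                # average of the k fold error estimates\n",
"```\n",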
139 | "\n",
140 | "### (b) What are the advantages and disadvantages of k-fold cross-validation relative to:\n",
141 | "\n",
142 | "### - i. The validation set approach?\n",
143 | "\n",
144 | "When $k>2$, cross-validation provides a larger training set than the validation set approach, so the estimate of test error is less biased. This makes cross-validation particularly useful for more flexible models that benefit from a larger number of observations in the training set.\n",
145 | "\n",
146 | "The validation set estimate of the test error can also be highly variable, depending on exactly which observations fall into the training and validation sets; averaging over the k folds reduces this variability. Cross-validation is, however, more computationally expensive because the model must be fitted and tested once for each of the k folds.\n",
147 | "\n",
148 | "### - ii. LOOCV?\n",
149 | "\n",
150 | "Cross-validation for k
7 |
8 | This repository contains my solutions to the labs and exercises as Jupyter Notebooks written in Python using:
9 |
10 | - Numpy
11 | - Pandas
12 | - Matplotlib
13 | - Seaborn
14 | - Patsy
15 | - StatsModels
16 | - Sklearn
17 |
18 |
19 | Perhaps of most interest will be the recreation of some functions from the R language that I couldn't find in the Python ecosystem. These took me some time to reproduce, but the implementation details are not essential to the concepts taught in the book, so please feel free to reuse them. One example is a reproduction of R's `lm()` four-way diagnostic plot for linear regression in Chapter 3. Also, a collection of [all required datasets](./Notebooks/data) is provided in .csv format.
20 |
21 |
22 | ## To view notebooks
23 |
24 | Links to view each notebook below. The code is provided [here](./Notebooks).
25 |
26 | [Chapter 2 - Statistical Learning: Conceptual](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch2_statistical_learning_conceptual.ipynb)
27 | [Chapter 2 - Statistical Learning: Applied](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch2_statistical_learning_applied.ipynb)
28 |
29 |
30 | [Chapter 3 - Linear Regression: Conceptual](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch3_linear_regression_conceptual.ipynb)
31 | [Chapter 3 - Linear Regression: Applied](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch3_linear_regression_applied.ipynb)
32 |
33 |
34 | [Chapter 4 - Classification: Conceptual](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch4_classification_conceptual.ipynb)
35 | [Chapter 4 - Classification: Applied](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch4_classification_applied.ipynb)
36 |
37 |
38 | [Chapter 5 - Resampling Methods: Conceptual](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch5_resampling_methods_conceptual.ipynb)
39 | [Chapter 5 - Resampling Methods: Applied](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch5_resampling_methods_applied.ipynb)
40 |
41 |
42 | [Chapter 6 - Linear Model Selection and Regularization: Labs](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch6_linear_model_selection_and_regularisation_labs.ipynb)
43 | [Chapter 6 - Linear Model Selection and Regularization: Conceptual](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch6_linear_model_selection_and_regularisation_conceptual.ipynb)
44 | [Chapter 6 - Linear Model Selection and Regularization: Applied](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch6_linear_model_selection_and_regularisation_applied.ipynb)
45 |
46 |
47 | [Chapter 7 - Moving Beyond Linearity: Labs](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch7_moving_beyond_linearity_labs.ipynb)
48 | [Chapter 7 - Moving Beyond Linearity: Applied](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch7_moving_beyond_linearity_applied.ipynb)
49 |
50 |
51 | [Chapter 8 - Tree-Based Methods: Labs](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch8_tree_based_methods_labs.ipynb)
52 | [Chapter 8 - Tree-Based Methods: Conceptual](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch8_tree_based_methods_conceptual.ipynb)
53 | [Chapter 8 - Tree-Based Methods: Applied](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch8_tree_based_methods_applied.ipynb)
54 |
55 |
56 | [Chapter 9 - Support Vector Machines: Labs](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch9_support_vector_machines_labs.ipynb)
57 | [Chapter 9 - Support Vector Machines: Conceptual](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch9_support_vector_machines_conceptual.ipynb)
58 | [Chapter 9 - Support Vector Machines: Applied](https://nbviewer.jupyter.org/github/a-martyn/ISL-python/blob/master/Notebooks/ch9_support_vector_machines_applied.ipynb)
59 |
60 |
61 | ## To run notebooks
62 |
63 | Running the notebooks enables you to execute the code and play around with any interactive features.
64 |
65 | To run:
66 |
67 | 1. [Install Jupyter Notebooks](https://jupyter.readthedocs.io/en/latest/install.html). I recommend doing this via the Anaconda/Conda method to ensure that package versions play nicely together.
68 | 2. `cd` to this repo
69 | 3. Run `jupyter notebook` to run the Jupyter server locally on your machine. It should launch in your browser.
70 | 4. In the Jupyter browser app, navigate to the notebook you'd like to explore.
71 |
72 |
73 |
--------------------------------------------------------------------------------