├── Classification-Clas-K-Nearest-neighbors.ipynb
├── ML0101EN-Clus-Hierarchical-Cars-py-v1.ipynb
└── ML0101EN-Clus-K-Means-Customer-Seg-py-v1.ipynb
/Classification-Clas-K-Nearest-neighbors.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "button": false,
7 | "new_sheet": false,
8 | "run_control": {
9 | "read_only": false
10 | }
11 | },
12 | "source": [
13 | "
\n",
14 | "#
K-Nearest Neighbors"
15 | ]
16 | },
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "button": false,
21 | "new_sheet": false,
22 | "run_control": {
23 | "read_only": false
24 | }
25 | },
26 | "source": [
27 | "In this Lab you will load a customer dataset, fit the data, and use K-Nearest Neighbors to predict a data point. But what is **K-Nearest Neighbors**?"
28 | ]
29 | },
30 | {
31 | "cell_type": "markdown",
32 | "metadata": {
33 | "button": false,
34 | "new_sheet": false,
35 | "run_control": {
36 | "read_only": false
37 | }
38 | },
39 | "source": [
40 | "**K-Nearest Neighbors** is an algorithm for supervised learning. Where the data is 'trained' with data points corresponding to their classification. Once a point is to be predicted, it takes into account the 'K' nearest points to it to determine it's classification."
41 | ]
42 | },
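{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the idea concrete, here is a minimal, from-scratch sketch of the prediction step (illustrative only; `knn_predict` is a hypothetical helper, and later in this lab we use scikit-learn's `KNeighborsClassifier` instead):\n",
"\n",
"```python\n",
"import numpy as np\n",
"from collections import Counter\n",
"\n",
"def knn_predict(X_train, y_train, x_new, k=3):\n",
"    # Euclidean distance from the new point to every training point\n",
"    distances = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))\n",
"    # indices of the k closest training points\n",
"    nearest = np.argsort(distances)[:k]\n",
"    # majority vote among the labels of those neighbors\n",
"    return Counter(y_train[nearest]).most_common(1)[0][0]\n",
"```"
]
},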
43 | {
44 | "cell_type": "markdown",
45 | "metadata": {
46 | "button": false,
47 | "new_sheet": false,
48 | "run_control": {
49 | "read_only": false
50 | }
51 | },
52 | "source": [
53 | "### Here's an visualization of the K-Nearest Neighbors algorithm.\n",
54 | "\n",
55 | "
"
56 | ]
57 | },
58 | {
59 | "cell_type": "markdown",
60 | "metadata": {
61 | "button": false,
62 | "new_sheet": false,
63 | "run_control": {
64 | "read_only": false
65 | }
66 | },
67 | "source": [
68 | "In this case, we have data points of Class A and B. We want to predict what the star (test data point) is. If we consider a k value of 3 (3 nearest data points) we will obtain a prediction of Class B. Yet if we consider a k value of 6, we will obtain a prediction of Class A."
69 | ]
70 | },
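{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sketch of this effect on a small made-up 2D dataset (the coordinates below are illustrative, not taken from the figure):\n",
"\n",
"```python\n",
"import numpy as np\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"X_toy = np.array([[1.0, 1.0], [1.2, 0.9],                                       # 2 points of Class B near the star\n",
"                  [2.5, 2.5], [2.7, 2.4], [2.4, 2.6], [2.6, 2.7], [2.8, 2.5]])  # 5 points of Class A farther away\n",
"y_toy = np.array(['B', 'B', 'A', 'A', 'A', 'A', 'A'])\n",
"star = np.array([[1.1, 1.0]])\n",
"\n",
"for k in (3, 6):\n",
"    pred = KNeighborsClassifier(n_neighbors=k).fit(X_toy, y_toy).predict(star)\n",
"    print('k =', k, '->', pred[0])   # k=3 votes 2 B vs 1 A -> 'B'; k=6 votes 2 B vs 4 A -> 'A'\n",
"```"
]
},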
71 | {
72 | "cell_type": "markdown",
73 | "metadata": {
74 | "button": false,
75 | "new_sheet": false,
76 | "run_control": {
77 | "read_only": false
78 | }
79 | },
80 | "source": [
81 | "In this sense, it is important to consider the value of k. But hopefully from this diagram, you should get a sense of what the K-Nearest Neighbors algorithm is. It considers the 'K' Nearest Neighbors (points) when it predicts the classification of the test point."
82 | ]
83 | },
84 | {
85 | "cell_type": "markdown",
86 | "metadata": {
87 | "button": false,
88 | "new_sheet": false,
89 | "run_control": {
90 | "read_only": false
91 | }
92 | },
93 | "source": [
94 | "Lets load requiered libraries"
95 | ]
96 | },
97 | {
98 | "cell_type": "code",
99 | "execution_count": 1,
100 | "metadata": {
101 | "button": false,
102 | "new_sheet": false,
103 | "run_control": {
104 | "read_only": false
105 | }
106 | },
107 | "outputs": [],
108 | "source": [
109 | "import itertools\n",
110 | "import numpy as np\n",
111 | "import matplotlib.pyplot as plt\n",
112 | "from matplotlib.ticker import NullFormatter\n",
113 | "import pandas as pd\n",
114 | "import numpy as np\n",
115 | "import matplotlib.ticker as ticker\n",
116 | "from sklearn import preprocessing\n",
117 | "%matplotlib inline"
118 | ]
119 | },
120 | {
121 | "cell_type": "markdown",
122 | "metadata": {
123 | "button": false,
124 | "new_sheet": false,
125 | "run_control": {
126 | "read_only": false
127 | }
128 | },
129 | "source": [
130 | "### About dataset"
131 | ]
132 | },
133 | {
134 | "cell_type": "markdown",
135 | "metadata": {
136 | "button": false,
137 | "new_sheet": false,
138 | "run_control": {
139 | "read_only": false
140 | }
141 | },
142 | "source": [
143 | "Imagine a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, the company can customize offers for individual prospective customers. It is a classification problem. That is, given the dataset, with predefined labels, we need to build a model to be used to predict class of a new or unknown case. \n",
144 | "\n",
145 | "The example focuses on using demographic data, such as region, age, and marital, to predict usage patterns. \n",
146 | "\n",
147 | "The target field, called __custcat__, has four possible values that correspond to the four customer groups, as follows:\n",
148 | " 1- Basic Service\n",
149 | " 2- E-Service\n",
150 | " 3- Plus Service\n",
151 | " 4- Total Service\n",
152 | "\n",
153 | "Our objective is to build a classifier, to predict the class of unknown cases. We will use a specific type of classification called K nearest neighbour.\n"
154 | ]
155 | },
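{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you later want readable class names instead of the numeric codes, a small mapping taken directly from the list above can help (optional, not used by the rest of the lab):\n",
"\n",
"```python\n",
"# code -> service name, as defined for the custcat target field\n",
"custcat_names = {1: 'Basic Service', 2: 'E-Service', 3: 'Plus Service', 4: 'Total Service'}\n",
"```"
]
},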
156 | {
157 | "cell_type": "markdown",
158 | "metadata": {
159 | "button": false,
160 | "new_sheet": false,
161 | "run_control": {
162 | "read_only": false
163 | }
164 | },
165 | "source": [
166 | "Lets download the dataset. To download the data, we will use !wget to download it from IBM Object Storage."
167 | ]
168 | },
169 | {
170 | "cell_type": "code",
171 | "execution_count": 5,
172 | "metadata": {
173 | "button": false,
174 | "new_sheet": false,
175 | "run_control": {
176 | "read_only": false
177 | }
178 | },
179 | "outputs": [
180 | {
181 | "data": {
182 | "text/plain": [
183 | "('teleCust1000t.csv', )"
184 | ]
185 | },
186 | "execution_count": 5,
187 | "metadata": {},
188 | "output_type": "execute_result"
189 | }
190 | ],
191 | "source": [
192 | "import urllib.request\n",
193 | "url = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/teleCust1000t.csv'\n",
194 | "urllib.request.urlretrieve(url, filename = 'teleCust1000t.csv')\n",
195 | "#!wget -O teleCust1000t.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/teleCust1000t.csv"
196 | ]
197 | },
198 | {
199 | "cell_type": "markdown",
200 | "metadata": {},
201 | "source": [
202 | "__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)"
203 | ]
204 | },
205 | {
206 | "cell_type": "markdown",
207 | "metadata": {
208 | "button": false,
209 | "new_sheet": false,
210 | "run_control": {
211 | "read_only": false
212 | }
213 | },
214 | "source": [
215 | "### Load Data From CSV File "
216 | ]
217 | },
218 | {
219 | "cell_type": "code",
220 | "execution_count": 8,
221 | "metadata": {
222 | "button": false,
223 | "new_sheet": false,
224 | "run_control": {
225 | "read_only": false
226 | }
227 | },
228 | "outputs": [
229 | {
230 | "data": {
231 | "text/html": [
232 | "\n",
233 | "\n",
246 | "
\n",
247 | " \n",
248 | " \n",
249 | " | \n",
250 | " region | \n",
251 | " tenure | \n",
252 | " age | \n",
253 | " marital | \n",
254 | " address | \n",
255 | " income | \n",
256 | " ed | \n",
257 | " employ | \n",
258 | " retire | \n",
259 | " gender | \n",
260 | " reside | \n",
261 | " custcat | \n",
262 | "
\n",
263 | " \n",
264 | " \n",
265 | " \n",
266 | " 0 | \n",
267 | " 2 | \n",
268 | " 13 | \n",
269 | " 44 | \n",
270 | " 1 | \n",
271 | " 9 | \n",
272 | " 64.0 | \n",
273 | " 4 | \n",
274 | " 5 | \n",
275 | " 0.0 | \n",
276 | " 0 | \n",
277 | " 2 | \n",
278 | " 1 | \n",
279 | "
\n",
280 | " \n",
281 | " 1 | \n",
282 | " 3 | \n",
283 | " 11 | \n",
284 | " 33 | \n",
285 | " 1 | \n",
286 | " 7 | \n",
287 | " 136.0 | \n",
288 | " 5 | \n",
289 | " 5 | \n",
290 | " 0.0 | \n",
291 | " 0 | \n",
292 | " 6 | \n",
293 | " 4 | \n",
294 | "
\n",
295 | " \n",
296 | " 2 | \n",
297 | " 3 | \n",
298 | " 68 | \n",
299 | " 52 | \n",
300 | " 1 | \n",
301 | " 24 | \n",
302 | " 116.0 | \n",
303 | " 1 | \n",
304 | " 29 | \n",
305 | " 0.0 | \n",
306 | " 1 | \n",
307 | " 2 | \n",
308 | " 3 | \n",
309 | "
\n",
310 | " \n",
311 | " 3 | \n",
312 | " 2 | \n",
313 | " 33 | \n",
314 | " 33 | \n",
315 | " 0 | \n",
316 | " 12 | \n",
317 | " 33.0 | \n",
318 | " 2 | \n",
319 | " 0 | \n",
320 | " 0.0 | \n",
321 | " 1 | \n",
322 | " 1 | \n",
323 | " 1 | \n",
324 | "
\n",
325 | " \n",
326 | " 4 | \n",
327 | " 2 | \n",
328 | " 23 | \n",
329 | " 30 | \n",
330 | " 1 | \n",
331 | " 9 | \n",
332 | " 30.0 | \n",
333 | " 1 | \n",
334 | " 2 | \n",
335 | " 0.0 | \n",
336 | " 0 | \n",
337 | " 4 | \n",
338 | " 3 | \n",
339 | "
\n",
340 | " \n",
341 | "
\n",
342 | "
"
343 | ],
344 | "text/plain": [
345 | " region tenure age marital address income ed employ retire gender \\\n",
346 | "0 2 13 44 1 9 64.0 4 5 0.0 0 \n",
347 | "1 3 11 33 1 7 136.0 5 5 0.0 0 \n",
348 | "2 3 68 52 1 24 116.0 1 29 0.0 1 \n",
349 | "3 2 33 33 0 12 33.0 2 0 0.0 1 \n",
350 | "4 2 23 30 1 9 30.0 1 2 0.0 0 \n",
351 | "\n",
352 | " reside custcat \n",
353 | "0 2 1 \n",
354 | "1 6 4 \n",
355 | "2 2 3 \n",
356 | "3 1 1 \n",
357 | "4 4 3 "
358 | ]
359 | },
360 | "execution_count": 8,
361 | "metadata": {},
362 | "output_type": "execute_result"
363 | }
364 | ],
365 | "source": [
366 | "df = pd.read_csv('teleCust1000t.csv')\n",
367 | "df.head()"
368 | ]
369 | },
370 | {
371 | "cell_type": "markdown",
372 | "metadata": {
373 | "button": false,
374 | "new_sheet": false,
375 | "run_control": {
376 | "read_only": false
377 | }
378 | },
379 | "source": [
380 | "# Data Visualization and Anylisis \n",
381 | "\n"
382 | ]
383 | },
384 | {
385 | "cell_type": "markdown",
386 | "metadata": {
387 | "button": false,
388 | "new_sheet": false,
389 | "run_control": {
390 | "read_only": false
391 | }
392 | },
393 | "source": [
394 | "#### Let’s see how many of each class is in our data set "
395 | ]
396 | },
397 | {
398 | "cell_type": "code",
399 | "execution_count": 9,
400 | "metadata": {
401 | "button": false,
402 | "new_sheet": false,
403 | "run_control": {
404 | "read_only": false
405 | }
406 | },
407 | "outputs": [
408 | {
409 | "data": {
410 | "text/plain": [
411 | "3 281\n",
412 | "1 266\n",
413 | "4 236\n",
414 | "2 217\n",
415 | "Name: custcat, dtype: int64"
416 | ]
417 | },
418 | "execution_count": 9,
419 | "metadata": {},
420 | "output_type": "execute_result"
421 | }
422 | ],
423 | "source": [
424 | "df['custcat'].value_counts()"
425 | ]
426 | },
427 | {
428 | "cell_type": "markdown",
429 | "metadata": {
430 | "button": false,
431 | "new_sheet": false,
432 | "run_control": {
433 | "read_only": false
434 | }
435 | },
436 | "source": [
437 | "#### 281 Plus Service, 266 Basic-service, 236 Total Service, and 217 E-Service customers\n"
438 | ]
439 | },
440 | {
441 | "cell_type": "markdown",
442 | "metadata": {},
443 | "source": [
444 | "You can easily explore your data using visualization techniques:"
445 | ]
446 | },
447 | {
448 | "cell_type": "code",
449 | "execution_count": 10,
450 | "metadata": {},
451 | "outputs": [
452 | {
453 | "data": {
454 | "text/plain": [
455 | "array([[]], dtype=object)"
456 | ]
457 | },
458 | "execution_count": 10,
459 | "metadata": {},
460 | "output_type": "execute_result"
461 | },
462 | {
463 | "data": {
464 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYMAAAEICAYAAAC9E5gJAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8QVMy6AAAACXBIWXMAAAsTAAALEwEAmpwYAAASIUlEQVR4nO3dfbBcdX3H8feHYMES5aFgJk2iN9SUEWTqwy3qoPam0IKihNpi41AbKk7qFDva2qlBOq39g05sp452kDqpMEaxXFKQISPDKJN6q51qkSBPAVOiBAjEpCIIUUsb/PaPPddu0nuTu7m7e3fh/Zq5s2d/+ztnP3tY7ueesw9JVSFJem47bK4DSJLmnmUgSbIMJEmWgSQJy0CShGUgScIy0LNMki1JxuY6hzRs4ucMJEkeGUiSLAM9uyTZnuTMJB9OsiHJZ5I81Zw+Gm2btyTJ55P8Z5LHklzejB+W5M+SPJhkd7P+0c1tI0kqye8leTjJ40nek+SXk9yV5InJ7bTdz7uS3NfM/WKSl/R3j0gzYxno2excYBw4BtgITP7Cnwd8AXgQGAEWNfMALmx+lgMnAvMn12vzGmAZ8NvAx4BLgTOBU4C3J/mV5n7OAz4EvA04AfgqcE03H6DULb5moGeVJNuBdwOvB15fVWc24ycDm6vq+UleR6scFlbV3v3W3wRcX1VXNNdPAu4Bng8sBh4AFlfVI83tjwF/UFXXNtevB75aVR9LcjNwXVVd2dx2GLAHeFlVPdjL/SB1yiMDPZt9t235R8CRSQ4HlgAP7l8EjZ+ndcQw6UHgcGBB29iutuUfT3F9frP8EuDjzemjJ4DvA6F1JCINFMtAz0UPAy9uimF/j9L6JT7pxcBe9v2F38n9/H5VHdP28/yq+rdD2JbUU5aBnotuBXYCa5McleTIJKc3t10D/FGSpUnmA38FXDvNUcTBfBK4JMkpAEmOTnJ+Nx6A1G2WgZ5zquoZ4K3AS4GHgB20XgwGuAr4LPAVWq8P/Bfwh4d4PzcAHwHGkzxJ67WHN80qvNQjvoAsSfLIQJJkGUiSsAwkSVgGkiRaH6aZc8cff3yNjIx0vN4Pf/hDjjrqqO4H6jFz988wZgZz99uw5t68efP3quqEbmxrIMpgZGSE2267reP1JiYmGBsb636gHjN3/wxjZjB3vw1r7iRd+1oTTxNJkiwDSZJlIEnCMpAkYRlIkrAMJElYBpIkLANJEpaBJIkB+QTybI2suWnK8e1rz+lzEkkaTh4ZSJIsA0mSZSBJwjKQJGEZSJKwDCRJWAaSJCwDSRKWgSQJy0CShGUgScIykCRhGUiSsAwkSVgGkiQsA0kSHZRBknlJvpnkC83145LckuT+5vLYtrmXJNmWZGuSs3oRXJLUPZ0cGbwPuK/t+hpgU1UtAzY110lyMrASOAU4G7giybzuxJUk9cKMyiDJYuAc4FNtwyuA9c3yeuC8tvHxqnq6qh4AtgGndSWtJKknZnpk8DHgT4GftI0tqKqdAM3li5rxRcDDbfN2NGOSpAF1+MEmJHkLsLuqNicZm8E2M8VYTbHd1cBqgAULFjAxMTGDTe9rz549TExM8IFT9055+6Fssx8mcw+bYcw9jJnB3P02rLm76aBlAJwOnJvkzcCRwAuTXA3sSrKwqnYmWQjsbubvAJa0rb8YeHT/jVbVOmAdwOjoaI2NjXUcfmJigrGxMS5cc9OUt2+/oPNt9sNk7mEzjLmHMTOYu9+GNXc3HfQ0UVVdUlWLq2qE1gvD/1xVvwNsBFY101YBNzbLG4GVSY5IshRYBtza9eSSpK6ZyZHBdNYCG5JcBDwEnA9QVVuSbADuBfYCF1fVM7NOKknqmY7KoKomgIlm+THgjGnmXQZcNstskqQ+8RPIkiTLQJJkGUiSsAwkSVgGkiQsA0kSloEkCctAkoRlIEnCMpAkYRlIkrAMJElYBpIkLANJEpaBJAnLQJKEZSBJwjKQJGEZSJKwDCRJWAaSJCwDSRKWgSQJy0CShGUgScIykCRhGUiSsAwkSVgGkiQsA0kSloEkCctAkoRlIEnCMpAkYRlIkrAMJElYBpIkLANJEjMogyRHJrk1yZ1JtiT5y2b8uCS3JLm/uTy2bZ1LkmxLsjXJWb18AJKk2ZvJkcHTwK9W1S8BrwDOTvJaYA2wqaqWAZua6yQ5GVgJnAKcDVyRZF4PskuSuuSgZVAte5qrz2t+ClgBrG/G1wPnNcsrgPGqerqqHgC2Aad1M7QkqbtSVQef1PrLfjPwUuATVfXBJE9U1TFtcx6vqmOTXA58vaqubsavBG6uquv22+ZqYDXAggULXj0+Pt5x+D179jB//nzufuQHU95+6qKjO95mP0zmHjbDmHsYM4O5+21Ycy9fvnxzVY12Y1uHz2RSVT0DvCLJMcANSV5+gOmZahNTbHMdsA5gdHS0xsbGZhJlHxMTE4yNjXHhmpumvH37BZ1vsx8mcw+bYcw9jJnB3P02rLm7qaN3E1XVE8AErdcCdiVZCNBc7m6m7QCWtK22GHh0tkElSb0zk3cTndAcEZDk+cCZwLeAjcCqZtoq4MZmeSOwMskRSZYCy4Bbu5xbktRFMzlNtBBY37xucBiwoaq+kORrwIYkFwEPAecDVNWWJBuAe4G9wMXNaSZJ0oA6aBlU1V3AK6cYfww4Y5p1LgMum3U6SVJf+AlkSZJlIEmyDCRJWAaSJCwDSRKWgSQJy0CShGUgScIykCRhGUiSmOFXWA+rkem+2nrtOX1OIkmDzSMDSZJlIEmyDCRJWAaSJCwDSRKWgSQJy0CShGUgScIykCRhGUiSsAwkSVgGkiQsA0kSloEkCctAkoRlIEnCMpAkYRlIkrAMJElYBpIkLANJEpaBJAnLQJKEZSBJwjKQJGEZSJKwDCRJzKAMkixJ8uUk9yXZkuR9zfhxSW5Jcn9zeWzbOpck2ZZka5KzevkAJEmzN5Mjg73AB6rqZcBrgYuTnAysATZV1TJgU3Od5raVwCnA2cAVSeb1IrwkqTsOWgZVtbOqbm+WnwLuAxYBK4D1zbT1wHnN8gpgvKqerqoHgG3AaV3OLUnqolTVzCcnI8BXgJcDD1XVMW23PV5Vxya5HPh6VV3djF8J3FxV1+23rdXAaoAFCxa8enx8vOPwe/bsYf78+dz9yA86Wu/URUd3fF/dNJl72Axj7mHMDObut2HNvXz58s1VNdqNbR0+04lJ5gPXA++vqieTTDt1irH/1zhVtQ5YBzA6OlpjY2MzjfJTExMTjI2NceGamzpab/sFnd9XN03mHjbDmHsYM4O5+21Yc3fTjN5NlOR5tIrgc1X1+WZ4V5KFze0Lgd3N+A5gSdvqi4FHuxNXktQLM3k3UYArgfuq6qNtN20EVjXLq4Ab28ZXJjkiyVJgGXBr9yJLkrptJqeJTgfeCdyd5I5m7EPAWmBDkouAh4DzAapqS5INwL203ol0cVU90+3gkqTuOWgZVNW/MvXrAABnTLPOZcBls8glSeojP4EsSbIMJEmWgSQJy0CShGUgScIykCRhGUi
SsAwkSVgGkiQsA0kSloEkCctAkoRlIEnCMpAkYRlIkrAMJElYBpIkLANJEpaBJAnLQJKEZSBJwjKQJGEZSJKwDCRJWAaSJCwDSRKWgSQJy0CShGUgScIykCRhGUiSsAwkSVgGkiTg8LkOMBdG1tw05fj2tef0OYkkDQaPDCRJloEkyTKQJDGDMkhyVZLdSe5pGzsuyS1J7m8uj2277ZIk25JsTXJWr4JLkrpnJkcGnwbO3m9sDbCpqpYBm5rrJDkZWAmc0qxzRZJ5XUsrSeqJg5ZBVX0F+P5+wyuA9c3yeuC8tvHxqnq6qh4AtgGndSeqJKlXUlUHn5SMAF+oqpc315+oqmPabn+8qo5Ncjnw9aq6uhm/Eri5qq6bYpurgdUACxYsePX4+HjH4ffs2cP8+fO5+5EfdLzuVE5ddHRXtnMwk7mHzTDmHsbMYO5+G9bcy5cv31xVo93YVrc/Z5ApxqZsm6paB6wDGB0drbGxsY7vbGJigrGxMS6c5nMDndp+QecZDsVk7mEzjLmHMTOYu9+GNXc3Heq7iXYlWQjQXO5uxncAS9rmLQYePfR4kqR+ONQy2AisapZXATe2ja9MckSSpcAy4NbZRZQk9dpBTxMluQYYA45PsgP4C2AtsCHJRcBDwPkAVbUlyQbgXmAvcHFVPdOj7JKkLjloGVTVO6a56Yxp5l8GXDabUJKk/vITyJIky0CSZBlIkrAMJElYBpIkLANJEpaBJAnLQJKEZSBJwjKQJGEZSJKwDCRJWAaSJCwDSRKWgSQJy0CShGUgSWIG/9LZc8nImpumHN++9pw+J5Gk/vLIQJJkGUiSLANJEpaBJAnLQJKE7yaaEd9lJOnZziMDSZJlIEmyDCRJWAaSJCwDSRKWgSQJy0CShJ8zmBU/fyDp2cIjA0mSZSBJ8jRRX02eVvrAqXu5sO0Uk6eVJM01y6AHpnstQZIGlaeJJEmWgSSph6eJkpwNfByYB3yqqtb26r6GXTdPK/n6g6RD0ZMySDIP+ATwa8AO4BtJNlbVvb24Px2cn4mQdCC9OjI4DdhWVd8BSDIOrAAsgx7r9Cij05Lo1vzp9Ho7/TBoWQ+Uxz8G+mfQ/yBLVXV/o8lvAWdX1bub6+8EXlNV722bsxpY3Vw9Cdh6CHd1PPC9WcadC+bun2HMDObut2HNfVJVvaAbG+rVkUGmGNundapqHbBuVneS3FZVo7PZxlwwd/8MY2Ywd78Nc+5ubatX7ybaASxpu74YeLRH9yVJmqVelcE3gGVJlib5GWAlsLFH9yVJmqWenCaqqr1J3gt8kdZbS6+qqi09uKtZnWaaQ+bun2HMDObut+d87p68gCxJGi5+AlmSZBlIkoa0DJKcnWRrkm1J1sx1nnZJliT5cpL7kmxJ8r5m/MNJHklyR/Pz5rZ1Lmkey9YkZ81h9u1J7m7y3daMHZfkliT3N5fHDlLuJCe17dM7kjyZ5P2DuL+TXJVkd5J72sY63r9JXt38d9qW5O+STPVW7l7n/psk30pyV5IbkhzTjI8k+XHbfv/kXOSeJnPHz4kB2dfXtmXenuSOZry7+7qqhuqH1gvS3wZOBH4GuBM4ea5zteVbCLyqWX4B8B/AycCHgT+ZYv7JzWM4AljaPLZ5c5R9O3D8fmN/DaxpltcAHxm03Ps9N74LvGQQ9zfwRuBVwD2z2b/ArcDraH2e52bgTXOQ+9eBw5vlj7TlHmmft992+pZ7mswdPycGYV/vd/vfAn/ei309jEcGP/2qi6r6b2Dyqy4GQlXtrKrbm+WngPuARQdYZQUwXlVPV9UDwDZaj3FQrADWN8vrgfPaxgct9xnAt6vqwQPMmbPcVfUV4PtT5Jnx/k2yEHhhVX2tWv/Xf6Ztnb7lrqovVdXe5urXaX2WaFr9zj3Nvp7OQO/rSc1f928HrjnQNg419zCWwSLg4bbrOzjwL9s5k2QEeCXw783Qe5vD6qvaTgcM0uMp4EtJNqf1dSEAC6pqJ7SKDnhRMz5IuSetZN//UQZ9f0Pn+3dRs7z/+Fx6F62/PictTfLNJP+S5A3N2KDk7uQ5MSiZJ70B2FVV97eNdW1fD2MZHPSrLgZBkvnA9cD7q+pJ4O+BXwBeAeykdbgHg/V4Tq+qVwFvAi5O8sYDzB2k3KT14cZzgX9qhoZhfx/IdDkHKn+SS4G9wOeaoZ3Ai6vqlcAfA/+Y5IUMRu5OnxODkLndO9j3j52u7uthLIOB/6qLJM+jVQSfq6rPA1TVrqp6pqp+AvwD/3dqYmAeT1U92lzuBm6glXFXc9g5efi5u5k+MLkbbwJur6pdMBz7u9Hp/t3Bvqdk5ix/klXAW4ALmtMRNKdaHmuWN9M6//6LDEDuQ3hOzHnmSUkOB94GXDs51u19PYxlMNBfddGc17sSuK+qPto2vrBt2m8Ak+8W2AisTHJEkqXAMlov/vRVkqOSvGBymdYLhPc0+VY101YBNzbLA5G7zT5/NQ36/m7T0f5tTiU9leS1zXPtd9vW6Zu0/vGqDwLnVtWP2sZPSOvfMyHJiU3u7wxC7k6fE4OQuc2ZwLeq6qenf7q+r3v5ynivfoA303qXzreBS+c6z37ZXk/rkOwu4I7m583AZ4G7m/GNwMK2dS5tHstWevxuhQPkPpHWOyruBLZM7lfg54BNwP3N5XGDlLvJ8bPAY8DRbWMDt79pldVO4H9o/fV20aHsX2CU1i+ybwOX03yTQJ9zb6N1nn3yOf7JZu5vNs+fO4HbgbfORe5pMnf8nBiEfd2Mfxp4z35zu7qv/ToKSdJQniaSJHWZZSBJsgwkSZaBJAnLQJKEZSBJwjKQJAH/C+ch7ZFQjClTAAAAAElFTkSuQmCC\n",
465 | "text/plain": [
466 | ""
467 | ]
468 | },
469 | "metadata": {
470 | "needs_background": "light"
471 | },
472 | "output_type": "display_data"
473 | }
474 | ],
475 | "source": [
476 | "df.hist(column='income', bins=50)"
477 | ]
478 | },
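{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another quick look you could try (a sketch, not required for the lab): a bar chart of the class counts computed earlier:\n",
"\n",
"```python\n",
"# number of customers per custcat class\n",
"df['custcat'].value_counts().plot(kind='bar')\n",
"plt.show()\n",
"```"
]
},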
479 | {
480 | "cell_type": "markdown",
481 | "metadata": {
482 | "button": false,
483 | "new_sheet": false,
484 | "run_control": {
485 | "read_only": false
486 | }
487 | },
488 | "source": [
489 | "### Feature set"
490 | ]
491 | },
492 | {
493 | "cell_type": "markdown",
494 | "metadata": {
495 | "button": false,
496 | "new_sheet": false,
497 | "run_control": {
498 | "read_only": false
499 | }
500 | },
501 | "source": [
502 | "Lets defind feature sets, X:"
503 | ]
504 | },
505 | {
506 | "cell_type": "code",
507 | "execution_count": 11,
508 | "metadata": {},
509 | "outputs": [
510 | {
511 | "data": {
512 | "text/plain": [
513 | "Index(['region', 'tenure', 'age', 'marital', 'address', 'income', 'ed',\n",
514 | " 'employ', 'retire', 'gender', 'reside', 'custcat'],\n",
515 | " dtype='object')"
516 | ]
517 | },
518 | "execution_count": 11,
519 | "metadata": {},
520 | "output_type": "execute_result"
521 | }
522 | ],
523 | "source": [
524 | "df.columns"
525 | ]
526 | },
527 | {
528 | "cell_type": "markdown",
529 | "metadata": {},
530 | "source": [
531 | "To use scikit-learn library, we have to convert the Pandas data frame to a Numpy array:"
532 | ]
533 | },
534 | {
535 | "cell_type": "code",
536 | "execution_count": 12,
537 | "metadata": {
538 | "button": false,
539 | "new_sheet": false,
540 | "run_control": {
541 | "read_only": false
542 | }
543 | },
544 | "outputs": [
545 | {
546 | "data": {
547 | "text/plain": [
548 | "array([[ 2., 13., 44., 1., 9., 64., 4., 5., 0., 0., 2.],\n",
549 | " [ 3., 11., 33., 1., 7., 136., 5., 5., 0., 0., 6.],\n",
550 | " [ 3., 68., 52., 1., 24., 116., 1., 29., 0., 1., 2.],\n",
551 | " [ 2., 33., 33., 0., 12., 33., 2., 0., 0., 1., 1.],\n",
552 | " [ 2., 23., 30., 1., 9., 30., 1., 2., 0., 0., 4.]])"
553 | ]
554 | },
555 | "execution_count": 12,
556 | "metadata": {},
557 | "output_type": "execute_result"
558 | }
559 | ],
560 | "source": [
561 | "X = df[['region', 'tenure','age', 'marital', 'address', 'income', 'ed', 'employ','retire', 'gender', 'reside']] .values #.astype(float)\n",
562 | "X[0:5]\n"
563 | ]
564 | },
565 | {
566 | "cell_type": "markdown",
567 | "metadata": {
568 | "button": false,
569 | "new_sheet": false,
570 | "run_control": {
571 | "read_only": false
572 | }
573 | },
574 | "source": [
575 | "What are our lables?"
576 | ]
577 | },
578 | {
579 | "cell_type": "code",
580 | "execution_count": 13,
581 | "metadata": {
582 | "button": false,
583 | "new_sheet": false,
584 | "run_control": {
585 | "read_only": false
586 | }
587 | },
588 | "outputs": [
589 | {
590 | "data": {
591 | "text/plain": [
592 | "array([1, 4, 3, 1, 3], dtype=int64)"
593 | ]
594 | },
595 | "execution_count": 13,
596 | "metadata": {},
597 | "output_type": "execute_result"
598 | }
599 | ],
600 | "source": [
601 | "y = df['custcat'].values\n",
602 | "y[0:5]"
603 | ]
604 | },
605 | {
606 | "cell_type": "markdown",
607 | "metadata": {
608 | "button": false,
609 | "new_sheet": false,
610 | "run_control": {
611 | "read_only": false
612 | }
613 | },
614 | "source": [
615 | "## Normalize Data "
616 | ]
617 | },
618 | {
619 | "cell_type": "markdown",
620 | "metadata": {
621 | "button": false,
622 | "new_sheet": false,
623 | "run_control": {
624 | "read_only": false
625 | }
626 | },
627 | "source": [
628 | "Data Standardization give data zero mean and unit variance, it is good practice, especially for algorithms such as KNN which is based on distance of cases:"
629 | ]
630 | },
631 | {
632 | "cell_type": "code",
633 | "execution_count": 14,
634 | "metadata": {
635 | "button": false,
636 | "new_sheet": false,
637 | "run_control": {
638 | "read_only": false
639 | }
640 | },
641 | "outputs": [
642 | {
643 | "data": {
644 | "text/plain": [
645 | "array([[-0.02696767, -1.055125 , 0.18450456, 1.0100505 , -0.25303431,\n",
646 | " -0.12650641, 1.0877526 , -0.5941226 , -0.22207644, -1.03459817,\n",
647 | " -0.23065004],\n",
648 | " [ 1.19883553, -1.14880563, -0.69181243, 1.0100505 , -0.4514148 ,\n",
649 | " 0.54644972, 1.9062271 , -0.5941226 , -0.22207644, -1.03459817,\n",
650 | " 2.55666158],\n",
651 | " [ 1.19883553, 1.52109247, 0.82182601, 1.0100505 , 1.23481934,\n",
652 | " 0.35951747, -1.36767088, 1.78752803, -0.22207644, 0.96655883,\n",
653 | " -0.23065004],\n",
654 | " [-0.02696767, -0.11831864, -0.69181243, -0.9900495 , 0.04453642,\n",
655 | " -0.41625141, -0.54919639, -1.09029981, -0.22207644, 0.96655883,\n",
656 | " -0.92747794],\n",
657 | " [-0.02696767, -0.58672182, -0.93080797, 1.0100505 , -0.25303431,\n",
658 | " -0.44429125, -1.36767088, -0.89182893, -0.22207644, -1.03459817,\n",
659 | " 1.16300577]])"
660 | ]
661 | },
662 | "execution_count": 14,
663 | "metadata": {},
664 | "output_type": "execute_result"
665 | }
666 | ],
667 | "source": [
668 | "X = preprocessing.StandardScaler().fit(X).transform(X.astype(float))\n",
669 | "X[0:5]"
670 | ]
671 | },
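{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (a sketch you can paste into a new cell), each column of the scaled feature matrix should now have mean roughly 0 and standard deviation roughly 1:\n",
"\n",
"```python\n",
"# column means are ~0 and column standard deviations are ~1 after StandardScaler\n",
"print(np.round(X.mean(axis=0), 6))\n",
"print(np.round(X.std(axis=0), 6))\n",
"```"
]
},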
672 | {
673 | "cell_type": "markdown",
674 | "metadata": {
675 | "button": false,
676 | "new_sheet": false,
677 | "run_control": {
678 | "read_only": false
679 | }
680 | },
681 | "source": [
682 | "### Train Test Split \n",
683 | "Out of Sample Accuracy is the percentage of correct predictions that the model makes on data that that the model has NOT been trained on. Doing a train and test on the same dataset will most likely have low out-of-sample accuracy, due to the likelihood of being over-fit.\n",
684 | "\n",
685 | "It is important that our models have a high, out-of-sample accuracy, because the purpose of any model, of course, is to make correct predictions on unknown data. So how can we improve out-of-sample accuracy? One way is to use an evaluation approach called Train/Test Split.\n",
686 | "Train/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set. \n",
687 | "\n",
688 | "This will provide a more accurate evaluation on out-of-sample accuracy because the testing dataset is not part of the dataset that have been used to train the data. It is more realistic for real world problems.\n"
689 | ]
690 | },
691 | {
692 | "cell_type": "code",
693 | "execution_count": 15,
694 | "metadata": {
695 | "button": false,
696 | "new_sheet": false,
697 | "run_control": {
698 | "read_only": false
699 | }
700 | },
701 | "outputs": [
702 | {
703 | "name": "stdout",
704 | "output_type": "stream",
705 | "text": [
706 | "Train set: (800, 11) (800,)\n",
707 | "Test set: (200, 11) (200,)\n"
708 | ]
709 | }
710 | ],
711 | "source": [
712 | "from sklearn.model_selection import train_test_split\n",
713 | "X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)\n",
714 | "print ('Train set:', X_train.shape, y_train.shape)\n",
715 | "print ('Test set:', X_test.shape, y_test.shape)"
716 | ]
717 | },
718 | {
719 | "cell_type": "markdown",
720 | "metadata": {
721 | "button": false,
722 | "new_sheet": false,
723 | "run_control": {
724 | "read_only": false
725 | }
726 | },
727 | "source": [
728 | "# Classification "
729 | ]
730 | },
731 | {
732 | "cell_type": "markdown",
733 | "metadata": {
734 | "button": false,
735 | "new_sheet": false,
736 | "run_control": {
737 | "read_only": false
738 | }
739 | },
740 | "source": [
741 | "## K nearest neighbor (K-NN)"
742 | ]
743 | },
744 | {
745 | "cell_type": "markdown",
746 | "metadata": {
747 | "button": false,
748 | "new_sheet": false,
749 | "run_control": {
750 | "read_only": false
751 | }
752 | },
753 | "source": [
754 | "#### Import library "
755 | ]
756 | },
757 | {
758 | "cell_type": "markdown",
759 | "metadata": {
760 | "button": false,
761 | "new_sheet": false,
762 | "run_control": {
763 | "read_only": false
764 | }
765 | },
766 | "source": [
767 | "Classifier implementing the k-nearest neighbors vote."
768 | ]
769 | },
770 | {
771 | "cell_type": "code",
772 | "execution_count": 16,
773 | "metadata": {
774 | "button": false,
775 | "new_sheet": false,
776 | "run_control": {
777 | "read_only": false
778 | }
779 | },
780 | "outputs": [],
781 | "source": [
782 | "from sklearn.neighbors import KNeighborsClassifier"
783 | ]
784 | },
785 | {
786 | "cell_type": "markdown",
787 | "metadata": {
788 | "button": false,
789 | "new_sheet": false,
790 | "run_control": {
791 | "read_only": false
792 | }
793 | },
794 | "source": [
795 | "### Training\n",
796 | "\n",
797 | "Lets start the algorithm with k=4 for now:"
798 | ]
799 | },
800 | {
801 | "cell_type": "code",
802 | "execution_count": 17,
803 | "metadata": {
804 | "button": false,
805 | "new_sheet": false,
806 | "run_control": {
807 | "read_only": false
808 | }
809 | },
810 | "outputs": [
811 | {
812 | "data": {
813 | "text/plain": [
814 | "KNeighborsClassifier(n_neighbors=4)"
815 | ]
816 | },
817 | "execution_count": 17,
818 | "metadata": {},
819 | "output_type": "execute_result"
820 | }
821 | ],
822 | "source": [
823 | "k = 4\n",
824 | "#Train Model and Predict \n",
825 | "neigh = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)\n",
826 | "neigh"
827 | ]
828 | },
829 | {
830 | "cell_type": "markdown",
831 | "metadata": {
832 | "button": false,
833 | "new_sheet": false,
834 | "run_control": {
835 | "read_only": false
836 | }
837 | },
838 | "source": [
839 | "### Predicting\n",
840 | "we can use the model to predict the test set:"
841 | ]
842 | },
843 | {
844 | "cell_type": "code",
845 | "execution_count": 18,
846 | "metadata": {
847 | "button": false,
848 | "new_sheet": false,
849 | "run_control": {
850 | "read_only": false
851 | }
852 | },
853 | "outputs": [
854 | {
855 | "data": {
856 | "text/plain": [
857 | "array([1, 1, 3, 2, 4], dtype=int64)"
858 | ]
859 | },
860 | "execution_count": 18,
861 | "metadata": {},
862 | "output_type": "execute_result"
863 | }
864 | ],
865 | "source": [
866 | "yhat = neigh.predict(X_test)\n",
867 | "yhat[0:5]"
868 | ]
869 | },
870 | {
871 | "cell_type": "markdown",
872 | "metadata": {
873 | "button": false,
874 | "new_sheet": false,
875 | "run_control": {
876 | "read_only": false
877 | }
878 | },
879 | "source": [
880 | "### Accuracy evaluation\n",
881 | "In multilabel classification, __accuracy classification score__ function computes subset accuracy. This function is equal to the jaccard_similarity_score function. Essentially, it calculates how match the actual labels and predicted labels are in the test set."
882 | ]
883 | },
884 | {
885 | "cell_type": "code",
886 | "execution_count": 19,
887 | "metadata": {},
888 | "outputs": [
889 | {
890 | "name": "stdout",
891 | "output_type": "stream",
892 | "text": [
893 | "Train set Accuracy: 0.5475\n",
894 | "Test set Accuracy: 0.32\n"
895 | ]
896 | }
897 | ],
898 | "source": [
899 | "from sklearn import metrics\n",
900 | "print(\"Train set Accuracy: \", metrics.accuracy_score(y_train, neigh.predict(X_train)))\n",
901 | "print(\"Test set Accuracy: \", metrics.accuracy_score(y_test, yhat))"
902 | ]
903 | },
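{
"cell_type": "markdown",
"metadata": {},
"source": [
"For intuition, the same test-set accuracy can be computed by hand as the fraction of exact matches (a sketch using the arrays already defined above):\n",
"\n",
"```python\n",
"# fraction of test samples where the predicted label equals the true label\n",
"print((yhat == y_test).mean())\n",
"```"
]
},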
904 | {
905 | "cell_type": "markdown",
906 | "metadata": {},
907 | "source": [
908 | "## Practice\n",
909 | "Can you build the model again, but this time with k=6?"
910 | ]
911 | },
912 | {
913 | "cell_type": "code",
914 | "execution_count": 23,
915 | "metadata": {},
916 | "outputs": [
917 | {
918 | "name": "stdout",
919 | "output_type": "stream",
920 | "text": [
921 | "Train set Accuracy: 0.51625\n",
922 | "Test set Accuracy: 0.31\n"
923 | ]
924 | }
925 | ],
926 | "source": [
927 | "# write your code here\n",
928 | "k = 6\n",
929 | "#Train Model and Predict \n",
930 | "neigh1 = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)\n",
931 | "neigh1\n",
932 | "yhat = neigh1.predict(X_test)\n",
933 | "yhat[0:5]\n",
934 | "from sklearn import metrics\n",
935 | "print(\"Train set Accuracy: \", metrics.accuracy_score(y_train, neigh1.predict(X_train)))\n",
936 | "print(\"Test set Accuracy: \", metrics.accuracy_score(y_test, yhat))"
937 | ]
938 | },
939 | {
940 | "cell_type": "markdown",
941 | "metadata": {},
942 | "source": [
943 | "Double-click __here__ for the solution.\n",
944 | "\n",
945 | ""
955 | ]
956 | },
957 | {
958 | "cell_type": "markdown",
959 | "metadata": {
960 | "button": false,
961 | "new_sheet": false,
962 | "run_control": {
963 | "read_only": false
964 | }
965 | },
966 | "source": [
967 | "#### What about other K?\n",
968 | "K in KNN, is the number of nearest neighbors to examine. It is supposed to be specified by User. So, how we choose right K?\n",
969 | "The general solution is to reserve a part of your data for testing the accuracy of the model. Then chose k =1, use the training part for modeling, and calculate the accuracy of prediction using all samples in your test set. Repeat this process, increasing the k, and see which k is the best for your model.\n",
970 | "\n",
971 | "We can calucalte the accuracy of KNN for different Ks."
972 | ]
973 | },
974 | {
975 | "cell_type": "code",
976 | "execution_count": 24,
977 | "metadata": {
978 | "button": false,
979 | "new_sheet": false,
980 | "run_control": {
981 | "read_only": false
982 | }
983 | },
984 | "outputs": [
985 | {
986 | "data": {
987 | "text/plain": [
988 | "array([0.3 , 0.29 , 0.315, 0.32 , 0.315, 0.31 , 0.335, 0.325, 0.34 ])"
989 | ]
990 | },
991 | "execution_count": 24,
992 | "metadata": {},
993 | "output_type": "execute_result"
994 | }
995 | ],
996 | "source": [
997 | "Ks = 10\n",
998 | "mean_acc = np.zeros((Ks-1))\n",
999 | "std_acc = np.zeros((Ks-1))\n",
1000 | "ConfustionMx = [];\n",
1001 | "for n in range(1,Ks):\n",
1002 | " \n",
1003 | " #Train Model and Predict \n",
1004 | " neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)\n",
1005 | " yhat=neigh.predict(X_test)\n",
1006 | " mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)\n",
1007 | "\n",
1008 | " \n",
1009 | " std_acc[n-1]=np.std(yhat==y_test)/np.sqrt(yhat.shape[0])\n",
1010 | "\n",
1011 | "mean_acc"
1012 | ]
1013 | },
1014 | {
1015 | "cell_type": "markdown",
1016 | "metadata": {
1017 | "button": false,
1018 | "new_sheet": false,
1019 | "run_control": {
1020 | "read_only": false
1021 | }
1022 | },
1023 | "source": [
1024 | "#### Plot model accuracy for Different number of Neighbors "
1025 | ]
1026 | },
1027 | {
1028 | "cell_type": "code",
1029 | "execution_count": null,
1030 | "metadata": {
1031 | "button": false,
1032 | "collapsed": true,
1033 | "new_sheet": false,
1034 | "run_control": {
1035 | "read_only": false
1036 | }
1037 | },
1038 | "outputs": [],
1039 | "source": [
1040 | "plt.plot(range(1,Ks),mean_acc,'g')\n",
1041 | "plt.fill_between(range(1,Ks),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)\n",
1042 | "plt.legend(('Accuracy ', '+/- 3xstd'))\n",
1043 | "plt.ylabel('Accuracy ')\n",
1044 | "plt.xlabel('Number of Nabors (K)')\n",
1045 | "plt.tight_layout()\n",
1046 | "plt.show()"
1047 | ]
1048 | },
1049 | {
1050 | "cell_type": "code",
1051 | "execution_count": null,
1052 | "metadata": {
1053 | "button": false,
1054 | "collapsed": true,
1055 | "new_sheet": false,
1056 | "run_control": {
1057 | "read_only": false
1058 | }
1059 | },
1060 | "outputs": [],
1061 | "source": [
1062 | "print( \"The best accuracy was with\", mean_acc.max(), \"with k=\", mean_acc.argmax()+1) "
1063 | ]
1064 | },
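{
"cell_type": "markdown",
"metadata": {},
"source": [
"A common refinement (not part of this lab) is to pick K with k-fold cross-validation instead of a single train/test split, so the choice does not depend on one particular split. A minimal sketch using scikit-learn:\n",
"\n",
"```python\n",
"from sklearn.model_selection import cross_val_score\n",
"\n",
"cv_means = []\n",
"for n in range(1, Ks):\n",
"    # mean accuracy over 5 folds for this value of K\n",
"    scores = cross_val_score(KNeighborsClassifier(n_neighbors=n), X, y, cv=5)\n",
"    cv_means.append(scores.mean())\n",
"print(\"Best k by 5-fold CV:\", int(np.argmax(cv_means)) + 1)\n",
"```"
]
},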
1065 | {
1066 | "cell_type": "markdown",
1067 | "metadata": {
1068 | "button": false,
1069 | "new_sheet": false,
1070 | "run_control": {
1071 | "read_only": false
1072 | }
1073 | },
1074 | "source": [
1075 | "## Want to learn more?\n",
1076 | "\n",
1077 | "IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: [SPSS Modeler](http://cocl.us/ML0101EN-SPSSModeler).\n",
1078 | "\n",
1079 | "Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at [Watson Studio](https://cocl.us/ML0101EN_DSX)\n",
1080 | "\n",
1081 | "### Thanks for completing this lesson!\n",
1082 | "\n",
1083 | "Notebook created by: Saeed Aghabozorgi\n",
1084 | "\n",
1085 | "
\n",
1086 | "Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/)."
1087 | ]
1088 | }
1089 | ],
1090 | "metadata": {
1091 | "kernelspec": {
1092 | "display_name": "Python 3",
1093 | "language": "python",
1094 | "name": "python3"
1095 | },
1096 | "language_info": {
1097 | "codemirror_mode": {
1098 | "name": "ipython",
1099 | "version": 3
1100 | },
1101 | "file_extension": ".py",
1102 | "mimetype": "text/x-python",
1103 | "name": "python",
1104 | "nbconvert_exporter": "python",
1105 | "pygments_lexer": "ipython3",
1106 | "version": "3.8.8"
1107 | }
1108 | },
1109 | "nbformat": 4,
1110 | "nbformat_minor": 2
1111 | }
1112 |
--------------------------------------------------------------------------------