├── ALS_Image_Test.png ├── AirDelay.ipynb ├── DSMap.png ├── DeepLearn.ipynb ├── Deep_Learning.jpg ├── Delays.jpg ├── IndeedJobs.ipynb ├── MaskTrain.png ├── Movie_thtr.jpg ├── NLP_Movies.ipynb ├── Penn_State.jpg ├── README.md ├── RecEngine_NB.ipynb ├── Rec_Engine_Image_Amazon.png ├── Test_Image.png ├── Traintest_ex.png ├── UniSal.ipynb ├── XG_Boost_NB.ipynb └── XG_Cover.jpg /ALS_Image_Test.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/ALS_Image_Test.png -------------------------------------------------------------------------------- /DSMap.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/DSMap.png -------------------------------------------------------------------------------- /DeepLearn.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# An Introduction to Deep Learning using nolearn" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "\n", 15 | "[Source](http://www.purdue.edu/newsroom/releases/2014/Q1/smartphone-to-become-smarter-with-deep-learning-innovation.html)\n", 16 | "\n", 17 | "One of the most well-known problems in machine learning is how to categorize [handwritten numbers](http://en.wikipedia.org/wiki/MNIST_database) automatically. The idea is that you have 10 different digits (0-9) and you want a computer to correctly identify each one. This would come in handy at the post office, along with many other applications such as identifying address numbers in images. \n", 18 | "\n", 19 | "Machine learning experts have worked on this problem for several years, trying [a variety of approaches](http://yann.lecun.com/exdb/mnist/). Ultimately, however, the best algorithm for the task of image classification is one that can fit features that aren't easily described. This is where neural networks truly shine. \n", 20 | "\n", 21 | "Deep learning has become quite the trendy subject recently. Here is an [example](http://venturebeat.com/2015/02/09/microsoft-researchers-say-their-newest-deep-learning-system-beats-humans-and-google/). Essentially, researchers are trying to use Deep Learning to categorize images as well as (or better than) a human can, a task that humans still perform far more naturally than computers. \n", 22 | "\n", 23 | "If you want a sense of how Deep Learning is being used (and how it works), listening to a talk from Dr. Andrew Ng is always a good idea!" 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 1, 29 | "metadata": { 30 | "collapsed": false 31 | }, 32 | "outputs": [ 33 | { 34 | "data": { 35 | "text/html": [ 36 | "\n", 37 | " \n", 44 | " " 45 | ], 46 | "text/plain": [ 47 | "" 48 | ] 49 | }, 50 | "execution_count": 1, 51 | "metadata": {}, 52 | "output_type": "execute_result" 53 | } 54 | ], 55 | "source": [ 56 | "from IPython.display import YouTubeVideo\n", 57 | "YouTubeVideo('ZmNOAtZIgIk')" 58 | ] 59 | }, 60 | { 61 | "cell_type": "markdown", 62 | "metadata": {}, 63 | "source": [ 64 | "Basically, the key part of deep learning is finding features that can be used for classification from UNLABELED training examples. More data = a more accurate model, most of the time.
Labeling the myriad of images available on the web one at a time into categories would be very difficult and time-consuming (although workers have been paid to do exactly this through Amazon's [Mechanical Turk](https://www.mturk.com/mturk/welcome) program). Instead, just find features from your unlabeled images (such as certain curves, color gradients, or other features that humans would have a difficult time describing). Then, use these features to help categorize your images. This greatly improves accuracy because now you have more data to work with!" 65 | ] 66 | }, 67 | { 68 | "cell_type": "markdown", 69 | "metadata": {}, 70 | "source": [ 71 | "As a way of practicing this, we are going to use some Deep Learning on the MNIST dataset of numbers to see if we can get a low error rate. I couldn't decide which neural network library to use at first, since scikit-learn doesn't really have any. I was inspired by this post [here](http://www.pyimagesearch.com/2014/09/22/getting-started-deep-learning-python/) from Dr. Adrian Rosebrock. He used the library [nolearn](https://pythonhosted.org/nolearn/), which we will also be using. \n", 72 | "\n", 73 | "I just found out while writing this that the dbn class of nolearn is being removed entirely for the new version 0.6 and that the author suggests switching to [lasagne](https://github.com/dnouri/nolearn/blob/master/nolearn/lasagne.py). If you are going to follow along with this notebook, make sure you have a version prior to nolearn 0.6. " 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "## Processing the Images" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "Just to establish a baseline, let's try a very basic run of a deep belief network using nolearn. Because image processing can be fairly memory intensive, I am including the key steps of processing the data and wrapping them inside a function.\n", 88 | "\n", 89 | "Our function will load the MNIST data, scale all of the pixel intensity values to a 0 to 1 range, and finally split the data into train and test sets. We need to scale the data between 0 and 1 so that the neural network will train more efficiently. " 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": 4, 95 | "metadata": { 96 | "collapsed": false 97 | }, 98 | "outputs": [], 99 | "source": [ 100 | "import numpy as np\n", 101 | "from sklearn.datasets import fetch_mldata\n", 102 | "from sklearn.preprocessing import MinMaxScaler\n", 103 | "from sklearn.cross_validation import train_test_split" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": 11, 109 | "metadata": { 110 | "collapsed": false 111 | }, 112 | "outputs": [], 113 | "source": [ 114 | "def train_test_prep():\n", 115 | " '''\n", 116 | " This function will load the MNIST data, scale it to a 0 to 1 range, and split it into test/train sets. \n", 117 | " '''\n", 118 | " \n", 119 | " image_data = fetch_mldata('MNIST Original') # Get the MNIST dataset.\n", 120 | " \n", 121 | " basic_x = image_data.data\n", 122 | " basic_y = image_data.target # Separate images from their final classification.
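    # (basic_x now holds 70,000 flattened images of 784 raw pixel values in the 0-255 range;
    #  the MinMaxScaler applied just below rescales them to the 0-1 range the network expects.)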
\n", 123 | \n", 124 | " min_max_scaler = MinMaxScaler() # Create the MinMax object.\n", 125 | " basic_x = min_max_scaler.fit_transform(basic_x.astype(float)) # Scale pixel intensities only.\n", 126 | " \n", 127 | " x_train, x_test, y_train, y_test = train_test_split(basic_x, basic_y, \n", 128 | " test_size = 0.2, random_state = 0) # Split training/test.\n", 129 | " return x_train, x_test, y_train, y_test" 130 | ] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "metadata": {}, 135 | "source": [ 136 | "Now, let's run our function to split the data and scale it." 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": 12, 142 | "metadata": { 143 | "collapsed": false 144 | }, 145 | "outputs": [], 146 | "source": [ 147 | "x_train, x_test, y_train, y_test = train_test_prep()" 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": {}, 153 | "source": [ 154 | "## Creating A Deep Belief Network" 155 | ] 156 | }, 157 | { 158 | "cell_type": "markdown", 159 | "metadata": {}, 160 | "source": [ 161 | "Given that we now have train/test data, let's try making a simple deep belief network and get a baseline error rate to start with." 162 | ] 163 | }, 164 | { 165 | "cell_type": "code", 166 | "execution_count": 5, 167 | "metadata": { 168 | "collapsed": false 169 | }, 170 | "outputs": [ 171 | { 172 | "name": "stdout", 173 | "output_type": "stream", 174 | "text": [ 175 | "gnumpy: failed to import cudamat. Using npmat instead. No GPU will be used.\n" 176 | ] 177 | } 178 | ], 179 | "source": [ 180 | "from nolearn.dbn import DBN" 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "metadata": {}, 186 | "source": [ 187 | "Notice that our version is not using cudamat. Training would run a lot faster on a good GPU, but since I am on a laptop that doesn't have one, we are going to run this the slower way, on the CPU. " 188 | ] 189 | }, 190 | { 191 | "cell_type": "code", 192 | "execution_count": 7, 193 | "metadata": { 194 | "collapsed": false 195 | }, 196 | "outputs": [], 197 | "source": [ 198 | "dbn_model = DBN([x_train.shape[1], 300, 10],\n", 199 | " learn_rates = 0.3,\n", 200 | " learn_rate_decays = 0.9,\n", 201 | " epochs = 10,\n", 202 | " verbose = 1)" 203 | ] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "metadata": {}, 208 | "source": [ 209 | "We are using the example settings shown in the [documentation](https://pythonhosted.org/nolearn/dbn.html) just as a benchmark. So what do all of these settings mean?\n", 210 | "\n", 211 | "- The first row allows us to set the number of layers and nodes. Our input layer has 784 nodes, which is the number of pixels in each image. Our second layer, a hidden layer, has 300 nodes; these 300 nodes will become our unsupervised feature vector. This hidden layer then feeds the output layer, which has 10 nodes, one for each possible digit (0-9). \n", 212 | "\n", 213 | "- The second row specifies the learning rate, which is essentially how large a step we take during gradient descent. Smaller steps can give a more accurate result, but the training will take longer.\n", 214 | "\n", 215 | "- The learn_rate_decays setting specifies the factor the learning rate is multiplied by after each epoch of training. \n", 216 | "\n", 217 | "- Epochs is just a fancy way of saying the number of times the network will iterate over all of the training examples to minimize the cost function. More epochs usually help, but training takes longer.
\n", 218 | "\n", 219 | "- Verbose lets us see the progress of the deep belief network's training. Neural networks are slow to train! Especially compared to other machine learning algorithms. " 220 | ] 221 | }, 222 | { 223 | "cell_type": "markdown", 224 | "metadata": {}, 225 | "source": [ 226 | "Now it is time to train our deep belief network. This could take a little bit . . ." 227 | ] 228 | }, 229 | { 230 | "cell_type": "code", 231 | "execution_count": 8, 232 | "metadata": { 233 | "collapsed": false 234 | }, 235 | "outputs": [ 236 | { 237 | "name": "stdout", 238 | "output_type": "stream", 239 | "text": [ 240 | "[DBN] fitting X.shape=(56000L, 784L)\n", 241 | "[DBN] layers [784L, 300, 10]\n", 242 | "[DBN] Fine-tune..." 243 | ] 244 | }, 245 | { 246 | "name": "stderr", 247 | "output_type": "stream", 248 | "text": [ 249 | "100%\n" 250 | ] 251 | }, 252 | { 253 | "name": "stdout", 254 | "output_type": "stream", 255 | "text": [ 256 | "\n", 257 | "Epoch 1:" 258 | ] 259 | }, 260 | { 261 | "name": "stderr", 262 | "output_type": "stream", 263 | "text": [ 264 | "100%\n" 265 | ] 266 | }, 267 | { 268 | "name": "stdout", 269 | "output_type": "stream", 270 | "text": [ 271 | "\n", 272 | " loss 0.279465978376\n", 273 | " err 0.0809107142857\n", 274 | " (0:00:18)\n", 275 | "Epoch 2:" 276 | ] 277 | }, 278 | { 279 | "name": "stderr", 280 | "output_type": "stream", 281 | "text": [ 282 | "100%\n" 283 | ] 284 | }, 285 | { 286 | "name": "stdout", 287 | "output_type": "stream", 288 | "text": [ 289 | "\n", 290 | " loss 0.167409065944\n", 291 | " err 0.046625\n", 292 | " (0:00:18)\n", 293 | "Epoch 3:" 294 | ] 295 | }, 296 | { 297 | "name": "stderr", 298 | "output_type": "stream", 299 | "text": [ 300 | "100%\n" 301 | ] 302 | }, 303 | { 304 | "name": "stdout", 305 | "output_type": "stream", 306 | "text": [ 307 | "\n", 308 | " loss 0.132460369157\n", 309 | " err 0.0375357142857\n", 310 | " (0:00:18)\n", 311 | "Epoch 4:" 312 | ] 313 | }, 314 | { 315 | "name": "stderr", 316 | "output_type": "stream", 317 | "text": [ 318 | "100%\n" 319 | ] 320 | }, 321 | { 322 | "name": "stdout", 323 | "output_type": "stream", 324 | "text": [ 325 | "\n", 326 | " loss 0.0866091957473\n", 327 | " err 0.0255178571429\n", 328 | " (0:00:18)\n", 329 | "Epoch 5:" 330 | ] 331 | }, 332 | { 333 | "name": "stderr", 334 | "output_type": "stream", 335 | "text": [ 336 | "100%\n" 337 | ] 338 | }, 339 | { 340 | "name": "stdout", 341 | "output_type": "stream", 342 | "text": [ 343 | "\n", 344 | " loss 0.0682315404487\n", 345 | " err 0.0213928571429\n", 346 | " (0:00:17)\n", 347 | "Epoch 6:" 348 | ] 349 | }, 350 | { 351 | "name": "stderr", 352 | "output_type": "stream", 353 | "text": [ 354 | "100%\n" 355 | ] 356 | }, 357 | { 358 | "name": "stdout", 359 | "output_type": "stream", 360 | "text": [ 361 | "\n", 362 | " loss 0.0499247853743\n", 363 | " err 0.0155892857143\n", 364 | " (0:00:18)\n", 365 | "Epoch 7:" 366 | ] 367 | }, 368 | { 369 | "name": "stderr", 370 | "output_type": "stream", 371 | "text": [ 372 | "100%\n" 373 | ] 374 | }, 375 | { 376 | "name": "stdout", 377 | "output_type": "stream", 378 | "text": [ 379 | "\n", 380 | " loss 0.0399979346272\n", 381 | " err 0.0127142857143\n", 382 | " (0:00:17)\n", 383 | "Epoch 8:" 384 | ] 385 | }, 386 | { 387 | "name": "stderr", 388 | "output_type": "stream", 389 | "text": [ 390 | "100%\n" 391 | ] 392 | }, 393 | { 394 | "name": "stdout", 395 | "output_type": "stream", 396 | "text": [ 397 | "\n", 398 | " loss 0.0325180868268\n", 399 | " err 0.00998214285714\n", 400 | " (0:00:17)\n", 401 | "Epoch 9:" 402 | ] 403 | 
}, 404 | { 405 | "name": "stderr", 406 | "output_type": "stream", 407 | "text": [ 408 | "100%\n" 409 | ] 410 | }, 411 | { 412 | "name": "stdout", 413 | "output_type": "stream", 414 | "text": [ 415 | "\n", 416 | " loss 0.0249244378868\n", 417 | " err 0.00791071428571\n", 418 | " (0:00:17)\n", 419 | "Epoch 10:\n", 420 | " loss 0.0177347917291\n", 421 | " err 0.00503571428571\n", 422 | " (0:00:18)\n" 423 | ] 424 | } 425 | ], 426 | "source": [ 427 | "dbn_model.fit(x_train, y_train)" 428 | ] 429 | }, 430 | { 431 | "cell_type": "markdown", 432 | "metadata": {}, 433 | "source": [ 434 | "After about 3 and a half minutes, our network is ready! Let's see how accurate it is on the test data, just to get an idea of where we are starting from. " 435 | ] 436 | }, 437 | { 438 | "cell_type": "code", 439 | "execution_count": 6, 440 | "metadata": { 441 | "collapsed": false 442 | }, 443 | "outputs": [], 444 | "source": [ 445 | "from sklearn.metrics import classification_report, accuracy_score" 446 | ] 447 | }, 448 | { 449 | "cell_type": "code", 450 | "execution_count": 10, 451 | "metadata": { 452 | "collapsed": false 453 | }, 454 | "outputs": [ 455 | { 456 | "name": "stdout", 457 | "output_type": "stream", 458 | "text": [ 459 | " precision recall f1-score support\n", 460 | "\n", 461 | " 0.0 0.98 0.99 0.99 1312\n", 462 | " 1.0 0.99 0.98 0.99 1604\n", 463 | " 2.0 0.98 0.98 0.98 1348\n", 464 | " 3.0 0.98 0.97 0.98 1427\n", 465 | " 4.0 0.98 0.98 0.98 1362\n", 466 | " 5.0 0.97 0.98 0.97 1280\n", 467 | " 6.0 0.99 0.99 0.99 1397\n", 468 | " 7.0 0.98 0.98 0.98 1461\n", 469 | " 8.0 0.97 0.98 0.98 1390\n", 470 | " 9.0 0.97 0.97 0.97 1419\n", 471 | "\n", 472 | "avg / total 0.98 0.98 0.98 14000\n", 473 | "\n", 474 | "The accuracy is: 0.9795\n" 475 | ] 476 | } 477 | ], 478 | "source": [ 479 | "y_true, y_pred = y_test, dbn_model.predict(x_test) # Get our predictions\n", 480 | "print(classification_report(y_true, y_pred)) # Classification on each digit\n", 481 | "print 'The accuracy is:', accuracy_score(y_true, y_pred)" 482 | ] 483 | }, 484 | { 485 | "cell_type": "markdown", 486 | "metadata": {}, 487 | "source": [ 488 | "So, our starting accuracy is pretty close to 98% at 97.95%. It looks like the model is doing a good job, but it is having more trouble with the numbers 5 and 9, based on the lower f1-scores for those two digits. Is there any way we can boost our score? \n", 489 | "\n", 490 | "Well, it would be nice if we had more images to work with. That way, we could fit the missed examples better. There is a way we can solve this problem: create our own training examples!\n", 491 | "\n", 492 | "This is a common solution in the area of image classification. Some of the handwritten images may not be aligned correctly, or may be slightly rotated. To account for this in the model, let's add some \"noise\" to our training set and see if we can improve. In addition, let's try using some more advanced features of the Deep Belief Network, such as more epochs and some regularization to avoid overfitting. \n", 493 | "\n" 494 | ] 495 | }, 496 | { 497 | "cell_type": "markdown", 498 | "metadata": {}, 499 | "source": [ 500 | "## Artificial Training Examples" 501 | ] 502 | }, 503 | { 504 | "cell_type": "markdown", 505 | "metadata": {}, 506 | "source": [ 507 | "To augment the size of our training set, we need to create a function that will change our original images just a little bit. We want to include a bit of translation (moving the pixels around) and rotation, both at random.
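The translation part uses a handy trick: a one-pixel shift can be expressed as a 2D convolution whose kernel is all zeros except for a single 1 placed off-center. Here is a minimal sketch of that idea on a tiny 3x3 "image" (the array and its values are purely illustrative; the kernel matches the move_up kernel used in the helper function defined further down):

```python
import numpy as np
from scipy.ndimage import convolve

# A tiny 3x3 'image' with one bright pixel in the center.
tiny = np.array([[0., 0., 0.],
                 [0., 5., 0.],
                 [0., 0., 0.]])

# All zeros except a single 1 above the center: convolving with this
# shifts the whole image up one pixel, and mode='constant' fills the
# vacated bottom edge with zeros.
move_up = [[0, 1, 0],
           [0, 0, 0],
           [0, 0, 0]]

print convolve(tiny, move_up, mode = 'constant')
# [[ 0.  5.  0.]
#  [ 0.  0.  0.]
#  [ 0.  0.  0.]]
```

Rotation is more direct, since scipy.ndimage also provides a rotate function that we can use as-is.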
To see how this will work, let's create a function that will randomly alter an existing training example.\n", 508 | "\n", 509 | "First, let's take a look at one of the training examples, reverting it back to its original form and plotting it." 510 | ] 511 | }, 512 | { 513 | "cell_type": "code", 514 | "execution_count": 21, 515 | "metadata": { 516 | "collapsed": false 517 | }, 518 | "outputs": [], 519 | "source": [ 520 | "import matplotlib.pyplot as plt" 521 | ] 522 | }, 523 | { 524 | "cell_type": "code", 525 | "execution_count": 22, 526 | "metadata": { 527 | "collapsed": false 528 | }, 529 | "outputs": [ 530 | { 531 | "data": { 532 | "text/plain": [ 533 | "" 534 | ] 535 | }, 536 | "execution_count": 22, 537 | "metadata": {}, 538 | "output_type": "execute_result" 539 | }, 540 | { 541 | "data": { 542 | "image/png": [ 543 | "iVBORw0KGgoAAAANSUhEUgAAAPwAAAD8CAYAAABTq8lnAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\n", 544 | "AAALEgAACxIB0t1+/AAAIABJREFUeJztnUuMW+X5/7+eGc94xp7xzCRziUikoUDELUwQEXRDBSqh\n", 545 | "vw1pUCoKaiGCsOkCCaUqdFEk2gUkC1QB7QLRtEpVqSobLhIXsSJQNlk0SEhsKjVpo3QumczV9nhs\n", 546 | "j89/0f9zePz4ec+xHdvjy/ORXp3jMxn7jOOvn/d9bm/E8zwPhmF0BT07fQOGYTQPE7xhdBEmeMPo\n", 547 | "IkzwhtFFmOANo4swwRtGF1Gz4D/55BPceuutuOWWW3D69Ol63pNhGI3Cq4FCoeDddNNN3sWLF71c\n", 548 | "LufNzs5633zzTcm/AWDDho0dHBo1Wfjz58/j5ptvxszMDKLRKB5//HG8//77tTyVYRhNpCbBX7ly\n", 549 | "Bfv27fMf7927F1euXKnbTRmG0RhqEnwkEqn3fRiG0QRqEvwNN9yAy5cv+48vX76MvXv31u2mDMNo\n", 550 | "ELU47fL5vPed73zHu3jxore1tWVOOxs2WnBo9KEG+vr68Lvf/Q4/+MEPsL29jRMnTuC2226r5akM\n", 551 | "w2gikf9vjev/xLbON4wdRZO2ZdoZRhdhgjeMLsIEbxhdhAneMLoIE7xhdBEmeMPoIkzwhtFFmOAN\n", 552 | "o4swwRtGF2GCN4wuwgRvGF2ECd4wuggTvGF0ESZ4w+giTPCG0UWY4A2jizDBG0YXYYI3jC7CBG8Y\n", 553 | "XYQJ3jC6CBO8YXQRJnjD6CJM8IbRRZjgDaOLMMEbRhdhgjeMLsIEbxhdhAneMLoIE7xhdBEmeMPo\n", 554 | "IkzwhtFF9F3PL8/MzGBkZAS9vb2IRqM4f/58ve7LMIwGcF2Cj0Qi+OyzzzA+Pl6v+zEMo4Fc95Te\n", 555 | "87x63IdhGE3gugQfiUTw0EMP4dChQ3j77bfrdU+GYTQK7zr473//63me5y0uLnqzs7Pe559/7v8M\n", 556 | "gA0bNnZwaFyXhd+zZw8AYGJiAo8++qg57QyjxalZ8JlMBhsbGwCAdDqNTz/9FAcOHKjbjRmGUX9q\n", 557 | "9tIvLCzg0UcfBQAUCgX85Cc/wcMPP1y3GzMMo/5EvAa52SORSCOe1jCMCtGkbZl2htFFmOANo4sw\n", 558 | "wRtGF3FdqbXdBvkluH+CX4tEIujp6Sk58vMwPM/zR9hjbYQ9t2GY4KsgSNS9vb3o6+tDX18fotFo\n", 559 | "yZFGEJ7noVgsYnt7G8ViseScH+Xg1+l5+HOGYV8E3YUJvkK4sHt6etDT01Ny3tfXh1gshoGBAQwM\n", 560 | "DPjndOzv7y97TinO7e1tFAoF5PN5FAqFspHP5/2f0TkN/nyuo/bakUjERN9FmOCrgKx6b2+vb9Hp\n", 561 | "vL+/H0NDQ4jH4xgaGio5j8fjiMVigcLyPA+5XA75fB65XE4939raKhs9PT3+77uWANprmdC7ExN8\n", 562 | "FXDLzqfr0WgU/f39SCQSGB4exvDwMEZGRkqOQ0NDgc9dLBZLhJzNZsvONzc3/dHX1+eLnab1UvTF\n", 563 | "YrHsdbjY5dHofEzwFcLX71zoJPZYLIZEIoFkMonR0VGMjo5ibGzMPyYSicDnLxaLJYLWRjqdRn9/\n", 564 | "vy92EnWhUEBvby+KxWKJ2OnfSPGbyLsXE3wV8Ck9t+wDAwMYHBxEPB5HMpnE+Pg4du3ahd27d/vH\n", 565 | "kZER5/OSIDOZDNLptH+U5yR28viT2HO5HLa2thCJRHzR8+ele5evSddN+N2DCb4KyGmnCZ4s/MjI\n", 566 | "CMbGxjAxMYHJyUlMTU1hcnISY2NjZc/Hhba9vY10Oo2NjQ2kUil/bGxsYHBwELFYrEzstMbPZrPo\n", 567 | "7e31LTeJnJ6/WCyWTN9d92B0Pib4CpHWnVv2WCyGeDyOeDzur+GHh4eRTCb9Kb4UvBTa9va2M5xH\n", 568 | "g5YVMvYP/M+/QOE8GdbTBll/Pvh9VeLpN9oPE3wV0Pqdi5288MPDw0gkEv4XwMDAgO/F51Y1SDjc\n", 569 | "ITgwMIBCoeCLkwubfk6vn0gkkE6nnfF5PijER+f8Gv8SkOcm+M7ABF8hPLlGrtvJO09huFgshv7+\n", 570 | "fkSjUT9WHyYY7hAkQXNHm/wyiMVivtiTySQymYwqZH7koT6K3/Nr2pdEJBLxk3qM9scEXwVSkGGC\n", 571 | "J296paXCXNQUZqMvAhkRILFvbm4im81ic3NTTdbhSTtayI8e9/b2lln/SCTizzKMzsAEXyHcApPo\n", 572 | "aEpNa3Y+pecWPkzwZP3p+fk0ns8quNi5YMlLr2Xi8XMK72UyGf+LIpPJ+LMQLXOPHH5GZ2CCrwKy\n", 573 | "wNqUfmRkpMTCu9bwhDbFJ4FHo9GScz6Nlym1XNRhU3YK78ViMT/cR2KPRCJ+5h7dL6X7muA7BxN8\n", 
574 | "hXBrK6f03MJra/hKYt18BsFfK6x4hjvjKA3XNTY2NrCxseHn9pPYgW+Tcehe6fUqmaEY7YMJvgq0\n", 575 | "Kb1cw/MpfZCFdz0/Cb/SklieScen9/J8a2sLg4ODGBgY8MN+MjWXIO98oVCoygdhtD4m+ArhqbU8\n", 576 | "Hs8TcEhIlJxTrVh4XL1aqPhGip1f43F8Gc8H4C8f6O/if4f2JaSd8/vRzit5bDQOE3wVSIsqk1xo\n", 577 | "6k0/b3b8WlbzRaPRknsYHBz0vfCe55VEBfr7+0Nz+V1JPDJeL9+noCYe/L01Go8JvkK0D6xcV7s+\n", 578 | "/M1CZgPyfHkZYotEIr4lpyVKkNiz2WxonD8oi0+Kf6feo27HBF8FLguvid411W0UfMlBefX8Oi+s\n", 579 | "4bF9HurTRM4fBzXgyOfzqoPRleJL7wvP8zcajwm+BoKm85qVbxZS9HRNPtYy9qS45ePNzc0Sjz/5\n", 580 | "B/hjGTWQ5/Re0X3yc6M5mOCrQFufahbNtU5tNCRuft7b2+vflyZ2LtogwWezWX9Qdh4/7+vrK2nN\n", 581 | "xTP3+AyDp+radL75mOCrJGhKr1l4+p1mIbvlep7nT/F51h5PuaVzl+A1ay+77/T29qrJQPx+CoVC\n", 582 | "yXtomXzNxwRfBZqF1xJidtJpx6vq6J7pSGJ3dcPVRM6v8QYdMp4fiUSQy+VKxO+arpvYdw4TfBXQ\n", 583 | "h5Q7rXhiC6XckoWT4uc04oOu9cvn8LZY8lgsFsvyCujvoUG5BnSUI6wJp+zGyx9X2nk3KPnICMcE\n", 584 | "XyFS7NRUUqapcsFQ7rsMh9HzaRtbNBLZ9UbOBnjqMF+S8IQjXrnHnX7xeFzN4efnmsj5MSh855pN\n", 585 | "8WGEY4KvEC548kpvbm6W1LxLsVMFG1l6oDybztV6qlFoXzgASuL3mtj53ygr98iBp4Xq5LpeE7qc\n", 586 | "EWnhPN6zX+vJz+/ZcGOCrwJepEKCpzUs70RDhTVk2XgzCZnWCuzM1try9YvFoi9oTew8SYfELsNz\n", 587 | "0mGnlekG/cwVzuMNPPgSivf3MyojVPDPPPMMPvzwQ0xOTuLrr78GACwvL+PHP/4x/v3vf2NmZgbv\n", 588 | "vPMORkdHG36zO4mc0lPTCB4GoykuWfdcLudbNRIUZ6ccVtoMQ0vYkT38+vv7kc/nA8t0tZ1z5M8q\n", 589 | "/TKQv08ORF6BWCwWfStvhBMq+KeffhrPPfccnnrqKf/aqVOncPjwYbzwwgs4ffo0Tp06hVOnTjX0\n", 590 | "RlsBPqWXpaUASppiZLPZEgvPp/Ty95opfNcSgmL0QOmGG7xENyy1NqjjTth2WfS+anX8dI2adXCx\n", 591 | "84o+m9KHEyr4+++/H5cuXSq59sEHH+DcuXMAgOPHj+OBBx7oeMFLC89FS8UoVC67ubnpW3hN8MC3\n", 592 | "HvNWETvF6Wl6zze2CCqY0dbZrmaZWpyeDy2Lj59TrwD+f7G1teV/URnh1LSGX1hYwNTUFABgamoK\n", 593 | "CwsLdb2pVoVbeHpMH2TP8xCPx5FOp5FIJHwLz6f0wLdCl6PZTjvXdVkGK4+uyjcAJetvLc02LBff\n", 594 | "tcUW30ePvmBJ7NFotGypZLi5bqddt+RCc0tH6aL0YSeHXCaT8Uc6nfY3kojH437ojg9eyhpkpTSL\n", 595 | "XOt50PMG/dtKqKRFtms6z3MaeMouvybvjc9AeM4DoNfph31haUuCTlsm1CT4qakpzM/PY3p6GnNz\n", 596 | "c5icnKz3fbUkMh7Mv+yoSWQ6ncb6+rqfiUbTZLJGfGMJbf94GSPnx1oHf55GIl83rD23rN+XvgOK\n", 597 | "emxtbSEWi6m7/gwODvp+E9lbX1uSyCVI2JcF0SnCr0nwR44cwdmzZ/Hiiy/i7NmzOHr0aL3vqyWR\n", 598 | "RTNcRLTlUyaTwcbGhm/Rgf9Zomw2W5LBxs/5nnFSpDJdloZMhgl6TM/VDLjY+TUJF7fcepsSe2hd\n", 599 | "TyFA/kXJk36oZTcPgQb1/nMNV+FTp4gdqEDwTzzxBM6dO4elpSXs27cPv/nNb/DLX/4Sjz32GM6c\n", 600 | "OeOH5boFyv+mqi/6YPT09CCbzSKdTvsfXvr35GHmaao0qMNtf39/qKXmywB+1K7xJUIznVrSwgf9\n", 601 | "jLbH6uvrw/b2ti/ofD7vhwD5MkBm+PGegul0usRnooUHpeefnIHcmSmz/IhOEX3Ea9Bf0mnremlB\n", 602 | "pdCi0WjJvnK0L7zWs147DgwM+K/BRcGP0hJqI+jnjcaVFhvk6edTbFeYj86pgMc1NEHzCAB3AGpO\n", 603 | "Qa1rkWza0U5o0rZMuwqRVXL8GlkISsYBvrXsNM1PpVIYGhry15t80LRVm47zc77e59lvNORUVLOy\n", 604 | "jUam6/L3iCy6liNPzk9ZbszPKamJhz95RZ/WrZef07+T3YTpy0YKhM/mOiXOb4KvAl7SyaeAXPAA\n", 605 | "/BDU1taWL3b6kNIUNB6PlyTn5HK5smm6HLR+5bvMam21+Ayh2Tnm3P/g8pi7hmZZ+Tk56II8+XLw\n", 606 | "65lMxhc7rxyk6b8MjzY7T6IZmOCrQDpx+IeBqrXIsvPiGiozpek9WSW55pTLBLkm5/vLkdhlZRvd\n", 607 | "Fxd7swQfFD7k9xB0HhQyc22wQZacRO1q1EExe7ovXhtBiUb8taXTtBMwwVdJ0IeVrARNQ/P5fEnR\n", 608 | "iWu6Sv82zBHH69C12nR5TR6B8tAZv0bnrmvyXHtczbVK3mP+WNbry8QdKlcmRygdafBICPBtb0L+\n", 609 | "/vO2XEQ7rt9dmODrjEzGISgWz9ePfEqZzWadU3lp4fmUXhuuBhVhXn2X70A6E+nvqSS5px7w5QG/\n", 610 | "fzmzkWW9PMxHYVLNatP/A9+Qkzr20PN2Cib4OsLXohS2IyKRiD91lM4isk5hMXUudJm4U8mXgXTy\n", 611 | "yeHy7ssvBBlea3ZSD1X1cXHLLx+tP0GY4Pv6+vzcfPmFbE47owwpeH4N+LbBJP8g0Rq0v79ftab8\n", 612 | "WpBQNYeePPJEH+1x0JcB5RpwjzvPpKun6F3iotf1PK+sDTefeUixU9sxGbGQiVR8ezB+vZPKb03w\n", 613 | "dURL1uCDW3buySeHkmZB+eDidh2l1dey0+T6ls61JQGfJvMlANEop5Ymei54eU1m7tHfwRNw+H3L\n", 614 | 
"NGmZKi3X9p2CCb5O8Hi89PbKEBNZdi5KOd2Ua2W+Jg1KugkalANA4S05yBoWCgW/u61cE9PfRdca\n", 615 | "GQXgsyT+XvAvRXpvZZ69Vq3HS5rl/w1ffvFSX271OwETfB2RDiQ68g8nfYi09FjNU05H+qC70mul\n", 616 | "6OV5NBpFPB73G07SoMSfQqHgi50EIhN4pGWl642Gi41Pufl9eZ7nN+uQg4TNY+98yk7iltN4crKa\n", 617 | "hTdUguL0kUjEd/64hob8sAcNOcWX031K8dVyASh/QFaO8ZkFvx8ptmbAX5t8CoC7bl9e4zMSHj4l\n", 618 | "wdMXMq+1NwtvBOKK09cDLUymrV9d032Zmy4bVlAW29bWln/OE1xcPgI6ut4Heuzy8MslTBCV/jsN\n", 619 | "XnPPU295E06+x4BMwe0ETPBtBrdU9FiLE8t1KmWqUbyf/y75FHgTTu08LPGHv6527qoB4P0AtHTW\n", 620 | "egmOL4ukF5//rTxqYRbe2DGk2AH4a1Mpei52iltTmyiCp5Zms1n/w853mpHn/Civ8S8XOYrFovN5\n", 621 | "AKiiqrfIXIKnenv+HvBttOg964RYvAm+zdA+cDzuz/8d/zIgLzWPJFBGGVWRaQ065LZTWjiPjkGV\n", 622 | "cLz4hQY5BkmAMtzHv+DqIX4pePqbqFqRW3ju8NSWHO0qfBN8G6J92OQ0moudr5fJWcUtu0y+0TL2\n", 623 | "KKzHS1TluavmnQavFqQwGEUQaIbAqafYCa0QiaITfOswctjJvAN5b+2GCb7N0NbwPOzHRaLVc8s8\n", 624 | "AJkLIMN6/BrvH6ed84YW2nFkZAS5XM4XO6UL0+wAqKw4p1aCpvTb29vqGl7WEMjnazfRm+DbCG0N\n", 625 | "z68DwZVvMpkkLKYvr1EMnxp38POhoaGyCkA5qKsM8K1lj8Vifgxcs/D1RBP8wMBASYMNnnHInXad\n", 626 | "ggm+zXBZFJkUIwnK4JOhMlfojyfsUBMPOs9ms4H7wlGKK1Bq2anbrPRD8C+2RnnpKZuQ/AzcGcnF\n", 627 | "zq28a9nRLpjg25hKxF8NQUlBlDgUtruMtPL8MXWgkZtsym499Ddo59eDzFikWQbdh1ZIxMNyUvQy\n", 628 | "xbgdMMEbKq5oAO8Aq015pRhILL29vWp8u9kxbpmsxC1+WHmwzMWXf3M7YII3SgiyWjzPnG+oSYJ1\n", 629 | "lfXSUQpes55Eo74EtOWLTEMOEr2W0NROmOANFVeSD+/5zsVODSSkh186yfh0OcgDzqnnGp6OlaQj\n", 630 | "a6LnmY084aldhG+CN3xkoosUvey/x3+P1sAU6uLTYJmOq1n4ZqFZeJfw5XVeMcjDiO0idsAEbzgI\n", 631 | "svA8iYfETk44KpIhhxhV8bly1Jsp+LDpvNbqS34p8BTmduxbb4I3SpDecQ5fw/PUWRlyI7HTYwqB\n", 632 | "yTZaLgvfyC8Al9NONtFwWXztfWonTPBGGS5LxTvD8GYR3BKS2Pk6lwTPK+ukw66ZyPCc7JoTNOhv\n", 633 | "p+l9u4neBG+UILP0+LnWVIMf+SaPfEst3l2H9tFzpa9er7V3hct4wwvZz57vP0fn2o4+nYAJ3vDh\n", 634 | "YTQZXotEIiVdbrXhysTj1xKJRInwNWvvEn8YrrJcOpcNMOQGkysrK1hbW8PGxgbS6TQymYzfq54n\n", 635 | "CbXzF4EJ3vAhkUlnFU19gyrlYrEY4vG4ulkmDdpfb2hoqCRvnZegXs8Un4tcG3L/OWrzRedS8LRB\n", 636 | "JRX8dILVN8EbPkHx6b6+vrIpOp+qk9jlNtjyGj93WXi6l1rE76rUo+Id2nJa23p6bW2tTPDZbNZv\n", 637 | "cy3r+2Wzj3YgVPDPPPMMPvzwQ0xOTuLrr78GALz88sv4wx/+gImJCQDAq6++iv/7v/9r7J0aDUcW\n", 638 | "l8iut4ODg34TzJGREX9zTDqXjTHkY9nthgTf19dXJvBarDy38FoRD1nyTCaDdDqNVCqFdDrtn6+v\n", 639 | "r/uCT6VSvoXnU3ptqdBOhAr+6aefxnPPPYennnrKvxaJRHDy5EmcPHmyoTdnNBdZWMKz46LRKIaG\n", 640 | "hpBIJJBMJpFMJjE6OuqPZDLpC9i1xnc12JBTen4//BgGt7y8uIfChmThSeAbGxtlY3193Tmld7Xv\n", 641 | "aifRhwr+/vvvx6VLl8qut9MfaVQGn9Lz5hgkWLLwIyMjGBsbw/j4OHbt2oXx8XGMj4+HdswJymTj\n", 642 | "feOuB27hSeg0+JSeBL+2tuZb9lQqVWL1udNO9uqXx3ah5jX8m2++iT//+c84dOgQXnvtNYyOjtbz\n", 643 | "vowdgDvttDZQZOFJ8Lt378bu3bsxMTGB3bt3B/bF1xxzLkfd9Tjt5PZRJHpq58UtPAl9dXUVq6ur\n", 644 | "vsj5Ol9O6bXXbCciXgV3fOnSJTzyyCP+Gn5xcdFfv7/00kuYm5vDmTNnSp+4zRISugFtncxHUIPK\n", 645 | "WCyGiYkJX9zauauDDp1XwvVYUAqz8XAbP19fX8fq6qrvjV9ZWfHFvrKygkwmU+LFl958rR14K6O9\n", 646 | "dzVZ+MnJSf/82WefxSOPPFL7XRlNQ3rgpTiDPOqDg4MYHx/H2NgYkskkEomE/zPKmZfdYar1tEtn\n", 647 | "mLZmDoKsuDa44NfX17G+vu5P37k153vEy2l8J1CT4Ofm5rBnzx4AwLvvvosDBw7U9aaMxiDX5vKc\n", 648 | "YuVak8qhoSHfQTcyMuIn0MjQGh/VwotxZAxdm05LyBq7Bl+zkyeeBE9fDLQLDe/g01WCf+KJJ3Du\n", 649 | "3DksLS1h3759+PWvf43PPvsMX331FSKRCG688Ua89dZbzbhX4zoghxzPa+eD1uhac0p6PDIy4ofg\n", 650 | "SPBk4XksPShVNgjXGty1fua/B/xP8DK2TiG4TCZT5pHXLDx38nWi6Ctaw9f0xLaGbylojS63keLD\n", 651 | "lRYrG1aS804m4GjT+WoKZHgYTRtA8JreFV+nY9ggy67twUeddduJuq3hjfaE1urUi11O2SmphnLe\n", 652 | "adA1mTlH5/39/XVNj3WF1cIExy08iZhbdPoSkIMsPC+akY04OwUTfBcRiURK+rFzocfj8bIMOplJ\n", 653 | "59pXjqb09BrasRI0sVMFWy6XC+3Sq4XdaL2+vr7ui1sbm5ubfvpsO6fOhmGC7yL4Gp4ET9P04eFh\n", 654 | 
"JJNJf53OB12XPeu0/nSSWgXPRU/htbBOsTJtloROOfIkbNfgtf7tmlgThgm+g5DxdX7e09Pjx9jJ\n", 655 | "spPYNZHzQVN6VzivVq+8hHfUkTHwzc3NUBGSF5488dzCb2xslFTH8XAdzSI6aeruwgTfIWitm2R5\n", 656 | "q1yX03SdrDivV+fVbLIHneacqwfb29vI5XIl03LugAvqEOt5Xpknnv8+iZ0LnHvhuwUTfAfArbks\n", 657 | "aeVpsnz6zsVOw9WZxrXtUr0jMcVi0U+eSaVSJZZ6bW2tRJja9J6+KLgzjs55IYwWZ++0qbsLE3wb\n", 658 | "IwXHe8rJwpX+/v4SjzsXPVW/ycQbKXgeZmtEPzqazm9ubvpOt+XlZaysrGBlZaUkFq8JXi4B5JAt\n", 659 | "rbiTrlswwbcpWu83uVGiTKwJsvDJZFKNzZMXPihttlFTesp3X1pawtWrV0vEqQletqySQ8bYeY17\n", 660 | "t2CCb0NcAuMWnkTOE21ca3iy8FrYja/hGyFyjpzSk4VfWlrCwsJCYHmq53llcXs5ZIydp+52Cyb4\n", 661 | "DkGz8LwAhuLtQRZeLgVkLTu9jnasB3xKzy381atXsbCwUJLtph3D9qeXDSg7Mc4ehgm+zQiKdZPT\n", 662 | "jlt4HoKTU3pu4SnOHlTPXun91Aqf0ksLPz8/X5Jeqwm+0iF/r5swwbcR2u6sPBxH6a5ycGecdk7D\n", 663 | "FWOn6XwYQda3EmHK3HYKrVGYLUzwRjgm+DaBV7tpG0H09fU5BU+Dr895lpzcR63WtTqfJsvyVip7\n", 664 | "lVNs/nhxcRFLS0tYXV3FxsaG32KKhE6vYdSOCb6N4NN12RySd6rRutXw63zLJ9mCSibXVEPYOpqH\n", 665 | "w2R4LJ/PY3FxEdeuXfObVHDBm1WvDyb4NoIEz7d0kiE0rbiFHHj0paBZeS7yWq08CV52i6VzV+sp\n", 666 | "Or927ZoveGnhg8Rtwq8cE3ybQEIkwWuNKuQ+7HzILwBtSn+9MXZu3eV+bbyJJE+M4ddWV1f9RBtN\n", 667 | "8GHFM0Y4Jvg2gix8LBbzBU/edipu4RtIyMq2oCl9vfZ249tHS0suO9DIoyx4cVl4E3vtmODbCD6l\n", 668 | "px7xvKTV5WEnZ59rSs8tPFB7rJ0cc7I1NA2XB14Wyci+8Fr8nb+mUTkm+DZBm9LzHvHJZNJppWl9\n", 669 | "Tut8OaXniTXa61YKd9qR4GkDR0qm4VZclrHy8lX6PdeUnl7PqA4TfAuh1bHTOVlobTpP2z7Rv+VH\n", 670 | "/lwyP142nyS06XMlcXS+iYPWXUYKXR6pko0PV526ib02TPAthFbDTufRaDSw39zw8HDJc2nFNbzO\n", 671 | "XVbBAeEJM1rsnF+TIpdrdG0qTy2iucC10lUTeH0wwbcIZIW5442f8/JWbSQSidDnl9s0c4cdEWTB\n", 672 | "tTg6P+f155Wc857wfA83XtGmla+a+GvHBN9CkHNNhtNo3S6tuyyG4WiikLF4rRcdiVvbAUZ63mUs\n", 673 | "3dUVljehcA3eIloWu5iHvn6Y4FsIrbyVZ9BpIpdT+qBsNNlpVpvSB6XHkuBlPJ174YP6wgfVqm9t\n", 674 | "banLBFcNvFEbJvgWQZa38pRYyoXnQtfEH1RFBqCs5DVoSi+3eaJadb7lsrZGl5s+8Mc8nZbW61q9\n", 675 | "upaHb9QHE3wLIS08eeSpnbRL6HwN7yoF9TxPLXvlU3pZW87FTqE2Enw6nS6JoZMjTrtOg0/XtXOt\n", 676 | "Rt2cdvXFBN9C8I0i+O4wUuxyKk/HsDpwV0dbbQ0vRc+z53hXWR5L50LXzrXmE5on3qbwjcME3yJw\n", 677 | "D70mdi5ynkPPq+DCBB+UK8+tuCx8oSNt7CBj6HSuTeP5CBK0ibs5mOB3EJkcI6vheDZdMpks2eON\n", 678 | "dm0lT3slr0HwLwD+b8KaQPKe73wLJ753m9yJlTrD8tfW7sdoDib4HcDVpopbeG7dqckk9Y4ny85D\n", 679 | "a0HPraFVn1F7KRo8c47aTgUNXgFHoTreCtos+s5jgm8yWgYcHXmuPG88OTIygtHR0ZJ92Sm8Jote\n", 680 | "SEgu4bsceiT4bDarTsldGzxoCTQ0uOfdxN4aBG4IdvnyZTz44IO44447cOedd+KNN94AACwvL+Pw\n", 681 | "4cPYv38/Hn74YayurjblZjsNmTcvp/R8R1cu+KApvWud7kqu4U453hN+Y2PDr0+nJpKLi4u4evUq\n", 682 | "lpaWcO3aNaysrPjdafiUXmbOBU3pTfzNJeIFvOPz8/OYn5/HwYMHkUqlcM899+C9997Dn/70J+ze\n", 683 | "vRsvvPACTp8+jZWVFZw6dar0iRvQt7zdkWt2foxGo5icnMTU1FTZcWpqCrt27VL3Zuddb1xWlM61\n", 684 | "GDcftMOLHKurq1hZWSlLtJEbPvK0WG1ITOyNRXt/A6f009PTmJ6eBgAkEgncdtttuHLlCj744AOc\n", 685 | "O3cOAHD8+HE88MADZYI33Gj15i4Lz6f0PPuOyly5hdem9Nw5R2ihtyALT62nwjLleNxensvXN3aG\n", 686 | "itfwly5dwoULF3DfffdhYWEBU1NTAICpqSksLCw07AY7HdlTnrLsaA3Pe8fzLDlZ3iqfL+i1gPLk\n", 687 | "GhI8reHX19d9wV+9ehWLi4t+hhxvW8XPg/wDJvLWoCLBp1IpHDt2DK+//rpahmnT98rgzShkh1gu\n", 688 | "ct43ng/uBLSLAAAMTklEQVTXBhFappwmvKDpdj6fL2sxJePsYV1njdYnVPD5fB7Hjh3Dk08+iaNH\n", 689 | "jwL4n1Wfn5/H9PQ05ubmMDk52fAbbXdk6StZaTqPxWJ+55rh4WHE43G/l7xsRSV7yBOe56n56DwX\n", 690 | "Xuax0zl1jV1eXvYdcXybZS7ubt1quRMIFLzneThx4gRuv/12PP/88/71I0eO4OzZs3jxxRdx9uxZ\n", 691 | "/4vAcCPX6LQOp+PQ0JBT8BR+0zaLoOcGUDI9pyM/l2tunmiTzWaxsrLid41dW1vzG1Rwj7tWumqC\n", 692 | "bx8CvfR///vf8b3vfQ933XWX/6F69dVXce+99+Kxxx7Df/7zH8zMzOCdd97xWyz5T2zT/BJcW0GR\n", 693 | "pz2RSGD37t2YmJjwj/x8bGzMmQtPXwCy+kxWpHEPu/S2b25uYm1tDaurq/40ngZZfNcGjXRutBaa\n", 694 | 
"tAMFfz2Y4Evp7e31N3Qk77vc4HF8fBzj4+PYtWuXP+gxNamUm0XQNQDODR5oUNYcz6Dj5zx1Vo5U\n", 695 | "KhW4XOimLZfbharDckZ9kX3lKeRGI2wNLzvRagk1vGOstOBhHWm0bDq+hndttWy0Dyb4JkFreF7n\n", 696 | "zjvOjo6OlhTJkOB5Rh09j9bZFkDJNk98pxey5HwnVq0/vGuqT4IPqsQz2gMTfJOQTjuy8MlkEmNj\n", 697 | "YxgfHy+pddecdvQ88nkBd094Ejztt66F3mhobaJ5rD0sk89ofUzwDUQrf6VqOL4/HBc5xeFlK2nZ\n", 698 | "hkqeh3Wl0QQuH2s18Lz9lNH+mODrjJYvD3zbvorH4PnWzzJdlu/oSshEGn5OcfZsNltm0cnbrlXA\n", 699 | "0ZS9ko6xRvtjgq8jQXuz8c0lpOB5XJ6nzLp2heEip/Pt7W1V8FT4srq6qu4GQ5tAyDi7JdZ0Jib4\n", 700 | "OiMdatyrztNiaWov21HLPdu5Y04KnR/5/utaAczy8nKoU05adwu3dR4m+AYQtKFjmIXn6/YgC6/F\n", 701 | "w/P5vGrhqeJN20SCjjzsJltEm4XvHEzwdcTVeIJEW8kaXu7bHiZ2ssg8FMcFv7a2hpWVFSwtLTm3\n", 702 | "iKLzsG6yRvtjgm8AmtgrmdLLHWGC2kjLnvEuwXMLH5QWS33h+WuY2DsPE3ydkY46WQYbNKV37QpD\n", 703 | "jS20xhV8M4cgwS8tLQUmzQQ1mTTRdw4m+Dqhrdf5ls98myc5ZDhO2xGGp81q0/GtrS214SRPlTUM\n", 704 | "E3wdkVN3PrgnngQuhS6TbLglp66yvJxVFsYsLS35pa3pdBrZbNZ3xhkGYIKvK3JDSD640KWDjkZQ\n", 705 | "UwsAfs852S+eCl1I8LQLDG3DbFlyBmGCryNS8HxN7rLw/N/IKjjpLZf7uslBraOpWw0l1ZjgDcIE\n", 706 | "Xydkgg0Pv2mxdin4aDSqPq+WOktJNbzfHIXgeHsqm9IbEhN8HdE88SR22vCRi55+ThY+qOMrpc6S\n", 707 | "hedps5Q6y8tdKU/eLLzBMcHXEem0k4k1LrHT0DLcZOqsFDz1jF9eXi5Z11MKrVl4g2OCryOuNTxf\n", 708 | "v2uFMmThuSWWZa/UhFLLk6ftn7S0WbPwBscEXye0zDqZXaeF63hGHVl23iueYu3ZbLakS41sZkF9\n", 709 | "4+XvUUadYQAm+JaBhE7WmeLsdNzc3CzZwJE2b5R7sXORWz27ITHBtxDcMSfr1lOplF/mura2pgqe\n", 710 | "96S3enZDwwTfIkhPPJ++01hdXfX7xnPBUwMLXhBj9eyGhgm+heBda9LpdNlGELIfnauNtNWzGy5M\n", 711 | "8C0CX8Pz0Bvf/oni61pPOqtnNyrBBN8iaFN6EvzS0hKWl5fVXWPIqVcoFKxnvBGKCb6FkFN6Evy1\n", 712 | "a9dw9erVMu89r5izvvFGJZjgmwRPkeU17NRIsqenx69dl84610YRlFxDnnnDCMME30T4Gj2VSmFg\n", 713 | "YMCvkqPyVtqfnXLitVbScn92w6gUE3yT0NboJHbP8xCLxUoETxVv5JQjSy7bSds63aiGnqAfXr58\n", 714 | "GQ8++CDuuOMO3HnnnXjjjTcAAC+//DL27t2Lu+++G3fffTc++eSTptxsOxPkhV9aWsLi4mKZ4LmF\n", 715 | "5+mylk1n1EqghY9Go/jtb3+LgwcPIpVK4Z577sHhw4cRiURw8uRJnDx5sln32RFwCx+JRFAsFv21\n", 716 | "fF9fn99SWhM8FcFomXSGUSmBgp+ensb09DQAIJFI4LbbbsOVK1cAmOe3WsjCb21t+WLnFXA9PT0l\n", 717 | "DjqZWEOCl/3oTfRGNQRO6TmXLl3ChQsX8N3vfhcA8Oabb2J2dhYnTpzA6upqw26wU9CaWJBFv3r1\n", 718 | "auiUPshpZ4I3KiXiVfBpSaVSeOCBB/CrX/0KR48exeLiIiYmJgAAL730Eubm5nDmzJnSJxb7mHc6\n", 719 | "vb29GBwcxODgIGKxmH9OY2BgoKR9NR/0XsmCGbnpY1A/eRO9IdE+E6Fe+nw+j2PHjuGnP/0pjh49\n", 720 | "CgCYnJz0f/7ss8/ikUceqeNtdi40Bde+DIvFYtlmj7zsVe4Mox0NI4zAKb3neThx4gRuv/12PP/8\n", 721 | "8/71ubk5//zdd9/FgQMHGneHHYa2N5y2/ZOWE0+/bxi1Emjhv/zyS/zlL3/BXXfdhbvvvhsA8Mor\n", 722 | "r+Cvf/0rvvrqK0QiEdx444146623mnKz7Y5raye+V5wsbXWJ3YRv1EJFa/iantjW8GVreK2Kjafc\n", 723 | "ap1uaHqfy+X815L/ZSZ+Q6OmNbxRP2QnWn7UpvRayM3EblwPJvgmoU3hXWt5V7tq7TkNoxpM8E1G\n", 724 | "WneX407OAORzGEYtmOCbiJzSB4m9EgtvGNVigq8T2tqcRFwoFNDT01Mmbi5yyqDTsuhM7Ea9MMHX\n", 725 | "EbLKVMbK93qXDjnppKMvBi56nmxjGPXABF9HeEcbSpul64VCQe0qy6fxUuxWHGPUGxN8neBTehI9\n", 726 | "3+O9t7e3zDuvZd1ZrbvRSEzwdYRES0lH/Augp6enbJ0f5LG3vvJGIzDB1xGa0tM5t9pc8NrQLL7t\n", 727 | "HGPUGxN8nSCxEtyy026yWl68ll+vpd8aRj2wXPo6Qn8zbR0trwHhqbFWKGPUC8ulbzBWwmq0OhW3\n", 728 | "uDIMo/0xwRtGF2GCN4wuwgRvGF1Ew5x25rgyjNbDLLxhdBEmeMPoIpoi+E8++QS33norbrnlFpw+\n", 729 | "fboZL1kVMzMzfmfee++9d6dvB8888wympqZK2n8vLy/j8OHD2L9/Px5++OEd3e1Hu79W2WDUtQFq\n", 730 | "q7x/O75Bq9dgCoWCd9NNN3kXL170crmcNzs7633zzTeNftmqmJmZ8a5du7bTt+Hz+eefe//4xz+8\n", 731 | "O++807/2i1/8wjt9+rTneZ536tQp78UXX9yp21Pv7+WXX/Zee+21HbsnYm5uzrtw4YLneZ63sbHh\n", 732 | "7d+/3/vmm29a5v1z3V+z3r+GW/jz58/j5ptvxszMDKLRKB5//HG8//77jX7ZqvFayMl4//33Y2xs\n", 733 | "rOTaBx98gOPHjwMAjh8/jvfee28nbg2Afn9Aa7yH09PTOHjwIIDSDVBb5f1z3R/QnPev4YK/cuUK\n", 734 | 
"9u3b5z/eu3ev/we2CpFIBA899BAOHTqEt99+e6dvR2VhYQFTU1MAgKmpKSwsLOzwHZXTahuM0gao\n", 735 | "9913X0u+fzuxQWvDBd8ORTRffvklLly4gI8//hi///3v8cUXX+z0LQXCi3NahZ/97Ge4ePEivvrq\n", 736 | "K+zZswc///nPd/R+UqkUjh07htdffx3Dw8MlP2uF9y+VSuFHP/oRXn/9dSQSiaa9fw0X/A033IDL\n", 737 | "ly/7jy9fvoy9e/c2+mWrYs+ePQCAiYkJPProozh//vwO31E5U1NTmJ+fB/C/vf34hp6twOTkpC+k\n", 738 | "Z599dkffQ9oA9cknn/Q3QG2l98+1QWsz3r+GC/7QoUP45z//iUuXLiGXy+Fvf/sbjhw50uiXrZhM\n", 739 | "JoONjQ0AQDqdxqefftqSm2MeOXIEZ8+eBQCcPXvW/6C0Cq2ywajn2AC1Vd4/1/017f1ruFvQ87yP\n", 740 | "PvrI279/v3fTTTd5r7zySjNesmL+9a9/ebOzs97s7Kx3xx13tMT9Pf74496ePXu8aDTq7d271/vj\n", 741 | "H//oXbt2zfv+97/v3XLLLd7hw4e9lZWVlrm/M2fOeE8++aR34MAB76677vJ++MMfevPz8ztyb198\n", 742 | "8YUXiUS82dlZ7+DBg97Bgwe9jz/+uGXeP+3+Pvroo6a9fw1rgGEYRuthmXaG0UWY4A2jizDBG0YX\n", 743 | "YYI3jC7CBG8YXYQJ3jC6CBO8YXQRJnjD6CL+H2k8FPRBqMqzAAAAAElFTkSuQmCC\n" 744 | ], 745 | "text/plain": [ 746 | "" 747 | ] 748 | }, 749 | "metadata": {}, 750 | "output_type": "display_data" 751 | } 752 | ], 753 | "source": [ 754 | "sample = np.reshape(x_train[0], ((28,28))) # Get the training data back to its original form.\n", 755 | "sample = sample*255. # Get the original pixel values.\n", 756 | "plt.imshow(sample, cmap = plt.cm.gray)" 757 | ] 758 | }, 759 | { 760 | "cell_type": "markdown", 761 | "metadata": {}, 762 | "source": [ 763 | "We can see this is clearly a 7. Now, let's make our function that will rotate and translate the image and view the sample again." 764 | ] 765 | }, 766 | { 767 | "cell_type": "code", 768 | "execution_count": 7, 769 | "metadata": { 770 | "collapsed": false 771 | }, 772 | "outputs": [], 773 | "source": [ 774 | "from scipy.ndimage import convolve, rotate" 775 | ] 776 | }, 777 | { 778 | "cell_type": "code", 779 | "execution_count": 8, 780 | "metadata": { 781 | "collapsed": false 782 | }, 783 | "outputs": [], 784 | "source": [ 785 | "def random_image_generator(image):\n", 786 | " '''\n", 787 | " This function will randomly translate and rotate an image, producing a new, altered version as output.\n", 788 | " '''\n", 789 | " \n", 790 | " # Create our movement vectors for translation first. \n", 791 | " \n", 792 | " move_up = [[0, 1, 0],\n", 793 | " [0, 0, 0],\n", 794 | " [0, 0, 0]]\n", 795 | " \n", 796 | " move_left = [[0, 0, 0],\n", 797 | " [1, 0, 0],\n", 798 | " [0, 0, 0]]\n", 799 | " \n", 800 | " move_right = [[0, 0, 0],\n", 801 | " [0, 0, 1],\n", 802 | " [0, 0, 0]]\n", 803 | " \n", 804 | " move_down = [[0, 0, 0],\n", 805 | " [0, 0, 0],\n", 806 | " [0, 1, 0]]\n", 807 | " \n", 808 | " # Create a dict to store these directions in.\n", 809 | " \n", 810 | " dir_dict = {1:move_up, 2:move_left, 3:move_right, 4:move_down}\n", 811 | " \n", 812 | " # Pick a random direction to move.\n", 813 | " \n", 814 | " direction = dir_dict[np.random.randint(1,5)]\n", 815 | " \n", 816 | " # Pick a random angle to rotate (10 degrees clockwise to 10 degrees counter-clockwise).\n", 817 | " \n", 818 | " angle = np.random.randint(-10,11)\n", 819 | " \n", 820 | " # Move the random direction and change the pixel data back to a 2D shape.\n", 821 | " \n", 822 | " moved = convolve(image.reshape(28,28), direction, mode = 'constant')\n", 823 | " \n", 824 | " # Rotate the image\n", 825 | " \n", 826 | " rotated = rotate(moved, angle, reshape = False)\n", 827 | " \n", 828 | " return rotated\n", 829 | " " 830 | ] 831 | }, 832 | { 833 | "cell_type": "markdown", 834 | "metadata": {}, 835 | "source": [ 836 | "Now what happens to the image if we change it randomly?" 
837 | ] 838 | }, 839 | { 840 | "cell_type": "code", 841 | "execution_count": 24, 842 | "metadata": { 843 | "collapsed": false 844 | }, 845 | "outputs": [ 846 | { 847 | "data": { 848 | "text/plain": [ 849 | "" 850 | ] 851 | }, 852 | "execution_count": 24, 853 | "metadata": {}, 854 | "output_type": "execute_result" 855 | }, 856 | { 857 | "data": { 858 | "image/png": [ 859 | "iVBORw0KGgoAAAANSUhEUgAAAPwAAAD8CAYAAABTq8lnAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\n", 860 | "AAALEgAACxIB0t1+/AAAIABJREFUeJztXVuoXOX1X3POmXO/xdacpCZwRE29xSQ0aB8MKBpLH0yV\n", 861 | "FKu0GjS++CBIpNVH2webPIio9UHEQoog9cUqVIP4h6YSKHloAqUWWmhSQkgO1XjMOTm3OXP2/8Gu\n", 862 | "7W/WrPXtPZc9s+fM+sHH3rPnsr+9Z/++df3WV4iiKCKHw9EV6Gl3BxwOR+vghHc4ughOeIeji+CE\n", 863 | "dzi6CE54h6OL4IR3OLoIdRP+2LFjdOONN9INN9xAR44caWafHA5HVojqwOrqanTddddFZ86ciVZW\n", 864 | "VqIdO3ZEn332WcVniMibN29tbBr6qA6cPHmSrr/+epqeniYioocffpjef/99uummmyo+Nzk5Ge8v\n", 865 | "Li7S0NBQPadrCbx/jcH71xia3b/Z2Vn1eF0q/fnz52nr1q3x6y1bttD58+fr65nD4WgZ6iJ8oVBo\n", 866 | "dj8cDkcLUJdKf80119C5c+fi1+fOnaMtW7ZUfW5xcTHez/sg0ddX161oGbx/jWG9969UKtHq6mri\n", 867 | "5wr1TJ5ZXV2l7373u/R///d/9J3vfIduv/12eueddyps+EKhUGHDOxyO1mF2dpY0atc1rPT19dFv\n", 868 | "fvMb+sEPfkDlcpkOHjxY5bBzOBz5Q10SPtUPu4R3ONoGS8J7pp3D0UVwwjscXQQnvMPRRXDCOxxd\n", 869 | "BCe8w9FFcMI7HF0EJ7zD0UVwwjscXQQnvMPRRXDCOxxdBCe8w9FFcMI7HF0EJ7zD0UVwwjscXQQn\n", 870 | "vMPRRXDCOxxdBCe8w9FFcMI7HF0EJ7zD0UVwwjscXQQnvMPRRXDCOxxdBCe8w9FFyPf6O+sEaUv/\n", 871 | "5305LkfnwyV8xqhlnY+M1gRxOGI44TNEPQR20juyhBM+h3DSO7KCEz5D1GOTFwoFt+UdmcGddhnD\n", 872 | "yevIE5zwOQAOCq7OO7KEE77JaFSiW9/3gcDRDDRE+OnpaRofH6fe3l4qFot08uTJZvWro1Cvra7B\n", 873 | "InahUAiSPooiNx8ciWiI8IVCgf70pz/RVVdd1az+dBySSFYrCetR7/lzvHXiOyw0rNJ3s6oZIla9\n", 874 | "7+H9lFI9Sco7HEloKCxXKBTo3nvvpd27d9Obb77ZrD51PCShOdRmhdzwuPxMkrTWBgAfFBwWGpLw\n", 875 | "J06coM2bN9N///tf2rt3L9144420Z8+e+P3FxcVvTtTXR8VisZHTdQykPa0RUHtffi7pNf+O1AIc\n", 876 | "3YdSqUSrq6uJn2uI8Js3byYioquvvpoefPBBOnnyZAXhh4aGGvn53CNkM0dR1DJJ6yR3FIvFCoG6\n", 877 | "vLysfq5ulX5hYYHm5uaIiOjKlSv08ccf0/bt2+v9uY4Gkxvb2tqaebyeJn9HQpoNVnN0N+qW8DMz\n", 878 | "M/Tggw8SEdHq6ir99Kc/pfvuu69pHetUaOq5tS+Paba/BB9Ds6EWIuP3Hd2HQpTRP18oFGhycjKL\n", 879 | "n84tkOyS+GkHAElibWvta31J0980yDpC4HkEzcXs7Kz6f3mmXZMgyW69lp/FYxqZpSTXtIGQJiDP\n", 880 | "Iz+TRGLtPM0kvnZfnPjZwQnfBGjk1mzupC3a2nIAYFhkCBG8XqKGzuXSvjPhhG8A1kNfC/FlSC2N\n", 881 | "g82K82sSMs+JO3nqS7fACZ8BpBqskR33NQlvedit42li9rUSTJO0zSSpNQC5dM8OTvgGkGQ7M9J6\n", 882 | "4ZMcdvhb+DoN2a0+Netz9cLJ3Vo44ZuInp6v0xpCHnVLUmuQRJfx/bW1tfg93s8KaYlfD4Fr+Y6b\n", 883 | "AY3BCd8kIHl7enqC+3IrSW859crlMq2trcVbPlZLVl9actVLrJDDrRnS3PMIGoMTvkFIe72np4d6\n", 884 | "e3upp6en5hZy9kVRRKurq1Qul6lQKNDq6mqFxOcBIIls0omnIY2JUCt5a3FCWueU36k1jyDN7653\n", 885 | "OOHrhGZ/I9l7e3urmvY+HpPqutz29PTEEyT4mHyQNQkrySG/Y/kIQuQIDRxpwmq1vm8NOmkIrDlR\n", 886 | "uxVO+DoQkqJM4L6+vnjL+9zka25a/jy+ZhOAj5fLZSLSyR9yKKYhZBLxtUEirdRPk1ug5RLUQ1Tr\n", 887 | "PnQr6Z3wNcJ6gNBOZwIXi8WY8MVisWIA0AYEts254WtJ7nK5HA8AfDyNdMdjaQhqaQ6WdpAUuQjt\n", 888 | "h7SQpGvR4BGAajjha4DlfSf62kNfLBapv78/3nLj10h22Xp6eiqccnK/XC5XDRboK+jt7a3oj+yv\n", 889 | "BstWtxKG0jrjQkTDQQF/N0mjCCU5OdLDCV8DQp743t5eGhwcpIGBARocHKxqSHhN0vf29sZOOWx4\n", 890 | "bHl5WW0rKyu0srIS7DcRqT4CzWeg+Q9qCfvVYi7I180ksA8G1XDCpwSSW2t9fX00ODhIw8PDahsc\n", 891 | "HKwgNxKfJTUTnJt8zcRGkvN+qVRKdLKhmWBtrX02KbTfTXu8UaLXokk4dDjha4CU6NJeHxwcpJGR\n", 892 | "ERoZGaHR0VEaGxuj0dFRGh0dpeHh4SonnVTNV1dX41JFspVKpZjYTHZufIxhpe/KQUQbYGTDyEDo\n", 893 | "9+W5QmEwNBPkVt5v+T00U2p1Fjqc8DUBJbwk7cDAAA0NDdHw8DCNjY3R+Pg4jY+P08TEBI2Pj9PI\n", 894 | "yIjqmefGsfVSqRSTXttngsv9UqlERDaZoiiq+Kxs2iCzurpqhse0rWaL8zHLPyD7jPdaOxZ6HUKa\n", 895 | 
"/INugBO+Rkiyc+vv748l/NjYGE1MTNDk5CRt2LCBJicnaWxszIzBM+EtIlsEla8tInFDU0Br/Fvc\n", 896 | "N3ROauRMIi1+T+uPRn6G5c2XkQDpmAyZDfJYNxLfCZ8SWuiNw23shWcJPzo6ShMTE7Rhwwb61re+\n", 897 | "RVdddRVNTExUZeEh6YmoQl2XKrwmieVrzQmHr9nJt7S0VLEv8wJWVlaqyI55+1orFAoVjr0kf4LW\n", 898 | "T3m/eYsErTUiEELa0OR6ghO+Bsg4O8bY2TuPEp4J/+1vf5smJyeDKbeFQqFC2kpprKnd0h63vPDs\n", 899 | "gFtaWqLFxUVaXFyM99FpaEn2tbU16u3tTZTQnB4sPfqWlJdRgJB9bhE9rbTuZqmOcMKnAEoxlM6s\n", 900 | "yvf391eF41jaDw8Px468ENkLhYLq1ONWKpWqwnQYurPCaUgo1kZk43OsrKxQsVisGHT4/WKxqA4k\n", 901 | "Sce0gQf3rbkAltou960BQfMpOJzwKkJ2IBOTScBkHxgYiJNsOJFGpsHig49k52m1UlLz+WRiDQ46\n", 902 | "nJAjf9+SoPhdJjwPViH7fmVlJXU57aSBxzqGSEoWshyTln9BwvIRhJ6D9QAnPNnpmfLhQRteEp6T\n", 903 | "ayThiajqQeeJMpLwFtmZoGwn8+sQ2bR9zBnAvi8vL9Pg4KAa+sNjMgNQZgUmETppm6R2J801wGvV\n", 904 | "/kfrv7ciEfhcrBeToKsJr+VyWxKC96XDjr3znGXHOfNMYv4+T2tF0rLU5YbkwQdMSvckYocIj47G\n", 905 | "gYGBoIMQ28rKSjAL0CJ9iPDymIQkaprz43/Gr+XcgVCKdJIp0OnE71rCJ3l3LVVReuktlZ4lMqr0\n", 906 | "SG5JdiKqklR4PiKKya71SyO5JBT7Avr7+xPj/dZrLWEHTZE0qr61z9eB/wPuy+gEbvle8cCB99dS\n", 907 | "6SXx5eAQ0gxwAOkkdCXh0/5RSJyQhE+j0kvCcz+kR1w6sJjwMhZt9dciPBOmv7+/irhIYEmmtMdK\n", 908 | "pVJQwsvBTOtryD4noipzg00U7Z6gCRay17X/IK2zrxNJ35WETwP5IOIDlOS0w/CWJuGJqr3R8iFD\n", 909 | "CR9KNJHf10iEBLTs8LW1NTPbTqb3cl6/VPvTOvXSNPwPeJ9zB/D+En0zoOK9Y21IIySSXLu3acje\n", 910 | "qeg6wlsPgEY2S8Kjlxsl/MDAQMUMOBnL1voQIjCfj19rDcN6FuE1+1nbalJfSn/pzMN9JLam4ock\n", 911 | "u0V2bDzjUN5b7qM2QIf+d3wv5KmXJpZ8v5OkfNcRXiJJwuKfjccl6bSEFVRrWcJr59TUTDwH74cK\n", 912 | "YvJvWbXxeMAJES3kECuXy8EswFDYTiYF1du0mgKYtyAHKtm0+4v3Gf8v3me/gOZU7CSiM7qa8LX8\n", 913 | "YZpqLCe1sDNMOrJ4i2qoJqllv+RW9oeI4t+tR03Wzodmi/yMpuWwI7BYLCba7rWSHvvLKj1rVEtL\n", 914 | "S/F05KWlJVpaWgoSnjUAeQ9wq5k5uNUG/k5DVxO+VkiJjaqu5smW4SIkfUg7CJ1fA0uitATCLRKZ\n", 915 | "+6MNCCwhsc99fX20urpKxWJRraJrET0N8bmP2NhXMjg4qBYC0QiP+9YghP+pNGV4erB2D7X7ifcs\n", 916 | "j3DCU6VqrZFKPqxSclgTXOSDJskkk2rkOZMgHzyLKNZDKe+Bpm1o77Fkl1LUInTIkZiG7PzaygDk\n", 917 | "eQchwmuDsKYFYNShp6cnLiyCfcZnJu1/lRckEv6JJ56gP/7xj7Rx40b629/+RkREly5dop/85Cf0\n", 918 | "n//8h6anp+ndd9/N9VrwTOYQ0oRiNAmfNK8cHzDpREJpKcNLIaJqr6WEx99IejClnwD7x8d4wFpb\n", 919 | "W1Oz/GpV19MSHvdldEDua8U9NOejNduQpwYz2fGeWj6Q0LOi+WbaPTgkEv7xxx+np59+mh577LH4\n", 920 | "2OHDh2nv3r30i1/8go4cOUKHDx+mw4cPZ9rRepHmIU/7O9xQImikl2momoSXElPa4Vb/LU0kiSxp\n", 921 | "gJEBPk8aZ2AastZCerxuPGbF/3kbIrz2P8nQogz5YT/Zp4Hk16S89TxZzuFWI5Hwe/bsobNnz1Yc\n", 922 | "++CDD+j48eNERHTgwAG66667ckl4SZxG7SqU8Pgw4cNjSXdu6AxDCY/TT2UIS7sWbR9fhyS7FRXg\n", 923 | "fSQfEj3pnPL82jErPCevU9tqJLYkuPYZqx4gb2V8n/uFTj/U1HBAxGvF76cJA7cSddnwMzMzNDU1\n", 924 | "RUREU1NTNDMz09RONQPWDa2F+JZU0ux3ma4q7UN8uImqyc5qMr+vSZGk6wuRhSFDgZqmY4UorXBh\n", 925 | "KIYtX6clvNZ/y/aWTlRLhZcFQHhbLBZpaWkpSHYekBlamA773QwTMgs07LQL/eGLi4vfnOh/mWmd\n", 926 | "Ahk2k3HacrlcoRpiyWirjLQknqx8k4YQjKRBwDIF5DVqW963iC7t+zQRBu5D6BpDZMd7byX3JNnu\n", 927 | "THJcPwDrAWgSnjULzqPAmL6lRbXDQ8+CJgl1EX5qaoouXrxImzZtogsXLtDGjRvVzw0NDdXz801B\n", 928 | "0gjKBLCIwMex8fdQui8vL8cPvvYAcSOiOI+9v7+/4mHFB18LZaHk5z5Y16PdB96mldBI9CQpFHo/\n", 929 | "yZ4lItV3Yf2mjB5g/+X18GBaLpfjqAImK2GREVwoxKr6E0VRhckmNQHtnrWS+PzsMZaXl9XP1UX4\n", 930 | "ffv20dGjR+m5556jo0eP0gMPPFBfLzNGWrXJUnl5K0d9JDw+RJLsSHoiosHBwQqVUz7kaVqo/9qD\n", 931 | "jwOX9Rl5r2pVN60HPPTgJ9m28n/Az2A0AY9JsrOZJBOfZHkyJrxczUcSXh6XkRftemsxIVuBRMI/\n", 932 | "8sgjdPz4cfr8889p69at9Ktf/Yqef/55euihh+itt96i6f+F5fIKS4oTJf8ZUsUlogr1UYbSQktB\n", 933 | "EVE8UKBEkITHfe097Rqwj3JmndxPI+XlgJekXSAxtPNqPgN5jK8R9zXwNSK5mHg4yGmhQ8whYLKj\n", 934 | "/0VKd9Sw0KdCRLEDVlPvtXuD12m9bgUSCf/OO++oxz/55JOmdyYraJJHPnhJNhg+ADjzDVU6jehI\n", 935 | 
"QMuBJ4tlyP0ke53BRMBBphaVPvQAhqS1du/kYGrdXzmwJBFCRg3wu0l+EByQi8ViVW0ATZVHJyFe\n", 936 | "C9r1Vt8Rae9x1tqAZ9qR7c3W9vkB4M+jyphEeLTZGUjOtP0MvY+/hQMAXkuI7Jq0TuqLJc1DD7B2\n", 937 | "f9OaMNh37fPW/urqagXhtaQbi+xsgqGGx4OEPE9Iq0y6n1mTvmsJH5JYDEuaoWOtXC5XOIeQ6Frh\n", 938 | "CungScqlr+fP16QIqrz4Wu6nUTmTpHcS6Rt5oLFPktga8DinAxeLRdWbL0txywQrDAFqA4SmdVj/\n", 939 | "awhZkr5rCc9IsrskmLCFQiFW6Ygotuk1L698KKRjSVbI0b5bywOA0k+7Jo3sSao6HpODg1TLQxJO\n", 940 | "ux5Nwofi3Nrva34CeT4pseVrNBe09GkmfalUMsN40h9h9TuNBpUF6bue8EkISXl5DNMzcQkp6QDS\n", 941 | "9jnUE2p4Lg1aXXue1SZn6WmE16SllFrWd60BSvY9NAjI84fsYmtAC/WRHXfowUeTrFwuq5NzNMJr\n", 942 | "mXmha8wLnPB1QhulUSrwA6HZhJihx8kgg4ODseTXthbhcV+uICPXsAuRNUQU7fPaAJL0G9hf6xqS\n", 943 | "BjsLluahnVO7FqJwvUKN8Fiw1KpMVC+yGjSc8A1ASkVp36FDJ0T2xcVFGhgYqLL/ZUuSgPLzmvNQ\n", 944 | "a9ogo7XQZ9NoJ5pTDjUd2We5j9eOkIOHRnzL1pf3QZYuQ6KzLb+yshJn7CHh+Ro0jcfqu/VcZQUn\n", 945 | "fIPAh0lKeKJvyI4Pi0Z2Jrwlna3ZanwOftgkOUMag3ytLWON5kFoMEgifU9PT2IBCrm8VhRFaq0A\n", 946 | "C5qzMOSc5feR7OjFZ9KjZC+VSrS0tFRVkpz/n3olPPYlSzjhmwB8yPjB4NcYwpGSnSvd8jZEOIvw\n", 947 | "2CSBLXJrn2FbX8aqZTKRpUGgyWCdG21lbct1/rFcljSd5GvNM46fk9qF/M+QYJpKj1mR7KPR1iDg\n", 948 | "AS2NOWJJ+qzJTuSEbyqY3ESVKbgrKyvxw8x12XACh3x4rK2caCJfW/Z2kvrOWzmpRPbP0j5kOqo1\n", 949 | "sMjqMnLWG84x4Ovg60+DWiW8PC5VekySIqJYlbcILyMt2jnaDSd8k4CeeyZ7T09PnJzR29tbtRqr\n", 950 | "trUax4hDk23kwyYfOkvV532sr69tQxoIeqyt81jTVvE4kp21jlqnCmshQu1zeJ/YfEDtRiM8m2AW\n", 951 | "4eVvJ8G6pqzghG8imOgy7xptZJTaUoIj+bWBwLJ9k+LWDMvG58a19XGtPN7nxBRN89Bmo2nnSVOx\n", 952 | "Bk0TvuY0tq1Gco30DM1mlio9nrunp4dWVlbi+6IRPq9SHeGEbyKknYjHmfBcTMFyjIUkf8jhZUkx\n", 953 | "PGZ53bnxA82+Bt7n15LgkvRapiE2WUdO7nMf+Xcxu02SM41UlKSXkCo/3g+8TvYzhGrio4QP9a2V\n", 954 | "0lyDE77JsKQJAm1wTSrw+2zbShve2spzSIRUenYKasU92G7VVHjtmIw28Faz3fE1Og5xDTwe1PD+\n", 955 | "yGu0CC2RRDgr9KkN5ppvJHS+NOZI1nDCZ4A0dhmS3jrOJOG4fpLTLgmWdx7DZuho5IgCO7EsMluO\n", 956 | "PHlMhuVkphs7DnkmmywGqknQpME16T9I837o/oY88pY50U444TOClEb8xycRHaWsVImTwnJWH/C1\n", 957 | "FbZjLzqr15pZEYq/s3TWzBTeT+o/S3ZtBZ80foo0dn6a9/F+Wvt4PryH+NmQwzBNf7KAEz5jhNQ8\n", 958 | "lNKoUlvETPMghsJPVoiOGy8bpTnkOM4e0hK0cCLuJ4EJLzPcsICFdV+TXieZPBoxayV70jnTvpcl\n", 959 | "nPAtBv7RmKghCattrYeSt6GHXiO8bBxClOo6SvJQk0SXocak7xeLRRoYGKiYtIIqvbwXocGtEUhy\n", 960 | "S+KH7qn2W2mOtQpO+BZB8zDzQxvyPqdxxoXOlYb0+F6I2Ph72jmQ4Nq+NZDwsf7+fhocHKyq8Y9O\n", 961 | "O21Q0/pUC6x7mtZs4vvGfhD8zSS1vtVwwmeINCO+JknS2ujaeUKE1AheK1lC/ZB5BPK1FsbD11K6\n", 962 | "owffUqmt1/VeQ+j/kJ9JMzDmgeQIJ3xGqMeBpM2Vl/XvrN8OmQL4mVpMBkuVtfaZ2JhBxwksnCu/\n", 963 | "trYWl1NGJ5c2GMn7lXQP0iB0bXKBEQxNykUrZY5A6Fx5ghM+I4RszDTg76J33vqM9jpJ0luf0c6B\n", 964 | "1yKvCwcKqaZLB15Srv7IyAgNDw/T4OBgVf5+kq2cBlJiy7CmnMW4uLhYsT8/P09XrlyhhYWFeNUa\n", 965 | "XLVWy4vIkzpP5ITPFLWQPs1n0zqpQjZ86LeQ3HIb6iMOThbZkfBarn5/f38F4fk4RggaITuDCall\n", 966 | "K0rCLywsxNuFhQWam5ujK1euxAMBmh74m9IMyAvZiZzwmaNeSW9JUfle6Hv1vJbnTCK7/I5Gem2O\n", 967 | "Oebq85YJPzQ0RAMDAxXOPsv5WAukdNfKW0nCs0S/cuWKKuFlJEGeJ09kJ3LCtwS1/uka2WsheTPe\n", 968 | "w3On/R0tE0+T8DwhZ2hoqGI7MjIS72uEr1fCyygIkh3Te9lWl4RHsqOER5Vey3p0wjsSoUnURh/y\n", 969 | "Wr+naRNJcX4iCpIds+hYog8NDcUSHffZhtdUenneWs0gJrwsQY1rBUrCz83N0dzcXKziI+E1lR7P\n", 970 | "lTc44XOKeu35JNQSx9eIH/IJSHVemwnIhEeSo+3O0t1S6WsZBC1PP0p4JLyl0s/Pz9Pc3FxMdJTu\n", 971 | "MiOw1nvdajjhc4ZG4stJ0KR02vi+dkySz/LSYyweJfzw8HBM+JGRkSpHHnvpLQlfjwaEpJdhuDSE\n", 972 | "lyE6KeGT7l+74YTPMWpRXREWiaW6Xk9/tHg5N+mJZ4nNjQluNQzZYXEJ69pr9YLLGDsm+aDtjqE4\n", 973 | "bgsLCxUkx1r1MldCakh5Ir4TPseQXvJGf6tRMLG1qbWFQqFCcqNNjqRm9Z23aLdrRTO1ajKaMyxk\n", 974 | "O/NrttGllOYte+DZC8+qO9vqMvtPc9TlidwanPA5g1S7G0ngabYNiXa6lguPzjhW07HJQYC3rAGE\n", 975 | 
"ymbJawp5xK0tk1trS0tLMeFZqvN7Mt1XzuKr1SxqJ5zwHYCQip72s40C1XiryAU741iCj46O0tjY\n", 976 | "GI2OjtLo6KgaisMwnFU1hzULJLCW5GIlvWAmHdrpsiHhNeecjLlLsueN3BoS1yl+4oknaGpqirZv\n", 977 | "3x4fe+GFF2jLli20a9cu2rVrFx07dizTTnYb0iZtWA95ms9xk3a41YjsxBrpjGMJPzY2RmNjYzQx\n", 978 | "MUGTk5M0OTlJ4+Pj8SDAUl/G3kPFIa3kGSvMhg2lOTrmMMFGC71J6S5TafPojbeQKOEff/xxevrp\n", 979 | "p+mxxx6LjxUKBTp06BAdOnQo0845dImdRpJk8RCiDa/Ne0cbniX8+Pg4TUxM0Pj4eEUaLe5z4wIX\n", 980 | "SV54jfQyXVZqAZg6u7y8XOWQW1xcNG141gzQG68VEO0EJBJ+z549dPbs2arjnXSR6w3WIJD1fyLV\n", 981 | "eZk2qxF+bGyMxsfHacOGDRUeeOmR5xl08vpwXxJYpsbKAUDmzUt1HiU8Snlpw2OCDZ4/SbPKo4pf\n", 982 | "tw3/2muv0e9+9zvavXs3vfTSSzQ5OdnMfjn+B/kwJYWo6v1+GqCEZ+LL+Dqr6FLCT05OVpV5ll55\n", 983 | "i0xy9pml0ltFMrXJMVr6LKr0moQvl8sV97TT1HmiFDa8hqeeeorOnDlDp0+fps2bN9Ozzz6rfg7V\n", 984 | "Jq477kgP7WFq9kOm/Z5G7JC9LsNsVkNbHTPpZBUcq749Ou8wUw5tdI3MnDjDbX5+Pm6S7EmZdGnX\n", 985 | "Ami1dC+VShV8s1CXhN+4cWO8/+STT9L999+vfm5oaKien3c0EUkPHj60Mq4uY+1yNRq5Qs3ExETs\n", 986 | "kMOceKx4i6RFSS298DK5h4+j6o4ZcjIZRtsPJdagtx5VeJkym2ROtUuNl2bR8vKy+rm6CH/hwgXa\n", 987 | "vHkzERG99957FR58R/PQTLtc/pa1zwTTwmMDAwNx3Fyb8MKEx9x4bcYbnlcrPa2RnejrTDm+Dqyf\n", 988 | "j/FyuZXHMJnGauikw/JanRRvt5BI+EceeYSOHz9On3/+OW3dupV++ctf0p/+9Cc6ffo0FQoFuvba\n", 989 | "a+mNN95oRV+7Eo2QXsvLt7LQUEIx0WUSDKrwMnNueHg4jrdzuA2LWEgJz+fEOeRaCBBDg0h4KeFR\n", 990 | "lZfxdVmwQtuilqCp8p1mq1soRBldSaFQcEdehkiyG7V9KyGFX2sr2OIxJDQ75fA1kl/a9CMjI6rk\n", 991 | "xn1roQtumlddFqrAJj8vY+lpGn6nk0g/Ozur9tcz7dYhkibdaFIeE3G0ODtPhmEJzyE3ttnHxsZi\n", 992 | "1R4z6Dj8hotIYB80h6GmbWASDqv0skrNwsJC7IhDpxzvYxwdnXAY2tPaepLwTvh1jDSJK5p9ioSX\n", 993 | "s99k6uz4+HicPcfedznNlVV6Pjf2AffRUcj9xr709vbGn0eVnhNpmPBzc3N0+fLl2CvP+7LctbYf\n", 994 | "SqxBZ6J1r/M+MDjh1zmQ9KEwH0pVJBqG4TjOLpNqJiYmaGJioiqRRqbJhmLq/B5X6ZX9kBJeEh4l\n", 995 | "/OXLl+mrr76K2+zsLH311VcVi1lY6cNyEJDHQvcY9/NKfCd8hyLtg8XkkZ54LYxkSXaMuWOTcXZc\n", 996 | "bEKb3qr1hfuokZ/VbCY+2tVaSWm02zm+jo0HFLkIJrc0CEl5+f/kkfRO+A5H6OGTBJOk5y0fT0N0\n", 997 | "TJzB9FhJcCRV6NxElQtSsH1ORDHhmeCyiqx00rFXXtaK53OlWUYriaRpw295JDuRE35dAqWQFYaT\n", 998 | "amyhUFDz4qU0t0jP32UtQZJJg/Ye5sCzVOff0UguvfKysKTUZKxFLUL+jlrud9rvtAtO+A6H9YDW\n", 999 | "kiTCJECbXdaewxLSuFCENp1VSnctHwD3k8KFvNXy3jXC48qzLOGlb0Aje+hepVXl80x2Iid8x6LW\n", 1000 | "h09LbOEtq7tarrxWTpqPYylpmVxjqfRoy8vr0VaDwSaJLl9ryz9JCa+p9tgHi7BJ9zvvRGc44TsQ\n", 1001 | "lsREm12TkPLhxobOOquctCwlLae4huzj0HXwlgkvi1nwPhLdSrTR1nvTrjdkx+N9W29wwncYQuqx\n", 1002 | "RCjWzd9FFRwXjGAb3ZLwmg2P/Qmpy6FYNhNeVqsplUpBdV5KeFbp+fotlT6pL+sNTvgORtr4r5Zc\n", 1003 | "I8luJdpgeWkmvOWl186L+1p/5T6r7zwxBifHaCTn9Fppv1sSHvfla9nn9Uh6J/w6QZrkEGt1V97X\n", 1004 | "suRkKSrDMyqCAAAS6ElEQVQkuLTduR9a3+R7WvRAm9eOU1Z5Lrucv46TYzhOz2YBajZ4L7R9676t\n", 1005 | "J9I74TsMoZAbgqUWfkYSFV/j+uxyppu1RrtsVmIN9t2qZhNFUdV8dbnltFkmvYy7c5NLN9dyb9Mc\n", 1006 | "62Q44TsQtSSHYF46quxym2Z9dq1ABp4vRHqZPadNXOH0WJk5p63kqoXh5CIRMg6Pg6Xm6OwGOOE7\n", 1007 | "GPIhDXnhC4VChfcdHXO8LxNrWH1HCR+S9GkkvFyiGT3xoaQaK+6O89zlLDdtnkAj93c9wAnfAZAk\n", 1008 | "CtnCSEBZqkrzwGOYDVNoNZVeK30VCmfJfqNDTpafKpVKZg48Ouekmq+F4dKuChPCeiQ7kRM+95Dq\n", 1009 | "shZf563MbpOOOm2qqyS5DLnJ9dktCS/7i33D13K1Vqw4g0UntTntuJCELElVKpXU2vQacaWkT+sX\n", 1010 | "WQ9wwucYlgoqY+t4jL9neeVlnrxcvVXzyEsJHyJ9UlgQq83Ktd5QqrNzDqvMIrnlyq9yFdda1fn1\n", 1011 | "TnSGE75DoT3USaS3loTihpl26MWXk2HSJq9IyNLSsk68VVaaG1aiDZWgCpk83Q4nfE5Rq4OJv8Mk\n", 1012 | "T1odBqU7F7RgcuMW57Zb9jwC1WlZNcaax85Ns9/RhkcvPO6vtzJUWcIJ36HQBgSL6KF0WSS8XBFW\n", 1013 | "NkyjlQs9orouicj7skgF7nOcnW12uZijtkxzmuWeHJVwwncYZPxYvqfZ7FiEUjrrmOwjIyMxibWa\n", 1014 | "9LisVBLhrQqwoQqzclKMtnqrtZyUkz09nPAdCi2RRKs2q81vl8UomfSh8k89PT0VhNemv2rLP1lh\n", 1015 | 
"N21fSn5J+LReeIcNJ3wHQSa3yHCYnAgj57db3nkmfFLoDVNxWcIzZFINe9GxWXF2zJyTsXYkvOaB\n", 1016 | "d9LXBid8h0EjOx/X1HkMsyVJeP4da6s576SER+muhd1CsXYmuBZn57Abwz3x9cEJ34GQyS1aCE6q\n", 1017 | "9VgzXha14BVk8Dc1SJs+SaXHJaAw7CZj7TLOLteEQwmvOSvriWh0K5zwHQ7NhtccdmkkvBbXx5ZU\n", 1018 | "wkoSHjPjpGMOiS7j7HI1WG58ndbEHUcynPA5hSXNLMiceW2JKG2ZZ5zvHrKRMWMNJ8sQUbzAgybZ\n", 1019 | "kei4Hrv01C8uLpprv7F3XjNjHLXBCZ9jaGmhlrqN1Vk5hCYz6nDqq/S2W9BIZTnKWA3HRBq00bGh\n", 1020 | "Uy7N1NZ6Mvsc1XDCdwDSFmZAtR6lu1aiSk6Mwd+Q50G13crjj6KoamFHmSarJd3IOLuc2orXhf1z\n", 1021 | "4tcHJ3yHIck7jeWmLQmvZcwR2UtQyfPJ9FlJeEyw4bXeLl++rK7frk1v1RJqQtEDR3oEF9Q6d+4c\n", 1022 | "3X333XTLLbfQrbfeSq+++ioREV26dIn27t1L27Zto/vuu49mZ2db0tluh+VMQ3JICY/OOU2lx4kx\n", 1023 | "DClFtRlxmDJr2e7siecVXC9fvlxRky6k0lt+A60/Tvr0CBK+WCzSyy+/TH//+9/pL3/5C73++uv0\n", 1024 | "j3/8gw4fPkx79+6lf/7zn3TPPffQ4cOHW9XfrodFela9NQkvV47BirPoaSfSJahGdisMp9nwc3Nz\n", 1025 | "9NVXX1UVoZSJNViPDmPuRGRO2HGy14agSr9p0ybatGkTERGNjo7STTfdROfPn6cPPviAjh8/TkRE\n", 1026 | "Bw4coLvuustJnyG0YhfShk7jtJPlq7SMOaJw3Xg8n1wwwrLheflmrHKjbS04qZuH1Db82bNn6dSp\n", 1027 | "U3THHXfQzMwMTU1NERHR1NQUzczMZNbBbkZSMQnNWy6XjEIJj8k3mtNOFrGwiIZTXzF8ZtnwLOGt\n", 1028 | "qa3c8Jxp1HbPsKsdqQg/Pz9P+/fvp1deeYXGxsYq3gt5SxcXF7850f8eQkdjQAIg4WX8XUu4wUo2\n", 1029 | "WnoskV5Fh4/J0lRyH2fCybnuS0tLas053Ofr0OLtcjDCe+HEp0QtiZFI+FKpRPv376dHH32UHnjg\n", 1030 | "ASL6WqpfvHiRNm3aRBcuXKCNGzeq3x0aGqqx244QQqp3b29v1aIR1uIRVhELzfuOHnO2tzHHHbez\n", 1031 | "s7M0NzdXYaNr5afkNaQNsaUNT3Yj+D9mLC8vq58LOu2iKKKDBw/SzTffTM8880x8fN++fXT06FEi\n", 1032 | "Ijp69Gg8EDiai9AEGa2whcycs8jOLU16LOeyc0gNU2PZNv/yyy/p0qVLNDs7G3viNcJbBStCHniG\n", 1033 | "E7s5CEr4EydO0Ntvv0233XYb7dq1i4iIfv3rX9Pzzz9PDz30EL311ls0PT1N7777bks6243A2XGY\n", 1034 | "w67NVdfInlbCo4mg1Y7XFoqQjUNwSHhc1NGS8HJef1r4IFA7goS/8847q8IjjE8++SSTDjmqYU2Q\n", 1035 | "wSbnvUsJr0l5KUk1Zxx60jFLDu11WdCCU2fl1NYQ6UP72rwCJ3t98Ey7DoKm0jOBNbKHVHpZ3YaB\n", 1036 | "4TbpmJPed1k3HotXyAUeWXDUQngNTvTG4ITvEKBKb5WxslR63rfs9zQ2PIbbtDLS8/PzMcFl8QqZ\n", 1037 | "G8/n4utKAyd6c+CE7yBIlR5t9CSnHWfVySKVkvBsw8vYeihHnps2hx1XdE26Ng1O9ObCCZ9zSG82\n", 1038 | "E1WWsMKUWS3eLtV4qdKz80yuDIMeerThZU26+fn5Kicf78uprhJppv86mgMnfJuhPexyKz8va9Zh\n", 1039 | "gQuZRZd2bXfuBxOeK9ZIksuKsliGStahR0ddSHV3krcOTvg2IY1zComP4Str6ShU6bVsuqTGEl5K\n", 1040 | "d1lSOqmMdLNqxvtA0Hw44XMA6SW3ZsLxZ7V8eZbw0jNfj4THmnRMeI6tI9mllNem7EpVPg2JnejZ\n", 1041 | "wQnfAkiVNm0Yypoog556WZwSVXq5YEQaCY8hOTnVVebJy+Wa05ok2ntO8tbACZ8xpFqOSEt2LOAo\n", 1042 | "bXicAisTbmSCTa0SHlV6nMcuJTyr//Ka5L5Faid76+CEzxBpH2Qp9STZJeGlSi8lfL0qPdrwUqXX\n", 1043 | "ZsFJGz5pMAmR3tEaOOHbhKTccU2dDzntWMJLlb5RL71GeM6Tlza8tS4dDihya6n/aRNyHLXBCZ8R\n", 1044 | "spBkSCRtwQlMoU2aBiunwBJRRVYdJtugROdsOlzCWZak6unpMfPms7w/jmQ44duEpNi0JoEtSW19\n", 1045 | "T0604fOyc45fE30t2dELLx1zVnVZhuUcxHPi+RztgRO+zdA8+JK0eFyT2NbggK23tzfOacc4OSbL\n", 1046 | "aHF2VNsxXVZmz6EqL/vG14nXHIKr89nBCZ9DIGnX1tZUMllOMTkoYO48kxRz5XEarLb8E5JelpPW\n", 1047 | "CK9pHYgk4jvZs4UTPqfgBx9JH1Kbk6Q8SniZK88OOpzLroXe5GQY6UjU+kKUrjSVE701cMK3AZq3\n", 1048 | "msh+6CXRk0ivqfP8HVbpMc4uV3hFOx5Veqwwiza8ND2sa3H7vf1wwucU0q6XEj4p1CbJzgk4UsLz\n", 1049 | "TDgMvUl1nklv5ckj4a1rcLLnA074FsIq1aR52vk47luSXX5XIz166TnOziE4OTnGSq6xUn0l4ZNU\n", 1050 | "eFff2wcnfEZAVR2P1fJ93JcJLZr6HgrZyeIWmDZrpc5K2z3kcJOmSeizSSFJR3ZwwmeIJKdV6H1N\n", 1051 | "6icRXZacxhAclq1im12WrMJqs+yVx3ntsq+1hNoc+YATvgWwpJkl6aR0JyKV2JbUZ2AYTqrx2hru\n", 1052 | "KOHlAo+h2W1JM9+kxHfp3j444duMEBG0MJeWq65Vn00r4ZHwrN7LuDtm1CVJ9ZCkd6K3H074nMIK\n", 1053 | "t2mJNSHSS8JLCY8qveaoSyPhtdfa9TjaDyd8DpAk5TXPu0V29Oyjo06q9Noa7jwQYCot2/Cyn1rf\n", 1054 | "HfmHE74FsJxzSeqvpspbyTVSuvPvY9xcFraQhMfMO7ThMY02qd+OfMMJnyFCamxaWzeJ5BbpLRs+\n", 
1055 | "pNJjXj1rBJZKn3QdrsLnE074jFDLA2+l1lrpqZLMSGomqqxFv7a2VpE5xw2Xh8LUWVlqOtRvR+fA\n", 1056 | "CZ8RGkku0XLsNWmNKjqvB2555ovFYkXlWS2xBqfLWuu5I7TkIke+4YTPEGlJb0lyqZ4zma213/iz\n", 1057 | "KO35c319fXE2HRa3YMIj0Rslvavz+UX1jAfAuXPn6O6776ZbbrmFbr31Vnr11VeJiOiFF16gLVu2\n", 1058 | "0K5du2jXrl107NixlnS2E9FIpReZty6JLEmParpc5ZUTazi5BstV4bRXuWJMGsjwoSO/CEr4YrFI\n", 1059 | "L7/8Mu3cuZPm5+fpe9/7Hu3du5cKhQIdOnSIDh061Kp+djSapfZqtjqq9FL6syq/vLxMvb29sUof\n", 1060 | "ypUPTZDxpJrOR5DwmzZtok2bNhER0ejoKN100010/vx5InKHTdbQQnhSpZc2fLlcpr6+PlpdXa1a\n", 1061 | "Q76np6dqgowkPJ4XyZ6W9I78I6jSI86ePUunTp2i73//+0RE9Nprr9GOHTvo4MGDNDs7m1kHuxFW\n", 1062 | "gotmm7M6j+E2qdLjGu5sw8tsOvTKh9R6l+SdjVSEn5+fpx//+Mf0yiuv0OjoKD311FN05swZOn36\n", 1063 | "NG3evJmeffbZrPvZNZAJLpYdr0l4JLtG+iQvvaxIa9nxTvrORaKXvlQq0f79++lnP/sZPfDAA0RE\n", 1064 | "tHHjxvj9J598ku6//371u4uLi9+c6H/10x21Q6rZ0pZn4mvVcDAZR/POI8mtwhs+fz3/QLMshCDh\n", 1065 | "oyiigwcP0s0330zPPPNMfPzChQu0efNmIiJ67733aPv27er3h4aGaumzIwGatMfFJJiY5XK5avIN\n", 1066 | "EVXMgMOUWSS0E7szwQuRMDhMKxEk/IkTJ+jtt9+m2267jXbt2kVERC+++CK98847dPr0aSoUCnTt\n", 1067 | "tdfSG2+80cSuOzSEVH25KIT1fcyV1xaTSEt6d9x1LoKEv/POO9UH6Yc//GFmHXKEgSSX6bWWh52b\n", 1068 | "JeGJdJJrk3EcnQ3PtOsgWJJdkh4HBPycJeFDpMdzOzofTvgORRLxkdDcapHwbsuvTzjhOwyalJcx\n", 1069 | "eoynY0uS8Awn+/qFE74DYan2GKbTtlrMPY0671g/cMK3CUlpqlrs2yI4JuLIhSK1hSNxTfdaJsk4\n", 1070 | "Oh9O+DYiVJeegckvWrELTropFAqmZNfWhHOydyec8DlAUt16DLFJsuNCkRrBtWPabDhHd8AJn0PI\n", 1071 | "4hd4XFazwdRZy1kXkvDyHI71DSd8TqE50yzbnRdyDHnqrXi9k7274ITPOZIkPE50wfi7VrJKkt9J\n", 1072 | "331wwucASYtOMpDAci05SfikZBwne3fCCd9GaNltUqLjvpTwcpUZi+hyVh02R3chdcWbRlEqlVp1\n", 1073 | "qrrQ6v7VMiMtiqKKmnOa1LYcc1Z+fbMddv7/NoZW9a9lhE8zOb+d8P41Bu9fY2hV/1pGeIfD0X44\n", 1074 | "4R2OLkIhyshN65MxHI72QqN2Zl56D/c4HPmDq/QORxfBCe9wdBFaQvhjx47RjTfeSDfccAMdOXKk\n", 1075 | "FaesCdPT03Fl3ttvv73d3aEnnniCpqamKsp/X7p0ifbu3Uvbtm2j++67r62r/Wj9y8sCo9YCqHm5\n", 1076 | "f21foDXKGKurq9F1110XnTlzJlpZWYl27NgRffbZZ1mftiZMT09HX3zxRbu7EePPf/5z9Ne//jW6\n", 1077 | "9dZb42M///nPoyNHjkRRFEWHDx+OnnvuuXZ1T+3fCy+8EL300ktt6xPjwoUL0alTp6IoiqK5ublo\n", 1078 | "27Zt0WeffZab+2f1r1X3L3MJf/LkSbr++utpenqaisUiPfzww/T+++9nfdqaEeXIybhnzx7asGFD\n", 1079 | "xbEPPviADhw4QEREBw4coD/84Q/t6BoR6f0jysc93LRpE+3cuZOIKhdAzcv9s/pH1Jr7lznhz58/\n", 1080 | "T1u3bo1fb9myJb7AvKBQKNC9995Lu3fvpjfffLPd3VExMzNDU1NTREQ0NTVFMzMzbe5RNfK2wCgv\n", 1081 | "gHrHHXfk8v61Y4HWzAnfCfH4EydO0KlTp+ijjz6i119/nT799NN2dykInCWXF+RtgdH5+Xnav38/\n", 1082 | "vfLKKzQ2NlbxXh7uX7sWaM2c8Ndccw2dO3cufn3u3DnasmVL1qetCbxO3tVXX00PPvggnTx5ss09\n", 1083 | "qsbU1BRdvHiRiL5e2w8X9MwDNm7cGBPpySefbOs95AVQH3300XgB1DzdP2uB1lbcv8wJv3v3bvrX\n", 1084 | "v/5FZ8+epZWVFfr9739P+/bty/q0qbGwsEBzc3NERHTlyhX6+OOPzcUx24l9+/bR0aNHiYjo6NGj\n", 1085 | "8YOSF1y4cCHeDy0wmjUiYwHUvNw/q38tu3+ZuwWjKPrwww+jbdu2Rdddd1304osvtuKUqfHvf/87\n", 1086 | "2rFjR7Rjx47olltuyUX/Hn744Wjz5s1RsViMtmzZEv32t7+Nvvjii+iee+6Jbrjhhmjv3r3Rl19+\n", 1087 | "mZv+vfXWW9Gjjz4abd++PbrtttuiH/3oR9HFixfb0rdPP/00KhQK0Y4dO6KdO3dGO3fujD766KPc\n", 1088 | "3D+tfx9++GHL7l9mufQOhyN/8Ew7h6OL4IR3OLoITniHo4vghHc4ughOeIeji+CEdzi6CE54h6OL\n", 1089 | "4IR3OLoI/w/vk8r37ytmxAAAAABJRU5ErkJggg==\n" 1090 | ], 1091 | "text/plain": [ 1092 | "" 1093 | ] 1094 | }, 1095 | "metadata": {}, 1096 | "output_type": "display_data" 1097 | } 1098 | ], 1099 | "source": [ 1100 | "distorted_example = random_image_generator(x_train[0]*255.)\n", 1101 | "plt.imshow(distorted_example, cmap = plt.cm.gray)" 1102 | ] 1103 | }, 1104 | { 1105 | "cell_type": "markdown", 1106 | "metadata": {}, 1107 | "source": [ 1108 | "What you see will vary if you are using this notebook interactively (since 
the distortions are random), but in this case the 7 is more upright now and slightly lower. \n", 1109 | "\n", 1110 | "In theory we could make an infinite number of these artificial training examples to help improve our Deep Belief Network (if we had a lot of memory, of course!) \n", 1111 | "\n", 1112 | "To make sure we don't run out of memory, let's create a function that will augment our training set internally with a number of additional artificial examples. It will call the random_image_generator function inside itself." 1113 | ] 1114 | }, 1115 | { 1116 | "cell_type": "code", 1117 | "execution_count": 9, 1118 | "metadata": { 1119 | "collapsed": false 1120 | }, 1121 | "outputs": [], 1122 | "source": [ 1123 | "def extra_training_examples(features, targets, num_new):\n", 1124 | " '''\n", 1125 | " This function will take the training set and increase it by artificially adding new training examples.\n", 1126 | " We can also specify how many training examples we wish to add with the num_new parameter.\n", 1127 | " '''\n", 1128 | " \n", 1129 | " # First, create empty arrays that will hold our new training examples.\n", 1130 | " \n", 1131 | " x_holder = np.zeros((num_new, features.shape[1]))\n", 1132 | " y_holder = np.zeros(num_new)\n", 1133 | " \n", 1134 | " # Now, loop through our training examples, selecting them at random for distortion.\n", 1135 | " \n", 1136 | " for i in xrange(num_new):\n", 1137 | " # Pick a random index to decide which image to alter.\n", 1138 | " \n", 1139 | " random_ind = np.random.randint(0, features.shape[0])\n", 1140 | " \n", 1141 | " # Select our training example and target.\n", 1142 | " \n", 1143 | " x_samp = features[random_ind]\n", 1144 | " y_samp = targets[random_ind]\n", 1145 | " \n", 1146 | " # Change our image and convert back to 1D.\n", 1147 | " \n", 1148 | " new_image = random_image_generator(x_samp).ravel()\n", 1149 | " \n", 1150 | " # Store these in our arrays.\n", 1151 | " \n", 1152 | " x_holder[i,:] = new_image\n", 1153 | " y_holder[i] = y_samp\n", 1154 | "\n", 1155 | " # Now that our loop is over, combine our original training examples with the new ones.\n", 1156 | " \n", 1157 | " combined_x = np.vstack((features, x_holder))\n", 1158 | " combined_y = np.hstack((targets, y_holder))\n", 1159 | " \n", 1160 | " # Return our new training examples and targets.\n", 1161 | " \n", 1162 | " return combined_x, combined_y\n", 1163 | " \n", 1164 | " \n", 1165 | " " 1166 | ] 1167 | }, 1168 | { 1169 | "cell_type": "markdown", 1170 | "metadata": {}, 1171 | "source": [ 1172 | "Let's now make our new x_train and y_train objects and replace the old ones with these new, larger ones." 1173 | ] 1174 | }, 1175 | { 1176 | "cell_type": "code", 1177 | "execution_count": 13, 1178 | "metadata": { 1179 | "collapsed": false 1180 | }, 1181 | "outputs": [], 1182 | "source": [ 1183 | "x_train, y_train = extra_training_examples(x_train, y_train, 10000)" 1184 | ] 1185 | }, 1186 | { 1187 | "cell_type": "markdown", 1188 | "metadata": {}, 1189 | "source": [ 1190 | "Originally, we had 56,000 training examples. Now how many do we have?"
1191 | ] 1192 | }, 1193 | { 1194 | "cell_type": "code", 1195 | "execution_count": 14, 1196 | "metadata": { 1197 | "collapsed": false 1198 | }, 1199 | "outputs": [ 1200 | { 1201 | "data": { 1202 | "text/plain": [ 1203 | "66000L" 1204 | ] 1205 | }, 1206 | "execution_count": 14, 1207 | "metadata": {}, 1208 | "output_type": "execute_result" 1209 | } 1210 | ], 1211 | "source": [ 1212 | "x_train.shape[0]" 1213 | ] 1214 | }, 1215 | { 1216 | "cell_type": "markdown", 1217 | "metadata": {}, 1218 | "source": [ 1219 | "Good it worked. Now let's try running our same DBN model again with the expanded training set to see if we can improve our accuracy at all.\n", 1220 | "Let's use the same settings as last time." 1221 | ] 1222 | }, 1223 | { 1224 | "cell_type": "code", 1225 | "execution_count": 20, 1226 | "metadata": { 1227 | "collapsed": false 1228 | }, 1229 | "outputs": [ 1230 | { 1231 | "name": "stdout", 1232 | "output_type": "stream", 1233 | "text": [ 1234 | " precision recall f1-score support\n", 1235 | "\n", 1236 | " 0.0 0.99 0.99 0.99 1312\n", 1237 | " 1.0 0.98 0.99 0.99 1604\n", 1238 | " 2.0 0.97 0.98 0.97 1348\n", 1239 | " 3.0 0.98 0.97 0.98 1427\n", 1240 | " 4.0 0.99 0.96 0.97 1362\n", 1241 | " 5.0 0.98 0.97 0.97 1280\n", 1242 | " 6.0 0.99 0.99 0.99 1397\n", 1243 | " 7.0 0.99 0.97 0.98 1461\n", 1244 | " 8.0 0.95 0.98 0.97 1390\n", 1245 | " 9.0 0.95 0.97 0.96 1419\n", 1246 | "\n", 1247 | "avg / total 0.98 0.98 0.98 14000\n", 1248 | "\n", 1249 | "The accuracy is: 0.976857142857\n" 1250 | ] 1251 | } 1252 | ], 1253 | "source": [ 1254 | "dbn_model = DBN([x_train.shape[1], 300, 10],\n", 1255 | " learn_rates = 0.3,\n", 1256 | " learn_rate_decays = 0.9,\n", 1257 | " epochs = 10) # I am turning off the verbose statements this time.\n", 1258 | "\n", 1259 | "dbn_model.fit(x_train, y_train)\n", 1260 | "y_true, y_pred = y_test, dbn_model.predict(x_test) # Get our predictions\n", 1261 | "print(classification_report(y_true, y_pred)) # Classification on each digit\n", 1262 | "print 'The accuracy is:', accuracy_score(y_true, y_pred)\n" 1263 | ] 1264 | }, 1265 | { 1266 | "cell_type": "markdown", 1267 | "metadata": {}, 1268 | "source": [ 1269 | "Now hold on a minute! Our accuracy actually went DOWN a bit from 97.95% to 97.69%. The F1-scores have dropped for several numbers, especially our 9s. \n", 1270 | "\n", 1271 | "So what went wrong?\n", 1272 | "\n", 1273 | "Well there are several possibilities, but a likely candidate in this case is that our additional noise confused the model. In other words, we are underfitting now and we need to make the Network either more complex (additional hidden nodes), decrease the learning rate, or train it longer (more epochs). In the next section we shall try all three to see if that helps." 1274 | ] 1275 | }, 1276 | { 1277 | "cell_type": "markdown", 1278 | "metadata": {}, 1279 | "source": [ 1280 | "## Tuning the Network" 1281 | ] 1282 | }, 1283 | { 1284 | "cell_type": "markdown", 1285 | "metadata": {}, 1286 | "source": [ 1287 | "Because of our larger training set, we need to make some changes to our original Deep Belief Network. Next, we are going to try this again with three changes:\n", 1288 | "\n", 1289 | "- Decrease our learning rate from 0.3 to 0.2 for smaller steps during gradient descent\n", 1290 | "- Increase the number of hidden nodes from 300 to 500\n", 1291 | "- Increase the number of training epochs from 10 to 50\n", 1292 | "\n", 1293 | "After doing these changes, we should be able to get a better result. The downside is that model training will now take longer. 
Be prepared to wait a little while for this model to finish (my laptop ended up taking about 30 minutes). This might be a good time to watch the Deep Learning video from the beginning if you haven't already." 1294 | ] 1295 | }, 1296 | { 1297 | "cell_type": "code", 1298 | "execution_count": 21, 1299 | "metadata": { 1300 | "collapsed": false 1301 | }, 1302 | "outputs": [ 1303 | { 1304 | "name": "stdout", 1305 | "output_type": "stream", 1306 | "text": [ 1307 | " precision recall f1-score support\n", 1308 | "\n", 1309 | " 0.0 0.99 0.99 0.99 1312\n", 1310 | " 1.0 0.99 0.99 0.99 1604\n", 1311 | " 2.0 0.98 0.98 0.98 1348\n", 1312 | " 3.0 0.98 0.98 0.98 1427\n", 1313 | " 4.0 0.98 0.98 0.98 1362\n", 1314 | " 5.0 0.99 0.97 0.98 1280\n", 1315 | " 6.0 0.99 0.99 0.99 1397\n", 1316 | " 7.0 0.98 0.98 0.98 1461\n", 1317 | " 8.0 0.98 0.98 0.98 1390\n", 1318 | " 9.0 0.98 0.97 0.98 1419\n", 1319 | "\n", 1320 | "avg / total 0.98 0.98 0.98 14000\n", 1321 | "\n", 1322 | "The accuracy is: 0.983714285714\n" 1323 | ] 1324 | } 1325 | ], 1326 | "source": [ 1327 | "dbn_model = DBN([x_train.shape[1], 500, 10], # Increased the hidden nodes to add more complex features.\n", 1328 | " learn_rates = 0.2, # Smaller steps for gradient descent.\n", 1329 | " learn_rate_decays = 0.9,\n", 1330 | " epochs = 50) # Give the model more time to converge.\n", 1331 | "\n", 1332 | "dbn_model.fit(x_train, y_train)\n", 1333 | "y_true, y_pred = y_test, dbn_model.predict(x_test) # Get our predictions\n", 1334 | "print(classification_report(y_true, y_pred)) # Classification on each digit\n", 1335 | "print 'The accuracy is:', accuracy_score(y_true, y_pred)" 1336 | ] 1337 | }, 1338 | { 1339 | "cell_type": "markdown", 1340 | "metadata": {}, 1341 | "source": [ 1342 | "Well, that certainly helped! Our overall accuracy went up to 98.37%, with increases in many of the f1-scores, especially for the 9 digit. However, with our added model complexity, it is likely our network overfit (neural networks tend to do this quite well!) There is a way we can correct for this to see if we can get just a little more accuracy. Most other machine learning algorithms that overfit can have this corrected through regularization. For neural networks, that means something called \"dropout,\" where we randomly disconnect a certain percentage of our nodes. " 1343 | ] 1344 | }, 1345 | { 1346 | "cell_type": "markdown", 1347 | "metadata": {}, 1348 | "source": [ 1349 | "## Regularization of the Network: Dropout" 1350 | ] 1351 | }, 1352 | { 1353 | "cell_type": "markdown", 1354 | "metadata": {}, 1355 | "source": [ 1356 | "Fortunately, nolearn has a parameter we can set to do this quite easily (we will just have to wait a while as our model trains again!) Let's load our model object, this time with an added parameter for dropout. We will choose 25% (I found this to work well from some limited testing). 
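Before running it, here is a toy NumPy sketch of what a dropout parameter like this does conceptually. Everything below (the array sizes, the names, the test-time rescaling convention) is made up for illustration and is not part of the nolearn model itself:

```python
import numpy as np

# Toy illustration of dropout on one hidden layer (made-up activations).
rng = np.random.RandomState(0)
activations = rng.rand(8)        # pretend hidden-layer outputs
dropout_rate = 0.25              # the same 25% we pass to nolearn below

keep_mask = rng.rand(8) >= dropout_rate      # ~75% of nodes survive this pass
train_activations = activations * keep_mask  # dropped nodes contribute nothing

# At prediction time every node is active; one common convention rescales
# the activations by (1 - dropout_rate) so their expected size matches training.
test_activations = activations * (1.0 - dropout_rate)
```

Because a different random mask is drawn on every training pass, no single node can be relied on too heavily, which is what combats the overfitting.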
" 1357 | ] 1358 | }, 1359 | { 1360 | "cell_type": "code", 1361 | "execution_count": 17, 1362 | "metadata": { 1363 | "collapsed": false 1364 | }, 1365 | "outputs": [ 1366 | { 1367 | "name": "stdout", 1368 | "output_type": "stream", 1369 | "text": [ 1370 | " precision recall f1-score support\n", 1371 | "\n", 1372 | " 0.0 0.99 1.00 0.99 1312\n", 1373 | " 1.0 0.99 0.99 0.99 1604\n", 1374 | " 2.0 0.98 0.99 0.98 1348\n", 1375 | " 3.0 0.99 0.98 0.98 1427\n", 1376 | " 4.0 0.99 0.99 0.99 1362\n", 1377 | " 5.0 0.99 0.97 0.98 1280\n", 1378 | " 6.0 0.99 0.99 0.99 1397\n", 1379 | " 7.0 0.98 0.98 0.98 1461\n", 1380 | " 8.0 0.98 0.99 0.98 1390\n", 1381 | " 9.0 0.98 0.98 0.98 1419\n", 1382 | "\n", 1383 | "avg / total 0.99 0.99 0.99 14000\n", 1384 | "\n", 1385 | "The accuracy is: 0.985357142857\n" 1386 | ] 1387 | } 1388 | ], 1389 | "source": [ 1390 | "dbn_model = DBN([x_train.shape[1], 500, 10], \n", 1391 | " learn_rates = 0.2, \n", 1392 | " learn_rate_decays = 0.9,\n", 1393 | " dropouts = 0.25, # Express the percentage of nodes that will be randomly dropped as a decimal.\n", 1394 | " epochs = 50) \n", 1395 | "\n", 1396 | "dbn_model.fit(x_train, y_train)\n", 1397 | "y_true, y_pred = y_test, dbn_model.predict(x_test) # Get our predictions\n", 1398 | "print(classification_report(y_true, y_pred)) # Classification on each digit\n", 1399 | "print 'The accuracy is:', accuracy_score(y_true, y_pred)" 1400 | ] 1401 | }, 1402 | { 1403 | "cell_type": "markdown", 1404 | "metadata": {}, 1405 | "source": [ 1406 | "Once again, we can see that our accuracy went up to 98.54%, and the dropout regularization prevented some of the overfitting! That's probably about the best we are going to be able to do given the constraints of using a laptop (locally that is: no cloud computing). What else could be done to increase performance? " 1407 | ] 1408 | }, 1409 | { 1410 | "cell_type": "markdown", 1411 | "metadata": {}, 1412 | "source": [ 1413 | "## Ideas for Improvement and Summary" 1414 | ] 1415 | }, 1416 | { 1417 | "cell_type": "markdown", 1418 | "metadata": {}, 1419 | "source": [ 1420 | "In this notebook, we examined a somewhat simplistic application of deep learning to classify images of numbers from the MNIST dataset. We tried several techniques to improve the performance of our classifier, such as augmentation of the training set with our own artificial examples, changing the learning rate, and adding additional hidden units. We also explored dropout to prevent overfitting of the network. In the end, we were able to get decent performance out of our Deep Belief Network, improving our initial accuracy from 97.95% to 98.54% and increasing the f1 score for most of our classification categories. \n", 1421 | "\n", 1422 | "Possible ideas for improvement:\n", 1423 | "\n", 1424 | "- Try utilizing a GPU (graphics processing unit) to train our network instead. Neural networks tend to be much faster when using one (or several!) You could either configure nolearn with cudamat on a remote instance such as AWS (Amazon Web Services) or use a graphics card on a more powerful desktop computer. Adrian Rosebrook's post experiments with that [here](http://www.pyimagesearch.com/2014/10/13/deep-learning-amazon-ec2-gpu-python-nolearn/).\n", 1425 | "\n", 1426 | "- Experiment with different parameters for further fine tuning. 
We could try increasing the number of layers for further complexity or increasing the number of epochs.\n", 1427 | "\n", 1428 | "- Try a larger number of artificial examples (if we have more memory available) to see how that improves the accuracy further. Doing so with enough additional examples should make pushing 99% accuracy a reality.\n", 1429 | "\n" 1430 | ] 1431 | } 1432 | ], 1433 | "metadata": { 1434 | "kernelspec": { 1435 | "display_name": "Python 2", 1436 | "language": "python", 1437 | "name": "python2" 1438 | }, 1439 | "language_info": { 1440 | "codemirror_mode": { 1441 | "name": "ipython", 1442 | "version": 2 1443 | }, 1444 | "file_extension": ".py", 1445 | "mimetype": "text/x-python", 1446 | "name": "python", 1447 | "nbconvert_exporter": "python", 1448 | "pygments_lexer": "ipython2", 1449 | "version": "2.7.6" 1450 | } 1451 | }, 1452 | "nbformat": 4, 1453 | "nbformat_minor": 0 1454 | } 1455 | -------------------------------------------------------------------------------- /Deep_Learning.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/Deep_Learning.jpg -------------------------------------------------------------------------------- /Delays.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/Delays.jpg -------------------------------------------------------------------------------- /MaskTrain.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/MaskTrain.png -------------------------------------------------------------------------------- /Movie_thtr.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/Movie_thtr.jpg -------------------------------------------------------------------------------- /NLP_Movies.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Natural Language Processing in a Kaggle Competition: Movie Reviews" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "\n", 15 | "[Source](http://imgkid.com/movie-theater-wallpaper.shtml)\n", 16 | "\n", 17 | "I decided to try playing around with a Kaggle competition. In this case, I entered the [\"When bag of words meets bags of popcorn\"](http://www.kaggle.com/c/word2vec-nlp-tutorial) contest. This contest isn't for money; it is just a way to learn about various machine learning approaches. " 18 | ] 19 | }, 20 | { 21 | "cell_type": "markdown", 22 | "metadata": {}, 23 | "source": [ 24 | "The competition was trying to showcase Google's [Word2Vec](https://code.google.com/p/word2vec/). This essentially uses deep learning to find features in text that can be used to help in classification tasks. Specifically, in the case of this contest, the goal involves labeling the sentiment of a movie review from IMDB. Ratings were on a 10 point scale, and any review of 7 or greater was considered a positive movie review." 
25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "Originally, I was going to try out Word2Vec and train it on unlabeled reviews, but then one of the competitors [pointed out](http://www.kaggle.com/c/word2vec-nlp-tutorial/forums/t/11261/beat-the-benchmark-with-shallow-learning-0-95-lb) that you could simply use a less complicated classifier to do this and still get a good result. " 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "metadata": {}, 37 | "source": [ 38 | "I decided to take this basic inspiration and try a few various classifiers to see what I could come up with. The highest my score received was 6th place back in December of 2014, but then people started using [ensemble methods](http://sebastianraschka.com/Articles/2014_ensemble_classifier.html) to combine various models together and get a perfect score after a lot of fine tuning with the parameters of the ensemble weights. " 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "Hopefully, this notebook will help you understand some basic NLP (Natural Language Processing) techniques, along with some tips on using [scikit-learn](http://scikit-learn.org/stable/) to make your classification models." 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "## Cleaning the Reviews" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": {}, 58 | "source": [ 59 | "The first thing we need to do is create a simple function that will clean the reviews into a format we can use. We just want the raw text, not all of the other associated HTML, symbols, or other junk. \n", 60 | "\n", 61 | "We will need a couple of very nice libraries for this task: BeautifulSoup for taking care of anything HTML related and re for regular expressions. " 62 | ] 63 | }, 64 | { 65 | "cell_type": "code", 66 | "execution_count": 2, 67 | "metadata": { 68 | "collapsed": false 69 | }, 70 | "outputs": [], 71 | "source": [ 72 | "import re\n", 73 | "from bs4 import BeautifulSoup " 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "Now set up our function. This will clean all of the reviews for us." 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": 3, 86 | "metadata": { 87 | "collapsed": false 88 | }, 89 | "outputs": [], 90 | "source": [ 91 | "def review_to_wordlist(review):\n", 92 | " '''\n", 93 | " Meant for converting each of the IMDB reviews into a list of words.\n", 94 | " '''\n", 95 | " # First remove the HTML.\n", 96 | " review_text = BeautifulSoup(review).get_text()\n", 97 | " \n", 98 | " # Use regular expressions to only include words.\n", 99 | " review_text = re.sub(\"[^a-zA-Z]\",\" \", review_text)\n", 100 | " \n", 101 | " # Convert words to lower case and split them into separate words.\n", 102 | " words = review_text.lower().split()\n", 103 | " \n", 104 | " # Return a list of words\n", 105 | " return(words)" 106 | ] 107 | }, 108 | { 109 | "cell_type": "markdown", 110 | "metadata": {}, 111 | "source": [ 112 | "Great! Now it is time to go ahead and load our data in. For this, pandas is definitely the library of choice. If you want to follow along with a downloaded version of the notebook yourself, make sure you obtain the [data](http://www.kaggle.com/c/word2vec-nlp-tutorial/data) from Kaggle. You will need a Kaggle account in order to access it." 
113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": 4, 118 | "metadata": { 119 | "collapsed": false 120 | }, 121 | "outputs": [], 122 | "source": [ 123 | "import pandas as pd" 124 | ] 125 | }, 126 | { 127 | "cell_type": "code", 128 | "execution_count": 5, 129 | "metadata": { 130 | "collapsed": false 131 | }, 132 | "outputs": [], 133 | "source": [ 134 | "train = pd.read_csv('labeledTrainData.tsv', header=0,\n", 135 | " delimiter=\"\\t\", quoting=3)\n", 136 | "test = pd.read_csv('testData.tsv', header=0, delimiter=\"\\t\",\n", 137 | " quoting=3 )\n", 138 | " \n", 139 | "# Import both the training and test data.\n" 140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "metadata": {}, 145 | "source": [ 146 | "Now it is time to get the labels from the training set for our reviews. That way, we can teach our classifier which reviews are positive vs. negative." 147 | ] 148 | }, 149 | { 150 | "cell_type": "code", 151 | "execution_count": 6, 152 | "metadata": { 153 | "collapsed": false 154 | }, 155 | "outputs": [], 156 | "source": [ 157 | "y_train = train['sentiment']" 158 | ] 159 | }, 160 | { 161 | "cell_type": "markdown", 162 | "metadata": {}, 163 | "source": [ 164 | "Now we need to clean both the train and test data to get it ready for the next part of our program. " 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": 8, 170 | "metadata": { 171 | "collapsed": false 172 | }, 173 | "outputs": [], 174 | "source": [ 175 | "traindata = []\n", 176 | "for i in xrange(0,len(train['review'])):\n", 177 | " traindata.append(\" \".join(review_to_wordlist(train['review'][i])))\n", 178 | "testdata = []\n", 179 | "for i in xrange(0,len(test['review'])):\n", 180 | " testdata.append(\" \".join(review_to_wordlist(test['review'][i])))" 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "metadata": {}, 186 | "source": [ 187 | "## TF-IDF Vectorization" 188 | ] 189 | }, 190 | { 191 | "cell_type": "markdown", 192 | "metadata": {}, 193 | "source": [ 194 | "The next thing we are going to do is make TF-IDF (term frequency-inverse document frequency) vectors of our reviews. In case you are not familiar with what this is doing, essentially we are going to evaluate how often a certain term occurs in a review, but normalize this somewhat by how many reviews a certain term also occurs in. [Wikipedia](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) has an explanation that is sufficient if you want further information. \n", 195 | "\n", 196 | "This can be a great technique for helping to determine which words (or ngrams of words) will make good features to classify a review as positive or negative. " 197 | ] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": {}, 202 | "source": [ 203 | "To do this, we are going to use the TFIDF vectorizer from scikit-learn. Then, decide what settings to use. 
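Before deciding on those settings, here is a tiny self-contained run on three made-up mini reviews (my own toy strings, not the Kaggle data), just to show what the vectorizer produces: each document becomes a weighted vector over the shared vocabulary, and terms that appear in fewer documents receive a higher idf multiplier than common ones.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Three made-up mini 'reviews', purely for illustration.
toy_docs = ['great movie great acting',
            'terrible movie',
            'great fun']

toy_tfv = TfidfVectorizer()
toy_vectors = toy_tfv.fit_transform(toy_docs)  # sparse matrix: documents x terms

print toy_tfv.get_feature_names()  # the learned vocabulary
print toy_vectors.toarray()        # tf-idf weight of each term in each document
```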
The documentation for the TFIDF class is available [here](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html).\n", 204 | "\n", 205 | "In the case of the example code on Kaggle, they decided to remove all stop words, along with ngrams up to a size of two (you could use more but this will require a LOT of memory, so be careful which settings you use!)" 206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": 10, 211 | "metadata": { 212 | "collapsed": false 213 | }, 214 | "outputs": [], 215 | "source": [ 216 | "from sklearn.feature_extraction.text import TfidfVectorizer as TFIV" 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": 11, 222 | "metadata": { 223 | "collapsed": false 224 | }, 225 | "outputs": [], 226 | "source": [ 227 | "tfv = TFIV(min_df=3, max_features=None, \n", 228 | " strip_accents='unicode', analyzer='word',token_pattern=r'\\w{1,}',\n", 229 | " ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1,\n", 230 | " stop_words = 'english')" 231 | ] 232 | }, 233 | { 234 | "cell_type": "markdown", 235 | "metadata": {}, 236 | "source": [ 237 | "Now that we have the vectorization object, we need to run this on all of the data (both training and testing) to make sure it is applied to both datasets. This could take some time on your computer!" 238 | ] 239 | }, 240 | { 241 | "cell_type": "code", 242 | "execution_count": 12, 243 | "metadata": { 244 | "collapsed": false 245 | }, 246 | "outputs": [], 247 | "source": [ 248 | "X_all = traindata + testdata # Combine both to fit the TFIDF vectorization.\n", 249 | "lentrain = len(traindata)\n", 250 | "\n", 251 | "tfv.fit(X_all) # This is the slow part!\n", 252 | "X_all = tfv.transform(X_all)\n", 253 | "\n", 254 | "X = X_all[:lentrain] # Separate back into training and test sets. \n", 255 | "X_test = X_all[lentrain:]\n" 256 | ] 257 | }, 258 | { 259 | "cell_type": "markdown", 260 | "metadata": {}, 261 | "source": [ 262 | "## Making Our Classifiers " 263 | ] 264 | }, 265 | { 266 | "cell_type": "markdown", 267 | "metadata": {}, 268 | "source": [ 269 | "Because we are working with text data, and we just made feature vectors of every word (that isn't a stop word of course) in all of the reviews, we are going to have sparse matrices to deal with that are quite large in size. Just to show you what I mean, let's examine the shape of our training set. " 270 | ] 271 | }, 272 | { 273 | "cell_type": "code", 274 | "execution_count": 14, 275 | "metadata": { 276 | "collapsed": false 277 | }, 278 | "outputs": [ 279 | { 280 | "data": { 281 | "text/plain": [ 282 | "(25000, 309798)" 283 | ] 284 | }, 285 | "execution_count": 14, 286 | "metadata": {}, 287 | "output_type": "execute_result" 288 | } 289 | ], 290 | "source": [ 291 | "X.shape" 292 | ] 293 | }, 294 | { 295 | "cell_type": "markdown", 296 | "metadata": {}, 297 | "source": [ 298 | "That means we have 25,000 training examples (or rows) and 309,798 features (or columns). We need something that is going to be somewhat computationally efficient given how many features we have. Using something like a random forest to classify would be unwieldy (plus random forests can't work with sparse matrices anyway yet in scikit-learn). That means we need something lightweight and fast that scales to many dimensions well. 
Some possible candidates are:\n", 299 | "\n", 300 | "* Naive Bayes\n", 301 | "* Logistic Regression\n", 302 | "* SGD Classifier (utilizes Stochastic Gradient Descent for much faster runtime)\n", 303 | "\n", 304 | "Let's just try all three as submissions to Kaggle and see how they perform. \n", 305 | "\n", 306 | "First up: Logistic Regression (see the scikit-learn documentation [here](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html))." 307 | ] 308 | }, 309 | { 310 | "cell_type": "markdown", 311 | "metadata": {}, 312 | "source": [ 313 | "While in theory L1 regularization should work well because p>>n (many more features than training examples), I actually found through a lot of testing that L2 regularization got better results. You could set up your own trials using scikit-learn's built-in GridSearch class, which makes things a lot easier to try. I found through my testing that using a parameter C of 30 got the best results." 314 | ] 315 | }, 316 | { 317 | "cell_type": "code", 318 | "execution_count": 15, 319 | "metadata": { 320 | "collapsed": false 321 | }, 322 | "outputs": [], 323 | "source": [ 324 | "from sklearn.linear_model import LogisticRegression as LR\n", 325 | "from sklearn.grid_search import GridSearchCV" 326 | ] 327 | }, 328 | { 329 | "cell_type": "code", 330 | "execution_count": 17, 331 | "metadata": { 332 | "collapsed": false 333 | }, 334 | "outputs": [ 335 | { 336 | "data": { 337 | "text/plain": [ 338 | "GridSearchCV(cv=20,\n", 339 | " estimator=LogisticRegression(C=1.0, class_weight=None, dual=True, fit_intercept=True,\n", 340 | " intercept_scaling=1, penalty='L2', random_state=0, tol=0.0001),\n", 341 | " fit_params={}, iid=True, loss_func=None, n_jobs=1,\n", 342 | " param_grid={'C': [30]}, pre_dispatch='2*n_jobs', refit=True,\n", 343 | " score_func=None, scoring='roc_auc', verbose=0)" 344 | ] 345 | }, 346 | "execution_count": 17, 347 | "metadata": {}, 348 | "output_type": "execute_result" 349 | } 350 | ], 351 | "source": [ 352 | "grid_values = {'C':[30]} # Decide which settings you want for the grid search. \n", 353 | "\n", 354 | "model_LR = GridSearchCV(LR(penalty = 'L2', dual = True, random_state = 0), \n", 355 | " grid_values, scoring = 'roc_auc', cv = 20) \n", 356 | "# Try to set the scoring on what the contest is asking for. \n", 357 | "# The contest says scoring is for area under the ROC curve, so use this.\n", 358 | " \n", 359 | "model_LR.fit(X,y_train) # Fit the model." 360 | ] 361 | }, 362 | { 363 | "cell_type": "markdown", 364 | "metadata": {}, 365 | "source": [ 366 | "You can investigate which parameters did the best and what scores they received by looking at the model_LR object." 
367 | ] 368 | }, 369 | { 370 | "cell_type": "code", 371 | "execution_count": 18, 372 | "metadata": { 373 | "collapsed": false 374 | }, 375 | "outputs": [ 376 | { 377 | "data": { 378 | "text/plain": [ 379 | "[mean: 0.96459, std: 0.00489, params: {'C': 30}]" 380 | ] 381 | }, 382 | "execution_count": 18, 383 | "metadata": {}, 384 | "output_type": "execute_result" 385 | } 386 | ], 387 | "source": [ 388 | "model_LR.grid_scores_" 389 | ] 390 | }, 391 | { 392 | "cell_type": "code", 393 | "execution_count": 19, 394 | "metadata": { 395 | "collapsed": false 396 | }, 397 | "outputs": [ 398 | { 399 | "data": { 400 | "text/plain": [ 401 | "LogisticRegression(C=30, class_weight=None, dual=True, fit_intercept=True,\n", 402 | " intercept_scaling=1, penalty='L2', random_state=0, tol=0.0001)" 403 | ] 404 | }, 405 | "execution_count": 19, 406 | "metadata": {}, 407 | "output_type": "execute_result" 408 | } 409 | ], 410 | "source": [ 411 | "model_LR.best_estimator_" 412 | ] 413 | }, 414 | { 415 | "cell_type": "markdown", 416 | "metadata": {}, 417 | "source": [ 418 | "Feel free, if you have an interactive version of the notebook, to play around with various settings inside the grid_values object to optimize your ROC_AUC score. Otherwise, let's move on to the next classifier, Naive Bayes. \n", 419 | "\n", 420 | "Unlike Logistic Regression, Naive Bayes doesn't have a regularization parameter to tune. You just have to choose which \"flavor\" of Naive Bayes to use. \n", 421 | "\n", 422 | "According to the [documentation on Naive Bayes from scikit-learn](http://scikit-learn.org/0.13/modules/naive_bayes.html), Multinomial is our best version to use, since we no longer have just a 1 or 0 for a word feature: it has been normalized by TF-IDF, so our values will be BETWEEN 0 and 1 (most of the time, although having a few TF-IDF scores exceed 1 is technically possible). If we were just looking at word occurrence vectors (with no counting), Bernoulli would have been a better fit since it is based on binary values. \n", 423 | "\n", 424 | "Let's make our Multinomial Naive Bayes object, and train it." 425 | ] 426 | }, 427 | { 428 | "cell_type": "code", 429 | "execution_count": 20, 430 | "metadata": { 431 | "collapsed": false 432 | }, 433 | "outputs": [], 434 | "source": [ 435 | "from sklearn.naive_bayes import MultinomialNB as MNB" 436 | ] 437 | }, 438 | { 439 | "cell_type": "code", 440 | "execution_count": 21, 441 | "metadata": { 442 | "collapsed": false 443 | }, 444 | "outputs": [ 445 | { 446 | "data": { 447 | "text/plain": [ 448 | "MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)" 449 | ] 450 | }, 451 | "execution_count": 21, 452 | "metadata": {}, 453 | "output_type": "execute_result" 454 | } 455 | ], 456 | "source": [ 457 | "model_NB = MNB()\n", 458 | "model_NB.fit(X, y_train)" 459 | ] 460 | }, 461 | { 462 | "cell_type": "markdown", 463 | "metadata": {}, 464 | "source": [ 465 | "Pretty fast, right? This speed comes at a price, however. Naive Bayes assumes all of your features are ENTIRELY independent from each other. In the case of word vectors, that seems like a somewhat reasonable assumption but with the ngrams we included that probably isn't always the case. Because of this, Naive Bayes tends to be less accurate than other classification algorithms, especially if you have a smaller number of training examples. 
\n", 466 | "\n", 467 | "Why don't we see how Naive Bayes does (at least in a 20 fold CV comparison) so we have a rough idea of how well it performs compared to our Logistic Regression classifier?\n", 468 | "\n", 469 | "You could use GridSearch again, but that seems like overkill. There is a simpler method we can import from scikit-learn for this task." 470 | ] 471 | }, 472 | { 473 | "cell_type": "code", 474 | "execution_count": 23, 475 | "metadata": { 476 | "collapsed": false 477 | }, 478 | "outputs": [], 479 | "source": [ 480 | "from sklearn.cross_validation import cross_val_score\n", 481 | "import numpy as np" 482 | ] 483 | }, 484 | { 485 | "cell_type": "code", 486 | "execution_count": 24, 487 | "metadata": { 488 | "collapsed": false 489 | }, 490 | "outputs": [ 491 | { 492 | "name": "stdout", 493 | "output_type": "stream", 494 | "text": [ 495 | "20 Fold CV Score for Multinomial Naive Bayes: 0.949631232\n" 496 | ] 497 | } 498 | ], 499 | "source": [ 500 | "print \"20 Fold CV Score for Multinomial Naive Bayes: \", np.mean(cross_val_score\n", 501 | " (model_NB, X, y_train, cv=20, scoring='roc_auc'))\n", 502 | " # This will give us a 20-fold cross validation score that looks at ROC_AUC so we can compare with Logistic Regression. " 503 | ] 504 | }, 505 | { 506 | "cell_type": "markdown", 507 | "metadata": {}, 508 | "source": [ 509 | "Well, it wasn't quite as good as our well-tuned Logistic Regression classifier, but that is a pretty good score considering how little we had to do!\n", 510 | "\n", 511 | "One last classifier to try is the [SGD classifier](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html), which comes in handy when you need speed on a really large number of training examples/features. \n", 512 | "\n", 513 | "Which machine learning algorithm it ends up using depends on what you set for the loss function. If we chose loss = 'log', it would essentially be identical to our previous logistic regression model. We want to try something different, but we also want a loss option that includes probabilities. We need those probabilities if we are going to be able to calculate the area under a ROC curve. Looking at the documentation, it seems a 'modified_huber' loss would do the trick! This will be a Support Vector Machine that uses a linear kernel. 
\n", 514 | "\n" 515 | ] 516 | }, 517 | { 518 | "cell_type": "code", 519 | "execution_count": 25, 520 | "metadata": { 521 | "collapsed": false 522 | }, 523 | "outputs": [], 524 | "source": [ 525 | "from sklearn.linear_model import SGDClassifier as SGD" 526 | ] 527 | }, 528 | { 529 | "cell_type": "code", 530 | "execution_count": 26, 531 | "metadata": { 532 | "collapsed": false 533 | }, 534 | "outputs": [ 535 | { 536 | "data": { 537 | "text/plain": [ 538 | "GridSearchCV(cv=20,\n", 539 | " estimator=SGDClassifier(alpha=0.0001, class_weight=None, epsilon=0.1, eta0=0.0,\n", 540 | " fit_intercept=True, l1_ratio=0.15, learning_rate='optimal',\n", 541 | " loss='modified_huber', n_iter=5, n_jobs=1, penalty='l2',\n", 542 | " power_t=0.5, random_state=0, shuffle=True, verbose=0,\n", 543 | " warm_start=False),\n", 544 | " fit_params={}, iid=True, loss_func=None, n_jobs=1,\n", 545 | " param_grid={'alpha': [6e-05, 7e-05, 8e-05, 0.0001, 0.0005]},\n", 546 | " pre_dispatch='2*n_jobs', refit=True, score_func=None,\n", 547 | " scoring='roc_auc', verbose=0)" 548 | ] 549 | }, 550 | "execution_count": 26, 551 | "metadata": {}, 552 | "output_type": "execute_result" 553 | } 554 | ], 555 | "source": [ 556 | "sgd_params = {'alpha': [0.00006, 0.00007, 0.00008, 0.0001, 0.0005]} # Regularization parameter\n", 557 | "\n", 558 | "model_SGD = GridSearchCV(SGD(random_state = 0, shuffle = True, loss = 'modified_huber'), \n", 559 | " sgd_params, scoring = 'roc_auc', cv = 20) # Find out which regularization parameter works the best. \n", 560 | " \n", 561 | "model_SGD.fit(X, y_train) # Fit the model." 562 | ] 563 | }, 564 | { 565 | "cell_type": "markdown", 566 | "metadata": {}, 567 | "source": [ 568 | "Again, similar to the Logistic Regression model, we can see which parameter did the best." 569 | ] 570 | }, 571 | { 572 | "cell_type": "code", 573 | "execution_count": 27, 574 | "metadata": { 575 | "collapsed": false 576 | }, 577 | "outputs": [ 578 | { 579 | "data": { 580 | "text/plain": [ 581 | "[mean: 0.96477, std: 0.00484, params: {'alpha': 6e-05},\n", 582 | " mean: 0.96484, std: 0.00481, params: {'alpha': 7e-05},\n", 583 | " mean: 0.96486, std: 0.00480, params: {'alpha': 8e-05},\n", 584 | " mean: 0.96479, std: 0.00480, params: {'alpha': 0.0001},\n", 585 | " mean: 0.95869, std: 0.00484, params: {'alpha': 0.0005}]" 586 | ] 587 | }, 588 | "execution_count": 27, 589 | "metadata": {}, 590 | "output_type": "execute_result" 591 | } 592 | ], 593 | "source": [ 594 | "model_SGD.grid_scores_" 595 | ] 596 | }, 597 | { 598 | "cell_type": "markdown", 599 | "metadata": {}, 600 | "source": [ 601 | "Looks like this beat our previous Logistic Regression model by a very small amount. Now that we have our three models, we can work on submitting our final scores in the proper format. It was found that submitting predicted probabilities of each score instead of the final predicted score worked better for evaluation from the contest participants, so we want to output this instead. \n", 602 | "\n", 603 | "First, do our Logistic Regression submission. \n", 604 | "\n" 605 | ] 606 | }, 607 | { 608 | "cell_type": "code", 609 | "execution_count": 28, 610 | "metadata": { 611 | "collapsed": false 612 | }, 613 | "outputs": [], 614 | "source": [ 615 | "LR_result = model_LR.predict_proba(X_test)[:,1] # We only need the probabilities that the movie review was a 7 or greater. 
\n", 616 | "LR_output = pd.DataFrame(data={\"id\":test[\"id\"], \"sentiment\":LR_result}) # Create our dataframe that will be written.\n", 617 | "LR_output.to_csv('Logistic_Reg_Proj2.csv', index=False, quoting=3) # Get the .csv file we will submit to Kaggle." 618 | ] 619 | }, 620 | { 621 | "cell_type": "markdown", 622 | "metadata": {}, 623 | "source": [ 624 | "Repeat this with the other two." 625 | ] 626 | }, 627 | { 628 | "cell_type": "code", 629 | "execution_count": 29, 630 | "metadata": { 631 | "collapsed": false 632 | }, 633 | "outputs": [], 634 | "source": [ 635 | "# Repeat this for Multinomial Naive Bayes\n", 636 | "\n", 637 | "MNB_result = model_NB.predict_proba(X_test)[:,1]\n", 638 | "MNB_output = pd.DataFrame(data={\"id\":test[\"id\"], \"sentiment\":MNB_result})\n", 639 | "MNB_output.to_csv('MNB_Proj2.csv', index = False, quoting = 3)\n", 640 | "\n", 641 | "# Last, do the Stochastic Gradient Descent model with modified Huber loss.\n", 642 | "\n", 643 | "SGD_result = model_SGD.predict_proba(X_test)[:,1]\n", 644 | "SGD_output = pd.DataFrame(data={\"id\":test[\"id\"], \"sentiment\":SGD_result})\n", 645 | "SGD_output.to_csv('SGD_Proj2.csv', index = False, quoting = 3)" 646 | ] 647 | }, 648 | { 649 | "cell_type": "markdown", 650 | "metadata": {}, 651 | "source": [ 652 | "Submitting the SGD result (using the linear SVM with modified Huber loss), I received a score of 0.95673 on the Kaggle [leaderboard](http://www.kaggle.com/c/word2vec-nlp-tutorial/leaderboard). That was good enough for sixth place back in December of 2014. " 653 | ] 654 | }, 655 | { 656 | "cell_type": "markdown", 657 | "metadata": {}, 658 | "source": [ 659 | "## Ideas for Improvement and Summary" 660 | ] 661 | }, 662 | { 663 | "cell_type": "markdown", 664 | "metadata": {}, 665 | "source": [ 666 | "In this notebook, we examined a text classification problem and cleaned unstructured review data. Next, we created a vector of features using TF-IDF normalization on a Bag of Words. We then trained these features on three different classifiers, some of which were optimized using 20-fold cross-validation, and made a submission to a Kaggle competition.\n", 667 | "\n", 668 | "Possible ideas for improvement:\n", 669 | "\n", 670 | "- Try increasing the number of ngrams to 3 or 4 in the TF-IDF vectorization and see if this makes a difference\n", 671 | "- Blend the models together into an ensemble that uses a majority vote for the classifiers\n", 672 | "- Try utilizing Word2Vec and creating feature vectors from the unlabeled training data. More data usually helps!" 
673 | ] 674 | } 675 | ], 676 | "metadata": { 677 | "kernelspec": { 678 | "display_name": "Python 2", 679 | "language": "python", 680 | "name": "python2" 681 | }, 682 | "language_info": { 683 | "codemirror_mode": { 684 | "name": "ipython", 685 | "version": 2 686 | }, 687 | "file_extension": ".py", 688 | "mimetype": "text/x-python", 689 | "name": "python", 690 | "nbconvert_exporter": "python", 691 | "pygments_lexer": "ipython2", 692 | "version": "2.7.6" 693 | } 694 | }, 695 | "nbformat": 4, 696 | "nbformat_minor": 0 697 | } 698 | -------------------------------------------------------------------------------- /Penn_State.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/Penn_State.jpg -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Notebooks 2 | This is where all of the IPython Notebooks will be kept from the blog 3 | -------------------------------------------------------------------------------- /RecEngine_NB.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# A Gentle Introduction to Recommender Systems with Implicit Feedback" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": { 13 | "collapsed": true 14 | }, 15 | "source": [ 16 | "![](Rec_Engine_Image_Amazon.png)" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "## Part 1: Introduction" 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": {}, 29 | "source": [ 30 | "[Recommender systems](https://en.wikipedia.org/wiki/Recommender_system) have become a very important part of the retail, social networking, and entertainment industries. From providing advice on songs for you to try, suggesting books for you to read, or finding clothes to buy, recommender systems have greatly improved the ability of customers to make choices more easily. \n", 31 | "\n", 32 | "Why is it so important for customers to have support in decision making? A well-cited study (over 2000 citations so far!) by [Iyengar and Lepper](https://faculty.washington.edu/jdb/345/345%20Articles/Iyengar%20%26%20Lepper%20%282000%29.pdf) ran an experiment where they had two stands of jam on two different days. One stand had 24 varieties of jam while the second had only six. The stand with 24 varieties of jam only converted 3% of the customers to a sale, while the stand with only six varieties converted 30% of the customers. This was an increase in sales of nearly ten-fold!" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "Given the number of possible choices available, especially for online shopping, having some extra guidance on these choices can really make a difference. Xavier Amatriain, now at Quora and previously at Netflix, gave an absolutely outstanding talk on recommender systems at Carnegie Mellon in 2014. I have included the talk below if you would like to see it." 
40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "execution_count": 1, 45 | "metadata": { 46 | "collapsed": false 47 | }, 48 | "outputs": [ 49 | { 50 | "data": { 51 | "text/html": [ 52 | "\n", 53 | " \n", 60 | " " 61 | ], 62 | "text/plain": [ 63 | "" 64 | ] 65 | }, 66 | "execution_count": 1, 67 | "metadata": {}, 68 | "output_type": "execute_result" 69 | } 70 | ], 71 | "source": [ 72 | "from IPython.display import YouTubeVideo\n", 73 | "YouTubeVideo('bLhq63ygoU8')" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "Some of the key statistics about recommender systems he notes are the following:\n", 81 | "\n", 82 | "- At Netflix, 2/3 of the movies watched are recommended\n", 83 | "- At Google, news recommendations improved click-through rate (CTR) by 38%\n", 84 | "- For Amazon, 35% of sales come from recommendations" 85 | ] 86 | }, 87 | { 88 | "cell_type": "markdown", 89 | "metadata": {}, 90 | "source": [ 91 | "In addition, at Hulu, incorporating a recommender system improved their CTR by [three times](http://tech.hulu.com/blog/2011/09/19/recommendation-system/) over just recommending the most popular shows back in 2011. When well implemented, recommender systems can give your company a great edge. " 92 | ] 93 | }, 94 | { 95 | "cell_type": "markdown", 96 | "metadata": {}, 97 | "source": [ 98 | "That doesn't mean your company should necessarily build one, however. Valerie Coffman posted [this article](http://www.datacommunitydc.org/blog/2013/05/recommendation-engines-why-you-shouldnt-build-one) back in 2013, explaining that you need a fairly large amount of data on your customers and product purchases in order to have enough information for an effective recommender system. It's not for everyone, but if you have enough data, it's a good idea to consider it.\n", 99 | "\n", 100 | "So, let's assume you do have enough data on your customers and items to go about building one. How would you do it? Well, that depends a lot on several factors, such as:\n", 101 | "\n", 102 | "- The kind of data you have about your users/items\n", 103 | "- Ability to scale\n", 104 | "- Recommendation transparency\n", 105 | "\n", 106 | "I will cover a few of the options available and reveal the basic methodology behind each one." 107 | ] 108 | }, 109 | { 110 | "cell_type": "markdown", 111 | "metadata": {}, 112 | "source": [ 113 | "### Content Based (Pandora)" 114 | ] 115 | }, 116 | { 117 | "cell_type": "markdown", 118 | "metadata": {}, 119 | "source": [ 120 | "In the case of Pandora, the online streaming music company, they decided to engineer features from all of the songs in their catalog as part of the [Music Genome Project](https://en.wikipedia.org/wiki/Music_Genome_Project). Most songs are based on a feature vector of approximately 450 features, which were derived in a very long and arduous process. Once you have this feature set, one technique that works well enough is to treat the recommendation problem as a binary classification problem. This allows one to use more traditional machine learning techniques that output a probability for a certain user to like a specific song based on a training set of their song listening history. Then, simply recommend the songs with the greatest probability of being liked. \n", 121 | "\n", 122 | "Most of the time, however, you aren't going to have features already encoded for all of your products. This would be very difficult, and it took Pandora several years to finish so it probably won't be a great option. 
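To make that binary classification framing concrete, here is a hypothetical sketch using randomly generated stand-in data (the actual Music Genome feature vectors are proprietary, so every number below is fabricated for illustration): each song is a vector of roughly 450 engineered features, the target is whether a given user liked the song, and unheard songs would be ranked by their predicted probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in data: random vectors in place of real engineered song features.
rng = np.random.RandomState(42)
song_features = rng.rand(1000, 450)  # ~450 features per song, as at Pandora
liked = rng.randint(0, 2, 1000)      # 1 = this user liked the song, 0 = did not

clf = LogisticRegression()
clf.fit(song_features, liked)        # train on the user's listening history

new_songs = rng.rand(5, 450)                    # candidate songs
like_prob = clf.predict_proba(new_songs)[:, 1]  # recommend the highest-scoring
```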
" 123 | ] 124 | }, 125 | { 126 | "cell_type": "markdown", 127 | "metadata": {}, 128 | "source": [ 129 | "### Demographic Based (Facebook)" 130 | ] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "metadata": {}, 135 | "source": [ 136 | "If you have a lot of demographic information about your users like Facebook or LinkedIn does, you may be able to recommend based on similar users and their past behavior. Similar to the content based method, you could derive a feature vector for each of your users and generate models that predict probabilities of liking certain items. \n", 137 | "\n", 138 | "Again, this requires a lot of information about your users that you probably don't have in most cases. \n", 139 | "\n", 140 | "So if you need a method that doesn't care about detailed information regarding your items or your users, collaborative filtering is a very powerful method that works with surprising efficacy." 141 | ] 142 | }, 143 | { 144 | "cell_type": "markdown", 145 | "metadata": {}, 146 | "source": [ 147 | "## Collaborative Filtering" 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": {}, 153 | "source": [ 154 | "This is based on the relationship between users and items, with no information about the users or the items required! All you need is a rating of some kind for each user/item interaction that occurred where available. There are two kinds of data available for this type of interaction: explicit and implicit. \n", 155 | "\n", 156 | "- Explicit: A score, such as a rating or a like\n", 157 | "- Implicit: Not as obvious in terms of preference, such as a click, view, or purchase\n", 158 | "\n", 159 | "The most common example discussed is movie ratings, which are given on a numeric scale. We can easily see whether a user enjoyed a movie based on the rating provided. The problem, however, is that most of the time, people don't provide ratings at all (I am totally guilty of this on Netflix!), so the amount of data available is quite scarce. Netflix at least knows whether I watched something, which requires no further input on my part. It may be the case that I watched something but didn't like it afterwards. So, it can be more difficult to infer whether this type of movie should be considered a positive recommendation or not.\n", 160 | "\n", 161 | "Regardless of this disadvantage, implicit feedback is usually the way to go. Hulu, in a [blog post about their recommendation system](http://tech.hulu.com/blog/2011/09/19/recommendation-system/) states:\n", 162 | "\n", 163 | ">As the quantity of implicit data at Hulu far outweighs the amount of explicit feedback, our system should be designed primarily to work with implicit feedback data." 164 | ] 165 | }, 166 | { 167 | "cell_type": "markdown", 168 | "metadata": {}, 169 | "source": [ 170 | "Since more data usually means a better model, implicit feedback is where our efforts should be focused. While there are a variety of ways to tackle collaborative filtering with implicit feedback, I will focus on the method included in Spark's library used for collaborative filtering, alternating least squares (ALS). 
" 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": {}, 176 | "source": [ 177 | "### Alternating Least Squares" 178 | ] 179 | }, 180 | { 181 | "cell_type": "markdown", 182 | "metadata": {}, 183 | "source": [ 184 | "Before we start building our own recommender system on an example problem, I want to explain some of the intuition behind how this method works and why it likely is the only chosen method in Spark's library. We discussed before how collaborative filtering doesn't require any information about the users or items. Well, is there another way we can figure out how the users and the items are related to each other? \n", 185 | "\n", 186 | "It turns out we can if we apply matrix factorization. Often, matrix factorization is applied in the realm of dimensionality reduction, where we are trying to reduce the number of features while still keeping the relevant information. This is the case with [principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) (PCA) and the very similar [singular value decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition) (SVD). \n", 187 | "\n", 188 | "Essentially, can we take a large matrix of user/item interactions and figure out the latent (or hidden) features that relate them to each other in a much smaller matrix of user features and item features? That's exactly what ALS is trying to do through matrix factorization. \n", 189 | "\n", 190 | "As the image below demonstrates, let's assume we have an original ratings matrix $R$ of size $MxN$, where $M$ is the number of users and $N$ is the number of items. This matrix is quite sparse, since most users only interact with a few items each. We can factorize this matrix into two separate smaller matrices: one with dimensions $MxK$ which will be our latent user feature vectors for each user $(U)$ and a second with dimensions $KxN$, which will have our latent item feature vectors for each item $(V)$. Multiplying these two feature matrices together approximates the original matrix, but now we have two matrices that are dense including a number of latent features $K$ for each of our items and users. " 191 | ] 192 | }, 193 | { 194 | "cell_type": "markdown", 195 | "metadata": {}, 196 | "source": [ 197 | "![](ALS_Image_Test.png)" 198 | ] 199 | }, 200 | { 201 | "cell_type": "markdown", 202 | "metadata": {}, 203 | "source": [ 204 | "In order to solve for $U$ and $V$, we could either utilize SVD (which would require inverting a potentially very large matrix and be computationally expensive) to solve the factorization more precisely or apply ALS to approximate it. In the case of ALS, we only need to solve one feature vector at a time, which means it can be run in parallel! (This large advantage is probably why it is the method of choice for Spark). To do this, we can randomly initialize $U$ and solve for $V$. Then we can go back and solve for $U$ using our solution for $V$. Keep iterating back and forth like this until we get a convergence that approximates $R$ as best as we can. \n", 205 | "\n", 206 | "After this has been finished, we can simply take the dot product of $U$ and $V$ to see what the predicted rating would be for a specific user/item interaction, even if there was no prior interaction. This basic methodology was adopted for implicit feedback problems in the paper [Collaborative Filtering for Implicit Feedback Datasets](http://yifanhu.net/PUB/cf.pdf) by Hu, Koren, and Volinsky. 
We will use this paper's method on a real dataset and build our own recommender system. " 207 | ] 208 | }, 209 | { 210 | "cell_type": "markdown", 211 | "metadata": {}, 212 | "source": [ 213 | "## Part 2: Processing the Data" 214 | ] 215 | }, 216 | { 217 | "cell_type": "markdown", 218 | "metadata": {}, 219 | "source": [ 220 | "The data we are using for this example comes from the infamous UCI Machine Learning repository. The dataset is called \"Online Retail\" and is found [here](http://archive.ics.uci.edu/ml/datasets/Online+Retail). As you can see in the description, this dataset contains all purchases made for an online retail company based in the UK during an eight month period. \n", 221 | "\n", 222 | "We need to take all of the transactions for each customer and put these into a format ALS can use. This means we need each unique customer ID in the rows of the matrix, and each unique item ID in the columns of the matrix. The values of the matrix should be the total number of purchases for each item by each customer. \n", 223 | "\n", 224 | "First, let's load some libraries that will help us out with the preprocessing step. Pandas is always helpful!" 225 | ] 226 | }, 227 | { 228 | "cell_type": "code", 229 | "execution_count": 2, 230 | "metadata": { 231 | "collapsed": true 232 | }, 233 | "outputs": [], 234 | "source": [ 235 | "import pandas as pd\n", 236 | "import scipy.sparse as sparse\n", 237 | "import numpy as np\n", 238 | "from scipy.sparse.linalg import spsolve" 239 | ] 240 | }, 241 | { 242 | "cell_type": "markdown", 243 | "metadata": {}, 244 | "source": [ 245 | "The first step is to load the data in. Since the data is saved in an Excel file, we can use Pandas to load it." 246 | ] 247 | }, 248 | { 249 | "cell_type": "code", 250 | "execution_count": 3, 251 | "metadata": { 252 | "collapsed": true 253 | }, 254 | "outputs": [], 255 | "source": [ 256 | "website_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00352/Online%20Retail.xlsx'\n", 257 | "retail_data = pd.read_excel(website_url) # This may take a couple minutes" 258 | ] 259 | }, 260 | { 261 | "cell_type": "markdown", 262 | "metadata": {}, 263 | "source": [ 264 | "Now that the data has been loaded, we can see what is in it. " 265 | ] 266 | }, 267 | { 268 | "cell_type": "code", 269 | "execution_count": 4, 270 | "metadata": { 271 | "collapsed": false 272 | }, 273 | "outputs": [ 274 | { 275 | "data": { 276 | "text/html": [ 277 | "
\n", 278 | "\n", 279 | " \n", 280 | " \n", 281 | " \n", 282 | " \n", 283 | " \n", 284 | " \n", 285 | " \n", 286 | " \n", 287 | " \n", 288 | " \n", 289 | " \n", 290 | " \n", 291 | " \n", 292 | " \n", 293 | " \n", 294 | " \n", 295 | " \n", 296 | " \n", 297 | " \n", 298 | " \n", 299 | " \n", 300 | " \n", 301 | " \n", 302 | " \n", 303 | " \n", 304 | " \n", 305 | " \n", 306 | " \n", 307 | " \n", 308 | " \n", 309 | " \n", 310 | " \n", 311 | " \n", 312 | " \n", 313 | " \n", 314 | " \n", 315 | " \n", 316 | " \n", 317 | " \n", 318 | " \n", 319 | " \n", 320 | " \n", 321 | " \n", 322 | " \n", 323 | " \n", 324 | " \n", 325 | " \n", 326 | " \n", 327 | " \n", 328 | " \n", 329 | " \n", 330 | " \n", 331 | " \n", 332 | " \n", 333 | " \n", 334 | " \n", 335 | " \n", 336 | " \n", 337 | " \n", 338 | " \n", 339 | " \n", 340 | " \n", 341 | " \n", 342 | " \n", 343 | " \n", 344 | " \n", 345 | " \n", 346 | " \n", 347 | " \n", 348 | " \n", 349 | "
InvoiceNoStockCodeDescriptionQuantityInvoiceDateUnitPriceCustomerIDCountry
053636585123AWHITE HANGING HEART T-LIGHT HOLDER62010-12-01 08:26:002.5517850.0United Kingdom
153636571053WHITE METAL LANTERN62010-12-01 08:26:003.3917850.0United Kingdom
253636584406BCREAM CUPID HEARTS COAT HANGER82010-12-01 08:26:002.7517850.0United Kingdom
353636584029GKNITTED UNION FLAG HOT WATER BOTTLE62010-12-01 08:26:003.3917850.0United Kingdom
453636584029ERED WOOLLY HOTTIE WHITE HEART.62010-12-01 08:26:003.3917850.0United Kingdom
\n", 350 | "
" 351 | ], 352 | "text/plain": [ 353 | " InvoiceNo StockCode Description Quantity \\\n", 354 | "0 536365 85123A WHITE HANGING HEART T-LIGHT HOLDER 6 \n", 355 | "1 536365 71053 WHITE METAL LANTERN 6 \n", 356 | "2 536365 84406B CREAM CUPID HEARTS COAT HANGER 8 \n", 357 | "3 536365 84029G KNITTED UNION FLAG HOT WATER BOTTLE 6 \n", 358 | "4 536365 84029E RED WOOLLY HOTTIE WHITE HEART. 6 \n", 359 | "\n", 360 | " InvoiceDate UnitPrice CustomerID Country \n", 361 | "0 2010-12-01 08:26:00 2.55 17850.0 United Kingdom \n", 362 | "1 2010-12-01 08:26:00 3.39 17850.0 United Kingdom \n", 363 | "2 2010-12-01 08:26:00 2.75 17850.0 United Kingdom \n", 364 | "3 2010-12-01 08:26:00 3.39 17850.0 United Kingdom \n", 365 | "4 2010-12-01 08:26:00 3.39 17850.0 United Kingdom " 366 | ] 367 | }, 368 | "execution_count": 4, 369 | "metadata": {}, 370 | "output_type": "execute_result" 371 | } 372 | ], 373 | "source": [ 374 | "retail_data.head()" 375 | ] 376 | }, 377 | { 378 | "cell_type": "markdown", 379 | "metadata": {}, 380 | "source": [ 381 | "The dataset includes the invoice number for different purchases, along with the StockCode (or item ID), an item description, the number purchased, the date of purchase, the price of the items, a customer ID, and the country of origin for the customer. \n", 382 | "\n", 383 | "Let's check to see if there are any missing values in the data. " 384 | ] 385 | }, 386 | { 387 | "cell_type": "code", 388 | "execution_count": 5, 389 | "metadata": { 390 | "collapsed": false 391 | }, 392 | "outputs": [ 393 | { 394 | "name": "stdout", 395 | "output_type": "stream", 396 | "text": [ 397 | "\n", 398 | "RangeIndex: 541909 entries, 0 to 541908\n", 399 | "Data columns (total 8 columns):\n", 400 | "InvoiceNo 541909 non-null object\n", 401 | "StockCode 541909 non-null object\n", 402 | "Description 540455 non-null object\n", 403 | "Quantity 541909 non-null int64\n", 404 | "InvoiceDate 541909 non-null datetime64[ns]\n", 405 | "UnitPrice 541909 non-null float64\n", 406 | "CustomerID 406829 non-null float64\n", 407 | "Country 541909 non-null object\n", 408 | "dtypes: datetime64[ns](1), float64(2), int64(1), object(4)\n", 409 | "memory usage: 33.1+ MB\n" 410 | ] 411 | } 412 | ], 413 | "source": [ 414 | "retail_data.info()" 415 | ] 416 | }, 417 | { 418 | "cell_type": "markdown", 419 | "metadata": {}, 420 | "source": [ 421 | "Most columns have no missing values, but Customer ID is missing in several rows. If the customer ID is missing, we don't know who bought the item. We should drop these rows from our data first. We can use the pd.isnull to test for rows with missing data and only keep the rows that have a customer ID. 
" 422 | ] 423 | }, 424 | { 425 | "cell_type": "code", 426 | "execution_count": 6, 427 | "metadata": { 428 | "collapsed": true 429 | }, 430 | "outputs": [], 431 | "source": [ 432 | "cleaned_retail = retail_data.loc[pd.isnull(retail_data.CustomerID) == False]" 433 | ] 434 | }, 435 | { 436 | "cell_type": "code", 437 | "execution_count": 7, 438 | "metadata": { 439 | "collapsed": false 440 | }, 441 | "outputs": [ 442 | { 443 | "name": "stdout", 444 | "output_type": "stream", 445 | "text": [ 446 | "\n", 447 | "Int64Index: 406829 entries, 0 to 541908\n", 448 | "Data columns (total 8 columns):\n", 449 | "InvoiceNo 406829 non-null object\n", 450 | "StockCode 406829 non-null object\n", 451 | "Description 406829 non-null object\n", 452 | "Quantity 406829 non-null int64\n", 453 | "InvoiceDate 406829 non-null datetime64[ns]\n", 454 | "UnitPrice 406829 non-null float64\n", 455 | "CustomerID 406829 non-null float64\n", 456 | "Country 406829 non-null object\n", 457 | "dtypes: datetime64[ns](1), float64(2), int64(1), object(4)\n", 458 | "memory usage: 27.9+ MB\n" 459 | ] 460 | } 461 | ], 462 | "source": [ 463 | "cleaned_retail.info()" 464 | ] 465 | }, 466 | { 467 | "cell_type": "markdown", 468 | "metadata": {}, 469 | "source": [ 470 | "Much better. Now we have no missing values and all of the purchases can be matched to a specific customer. \n", 471 | "\n", 472 | "Before we make any sort of ratings matrix, it would be nice to have a lookup table that keeps track of each item ID along with a description of that item. Let's make that now. " 473 | ] 474 | }, 475 | { 476 | "cell_type": "code", 477 | "execution_count": 8, 478 | "metadata": { 479 | "collapsed": true 480 | }, 481 | "outputs": [], 482 | "source": [ 483 | "item_lookup = cleaned_retail[['StockCode', 'Description']].drop_duplicates() # Only get unique item/description pairs\n", 484 | "item_lookup['StockCode'] = item_lookup.StockCode.astype(str) # Encode as strings for future lookup ease" 485 | ] 486 | }, 487 | { 488 | "cell_type": "code", 489 | "execution_count": 9, 490 | "metadata": { 491 | "collapsed": false 492 | }, 493 | "outputs": [ 494 | { 495 | "data": { 496 | "text/html": [ 497 | "
\n", 498 | "\n", 499 | " \n", 500 | " \n", 501 | " \n", 502 | " \n", 503 | " \n", 504 | " \n", 505 | " \n", 506 | " \n", 507 | " \n", 508 | " \n", 509 | " \n", 510 | " \n", 511 | " \n", 512 | " \n", 513 | " \n", 514 | " \n", 515 | " \n", 516 | " \n", 517 | " \n", 518 | " \n", 519 | " \n", 520 | " \n", 521 | " \n", 522 | " \n", 523 | " \n", 524 | " \n", 525 | " \n", 526 | " \n", 527 | " \n", 528 | " \n", 529 | " \n", 530 | " \n", 531 | " \n", 532 | " \n", 533 | "
StockCodeDescription
085123AWHITE HANGING HEART T-LIGHT HOLDER
171053WHITE METAL LANTERN
284406BCREAM CUPID HEARTS COAT HANGER
384029GKNITTED UNION FLAG HOT WATER BOTTLE
484029ERED WOOLLY HOTTIE WHITE HEART.
\n", 534 | "
" 535 | ], 536 | "text/plain": [ 537 | " StockCode Description\n", 538 | "0 85123A WHITE HANGING HEART T-LIGHT HOLDER\n", 539 | "1 71053 WHITE METAL LANTERN\n", 540 | "2 84406B CREAM CUPID HEARTS COAT HANGER\n", 541 | "3 84029G KNITTED UNION FLAG HOT WATER BOTTLE\n", 542 | "4 84029E RED WOOLLY HOTTIE WHITE HEART." 543 | ] 544 | }, 545 | "execution_count": 9, 546 | "metadata": {}, 547 | "output_type": "execute_result" 548 | } 549 | ], 550 | "source": [ 551 | "item_lookup.head()" 552 | ] 553 | }, 554 | { 555 | "cell_type": "markdown", 556 | "metadata": {}, 557 | "source": [ 558 | "This can tell us what each item is, such as that StockCode 71053 is a white metal lantern. Now that this has been created, we need to:\n", 559 | "\n", 560 | "- Group purchase quantities together by stock code and item ID\n", 561 | "- Change any sums that equal zero to one (this can happen if items were returned, but we want to indicate that the user actually purchased the item instead of assuming no interaction between the user and the item ever took place)\n", 562 | "- Only include customers with a positive purchase total to eliminate possible errors\n", 563 | "- Set up our sparse ratings matrix\n", 564 | "\n", 565 | "This last step is especially important if you don't want to have unnecessary memory issues! If you think about it, our matrix is going to contain thousands of items and thousands of users with a user/item value required for every possible combination. That is a LARGE matrix, so we can save a lot of memory by keeping the matrix sparse and only saving the locations and values of items that are not zero. \n", 566 | "\n", 567 | "The code below will finish the preprocessing steps necessary for our final ratings sparse matrix:" 568 | ] 569 | }, 570 | { 571 | "cell_type": "code", 572 | "execution_count": 10, 573 | "metadata": { 574 | "collapsed": false 575 | }, 576 | "outputs": [ 577 | { 578 | "name": "stderr", 579 | "output_type": "stream", 580 | "text": [ 581 | "//anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: SettingWithCopyWarning: \n", 582 | "A value is trying to be set on a copy of a slice from a DataFrame.\n", 583 | "Try using .loc[row_indexer,col_indexer] = value instead\n", 584 | "\n", 585 | "See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n", 586 | " if __name__ == '__main__':\n", 587 | "//anaconda/lib/python3.5/site-packages/pandas/core/indexing.py:128: SettingWithCopyWarning: \n", 588 | "A value is trying to be set on a copy of a slice from a DataFrame\n", 589 | "\n", 590 | "See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n", 591 | " self._setitem_with_indexer(indexer, value)\n" 592 | ] 593 | } 594 | ], 595 | "source": [ 596 | "cleaned_retail['CustomerID'] = cleaned_retail.CustomerID.astype(int) # Convert to int for customer ID\n", 597 | "cleaned_retail = cleaned_retail[['StockCode', 'Quantity', 'CustomerID']] # Get rid of unnecessary info\n", 598 | "grouped_cleaned = cleaned_retail.groupby(['CustomerID', 'StockCode']).sum().reset_index() # Group together\n", 599 | "grouped_cleaned.Quantity.loc[grouped_cleaned.Quantity == 0] = 1 # Replace a sum of zero purchases with a one to\n", 600 | "# indicate purchased\n", 601 | "grouped_purchased = grouped_cleaned.query('Quantity > 0') # Only get customers where purchase totals were positive" 602 | ] 603 | }, 604 | { 605 | "cell_type": "markdown", 606 | "metadata": {}, 607 | "source": [ 608 | "If 
we look at our final resulting matrix of grouped purchases, we see the following:" 609 | ] 610 | }, 611 | { 612 | "cell_type": "code", 613 | "execution_count": 11, 614 | "metadata": { 615 | "collapsed": false 616 | }, 617 | "outputs": [ 618 | { 619 | "data": { 620 | "text/html": [ 621 | "
\n", 622 | "\n", 623 | " \n", 624 | " \n", 625 | " \n", 626 | " \n", 627 | " \n", 628 | " \n", 629 | " \n", 630 | " \n", 631 | " \n", 632 | " \n", 633 | " \n", 634 | " \n", 635 | " \n", 636 | " \n", 637 | " \n", 638 | " \n", 639 | " \n", 640 | " \n", 641 | " \n", 642 | " \n", 643 | " \n", 644 | " \n", 645 | " \n", 646 | " \n", 647 | " \n", 648 | " \n", 649 | " \n", 650 | " \n", 651 | " \n", 652 | " \n", 653 | " \n", 654 | " \n", 655 | " \n", 656 | " \n", 657 | " \n", 658 | " \n", 659 | " \n", 660 | " \n", 661 | " \n", 662 | " \n", 663 | "
CustomerIDStockCodeQuantity
012346231661
1123471600824
2123471702136
312347206656
4123472071940
\n", 664 | "
" 665 | ], 666 | "text/plain": [ 667 | " CustomerID StockCode Quantity\n", 668 | "0 12346 23166 1\n", 669 | "1 12347 16008 24\n", 670 | "2 12347 17021 36\n", 671 | "3 12347 20665 6\n", 672 | "4 12347 20719 40" 673 | ] 674 | }, 675 | "execution_count": 11, 676 | "metadata": {}, 677 | "output_type": "execute_result" 678 | } 679 | ], 680 | "source": [ 681 | "grouped_purchased.head()" 682 | ] 683 | }, 684 | { 685 | "cell_type": "markdown", 686 | "metadata": {}, 687 | "source": [ 688 | "Instead of representing an explicit rating, the purchase quantity can represent a \"confidence\" in terms of how strong the interaction was. Items with a larger number of purchases by a customer can carry more weight in our ratings matrix of purchases. \n", 689 | "\n", 690 | "Our last step is to create the sparse ratings matrix of users and items utilizing the code below:" 691 | ] 692 | }, 693 | { 694 | "cell_type": "code", 695 | "execution_count": 12, 696 | "metadata": { 697 | "collapsed": true 698 | }, 699 | "outputs": [], 700 | "source": [ 701 | "customers = list(np.sort(grouped_purchased.CustomerID.unique())) # Get our unique customers\n", 702 | "products = list(grouped_purchased.StockCode.unique()) # Get our unique products that were purchased\n", 703 | "quantity = list(grouped_purchased.Quantity) # All of our purchases\n", 704 | "\n", 705 | "rows = grouped_purchased.CustomerID.astype('category', categories = customers).cat.codes \n", 706 | "# Get the associated row indices\n", 707 | "cols = grouped_purchased.StockCode.astype('category', categories = products).cat.codes \n", 708 | "# Get the associated column indices\n", 709 | "purchases_sparse = sparse.csr_matrix((quantity, (rows, cols)), shape=(len(customers), len(products)))" 710 | ] 711 | }, 712 | { 713 | "cell_type": "markdown", 714 | "metadata": {}, 715 | "source": [ 716 | "Let's check our final matrix object:" 717 | ] 718 | }, 719 | { 720 | "cell_type": "code", 721 | "execution_count": 13, 722 | "metadata": { 723 | "collapsed": false 724 | }, 725 | "outputs": [ 726 | { 727 | "data": { 728 | "text/plain": [ 729 | "<4338x3664 sparse matrix of type ''\n", 730 | "\twith 266723 stored elements in Compressed Sparse Row format>" 731 | ] 732 | }, 733 | "execution_count": 13, 734 | "metadata": {}, 735 | "output_type": "execute_result" 736 | } 737 | ], 738 | "source": [ 739 | "purchases_sparse" 740 | ] 741 | }, 742 | { 743 | "cell_type": "markdown", 744 | "metadata": {}, 745 | "source": [ 746 | "We have 4338 customers with 3664 items. For these user/item interactions, 266723 of these items had a purchase. In terms of sparsity of the matrix, that makes:" 747 | ] 748 | }, 749 | { 750 | "cell_type": "code", 751 | "execution_count": 14, 752 | "metadata": { 753 | "collapsed": false 754 | }, 755 | "outputs": [ 756 | { 757 | "data": { 758 | "text/plain": [ 759 | "98.32190920694744" 760 | ] 761 | }, 762 | "execution_count": 14, 763 | "metadata": {}, 764 | "output_type": "execute_result" 765 | } 766 | ], 767 | "source": [ 768 | "matrix_size = purchases_sparse.shape[0]*purchases_sparse.shape[1] # Number of possible interactions in the matrix\n", 769 | "num_purchases = len(purchases_sparse.nonzero()[0]) # Number of items interacted with\n", 770 | "sparsity = 100*(1 - (num_purchases/matrix_size))\n", 771 | "sparsity" 772 | ] 773 | }, 774 | { 775 | "cell_type": "markdown", 776 | "metadata": {}, 777 | "source": [ 778 | "98.3% of the interaction matrix is sparse. 
For collaborative filtering to work, the maximum sparsity you could get away with would probably be about 99.5% or so. We are well below this, so we should be able to get decent results. "
779 | ]
780 | },
781 | {
782 | "cell_type": "markdown",
783 | "metadata": {},
784 | "source": [
785 | "## Part 3: Creating a Training and Validation Set"
786 | ]
787 | },
788 | {
789 | "cell_type": "markdown",
790 | "metadata": {},
791 | "source": [
792 | "Typically in Machine Learning applications, we need to test whether the model we just trained generalizes to new data it hasn't seen during the training phase. We do this by creating a test set completely separate from the training set. Usually this is fairly simple: just take a random sample of the training example rows in our feature matrix and set them aside from the training set. That normally looks like this:"
793 | ]
794 | },
795 | {
796 | "cell_type": "markdown",
797 | "metadata": {},
798 | "source": [
799 | "![](Traintest_ex.png)"
800 | ]
801 | },
802 | {
803 | "cell_type": "markdown",
804 | "metadata": {},
805 | "source": [
806 | "With collaborative filtering, that's not going to work because you need all of the user/item interactions to find the proper matrix factorization. A better method is to hide a randomly chosen percentage of the user/item interactions from the model during the training phase. Then, during the test phase, check how many of the recommended items the user actually ended up purchasing. Ideally, you would ultimately test your recommendations with some kind of A/B test, or by utilizing data from a time series, where all data prior to a certain point in time is used for training while data after that point is used for testing. \n",
807 | "\n",
808 | "For this example, because the time period is only 8 months and because these are retail products, most items are unlikely to be purchased again within such a short window anyway, so masking randomly chosen interactions makes for a reasonable test. You can see an example here:"
809 | ]
810 | },
811 | {
812 | "cell_type": "markdown",
813 | "metadata": {},
814 | "source": [
815 | "![](MaskTrain.png)"
816 | ]
817 | },
818 | {
819 | "cell_type": "markdown",
820 | "metadata": {},
821 | "source": [
822 | "Our test set is an exact copy of our original data. The training set, however, will mask a random percentage of user/item interactions and act as if the user never purchased the item (making it a sparse entry with a zero). We then check in the test set which items were recommended to the user that they ended up actually purchasing. If the users frequently ended up purchasing the items most recommended to them by the system, we can conclude the system seems to be working. \n",
823 | "\n",
824 | "As an additional check, we can compare our system to simply recommending the most popular items to every user (beating popularity is a bit difficult). This will be our baseline. \n",
825 | "\n",
826 | "This method of testing isn't necessarily the \"correct\" answer, because it depends on how you want to use the recommender system. However, it is a practical way of testing performance that I will use for this example.\n",
827 | "\n",
828 | "Now that we have a plan on how to separate our training and testing sets, let's create a function that can do this for us. We will also import the random library and set a seed so that you will see the same results as I did."
829 | ] 830 | }, 831 | { 832 | "cell_type": "code", 833 | "execution_count": 15, 834 | "metadata": { 835 | "collapsed": true 836 | }, 837 | "outputs": [], 838 | "source": [ 839 | "import random" 840 | ] 841 | }, 842 | { 843 | "cell_type": "code", 844 | "execution_count": 16, 845 | "metadata": { 846 | "collapsed": true 847 | }, 848 | "outputs": [], 849 | "source": [ 850 | "def make_train(ratings, pct_test = 0.2):\n", 851 | " '''\n", 852 | " This function will take in the original user-item matrix and \"mask\" a percentage of the original ratings where a\n", 853 | " user-item interaction has taken place for use as a test set. The test set will contain all of the original ratings, \n", 854 | " while the training set replaces the specified percentage of them with a zero in the original ratings matrix. \n", 855 | " \n", 856 | " parameters: \n", 857 | " \n", 858 | " ratings - the original ratings matrix from which you want to generate a train/test set. Test is just a complete\n", 859 | " copy of the original set. This is in the form of a sparse csr_matrix. \n", 860 | " \n", 861 | " pct_test - The percentage of user-item interactions where an interaction took place that you want to mask in the \n", 862 | " training set for later comparison to the test set, which contains all of the original ratings. \n", 863 | " \n", 864 | " returns:\n", 865 | " \n", 866 | " training_set - The altered version of the original data with a certain percentage of the user-item pairs \n", 867 | " that originally had interaction set back to zero.\n", 868 | " \n", 869 | " test_set - A copy of the original ratings matrix, unaltered, so it can be used to see how the rank order \n", 870 | " compares with the actual interactions.\n", 871 | " \n", 872 | " user_inds - From the randomly selected user-item indices, which user rows were altered in the training data.\n", 873 | " This will be necessary later when evaluating the performance via AUC.\n", 874 | " '''\n", 875 | " test_set = ratings.copy() # Make a copy of the original set to be the test set. \n", 876 | " test_set[test_set != 0] = 1 # Store the test set as a binary preference matrix\n", 877 | " training_set = ratings.copy() # Make a copy of the original data we can alter as our training set. 
\n", 878 | " nonzero_inds = training_set.nonzero() # Find the indices in the ratings data where an interaction exists\n", 879 | " nonzero_pairs = list(zip(nonzero_inds[0], nonzero_inds[1])) # Zip these pairs together of user,item index into list\n", 880 | " random.seed(0) # Set the random seed to zero for reproducibility\n", 881 | " num_samples = int(np.ceil(pct_test*len(nonzero_pairs))) # Round the number of samples needed to the nearest integer\n", 882 | " samples = random.sample(nonzero_pairs, num_samples) # Sample a random number of user-item pairs without replacement\n", 883 | " user_inds = [index[0] for index in samples] # Get the user row indices\n", 884 | " item_inds = [index[1] for index in samples] # Get the item column indices\n", 885 | " training_set[user_inds, item_inds] = 0 # Assign all of the randomly chosen user-item pairs to zero\n", 886 | " training_set.eliminate_zeros() # Get rid of zeros in sparse array storage after update to save space\n", 887 | " return training_set, test_set, list(set(user_inds)) # Output the unique list of user rows that were altered " 888 | ] 889 | }, 890 | { 891 | "cell_type": "markdown", 892 | "metadata": {}, 893 | "source": [ 894 | "This will return our training set, a test set that has been binarized to 0/1 for purchased/not purchased, and a list of which users had at least one item masked. We will test the performance of the recommender system on these users only. I am masking 20% of the user/item interactions for this example. " 895 | ] 896 | }, 897 | { 898 | "cell_type": "code", 899 | "execution_count": 17, 900 | "metadata": { 901 | "collapsed": true 902 | }, 903 | "outputs": [], 904 | "source": [ 905 | "product_train, product_test, product_users_altered = make_train(purchases_sparse, pct_test = 0.2)" 906 | ] 907 | }, 908 | { 909 | "cell_type": "markdown", 910 | "metadata": {}, 911 | "source": [ 912 | "Now that we have our train/test split, it is time to implement the alternating least squares algorithm from the Hu, Koren, and Volinsky paper." 913 | ] 914 | }, 915 | { 916 | "cell_type": "markdown", 917 | "metadata": {}, 918 | "source": [ 919 | "## Part 4: Implementing ALS for Implicit Feedback" 920 | ] 921 | }, 922 | { 923 | "cell_type": "markdown", 924 | "metadata": { 925 | "collapsed": true 926 | }, 927 | "source": [ 928 | "Now that we have our training and test sets finished, we can move on to implementing the algorithm. If you look at the paper previously linked above \n", 929 | "\n", 930 | "- [Hu, Koren, and Volinsky](http://yifanhu.net/PUB/cf.pdf)\n", 931 | "\n", 932 | "you can see the key equations will we need to implement into the algorithm. First, we have our ratings matrix which is sparse (represented by the product_train sparse matrix object). We need to turn this into a confidence matrix (from page 4): " 933 | ] 934 | }, 935 | { 936 | "cell_type": "markdown", 937 | "metadata": {}, 938 | "source": [ 939 | "\\begin{equation} \n", 940 | "C_{ui} = 1 + \\alpha r_{ui}\n", 941 | "\\end{equation}" 942 | ] 943 | }, 944 | { 945 | "cell_type": "markdown", 946 | "metadata": {}, 947 | "source": [ 948 | "Where $C_{ui}$ is the confidence matrix for our users $u$ and our items $i$. The $\\alpha$ term represents a linear scaling of the rating preferences (in our case number of purchases) and the $r_{ui}$ term is our original matrix of purchases. The paper suggests 40 as a good starting point. 
\n", 949 | "\n", 950 | "After taking the derivative of equation 3 in the paper, we can minimize the cost function for our users $U$:" 951 | ] 952 | }, 953 | { 954 | "cell_type": "markdown", 955 | "metadata": { 956 | "collapsed": true 957 | }, 958 | "source": [ 959 | "\\begin{equation} \n", 960 | "x_{u} = (Y^{T}C^{u}Y + \\lambda I)^{-1}Y^{T}C^{u}p(u)\n", 961 | "\\end{equation}" 962 | ] 963 | }, 964 | { 965 | "cell_type": "markdown", 966 | "metadata": {}, 967 | "source": [ 968 | "The authors note you can speed up this computation through some linear algebra that changes this equation to:" 969 | ] 970 | }, 971 | { 972 | "cell_type": "markdown", 973 | "metadata": {}, 974 | "source": [ 975 | "\\begin{equation} \n", 976 | "x_{u} = (Y^{T}Y + Y^{T}(C^{u}-I)Y + \\lambda I)^{-1}Y^{T}C^{u}p(u)\n", 977 | "\\end{equation}" 978 | ] 979 | }, 980 | { 981 | "cell_type": "markdown", 982 | "metadata": {}, 983 | "source": [ 984 | "Notice that we can now precompute the $Y^{T}Y$ portion without having to iterate through each user $u$. We can derive a similar equation for our items:" 985 | ] 986 | }, 987 | { 988 | "cell_type": "markdown", 989 | "metadata": {}, 990 | "source": [ 991 | "\\begin{equation} \n", 992 | "y_{i} = (X^{T}X + X^{T}(C^{i}-I)X + \\lambda I)^{-1}X^{T}C^{i}p(i)\n", 993 | "\\end{equation}" 994 | ] 995 | }, 996 | { 997 | "cell_type": "markdown", 998 | "metadata": {}, 999 | "source": [ 1000 | "These will be the two equations we will iterate back and forth between until they converge. We also have a regularization term $\\lambda$ that can help prevent overfitting during the training stage as well, along with our binarized preference matrix $p$ which is just 1 where there was a purchase (or interaction) and zero where there was not. \n", 1001 | "\n", 1002 | "Now that the math part is out of the way, we can turn this into code! Shoutout to [Chris Johnson's implicit-mf](https://github.com/MrChrisJohnson/implicit-mf/blob/master/mf.py) code that was a helpful guide for this. I have altered his to make things easier to understand. " 1003 | ] 1004 | }, 1005 | { 1006 | "cell_type": "code", 1007 | "execution_count": 18, 1008 | "metadata": { 1009 | "collapsed": true 1010 | }, 1011 | "outputs": [], 1012 | "source": [ 1013 | "def implicit_weighted_ALS(training_set, lambda_val = 0.1, alpha = 40, iterations = 10, rank_size = 20, seed = 0):\n", 1014 | " '''\n", 1015 | " Implicit weighted ALS taken from Hu, Koren, and Volinsky 2008. Designed for alternating least squares and implicit\n", 1016 | " feedback based collaborative filtering. \n", 1017 | " \n", 1018 | " parameters:\n", 1019 | " \n", 1020 | " training_set - Our matrix of ratings with shape m x n, where m is the number of users and n is the number of items.\n", 1021 | " Should be a sparse csr matrix to save space. \n", 1022 | " \n", 1023 | " lambda_val - Used for regularization during alternating least squares. Increasing this value may increase bias\n", 1024 | " but decrease variance. Default is 0.1. \n", 1025 | " \n", 1026 | " alpha - The parameter associated with the confidence matrix discussed in the paper, where Cui = 1 + alpha*Rui. \n", 1027 | " The paper found a default of 40 most effective. Decreasing this will decrease the variability in confidence between\n", 1028 | " various ratings.\n", 1029 | " \n", 1030 | " iterations - The number of times to alternate between both user feature vector and item feature vector in\n", 1031 | " alternating least squares. More iterations will allow better convergence at the cost of increased computation. 
\n", 1032 | " The authors found 10 iterations was sufficient, but more may be required to converge. \n", 1033 | " \n", 1034 | " rank_size - The number of latent features in the user/item feature vectors. The paper recommends varying this \n", 1035 | " between 20-200. Increasing the number of features may overfit but could reduce bias. \n", 1036 | " \n", 1037 | " seed - Set the seed for reproducible results\n", 1038 | " \n", 1039 | " returns:\n", 1040 | " \n", 1041 | " The feature vectors for users and items. The dot product of these feature vectors should give you the expected \n", 1042 | " \"rating\" at each point in your original matrix. \n", 1043 | " '''\n", 1044 | " \n", 1045 | " # first set up our confidence matrix\n", 1046 | " \n", 1047 | " conf = (alpha*training_set) # To allow the matrix to stay sparse, I will add one later when each row is taken \n", 1048 | " # and converted to dense. \n", 1049 | " num_user = conf.shape[0]\n", 1050 | " num_item = conf.shape[1] # Get the size of our original ratings matrix, m x n\n", 1051 | " \n", 1052 | " # initialize our X/Y feature vectors randomly with a set seed\n", 1053 | " rstate = np.random.RandomState(seed)\n", 1054 | " \n", 1055 | " X = sparse.csr_matrix(rstate.normal(size = (num_user, rank_size))) # Random numbers in a m x rank shape\n", 1056 | " Y = sparse.csr_matrix(rstate.normal(size = (num_item, rank_size))) # Normally this would be rank x n but we can \n", 1057 | " # transpose at the end. Makes calculation more simple.\n", 1058 | " X_eye = sparse.eye(num_user)\n", 1059 | " Y_eye = sparse.eye(num_item)\n", 1060 | " lambda_eye = lambda_val * sparse.eye(rank_size) # Our regularization term lambda*I. \n", 1061 | " \n", 1062 | " # We can compute this before iteration starts. \n", 1063 | " \n", 1064 | " # Begin iterations\n", 1065 | " \n", 1066 | " for iter_step in range(iterations): # Iterate back and forth between solving X given fixed Y and vice versa\n", 1067 | " # Compute yTy and xTx at beginning of each iteration to save computing time\n", 1068 | " yTy = Y.T.dot(Y)\n", 1069 | " xTx = X.T.dot(X)\n", 1070 | " # Being iteration to solve for X based on fixed Y\n", 1071 | " for u in range(num_user):\n", 1072 | " conf_samp = conf[u,:].toarray() # Grab user row from confidence matrix and convert to dense\n", 1073 | " pref = conf_samp.copy() \n", 1074 | " pref[pref != 0] = 1 # Create binarized preference vector \n", 1075 | " CuI = sparse.diags(conf_samp, [0]) # Get Cu - I term, which is just CuI since we never added 1\n", 1076 | " yTCuIY = Y.T.dot(CuI).dot(Y) # This is the yT(Cu-I)Y term \n", 1077 | " yTCupu = Y.T.dot(CuI + Y_eye).dot(pref.T) # This is the yTCuPu term, where we add the eye back in\n", 1078 | " # Cu - I + I = Cu\n", 1079 | " X[u] = spsolve(yTy + yTCuIY + lambda_eye, yTCupu) \n", 1080 | " # Solve for Xu = ((yTy + yT(Cu-I)Y + lambda*I)^-1)yTCuPu, equation 4 from the paper \n", 1081 | " # Begin iteration to solve for Y based on fixed X \n", 1082 | " for i in range(num_item):\n", 1083 | " conf_samp = conf[:,i].T.toarray() # transpose to get it in row format and convert to dense\n", 1084 | " pref = conf_samp.copy()\n", 1085 | " pref[pref != 0] = 1 # Create binarized preference vector\n", 1086 | " CiI = sparse.diags(conf_samp, [0]) # Get Ci - I term, which is just CiI since we never added 1\n", 1087 | " xTCiIX = X.T.dot(CiI).dot(X) # This is the xT(Cu-I)X term\n", 1088 | " xTCiPi = X.T.dot(CiI + X_eye).dot(pref.T) # This is the xTCiPi term\n", 1089 | " Y[i] = spsolve(xTx + xTCiIX + lambda_eye, xTCiPi)\n", 1090 | " # Solve for Yi = 
((xTx + xT(Ci-I)X + lambda*I)^-1)xTCiPi, equation 5 from the paper\n",
1091 | "    # End iterations\n",
1092 | "    return X, Y.T # Transpose at the end to make up for not being transposed at the beginning. \n",
1093 | "    # Y needs to be rank x n. Keep these as separate matrices for scale reasons. "
1094 | ]
1095 | },
1096 | {
1097 | "cell_type": "markdown",
1098 | "metadata": {},
1099 | "source": [
1100 | "Hopefully the comments are enough to see how the code was structured. You want to keep the matrices sparse where possible to avoid memory issues! Let's try just a single iteration of the code to see how it works (it's pretty slow right now!). I will choose 20 latent factors for my rank size, along with an alpha of 15 and a regularization of 0.1 (which I found works best in testing). This takes about 90 seconds to run on my MacBook Pro. "
1101 | ]
1102 | },
1103 | {
1104 | "cell_type": "code",
1105 | "execution_count": 19,
1106 | "metadata": {
1107 | "collapsed": false
1108 | },
1109 | "outputs": [],
1110 | "source": [
1111 | "user_vecs, item_vecs = implicit_weighted_ALS(product_train, lambda_val = 0.1, alpha = 15, iterations = 1,\n",
1112 | "                                            rank_size = 20)"
1113 | ]
1114 | },
1115 | {
1116 | "cell_type": "markdown",
1117 | "metadata": {},
1118 | "source": [
1119 | "We can investigate ratings for a particular user by taking the dot product between the user and item vectors ($U$ and $V$). Let's look at our first user. "
1120 | ]
1121 | },
1122 | {
1123 | "cell_type": "code",
1124 | "execution_count": 20,
1125 | "metadata": {
1126 | "collapsed": false
1127 | },
1128 | "outputs": [
1129 | {
1130 | "data": {
1131 | "text/plain": [
1132 | "array([ 0.00644811, -0.0014369 ,  0.00494281,  0.00027502,  0.01275582])"
1133 | ]
1134 | },
1135 | "execution_count": 20,
1136 | "metadata": {},
1137 | "output_type": "execute_result"
1138 | }
1139 | ],
1140 | "source": [
1141 | "user_vecs[0,:].dot(item_vecs).toarray()[0,:5]"
1142 | ]
1143 | },
1144 | {
1145 | "cell_type": "markdown",
1146 | "metadata": {},
1147 | "source": [
1148 | "This is a sample of the first five items out of the 3664 in our stock. Among these first five, the fifth item has the strongest recommendation for our first user. However, notice we only did one iteration because our algorithm was so slow! According to the authors, you should iterate at least ten times so that $U$ and $V$ converge. We could wait 15 minutes to let this run, or . . . use someone else's code that is much faster! "
1149 | ]
1150 | },
1151 | {
1152 | "cell_type": "markdown",
1153 | "metadata": {},
1154 | "source": [
1155 | "## Speeding Up ALS"
1156 | ]
1157 | },
1158 | {
1159 | "cell_type": "markdown",
1160 | "metadata": {},
1161 | "source": [
1162 | "This code in its raw form is just too slow. We have to do a lot of looping, and we haven't taken advantage of the fact that our algorithm is embarrassingly parallel: each user vector (and each item vector) can be solved independently within an iteration. Fortunately, as I was still finishing this up, Ben Frederickson at Flipboard had perfect timing and came out with a version of ALS for Python utilizing Cython and parallelizing the code among threads. You can read his blog post about using it for finding similar music artists using matrix factorization [here](http://www.benfrederickson.com/matrix-factorization/) and his implicit library [here](https://github.com/benfred/implicit). He claims it is even faster than Quora's C++ [QMF](https://github.com/quora/qmf), but I haven't tried theirs. 
All I can tell you is that it is over 1000 times faster than this bare bones pure Python version when I tested it. Install this library before you continue and follow the instructions. If you have conda installed, just do pip install implicit and you should be good to go. " 1163 | ] 1164 | }, 1165 | { 1166 | "cell_type": "markdown", 1167 | "metadata": { 1168 | "collapsed": true 1169 | }, 1170 | "source": [ 1171 | "First, import his library so we can utilize it for our matrix factorization. " 1172 | ] 1173 | }, 1174 | { 1175 | "cell_type": "code", 1176 | "execution_count": 21, 1177 | "metadata": { 1178 | "collapsed": true 1179 | }, 1180 | "outputs": [], 1181 | "source": [ 1182 | "import implicit" 1183 | ] 1184 | }, 1185 | { 1186 | "cell_type": "markdown", 1187 | "metadata": {}, 1188 | "source": [ 1189 | "His version of the code doesn't have a parameter for the weighting $\\alpha$ and assumes you are doing this to the ratings matrix before using it as an input. I did some testing and found the following settings to work the best. Also make sure that we set the type of our matrix to double for the ALS function to run properly. " 1190 | ] 1191 | }, 1192 | { 1193 | "cell_type": "code", 1194 | "execution_count": 22, 1195 | "metadata": { 1196 | "collapsed": true 1197 | }, 1198 | "outputs": [], 1199 | "source": [ 1200 | "alpha = 15\n", 1201 | "user_vecs, item_vecs = implicit.alternating_least_squares((product_train*alpha).astype('double'), \n", 1202 | " factors=20, \n", 1203 | " regularization = 0.1, \n", 1204 | " iterations = 50)" 1205 | ] 1206 | }, 1207 | { 1208 | "cell_type": "markdown", 1209 | "metadata": {}, 1210 | "source": [ 1211 | "Much faster, right? We now have recommendations for all of our users and items. However, how do we know if these are any good? " 1212 | ] 1213 | }, 1214 | { 1215 | "cell_type": "markdown", 1216 | "metadata": {}, 1217 | "source": [ 1218 | "## Evaluating the Recommender System" 1219 | ] 1220 | }, 1221 | { 1222 | "cell_type": "markdown", 1223 | "metadata": {}, 1224 | "source": [ 1225 | "Remember that our training set had 20% of the purchases masked? This will allow us to evaluate the performance of our recommender system. Essentially, we need to see if the order of recommendations given for each user matches the items they ended up purchasing. A commonly used metric for this kind of problem is the area under the [Receiver Operating Characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) (or ROC) curve. A greater area under the curve means we are recommending items that end up being purchased near the top of the list of recommended items. Usually this metric is used in more typical binary classification problems to identify how well a model can predict a positive example vs. a negative one. It will also work well for our purposes of ranking recommendations.\n", 1226 | "\n", 1227 | "In order to do that, we need to write a function that can calculate a mean area under the curve (AUC) for any user that had at least one masked item. As a benchmark, we will also calculate what the mean AUC would have been if we had simply recommended the most popular items. Popularity tends to be hard to beat in most recommender system problems, so it makes a good comparison. \n", 1228 | "\n", 1229 | "First, let's make a simple function that can calculate our AUC. Scikit-learn has one we can alter a bit. 
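To build some intuition for what this metric rewards, here is a toy example with made-up numbers: suppose a user ended up purchasing two of five held-out items, and the model happened to score those two items highest.

```python
from sklearn.metrics import roc_auc_score

actual = [0, 0, 1, 0, 1]              # binarized purchases among five held-out items
scores = [0.1, 0.2, 0.7, 0.3, 0.9]    # hypothetical model scores for the same items
print(roc_auc_score(actual, scores))  # 1.0: every purchased item outranks every non-purchase
```

Any ordering that pushes a purchased item below a non-purchased one drags the AUC toward 0.5, the score of a random ranking. 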
" 1230 | ] 1231 | }, 1232 | { 1233 | "cell_type": "code", 1234 | "execution_count": 23, 1235 | "metadata": { 1236 | "collapsed": true 1237 | }, 1238 | "outputs": [], 1239 | "source": [ 1240 | "from sklearn import metrics" 1241 | ] 1242 | }, 1243 | { 1244 | "cell_type": "code", 1245 | "execution_count": 24, 1246 | "metadata": { 1247 | "collapsed": true 1248 | }, 1249 | "outputs": [], 1250 | "source": [ 1251 | "def auc_score(predictions, test):\n", 1252 | " '''\n", 1253 | " This simple function will output the area under the curve using sklearn's metrics. \n", 1254 | " \n", 1255 | " parameters:\n", 1256 | " \n", 1257 | " - predictions: your prediction output\n", 1258 | " \n", 1259 | " - test: the actual target result you are comparing to\n", 1260 | " \n", 1261 | " returns:\n", 1262 | " \n", 1263 | " - AUC (area under the Receiver Operating Characterisic curve)\n", 1264 | " '''\n", 1265 | " fpr, tpr, thresholds = metrics.roc_curve(test, predictions)\n", 1266 | " return metrics.auc(fpr, tpr) " 1267 | ] 1268 | }, 1269 | { 1270 | "cell_type": "markdown", 1271 | "metadata": {}, 1272 | "source": [ 1273 | "Now, utilize this helper function inside of a second function that will calculate the AUC for each user in our training set that had at least one item masked. It should also calculate AUC for the most popular items for our users to compare." 1274 | ] 1275 | }, 1276 | { 1277 | "cell_type": "code", 1278 | "execution_count": 25, 1279 | "metadata": { 1280 | "collapsed": true 1281 | }, 1282 | "outputs": [], 1283 | "source": [ 1284 | "def calc_mean_auc(training_set, altered_users, predictions, test_set):\n", 1285 | " '''\n", 1286 | " This function will calculate the mean AUC by user for any user that had their user-item matrix altered. \n", 1287 | " \n", 1288 | " parameters:\n", 1289 | " \n", 1290 | " training_set - The training set resulting from make_train, where a certain percentage of the original\n", 1291 | " user/item interactions are reset to zero to hide them from the model \n", 1292 | " \n", 1293 | " predictions - The matrix of your predicted ratings for each user/item pair as output from the implicit MF.\n", 1294 | " These should be stored in a list, with user vectors as item zero and item vectors as item one. 
\n", 1295 | " \n", 1296 | " altered_users - The indices of the users where at least one user/item pair was altered from make_train function\n", 1297 | " \n", 1298 | " test_set - The test set constucted earlier from make_train function\n", 1299 | " \n", 1300 | " \n", 1301 | " \n", 1302 | " returns:\n", 1303 | " \n", 1304 | " The mean AUC (area under the Receiver Operator Characteristic curve) of the test set only on user-item interactions\n", 1305 | " there were originally zero to test ranking ability in addition to the most popular items as a benchmark.\n", 1306 | " '''\n", 1307 | " \n", 1308 | " \n", 1309 | " store_auc = [] # An empty list to store the AUC for each user that had an item removed from the training set\n", 1310 | " popularity_auc = [] # To store popular AUC scores\n", 1311 | " pop_items = np.array(test_set.sum(axis = 0)).reshape(-1) # Get sum of item iteractions to find most popular\n", 1312 | " item_vecs = predictions[1]\n", 1313 | " for user in altered_users: # Iterate through each user that had an item altered\n", 1314 | " training_row = training_set[user,:].toarray().reshape(-1) # Get the training set row\n", 1315 | " zero_inds = np.where(training_row == 0) # Find where the interaction had not yet occurred\n", 1316 | " # Get the predicted values based on our user/item vectors\n", 1317 | " user_vec = predictions[0][user,:]\n", 1318 | " pred = user_vec.dot(item_vecs).toarray()[0,zero_inds].reshape(-1)\n", 1319 | " # Get only the items that were originally zero\n", 1320 | " # Select all ratings from the MF prediction for this user that originally had no iteraction\n", 1321 | " actual = test_set[user,:].toarray()[0,zero_inds].reshape(-1) \n", 1322 | " # Select the binarized yes/no interaction pairs from the original full data\n", 1323 | " # that align with the same pairs in training \n", 1324 | " pop = pop_items[zero_inds] # Get the item popularity for our chosen items\n", 1325 | " store_auc.append(auc_score(pred, actual)) # Calculate AUC for the given user and store\n", 1326 | " popularity_auc.append(auc_score(pop, actual)) # Calculate AUC using most popular and score\n", 1327 | " # End users iteration\n", 1328 | " \n", 1329 | " return float('%.3f'%np.mean(store_auc)), float('%.3f'%np.mean(popularity_auc)) \n", 1330 | " # Return the mean AUC rounded to three decimal places for both test and popularity benchmark" 1331 | ] 1332 | }, 1333 | { 1334 | "cell_type": "markdown", 1335 | "metadata": {}, 1336 | "source": [ 1337 | "We can now use this function to see how our recommender system is doing. To use this function, we will need to transform our output from the ALS function to csr_matrix format and transpose the item vectors. The original pure Python version output the user and item vectors into the correct format already. " 1338 | ] 1339 | }, 1340 | { 1341 | "cell_type": "code", 1342 | "execution_count": 26, 1343 | "metadata": { 1344 | "collapsed": false 1345 | }, 1346 | "outputs": [ 1347 | { 1348 | "data": { 1349 | "text/plain": [ 1350 | "(0.87, 0.814)" 1351 | ] 1352 | }, 1353 | "execution_count": 26, 1354 | "metadata": {}, 1355 | "output_type": "execute_result" 1356 | } 1357 | ], 1358 | "source": [ 1359 | "calc_mean_auc(product_train, product_users_altered, \n", 1360 | " [sparse.csr_matrix(user_vecs), sparse.csr_matrix(item_vecs.T)], product_test)\n", 1361 | "# AUC for our recommender system" 1362 | ] 1363 | }, 1364 | { 1365 | "cell_type": "markdown", 1366 | "metadata": {}, 1367 | "source": [ 1368 | "We can see that our recommender system beat popularity. 
Our system had a mean AUC of 0.87, while the popular item benchmark had a lower AUC of 0.814. You can go back and tune the hyperparameters if you wish to see if you can get a higher AUC score. Ideally, you would have separate train, cross-validation, and test sets so that you aren't overfitting while tuning the hyperparameters, but this setup is adequate to demonstrate how to check that the system is working. "
1369 | ]
1370 | },
1371 | {
1372 | "cell_type": "markdown",
1373 | "metadata": {},
1374 | "source": [
1375 | "## A Recommendation Example"
1376 | ]
1377 | },
1378 | {
1379 | "cell_type": "markdown",
1380 | "metadata": {},
1381 | "source": [
1382 | "We now have our recommender system trained and have shown it beats the popularity benchmark. An AUC of 0.87 means the system ranks items the user in fact purchased in the test set well above items the user never ended up purchasing. To see an example of how it works, let's examine the recommendations given to a particular user and decide subjectively if they make any sense. \n",
1383 | "\n",
1384 | "First, however, we need a way of retrieving the items already purchased by a user in the training set. To start, we will create arrays of the customers and items we made earlier. "
1385 | ]
1386 | },
1387 | {
1388 | "cell_type": "code",
1389 | "execution_count": 27,
1390 | "metadata": {
1391 | "collapsed": true
1392 | },
1393 | "outputs": [],
1394 | "source": [
1395 | "customers_arr = np.array(customers) # Array of customer IDs from the ratings matrix\n",
1396 | "products_arr = np.array(products) # Array of product IDs from the ratings matrix"
1397 | ]
1398 | },
1399 | {
1400 | "cell_type": "markdown",
1401 | "metadata": {},
1402 | "source": [
1403 | "Now, we can create a function that will return a list of the item descriptions from our earlier created item lookup table. "
1404 | ]
1405 | },
1406 | {
1407 | "cell_type": "code",
1408 | "execution_count": 28,
1409 | "metadata": {
1410 | "collapsed": true
1411 | },
1412 | "outputs": [],
1413 | "source": [
1414 | "def get_items_purchased(customer_id, mf_train, customers_list, products_list, item_lookup):\n",
1415 | "    '''\n",
1416 | "    This just tells us which items have already been purchased by a specific user in the training set. 
\n", 1417 | " \n", 1418 | " parameters: \n", 1419 | " \n", 1420 | " customer_id - Input the customer's id number that you want to see prior purchases of at least once\n", 1421 | " \n", 1422 | " mf_train - The initial ratings training set used (without weights applied)\n", 1423 | " \n", 1424 | " customers_list - The array of customers used in the ratings matrix\n", 1425 | " \n", 1426 | " products_list - The array of products used in the ratings matrix\n", 1427 | " \n", 1428 | " item_lookup - A simple pandas dataframe of the unique product ID/product descriptions available\n", 1429 | " \n", 1430 | " returns:\n", 1431 | " \n", 1432 | " A list of item IDs and item descriptions for a particular customer that were already purchased in the training set\n", 1433 | " '''\n", 1434 | " cust_ind = np.where(customers_list == customer_id)[0][0] # Returns the index row of our customer id\n", 1435 | " purchased_ind = mf_train[cust_ind,:].nonzero()[1] # Get column indices of purchased items\n", 1436 | " prod_codes = products_list[purchased_ind] # Get the stock codes for our purchased items\n", 1437 | " return item_lookup.loc[item_lookup.StockCode.isin(prod_codes)]" 1438 | ] 1439 | }, 1440 | { 1441 | "cell_type": "markdown", 1442 | "metadata": {}, 1443 | "source": [ 1444 | "We need to look these up by a customer's ID. Looking at the list of customers:" 1445 | ] 1446 | }, 1447 | { 1448 | "cell_type": "code", 1449 | "execution_count": 29, 1450 | "metadata": { 1451 | "collapsed": false 1452 | }, 1453 | "outputs": [ 1454 | { 1455 | "data": { 1456 | "text/plain": [ 1457 | "array([12346, 12347, 12348, 12349, 12350])" 1458 | ] 1459 | }, 1460 | "execution_count": 29, 1461 | "metadata": {}, 1462 | "output_type": "execute_result" 1463 | } 1464 | ], 1465 | "source": [ 1466 | "customers_arr[:5]" 1467 | ] 1468 | }, 1469 | { 1470 | "cell_type": "markdown", 1471 | "metadata": {}, 1472 | "source": [ 1473 | "we can see that the first customer listed has an ID of 12346. Let's examine their purchases from the training set. " 1474 | ] 1475 | }, 1476 | { 1477 | "cell_type": "code", 1478 | "execution_count": 30, 1479 | "metadata": { 1480 | "collapsed": false 1481 | }, 1482 | "outputs": [ 1483 | { 1484 | "data": { 1485 | "text/html": [ 1486 | "
\n", 1487 | "\n", 1488 | " \n", 1489 | " \n", 1490 | " \n", 1491 | " \n", 1492 | " \n", 1493 | " \n", 1494 | " \n", 1495 | " \n", 1496 | " \n", 1497 | " \n", 1498 | " \n", 1499 | " \n", 1500 | " \n", 1501 | " \n", 1502 | "
StockCodeDescription
6161923166MEDIUM CERAMIC TOP STORAGE JAR
\n", 1503 | "
" 1504 | ], 1505 | "text/plain": [ 1506 | " StockCode Description\n", 1507 | "61619 23166 MEDIUM CERAMIC TOP STORAGE JAR" 1508 | ] 1509 | }, 1510 | "execution_count": 30, 1511 | "metadata": {}, 1512 | "output_type": "execute_result" 1513 | } 1514 | ], 1515 | "source": [ 1516 | "get_items_purchased(12346, product_train, customers_arr, products_arr, item_lookup)" 1517 | ] 1518 | }, 1519 | { 1520 | "cell_type": "markdown", 1521 | "metadata": {}, 1522 | "source": [ 1523 | "We can see that the customer purchased a ceramic jar for storage, medium size. What items does the recommender system say this customer should purchase? We need to create another function that does this. Let's also import the MinMaxScaler from scikit-learn to help with this." 1524 | ] 1525 | }, 1526 | { 1527 | "cell_type": "code", 1528 | "execution_count": 31, 1529 | "metadata": { 1530 | "collapsed": true 1531 | }, 1532 | "outputs": [], 1533 | "source": [ 1534 | "from sklearn.preprocessing import MinMaxScaler" 1535 | ] 1536 | }, 1537 | { 1538 | "cell_type": "code", 1539 | "execution_count": 32, 1540 | "metadata": { 1541 | "collapsed": true 1542 | }, 1543 | "outputs": [], 1544 | "source": [ 1545 | "def rec_items(customer_id, mf_train, user_vecs, item_vecs, customer_list, item_list, item_lookup, num_items = 10):\n", 1546 | " '''\n", 1547 | " This function will return the top recommended items to our users \n", 1548 | " \n", 1549 | " parameters:\n", 1550 | " \n", 1551 | " customer_id - Input the customer's id number that you want to get recommendations for\n", 1552 | " \n", 1553 | " mf_train - The training matrix you used for matrix factorization fitting\n", 1554 | " \n", 1555 | " user_vecs - the user vectors from your fitted matrix factorization\n", 1556 | " \n", 1557 | " item_vecs - the item vectors from your fitted matrix factorization\n", 1558 | " \n", 1559 | " customer_list - an array of the customer's ID numbers that make up the rows of your ratings matrix \n", 1560 | " (in order of matrix)\n", 1561 | " \n", 1562 | " item_list - an array of the products that make up the columns of your ratings matrix\n", 1563 | " (in order of matrix)\n", 1564 | " \n", 1565 | " item_lookup - A simple pandas dataframe of the unique product ID/product descriptions available\n", 1566 | " \n", 1567 | " num_items - The number of items you want to recommend in order of best recommendations. Default is 10. 
\n", 1568 | " \n", 1569 | " returns:\n", 1570 | " \n", 1571 | " - The top n recommendations chosen based on the user/item vectors for items never interacted with/purchased\n", 1572 | " '''\n", 1573 | " \n", 1574 | " cust_ind = np.where(customer_list == customer_id)[0][0] # Returns the index row of our customer id\n", 1575 | " pref_vec = mf_train[cust_ind,:].toarray() # Get the ratings from the training set ratings matrix\n", 1576 | " pref_vec = pref_vec.reshape(-1) + 1 # Add 1 to everything, so that items not purchased yet become equal to 1\n", 1577 | " pref_vec[pref_vec > 1] = 0 # Make everything already purchased zero\n", 1578 | " rec_vector = user_vecs[cust_ind,:].dot(item_vecs.T) # Get dot product of user vector and all item vectors\n", 1579 | " # Scale this recommendation vector between 0 and 1\n", 1580 | " min_max = MinMaxScaler()\n", 1581 | " rec_vector_scaled = min_max.fit_transform(rec_vector.reshape(-1,1))[:,0] \n", 1582 | " recommend_vector = pref_vec*rec_vector_scaled \n", 1583 | " # Items already purchased have their recommendation multiplied by zero\n", 1584 | " product_idx = np.argsort(recommend_vector)[::-1][:num_items] # Sort the indices of the items into order \n", 1585 | " # of best recommendations\n", 1586 | " rec_list = [] # start empty list to store items\n", 1587 | " for index in product_idx:\n", 1588 | " code = item_list[index]\n", 1589 | " rec_list.append([code, item_lookup.Description.loc[item_lookup.StockCode == code].iloc[0]]) \n", 1590 | " # Append our descriptions to the list\n", 1591 | " codes = [item[0] for item in rec_list]\n", 1592 | " descriptions = [item[1] for item in rec_list]\n", 1593 | " final_frame = pd.DataFrame({'StockCode': codes, 'Description': descriptions}) # Create a dataframe \n", 1594 | " return final_frame[['StockCode', 'Description']] # Switch order of columns around" 1595 | ] 1596 | }, 1597 | { 1598 | "cell_type": "markdown", 1599 | "metadata": {}, 1600 | "source": [ 1601 | "Essentially, this will retrieve the $N$ highest ranking dot products between our user and item vectors for a particular user. Items already purchased are not recommended to the user. For now, let's use a default of 10 items and see what the recommender system decides to pick for our customer. " 1602 | ] 1603 | }, 1604 | { 1605 | "cell_type": "code", 1606 | "execution_count": 33, 1607 | "metadata": { 1608 | "collapsed": false 1609 | }, 1610 | "outputs": [ 1611 | { 1612 | "data": { 1613 | "text/html": [ 1614 | "
\n", 1615 | "\n", 1616 | " \n", 1617 | " \n", 1618 | " \n", 1619 | " \n", 1620 | " \n", 1621 | " \n", 1622 | " \n", 1623 | " \n", 1624 | " \n", 1625 | " \n", 1626 | " \n", 1627 | " \n", 1628 | " \n", 1629 | " \n", 1630 | " \n", 1631 | " \n", 1632 | " \n", 1633 | " \n", 1634 | " \n", 1635 | " \n", 1636 | " \n", 1637 | " \n", 1638 | " \n", 1639 | " \n", 1640 | " \n", 1641 | " \n", 1642 | " \n", 1643 | " \n", 1644 | " \n", 1645 | " \n", 1646 | " \n", 1647 | " \n", 1648 | " \n", 1649 | " \n", 1650 | " \n", 1651 | " \n", 1652 | " \n", 1653 | " \n", 1654 | " \n", 1655 | " \n", 1656 | " \n", 1657 | " \n", 1658 | " \n", 1659 | " \n", 1660 | " \n", 1661 | " \n", 1662 | " \n", 1663 | " \n", 1664 | " \n", 1665 | " \n", 1666 | " \n", 1667 | " \n", 1668 | " \n", 1669 | " \n", 1670 | " \n", 1671 | " \n", 1672 | " \n", 1673 | " \n", 1674 | " \n", 1675 | "
StockCodeDescription
023167SMALL CERAMIC TOP STORAGE JAR
123165LARGE CERAMIC TOP STORAGE JAR
222963JAM JAR WITH GREEN LID
323294SET OF 6 SNACK LOAF BAKING CASES
422980PANTRY SCRUBBING BRUSH
523296SET OF 6 TEA TIME BAKING CASES
623293SET OF 12 FAIRY CAKE BAKING CASES
722978PANTRY ROLLING PIN
823295SET OF 12 MINI LOAF BAKING CASES
922962JAM JAR WITH PINK LID
\n", 1676 | "
" 1677 | ], 1678 | "text/plain": [ 1679 | " StockCode Description\n", 1680 | "0 23167 SMALL CERAMIC TOP STORAGE JAR \n", 1681 | "1 23165 LARGE CERAMIC TOP STORAGE JAR\n", 1682 | "2 22963 JAM JAR WITH GREEN LID\n", 1683 | "3 23294 SET OF 6 SNACK LOAF BAKING CASES\n", 1684 | "4 22980 PANTRY SCRUBBING BRUSH\n", 1685 | "5 23296 SET OF 6 TEA TIME BAKING CASES\n", 1686 | "6 23293 SET OF 12 FAIRY CAKE BAKING CASES\n", 1687 | "7 22978 PANTRY ROLLING PIN\n", 1688 | "8 23295 SET OF 12 MINI LOAF BAKING CASES\n", 1689 | "9 22962 JAM JAR WITH PINK LID" 1690 | ] 1691 | }, 1692 | "execution_count": 33, 1693 | "metadata": {}, 1694 | "output_type": "execute_result" 1695 | } 1696 | ], 1697 | "source": [ 1698 | "rec_items(12346, product_train, user_vecs, item_vecs, customers_arr, products_arr, item_lookup,\n", 1699 | " num_items = 10)" 1700 | ] 1701 | }, 1702 | { 1703 | "cell_type": "markdown", 1704 | "metadata": {}, 1705 | "source": [ 1706 | "These recommendations seem quite good! Remember that the recommendation system has no real understanding of what a ceramic jar is. All it knows is the purchase history. It identified that people purchasing a medium sized jar may also want to purchase jars of a differing size. The recommender system also suggests jar magnets and a sugar dispenser, which is similar in use to a storage jar. I personally was blown away by how well the system seems to pick up on these sorts of shopping patterns. Let's try another user that hasn't made a large number of purchases. " 1707 | ] 1708 | }, 1709 | { 1710 | "cell_type": "code", 1711 | "execution_count": 34, 1712 | "metadata": { 1713 | "collapsed": false 1714 | }, 1715 | "outputs": [ 1716 | { 1717 | "data": { 1718 | "text/html": [ 1719 | "
\n", 1720 | "\n", 1721 | " \n", 1722 | " \n", 1723 | " \n", 1724 | " \n", 1725 | " \n", 1726 | " \n", 1727 | " \n", 1728 | " \n", 1729 | " \n", 1730 | " \n", 1731 | " \n", 1732 | " \n", 1733 | " \n", 1734 | " \n", 1735 | " \n", 1736 | " \n", 1737 | " \n", 1738 | " \n", 1739 | " \n", 1740 | " \n", 1741 | " \n", 1742 | " \n", 1743 | " \n", 1744 | " \n", 1745 | " \n", 1746 | " \n", 1747 | " \n", 1748 | " \n", 1749 | " \n", 1750 | "
StockCodeDescription
214837446MINI CAKE STAND WITH HANGING CAKES
214937449CERAMIC CAKE STAND + HANGING CAKES
485937450CERAMIC CAKE BOWL + HANGING CAKES
510822890NOVELTY BISCUITS CAKE STAND 3 TIER
\n", 1751 | "
" 1752 | ], 1753 | "text/plain": [ 1754 | " StockCode Description\n", 1755 | "2148 37446 MINI CAKE STAND WITH HANGING CAKES\n", 1756 | "2149 37449 CERAMIC CAKE STAND + HANGING CAKES\n", 1757 | "4859 37450 CERAMIC CAKE BOWL + HANGING CAKES\n", 1758 | "5108 22890 NOVELTY BISCUITS CAKE STAND 3 TIER" 1759 | ] 1760 | }, 1761 | "execution_count": 34, 1762 | "metadata": {}, 1763 | "output_type": "execute_result" 1764 | } 1765 | ], 1766 | "source": [ 1767 | "get_items_purchased(12353, product_train, customers_arr, products_arr, item_lookup)" 1768 | ] 1769 | }, 1770 | { 1771 | "cell_type": "markdown", 1772 | "metadata": {}, 1773 | "source": [ 1774 | "This person seems like they want to make cakes. What kind of items does the recommender system think they would be interested in?" 1775 | ] 1776 | }, 1777 | { 1778 | "cell_type": "code", 1779 | "execution_count": 35, 1780 | "metadata": { 1781 | "collapsed": false 1782 | }, 1783 | "outputs": [ 1784 | { 1785 | "data": { 1786 | "text/html": [ 1787 | "
\n", 1788 | "\n", 1789 | " \n", 1790 | " \n", 1791 | " \n", 1792 | " \n", 1793 | " \n", 1794 | " \n", 1795 | " \n", 1796 | " \n", 1797 | " \n", 1798 | " \n", 1799 | " \n", 1800 | " \n", 1801 | " \n", 1802 | " \n", 1803 | " \n", 1804 | " \n", 1805 | " \n", 1806 | " \n", 1807 | " \n", 1808 | " \n", 1809 | " \n", 1810 | " \n", 1811 | " \n", 1812 | " \n", 1813 | " \n", 1814 | " \n", 1815 | " \n", 1816 | " \n", 1817 | " \n", 1818 | " \n", 1819 | " \n", 1820 | " \n", 1821 | " \n", 1822 | " \n", 1823 | " \n", 1824 | " \n", 1825 | " \n", 1826 | " \n", 1827 | " \n", 1828 | " \n", 1829 | " \n", 1830 | " \n", 1831 | " \n", 1832 | " \n", 1833 | " \n", 1834 | " \n", 1835 | " \n", 1836 | " \n", 1837 | " \n", 1838 | " \n", 1839 | " \n", 1840 | " \n", 1841 | " \n", 1842 | " \n", 1843 | " \n", 1844 | " \n", 1845 | " \n", 1846 | " \n", 1847 | " \n", 1848 | "
StockCodeDescription
022645CERAMIC HEART FAIRY CAKE MONEY BANK
122055MINI CAKE STAND HANGING STRAWBERY
222644CERAMIC CHERRY CAKE MONEY BANK
337447CERAMIC CAKE DESIGN SPOTTED PLATE
437448CERAMIC CAKE DESIGN SPOTTED MUG
522059CERAMIC STRAWBERRY DESIGN MUG
622063CERAMIC BOWL WITH STRAWBERRY DESIGN
722649STRAWBERRY FAIRY CAKE TEAPOT
822057CERAMIC PLATE STRAWBERRY DESIGN
922646CERAMIC STRAWBERRY CAKE MONEY BANK
\n", 1849 | "
" 1850 | ], 1851 | "text/plain": [ 1852 | " StockCode Description\n", 1853 | "0 22645 CERAMIC HEART FAIRY CAKE MONEY BANK\n", 1854 | "1 22055 MINI CAKE STAND HANGING STRAWBERY\n", 1855 | "2 22644 CERAMIC CHERRY CAKE MONEY BANK\n", 1856 | "3 37447 CERAMIC CAKE DESIGN SPOTTED PLATE\n", 1857 | "4 37448 CERAMIC CAKE DESIGN SPOTTED MUG\n", 1858 | "5 22059 CERAMIC STRAWBERRY DESIGN MUG\n", 1859 | "6 22063 CERAMIC BOWL WITH STRAWBERRY DESIGN\n", 1860 | "7 22649 STRAWBERRY FAIRY CAKE TEAPOT\n", 1861 | "8 22057 CERAMIC PLATE STRAWBERRY DESIGN\n", 1862 | "9 22646 CERAMIC STRAWBERRY CAKE MONEY BANK" 1863 | ] 1864 | }, 1865 | "execution_count": 35, 1866 | "metadata": {}, 1867 | "output_type": "execute_result" 1868 | } 1869 | ], 1870 | "source": [ 1871 | "rec_items(12353, product_train, user_vecs, item_vecs, customers_arr, products_arr, item_lookup,\n", 1872 | " num_items = 10)" 1873 | ] 1874 | }, 1875 | { 1876 | "cell_type": "markdown", 1877 | "metadata": {}, 1878 | "source": [ 1879 | "It certainly picked up on the cake theme along with ceramic items. Again, these recommendations seem very impressive given the system doesn't understand the content behind the recommendations. Let's try one more." 1880 | ] 1881 | }, 1882 | { 1883 | "cell_type": "code", 1884 | "execution_count": 36, 1885 | "metadata": { 1886 | "collapsed": false 1887 | }, 1888 | "outputs": [ 1889 | { 1890 | "data": { 1891 | "text/html": [ 1892 | "
\n", 1893 | "\n", 1894 | " \n", 1895 | " \n", 1896 | " \n", 1897 | " \n", 1898 | " \n", 1899 | " \n", 1900 | " \n", 1901 | " \n", 1902 | " \n", 1903 | " \n", 1904 | " \n", 1905 | " \n", 1906 | " \n", 1907 | " \n", 1908 | " \n", 1909 | " \n", 1910 | " \n", 1911 | " \n", 1912 | " \n", 1913 | " \n", 1914 | " \n", 1915 | " \n", 1916 | " \n", 1917 | " \n", 1918 | " \n", 1919 | " \n", 1920 | " \n", 1921 | " \n", 1922 | " \n", 1923 | " \n", 1924 | " \n", 1925 | " \n", 1926 | " \n", 1927 | " \n", 1928 | " \n", 1929 | " \n", 1930 | " \n", 1931 | " \n", 1932 | " \n", 1933 | " \n", 1934 | " \n", 1935 | " \n", 1936 | " \n", 1937 | " \n", 1938 | " \n", 1939 | " \n", 1940 | " \n", 1941 | " \n", 1942 | " \n", 1943 | " \n", 1944 | " \n", 1945 | " \n", 1946 | " \n", 1947 | " \n", 1948 | "
StockCodeDescription
3422326ROUND SNACK BOXES SET OF4 WOODLAND
3522629SPACEBOY LUNCH BOX
3722631CIRCUS PARADE LUNCH BOX
9320725LUNCH BAG RED RETROSPOT
36922382LUNCH BAG SPACEBOY DESIGN
54722328ROUND SNACK BOXES SET OF 4 FRUITS
54922630DOLLY GIRL LUNCH BOX
124122555PLASTERS IN TIN STRONGMAN
5813220725LUNCH BAG RED SPOTTY
\n", 1949 | "
" 1950 | ], 1951 | "text/plain": [ 1952 | " StockCode Description\n", 1953 | "34 22326 ROUND SNACK BOXES SET OF4 WOODLAND \n", 1954 | "35 22629 SPACEBOY LUNCH BOX \n", 1955 | "37 22631 CIRCUS PARADE LUNCH BOX \n", 1956 | "93 20725 LUNCH BAG RED RETROSPOT\n", 1957 | "369 22382 LUNCH BAG SPACEBOY DESIGN \n", 1958 | "547 22328 ROUND SNACK BOXES SET OF 4 FRUITS \n", 1959 | "549 22630 DOLLY GIRL LUNCH BOX\n", 1960 | "1241 22555 PLASTERS IN TIN STRONGMAN\n", 1961 | "58132 20725 LUNCH BAG RED SPOTTY" 1962 | ] 1963 | }, 1964 | "execution_count": 36, 1965 | "metadata": {}, 1966 | "output_type": "execute_result" 1967 | } 1968 | ], 1969 | "source": [ 1970 | "get_items_purchased(12361, product_train, customers_arr, products_arr, item_lookup)" 1971 | ] 1972 | }, 1973 | { 1974 | "cell_type": "markdown", 1975 | "metadata": {}, 1976 | "source": [ 1977 | "This customer seems like they are buying products suitable for lunch time. What other items does the recommender system think they might like?" 1978 | ] 1979 | }, 1980 | { 1981 | "cell_type": "code", 1982 | "execution_count": 37, 1983 | "metadata": { 1984 | "collapsed": false 1985 | }, 1986 | "outputs": [ 1987 | { 1988 | "data": { 1989 | "text/html": [ 1990 | "
\n", 1991 | "\n", 1992 | " \n", 1993 | " \n", 1994 | " \n", 1995 | " \n", 1996 | " \n", 1997 | " \n", 1998 | " \n", 1999 | " \n", 2000 | " \n", 2001 | " \n", 2002 | " \n", 2003 | " \n", 2004 | " \n", 2005 | " \n", 2006 | " \n", 2007 | " \n", 2008 | " \n", 2009 | " \n", 2010 | " \n", 2011 | " \n", 2012 | " \n", 2013 | " \n", 2014 | " \n", 2015 | " \n", 2016 | " \n", 2017 | " \n", 2018 | " \n", 2019 | " \n", 2020 | " \n", 2021 | " \n", 2022 | " \n", 2023 | " \n", 2024 | " \n", 2025 | " \n", 2026 | " \n", 2027 | " \n", 2028 | " \n", 2029 | " \n", 2030 | " \n", 2031 | " \n", 2032 | " \n", 2033 | " \n", 2034 | " \n", 2035 | " \n", 2036 | " \n", 2037 | " \n", 2038 | " \n", 2039 | " \n", 2040 | " \n", 2041 | " \n", 2042 | " \n", 2043 | " \n", 2044 | " \n", 2045 | " \n", 2046 | " \n", 2047 | " \n", 2048 | " \n", 2049 | " \n", 2050 | " \n", 2051 | "
StockCodeDescription
022662LUNCH BAG DOLLY GIRL DESIGN
120726LUNCH BAG WOODLAND
220719WOODLAND CHARLOTTE BAG
322383LUNCH BAG SUKI DESIGN
420728LUNCH BAG CARS BLUE
523209LUNCH BAG DOILEY PATTERN
622661CHARLOTTE BAG DOLLY GIRL DESIGN
720724RED RETROSPOT CHARLOTTE BAG
823206LUNCH BAG APPLE DESIGN
922384LUNCH BAG PINK POLKADOT
\n", 2052 | "
" 2053 | ], 2054 | "text/plain": [ 2055 | " StockCode Description\n", 2056 | "0 22662 LUNCH BAG DOLLY GIRL DESIGN\n", 2057 | "1 20726 LUNCH BAG WOODLAND\n", 2058 | "2 20719 WOODLAND CHARLOTTE BAG\n", 2059 | "3 22383 LUNCH BAG SUKI DESIGN \n", 2060 | "4 20728 LUNCH BAG CARS BLUE\n", 2061 | "5 23209 LUNCH BAG DOILEY PATTERN \n", 2062 | "6 22661 CHARLOTTE BAG DOLLY GIRL DESIGN\n", 2063 | "7 20724 RED RETROSPOT CHARLOTTE BAG\n", 2064 | "8 23206 LUNCH BAG APPLE DESIGN\n", 2065 | "9 22384 LUNCH BAG PINK POLKADOT" 2066 | ] 2067 | }, 2068 | "execution_count": 37, 2069 | "metadata": {}, 2070 | "output_type": "execute_result" 2071 | } 2072 | ], 2073 | "source": [ 2074 | "rec_items(12361, product_train, user_vecs, item_vecs, customers_arr, products_arr, item_lookup,\n", 2075 | " num_items = 10)" 2076 | ] 2077 | }, 2078 | { 2079 | "cell_type": "markdown", 2080 | "metadata": {}, 2081 | "source": [ 2082 | "Once again, the recommender system comes through! Definitely a lot of bags and lunch related items in this recommendation list. Feel free to play around with the recommendations for other users and see what the system came up with!" 2083 | ] 2084 | }, 2085 | { 2086 | "cell_type": "markdown", 2087 | "metadata": {}, 2088 | "source": [ 2089 | "## Summary" 2090 | ] 2091 | }, 2092 | { 2093 | "cell_type": "markdown", 2094 | "metadata": {}, 2095 | "source": [ 2096 | "In this post, we have learned about how to design a recommender system with implicit feedback and how to provide recommendations. We also covered how to test the recommender system. \n", 2097 | "\n", 2098 | "In real life, if the size of your ratings matrix will not fit on a single machine very easily, utilizing the implementation in [Spark](http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html) is going to be more practical. If you are interested in taking recommender systems to the next level, a hybrid system would be best that incorporates information about your users/items along with the purchase history. A Python library called LightFM from Maciej Kula at Lyst looks very interesting for this sort of application. You can find it [here](https://github.com/lyst/lightfm). \n", 2099 | "\n", 2100 | "Last, there are several other advanced methods you can incorporate in recommender systems to get a bump in performance. Part 2 of Xavier Amatriain's lecture would be a great place to [start](https://www.youtube.com/watch?v=mRToFXlNBpQ). \n", 2101 | "\n", 2102 | "If you are more interested in recommender systems with explicit feedback (such as with movie reviews) there are a couple of great posts that cover this in detail:\n", 2103 | "\n", 2104 | "- Alternating Least Squares Method for Collaborative Filtering by [Bugra Akyildiz](http://bugra.github.io/work/notes/2014-04-19/alternating-least-squares-method-for-collaborative-filtering/)\n", 2105 | "\n", 2106 | "- Explicit Matrix Factorization: ALS, SGD, and All That Jazz by [Ethan Rosenthal](http://blog.ethanrosenthal.com/2016/01/09/explicit-matrix-factorization-sgd-als/)\n", 2107 | "\n", 2108 | "If you are looking for great datasets to try a recommendation system out on for yourself, I found [this gist](https://gist.github.com/entaroadun/1653794) helpful. Some of the links don't work anymore but it's a great place to start looking for data to try a system out on your own!" 
2109 | ] 2110 | } 2111 | ], 2112 | "metadata": { 2113 | "kernelspec": { 2114 | "display_name": "Python 3", 2115 | "language": "python", 2116 | "name": "python3" 2117 | }, 2118 | "language_info": { 2119 | "codemirror_mode": { 2120 | "name": "ipython", 2121 | "version": 3 2122 | }, 2123 | "file_extension": ".py", 2124 | "mimetype": "text/x-python", 2125 | "name": "python", 2126 | "nbconvert_exporter": "python", 2127 | "pygments_lexer": "ipython3", 2128 | "version": "3.5.1" 2129 | } 2130 | }, 2131 | "nbformat": 4, 2132 | "nbformat_minor": 0 2133 | } 2134 | -------------------------------------------------------------------------------- /Rec_Engine_Image_Amazon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/Rec_Engine_Image_Amazon.png -------------------------------------------------------------------------------- /Test_Image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/Test_Image.png -------------------------------------------------------------------------------- /Traintest_ex.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/Traintest_ex.png -------------------------------------------------------------------------------- /XG_Cover.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jmsteinw/Notebooks/76915d49c3ea61a88f68a016eadf83583741025d/XG_Cover.jpg --------------------------------------------------------------------------------