├── .gitignore ├── 1_notmnist.ipynb ├── 2_fullyconnected.ipynb ├── 3_regularization.ipynb ├── 4_convolutions.ipynb ├── 5_word2vec.ipynb ├── 6_lstm.ipynb └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | *.pickle 7 | notMNIST_large/ 8 | notMNIST_small/ 9 | *.gz 10 | # C extensions 11 | *.so 12 | 13 | # Distribution / packaging 14 | .Python 15 | env/ 16 | build/ 17 | develop-eggs/ 18 | dist/ 19 | downloads/ 20 | eggs/ 21 | .eggs/ 22 | lib/ 23 | lib64/ 24 | parts/ 25 | sdist/ 26 | var/ 27 | *.egg-info/ 28 | .installed.cfg 29 | *.egg 30 | 31 | # PyInstaller 32 | # Usually these files are written by a python script from a template 33 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 34 | *.manifest 35 | *.spec 36 | 37 | # Installer logs 38 | pip-log.txt 39 | pip-delete-this-directory.txt 40 | 41 | # Unit test / coverage reports 42 | htmlcov/ 43 | .tox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *,cover 50 | .hypothesis/ 51 | 52 | # Translations 53 | *.mo 54 | *.pot 55 | 56 | # Django stuff: 57 | *.log 58 | 59 | # Sphinx documentation 60 | docs/_build/ 61 | 62 | # PyBuilder 63 | target/ 64 | 65 | #Ipython Notebook 66 | .ipynb_checkpoints 67 | -------------------------------------------------------------------------------- /1_notmnist.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab_type": "text", 7 | "id": "5hIbr52I7Z7U" 8 | }, 9 | "source": [ 10 | "Deep Learning\n", 11 | "=============\n", 12 | "\n", 13 | "Assignment 1\n", 14 | "------------\n", 15 | "\n", 16 | "The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.\n", 17 | "\n", 18 | "This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST." 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": 16, 24 | "metadata": { 25 | "cellView": "both", 26 | "colab": { 27 | "autoexec": { 28 | "startup": false, 29 | "wait_interval": 0 30 | } 31 | }, 32 | "colab_type": "code", 33 | "collapsed": false, 34 | "id": "apJbCsBHl-2A" 35 | }, 36 | "outputs": [], 37 | "source": [ 38 | "# These are all the modules we'll be using later. 
Make sure you can import them\n", 39 | "# before proceeding further.\n", 40 | "from __future__ import print_function\n", 41 | "import matplotlib.pyplot as plt\n", 42 | "import numpy as np\n", 43 | "import os\n", 44 | "import random\n", 45 | "import hashlib\n", 46 | "import json\n", 47 | "import sys\n", 48 | "import tarfile\n", 49 | "from IPython.display import display, Image\n", 50 | "from scipy import ndimage\n", 51 | "from sklearn.linear_model import LogisticRegression\n", 52 | "from six.moves.urllib.request import urlretrieve\n", 53 | "from six.moves import cPickle as pickle\n", 54 | "\n", 55 | "# Config the matlotlib backend as plotting inline in IPython\n", 56 | "%matplotlib inline" 57 | ] 58 | }, 59 | { 60 | "cell_type": "markdown", 61 | "metadata": { 62 | "colab_type": "text", 63 | "id": "jNWGtZaXn-5j" 64 | }, 65 | "source": [ 66 | "First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine." 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": 18, 72 | "metadata": { 73 | "cellView": "both", 74 | "colab": { 75 | "autoexec": { 76 | "startup": false, 77 | "wait_interval": 0 78 | }, 79 | "output_extras": [ 80 | { 81 | "item_id": 1 82 | } 83 | ] 84 | }, 85 | "colab_type": "code", 86 | "collapsed": false, 87 | "executionInfo": { 88 | "elapsed": 186058, 89 | "status": "ok", 90 | "timestamp": 1444485672507, 91 | "user": { 92 | "color": "#1FA15D", 93 | "displayName": "Vincent Vanhoucke", 94 | "isAnonymous": false, 95 | "isMe": true, 96 | "permissionId": "05076109866853157986", 97 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 98 | "sessionId": "2a0a5e044bb03b66", 99 | "userId": "102167687554210253930" 100 | }, 101 | "user_tz": 420 102 | }, 103 | "id": "EYRJ4ICW6-da", 104 | "outputId": "0d0f85df-155f-4a89-8e7e-ee32df36ec8d" 105 | }, 106 | "outputs": [ 107 | { 108 | "name": "stdout", 109 | "output_type": "stream", 110 | "text": [ 111 | "Found and verified notMNIST_large.tar.gz\n", 112 | "Found and verified notMNIST_small.tar.gz\n" 113 | ] 114 | } 115 | ], 116 | "source": [ 117 | "url = 'http://commondatastorage.googleapis.com/books1000/'\n", 118 | "\n", 119 | "def maybe_download(filename, expected_bytes, force=False):\n", 120 | " \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n", 121 | " if force or not os.path.exists(filename):\n", 122 | " filename, _ = urlretrieve(url + filename, filename)\n", 123 | " statinfo = os.stat(filename)\n", 124 | " if statinfo.st_size == expected_bytes:\n", 125 | " print('Found and verified', filename)\n", 126 | " else:\n", 127 | " raise Exception(\n", 128 | " 'Failed to verify ' + filename + '. Can you get to it with a browser?')\n", 129 | " return filename\n", 130 | "\n", 131 | "train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)\n", 132 | "test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": { 138 | "colab_type": "text", 139 | "id": "cC3p0oEyF8QT" 140 | }, 141 | "source": [ 142 | "Extract the dataset from the compressed .tar.gz file.\n", 143 | "This should give you a set of directories, labelled A through J." 
144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": 19, 149 | "metadata": { 150 | "cellView": "both", 151 | "colab": { 152 | "autoexec": { 153 | "startup": false, 154 | "wait_interval": 0 155 | }, 156 | "output_extras": [ 157 | { 158 | "item_id": 1 159 | } 160 | ] 161 | }, 162 | "colab_type": "code", 163 | "collapsed": false, 164 | "executionInfo": { 165 | "elapsed": 186055, 166 | "status": "ok", 167 | "timestamp": 1444485672525, 168 | "user": { 169 | "color": "#1FA15D", 170 | "displayName": "Vincent Vanhoucke", 171 | "isAnonymous": false, 172 | "isMe": true, 173 | "permissionId": "05076109866853157986", 174 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 175 | "sessionId": "2a0a5e044bb03b66", 176 | "userId": "102167687554210253930" 177 | }, 178 | "user_tz": 420 179 | }, 180 | "id": "H8CBE-WZ8nmj", 181 | "outputId": "ef6c790c-2513-4b09-962e-27c79390c762" 182 | }, 183 | "outputs": [ 184 | { 185 | "name": "stdout", 186 | "output_type": "stream", 187 | "text": [ 188 | "notMNIST_large already present - Skipping extraction of notMNIST_large.tar.gz.\n", 189 | "['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J']\n", 190 | "notMNIST_small already present - Skipping extraction of notMNIST_small.tar.gz.\n", 191 | "['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J']\n" 192 | ] 193 | } 194 | ], 195 | "source": [ 196 | "num_classes = 10\n", 197 | "np.random.seed(133)\n", 198 | "\n", 199 | "def maybe_extract(filename, force=False):\n", 200 | " root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz\n", 201 | " if os.path.isdir(root) and not force:\n", 202 | " # You may override by setting force=True.\n", 203 | " print('%s already present - Skipping extraction of %s.' % (root, filename))\n", 204 | " else:\n", 205 | " print('Extracting data for %s. This may take a while. Please wait.' % root)\n", 206 | " tar = tarfile.open(filename)\n", 207 | " sys.stdout.flush()\n", 208 | " tar.extractall()\n", 209 | " tar.close()\n", 210 | " data_folders = [\n", 211 | " os.path.join(root, d) for d in sorted(os.listdir(root))\n", 212 | " if os.path.isdir(os.path.join(root, d))]\n", 213 | " if len(data_folders) != num_classes:\n", 214 | " raise Exception(\n", 215 | " 'Expected %d folders, one per class. Found %d instead.' % (\n", 216 | " num_classes, len(data_folders)))\n", 217 | " print(data_folders)\n", 218 | " return data_folders\n", 219 | " \n", 220 | "train_folders = maybe_extract(train_filename)\n", 221 | "test_folders = maybe_extract(test_filename)" 222 | ] 223 | }, 224 | { 225 | "cell_type": "markdown", 226 | "metadata": { 227 | "colab_type": "text", 228 | "id": "4riXK3IoHgx6" 229 | }, 230 | "source": [ 231 | "---\n", 232 | "Problem 1\n", 233 | "---------\n", 234 | "\n", 235 | "Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. 
Hint: you can use the package IPython.display.\n", 236 | "\n", 237 | "---" 238 | ] 239 | }, 240 | { 241 | "cell_type": "code", 242 | "execution_count": 20, 243 | "metadata": { 244 | "collapsed": false 245 | }, 246 | "outputs": [ 247 | { 248 | "data": { 249 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAACGklEQVR4nG2SX0iTYRTGn3Pebyua\nzfjcSiWdWaSjBllZQ1h/qG0llDeiiRhEUhfRVcG8KqhL6yIKRkSSIWldjWF/bCxHFEo3FmPJbqKL\nQArZoiytr+99u9gf1+y5PD/e5zyc5wWKYqBuMHoUjJUi2AcifZ4xC61kTL6JCzbg+t7/PGU88AJW\n6hyAVrIoZyrr5bSVDZVoEmY5FHQoid9S8fzXFsVlUClvnLZUg+mVH2WQzA22d+puP0i9aLKY/0Km\ng2nDLevZFNl5b8G3aLsvhu4rP9ySKHYEVArJdK19Y980+T4AqJeuSpNKIKNjSgZSiO1kU1tMB4tj\nAJBiTxT7H+Pj91YFRPxQy5CVN/NpszWlYaIDipPatlwkzt382DME3zJRfGulyXh6QlEBknTUTWrN\nI9JQC4kAQONuXVIesuqeWtRrj++qrt3dcBKSf073KgZAAGjV0PkMgq0bdf42m44CcN7pWVIKAAS6\nLi3XBBAELp+ByJ9uuJGwzuGs0vUqp8MOIlofXcMEDcI8kPmA0z1ZoXLhbo8KfEmcG7QYACPcQmK4\npuBa86SCSawebwaxkL5fM+Qy5/IViLn4Wclq6erN7QTGPY8G77V8ABBXRJ1MjLbRRyzbs8k/nlCq\nUJOihbGQtDBmjM+gcENwJNKnofhdBR4eBvyvT0Hr9dmM+8+BfBEAFIXClh1t/bMCt254ABIoEaNx\n6CJDw19C+61FGl7e4gAAAABJRU5ErkJggg==\n", 250 | "text/plain": [ 251 | "" 252 | ] 253 | }, 254 | "metadata": {}, 255 | "output_type": "display_data" 256 | }, 257 | { 258 | "name": "stdout", 259 | "output_type": "stream", 260 | "text": [ 261 | "A\n" 262 | ] 263 | }, 264 | { 265 | "data": { 266 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAACH0lEQVR4nE1RTUhUYRQ997vfTKWT\nMw01Fv1QtEiZQCUiNJAoIqqVqwqsRYt+FtEiWhS1lAiyWgRtaiNRUrSVMMggqZAEZTAMGxcWUWZN\nGTXj+/lOi/eeeFf3cDj3nnuuAICG3denv8wHYuuzjVkWGqrl8YcDEES1ou34xW8kPx5uyuV3XJgm\n2Z/Fkir5//wrUZt5RI/P0gljbdEx5H61IsYiNeJqPLNI4igDzhVgItRFz702i1PbEKA0Kw4AQnlb\nScm2hAzRCuAVNIKc+wU2xKRwZQss3iw1KJW4Uex2ISuF+DaD9RW6wUSJneJjfFYYQWnL+XIvVho8\nYY09sBGyGAjYL4lweZkBD0EhRi1wnhzKSSJs8Rzn10pkNt/L4HYdTOLnJBc4CAB1m/b2zlQf7AIk\nXgJ0QrB9yNj6/EbgRdcolC6yLkyPNYd6ZMJwWWOx86DOTN5/nJxrUAwcZ5MfdYyRfJoxEqd+igsc\nFjXGqFUUyn6Nd+IoFX2s8VYSbBpn6YV/1sEAkDDVAYORZEsgL8MUMlthopWbmfJKYPwSVjw4pCMl\n2jXA16mEFOTT0PA7DACHfSAmFuLUYaRVfU59EAOIy7YDGEv8GPI0Q3PTMxBN6x6GAbs1JSKiFrjM\ngH0S33nVrwbVQmI2c4P0r6kR2Fxz04Zz1kKPTX7+8VtXNx84serT4N1RCCGXeuR96a/vTEN1yxoY\n1Z/vng/PQR2B/4dc4cHo+n5kAAAAAElFTkSuQmCC\n", 267 | "text/plain": [ 268 | "" 269 | ] 270 | }, 271 | "metadata": {}, 272 | "output_type": "display_data" 273 | }, 274 | { 275 | "name": "stdout", 276 | "output_type": "stream", 277 | "text": [ 278 | "B\n" 279 | ] 280 | }, 281 | { 282 | "data": { 283 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAAB80lEQVR4nF2SX2jOURjHP885v9d6\npxgraZu2tFKm5ELLLuZPRCSulFYLV7iU3C4XXEi5cEG7X1IUK3ExJInGeu0Ok4ZpLmwvFnl/7znn\n6+Ld2LvP5fn0fJ9zvh2jhiPRsnfnpnVNmvtUuv9ohuzYzLzzxrahWUkKIUmaHjw+ouc1l9FxS1Ke\nS5IUqpKCbmQAPhy9tjIFX+D7+FTe1NXpA8h/qO0bUKqGlO4dbAbIuu8opaR+wDOgagya2INhPnMY\nJ1OSusFzStUUNLIa780A8wXOSOU2HFv/KAU9LpKxgFlW0ngjZM8UoiZb8fwn46KGcfT3BA/nvmRx\nkRRvmQY/qhj0BGMxjvUvdsMuSUFH6kL/cVkh6uOqJZNg3si2YOJl2cc6A4rgujAxtmRMkjdwTRhM\nLUktrlkWDVwGkNdnrhj9/OZ8AWNGqarTdZc1hlXRFTxjikHX66RnvyqabcOVkLGjGOuWCp9WteKG\ncS5tOLCodRyHkct/QENJMer9WgrOao8v0JsraBSgT0FRrzoxXOYdju1fFas6gccxpFxR5QtdBYBi\n92CukOsmhpkab++L4CxOvCv75o3txOTd3b5fJjAarkopVNLCv0yaO+vM5vun90Gldh4lafJSB85q\n/WMu2uZDPe0ty/X72+Trh09/mksC/gLb+A4q9YNQoQAAAABJRU5ErkJggg==\n", 284 | "text/plain": [ 285 | "" 286 | ] 287 | }, 288 | "metadata": {}, 289 | "output_type": "display_data" 290 | }, 291 | { 292 | "name": "stdout", 293 | "output_type": "stream", 294 | "text": [ 295 | "C\n" 296 | ] 297 | }, 298 | { 299 | "data": { 300 | "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAACkklEQVR4nDWSXWjVdRjHP7/f/78N\n95LF5nzJ2gvJDErZdrEi0UKZUWA3BWaUggOvXC7F4U0wsCAvetmc7MJ2GBEouzLJliO0IqwcZc29\nnA1rsDVnO3RO29k8nfP//b5dnONz++GBz/d5vgbr96V/v/BaMsQprNn80FrGfzGiMObDa71XYi1w\n6KuRlM5W1O2vwxRY7VAiGt+1OdxzekpOSg+/v7uSogJ9VS5SrOpY7LPU/UsH+/6YSWXfDbABgOVr\nRfKaOn7yrcfAUPOF163O9QRAwCtyXp8cWoshDItpGRn98l8tvISFgBe8190qAkJK9n/Qsa7M1i87\nRa0EwE/KqoMLE6cCTNGW5ZFKuKKMJsqAvXJeN3+VZiuwlH+nyac4qiind4DTyskrGVcvIYbynXG1\nt8k73TTwsXKSFhezLVgwlqZ+eclruQHalZMUqYcgH9zy4o2snEaLQuIEgGUAAZj6R9JDzc/I209z\nsOm+vOS0I79ZvKXj3ko2LafhcmhalZfTdBkGsDTHNXbgtnykM3BdkbySz2MBQ825sYHSYTlF6ueE\nvBSpjTCvE5Y3NN5QJHntY1BOXpmt2DwMmt9bUSQ5jQRsmJeP1F1ghqopOeflM3obu/A5MnxT+L1M\n4tme+bzZHFD5t7Lqf3ABsPRoKaes+sAypP80VvKgNoYN7k6X5s+eqyI0LOF9XfWsMRjjvXXP2e+3\n0f3XjwljPdVcn1lTT+C9ixS69T088XIqcW86FCHHteewBrGPP/pmbz1cnL6m8djRTRiw1Gbe4PXV\njzrPL2p2dHvX0tWrR1p3lgAYo/DnxsttRzp/y8yNJXYfcJOxlR/SfxoEBuzGsieXvjWlT99NPbx8\n5p87tyaTxniA/wHO7maGBqVghwAAAABJRU5ErkJggg==\n", 301 | "text/plain": [ 302 | "" 303 | ] 304 | }, 305 | "metadata": {}, 306 | "output_type": "display_data" 307 | }, 308 | { 309 | "name": "stdout", 310 | "output_type": "stream", 311 | "text": [ 312 | "D\n" 313 | ] 314 | }, 315 | { 316 | "data": { 317 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABcUlEQVR4nGWSvWpVURCFv5m9FYKI\nYKUYbJKIiFGSmEDwIazzAIq1r2Mv4hMIdgG9hWgMVxsLERHEkIgKkqBnZlmcn+s9rm6fb681s2cO\ngHPpu1IjNXrigLFyLo2RjJct3ERjhjOpgLjVwxgS0r++r2Bx6jrefiplZv1yUMG0uIy1tx9PFqJF\n8ilA4Y5CkjLzxlzZCsY66QB2+Lna0JqiQrLRpZa330xzTsuF1RaKd6U2QMygVi620HgW8Y8Ro8TO\no+heMPlhSj+4f9ymV2BruLoNwKsTzy5WrA0wgKY8VTeSann+Kt3MQoD56/6MszVe13I/zOq5QRZA\n9uveTxOcfCCHtzzUH0kK7TFSbVhrU9L3S4m+rc544Upf/01YzDu5dlYGUJjW2TYzgcpNogKyo92m\nGdVks0/9/SD6nWR5/sITTu+3mx7pLhXq5aXeqaEbWe6SUFfPqINWh2784ycEfnsYx0xieuwCX+e/\nfx2xhwN/AbvIwkQNZB69AAAAAElFTkSuQmCC\n", 318 | "text/plain": [ 319 | "" 320 | ] 321 | }, 322 | "metadata": {}, 323 | "output_type": "display_data" 324 | }, 325 | { 326 | "name": "stdout", 327 | "output_type": "stream", 328 | "text": [ 329 | "E\n" 330 | ] 331 | }, 332 | { 333 | "data": { 334 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAAAn0lEQVR4nMWRsQ0CMRAE985PgPiU\nIshogU4ogQa/AUKSr4GMAL1AyLckPL4zcghcZGu0e+u1EI2hnLXFAOAfsCvHKjeFDsqHq4M3i8pu\nktnMdHfsIzanvEz33AwURwBIyThe0/tiy8OQslNu/DpdQ7yt0cMH6KEvSyD4RfG+HlNGGE0W/imU\n/WlVxDoiA3xN5hapvVODsbEKZNWvfKmEJ9DoN4bHJFzOAAAAAElFTkSuQmCC\n", 335 | "text/plain": [ 336 | "" 337 | ] 338 | }, 339 | "metadata": {}, 340 | "output_type": "display_data" 341 | }, 342 | { 343 | "name": "stdout", 344 | "output_type": "stream", 345 | "text": [ 346 | "F\n" 347 | ] 348 | }, 349 | { 350 | "data": { 351 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAAB8ElEQVR4nGWSP0iVURjGf+fPvRYo\nkeLQH4ccHURCwxDiNiRNQtBfuNCUS0G4tTm4RUsobS1hQ5AVQlBD3ZKoQMK4UGNgCSVRVobg/c45\nT8P3fSj2jOc5v/c87/seKGQ8cPjaww8b0pf5UQzVHlN4NtF1rj7k4cdsY31k/Mkl93i18BzVK8uS\nWprdD9Cn2fsayj1P/2sphKBJ8M62cV16UHIX15VFZZrCW8DbsRBrBTchBSnoHt7kt0fSKyzguKwY\npZiWO40tSo2qjgdHLQtRUtQZXJlhcr0dg7WdnxUlBb00rmyM5m0cOG4qk6SokyVoORSHjQPT34pJ\nUtCSM+VI2D1cAeBWDgZdxbND3V+VJCVt9GK3jvMH6oo52MDsBG1NAhCPyjhb8oN5Cs8CaXtdQKwq\nSYpaaf+/rN+LAUTzr4sdT7sxa8qbabNv/J8uGRBNbLR3qqZjKqcW5ipLfFQ+gvEiT18rCyG00gnA\nLioB8B0wfpc/X8E5W/nd9FVn7xYzywCFzXAMA4nF1dSKtvHCRoB2AKN9A1hQeo4Fm01kNiJ6jAHr\njuxJhuDtMwQ4zkpZpgYV56pMazNL2pzMvwuOsRUpCxcATv2MkuYHty3gwMw3STeOn56T9Gn6aLkU\nA7jIwbHaQK/L1t6/XXj3C0ve3j/6HgLdtUioagAAAABJRU5ErkJggg==\n", 352 | "text/plain": [ 353 | "" 354 | ] 355 | }, 356 | "metadata": {}, 357 | "output_type": "display_data" 358 | }, 359 
| { 360 | "name": "stdout", 361 | "output_type": "stream", 362 | "text": [ 363 | "G\n" 364 | ] 365 | }, 366 | { 367 | "data": { 368 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAACaUlEQVR4nF2TT2hUdxDHPzPz3tvN\nvu2GfU+z2cQmsS1utSHUQEyiUBSaHoSeNBRiSkUIovbiRRCRosdAWjxVRPAPGBRyitF6UBEs0l5E\nCJiI0gYaEaRpkoaISXbfz8MmDfbL3L58Z+Y7fEcA0ITCntam6eLC28LUb/eWBQcgAFapPbj17yvR\n79E/+S3abw+uowlVGLuunrYC5n1en0KCws/nGrE1rn8kLoQIfPslqhp5baPtVdboO91S0328EUEM\nId5Pfvv5NhSUjhtFrLQ7V50Puc/EZ/fYBhG09mI2z/+RynSdwzQ5+DKTiRCVNUJESOf+bNpToe6C\nbfqpF0FNAdQEpXsg3TYM/UOR3/jBqua/tpZm81irDJ6fXgah5buGictzrnQo/2R4Hodn/bE2x2UR\ncTZVGijMmj7vHliaM4ckkpT0rwmcc+Dmyws4l8yWFwFcUr6rmklvO1aPgHm+FwSe5ykgHGixtLey\n9GpyobrNJ1+lkpWNKAAzi4K3LDP3wIFj8rY4jpAAjjv28ZLmPsRMQEhUPFMSERA1v8f0j8WaSsWB\nU8U5l3haceASdSsvdPz77Ma9JZKiTqRyzjW8mdhU69jcU87vu0N20N/xQw8ahhoGENZoLhDaToal\nmwq9Z+L3LreK8FYvqqPiNRoORFYLyOTbl0cU5dPhOvx8V7imybZbwK5HLaKa2OS1wx/5cVcWpS5E\n4Is46Dh6YkoSwDg0GjVHokLfTtTYkO38pXM9fl//epY6UkEhBj8u/jhUxNZD3Xyq+cml6FH9rN8U\n7Zt7eL8a6qoFq9DxTWs4Hr7+N3j2+Cmr7/AOClOw4DpOc3cAAAAASUVORK5CYII=\n", 369 | "text/plain": [ 370 | "" 371 | ] 372 | }, 373 | "metadata": {}, 374 | "output_type": "display_data" 375 | }, 376 | { 377 | "name": "stdout", 378 | "output_type": "stream", 379 | "text": [ 380 | "H\n" 381 | ] 382 | }, 383 | { 384 | "data": { 385 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABzklEQVR4nG2SO4sTURSAv3PnJpug\nbNZ1ZrIEBFFBsZJVEMXaB2htK9iK/0B/gK2RxcLG0k6wWRB8bCVitYIPUiwmK5nJbPaVLHncuddi\nElfi3PJ+5zucl6x/Kve6O1G7vdXtM3miEIdz+sODWiUMji8vFrX097c7cafbPUizIA3x728AFOeD\nMAjOLMyVbL+3HcedTQ0iImDFJEkWVagEfhCeKl3R4JwDygMHKC1pOk4SQJu6mpTAiVfXz9+/oUZD\n4xClPK1BTyv8WVg12m58bqxsinPZ38R0Hi/xDCfv3lPIVJhALKtNUSrlbVPbWehU/z0Oj3c4ZiHC\naxzKrOVBaIx0yvcoD1o2GqT86As5puxu4Wihckyn6OHYI9/EAh7kQDEsIPjYHOhx+iweyyUr/0PH\nTd+Iu3Thn8xTWAyuPUK8MY8xM7MVGb1Y82Hd2VtP9GHeZ2hAcfXr3sc7PBw698tHAE2dpxQ87SmP\nUqhQXHzTXMksTV2XsdmtDQa6kHpfbqvDXnT5+fhgJ4riOOkbA2BFBLACwlwQVKtLC0fL48F+HLej\nZNdkmqlrGbZa2RSO+WF16dz8kTTtJUkUdZvI5G6d/buMih/WasFiZXj5D5CEuaDpg0PdAAAAAElF\nTkSuQmCC\n", 386 | "text/plain": [ 387 | "" 388 | ] 389 | }, 390 | "metadata": {}, 391 | "output_type": "display_data" 392 | }, 393 | { 394 | "name": "stdout", 395 | "output_type": "stream", 396 | "text": [ 397 | "I\n" 398 | ] 399 | }, 400 | { 401 | "data": { 402 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAAAAABXZoBIAAABUElEQVR4nG1RPy/DURQ97/WVEBIl\nFrGJ0EgsErOwCBZLJ2KoDyAmk/gIRotYu1iaNAZJRSId6IKFMBikJhURjfT37j0GET99707v5eT8\nufcAwTicU0jhbogZY68pbPMwxOCwplTh0wRsQFQsm8QQJ/ddATGDmTeq8n0aNmAS8wOJAS5ujAaq\nGLijUjgHE2bFIpXCWl8UrFDZ5gYyEWy8SfW8HYVF5yoOxZwaovycjcTpv6QXNvLhAeBQoKiwBBdx\nxBET0s9GolrkW1Tlmf39p4ZY6hEj2NfQEcg+UD3rw1HVFSXb3I7EQQZlivBxLLKHxdQLxfMA2Ygh\n9pj8FBkhDtbolZV0mt+nwxJ9Z5EmYwwAa1yJvqNI6wBYawxWqfLF9XSR49XNIcAgt/Wp4nncnVYt\nUFo31dP6B1XZ2On95zhUXJgccdDm69P1VbVpyT/wG28OroaCpplGAAAAAElFTkSuQmCC\n", 403 | "text/plain": [ 404 | "" 405 | ] 406 | }, 407 | "metadata": {}, 408 | "output_type": "display_data" 409 | }, 410 | { 411 | "name": "stdout", 412 | "output_type": "stream", 413 | "text": [ 414 | "J\n" 415 | ] 416 | } 417 | ], 418 | "source": [ 419 | "base_dir = os.getcwd() + \"/notMNIST_large/\"\n", 420 | "letters = [chr(ord('A') + i) for i in range(0,10) ]\n", 421 | "for letter in letters:\n", 422 | " letter_dir = base_dir + letter\n", 423 | " random_image = random.choice(os.listdir(letter_dir))\n", 424 | " display(Image(filename=letter_dir+ '/' + random_image))\n", 425 | " print(letter)\n" 426 | ] 427 | }, 428 | { 429 | "cell_type": "markdown", 430 | "metadata": { 431 | 
"colab_type": "text", 432 | "id": "PBdkjESPK8tw" 433 | }, 434 | "source": [ 435 | "Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.\n", 436 | "\n", 437 | "We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. \n", 438 | "\n", 439 | "A few images might not be readable, we'll just skip them." 440 | ] 441 | }, 442 | { 443 | "cell_type": "code", 444 | "execution_count": 6, 445 | "metadata": { 446 | "cellView": "both", 447 | "colab": { 448 | "autoexec": { 449 | "startup": false, 450 | "wait_interval": 0 451 | }, 452 | "output_extras": [ 453 | { 454 | "item_id": 30 455 | } 456 | ] 457 | }, 458 | "colab_type": "code", 459 | "collapsed": false, 460 | "executionInfo": { 461 | "elapsed": 399874, 462 | "status": "ok", 463 | "timestamp": 1444485886378, 464 | "user": { 465 | "color": "#1FA15D", 466 | "displayName": "Vincent Vanhoucke", 467 | "isAnonymous": false, 468 | "isMe": true, 469 | "permissionId": "05076109866853157986", 470 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 471 | "sessionId": "2a0a5e044bb03b66", 472 | "userId": "102167687554210253930" 473 | }, 474 | "user_tz": 420 475 | }, 476 | "id": "h7q0XhG3MJdf", 477 | "outputId": "92c391bb-86ff-431d-9ada-315568a19e59" 478 | }, 479 | "outputs": [ 480 | { 481 | "name": "stdout", 482 | "output_type": "stream", 483 | "text": [ 484 | "Pickling notMNIST_large/A.pickle.\n", 485 | "notMNIST_large/A\n", 486 | "Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping.\n", 487 | "Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping.\n", 488 | "Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping.\n", 489 | "Full dataset tensor: (52912, 28, 28)\n", 490 | "Mean: -0.128478\n", 491 | "Standard deviation: 0.425835\n", 492 | "Pickling notMNIST_large/B.pickle.\n", 493 | "notMNIST_large/B\n", 494 | "Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping.\n", 495 | "Full dataset tensor: (52912, 28, 28)\n", 496 | "Mean: -0.0075604\n", 497 | "Standard deviation: 0.417294\n", 498 | "Pickling notMNIST_large/C.pickle.\n", 499 | "notMNIST_large/C\n", 500 | "Full dataset tensor: (52912, 28, 28)\n", 501 | "Mean: -0.142319\n", 502 | "Standard deviation: 0.421427\n", 503 | "Pickling notMNIST_large/D.pickle.\n", 504 | "notMNIST_large/D\n", 505 | "Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping.\n", 506 | "Full dataset tensor: (52912, 28, 28)\n", 507 | "Mean: -0.0574537\n", 508 | "Standard deviation: 0.434084\n", 509 | "Pickling notMNIST_large/E.pickle.\n", 510 | "notMNIST_large/E\n", 511 | "Full dataset tensor: (52912, 28, 28)\n", 512 | "Mean: -0.0701407\n", 513 | "Standard deviation: 0.428825\n", 514 | "Pickling notMNIST_large/F.pickle.\n", 515 | "notMNIST_large/F\n", 516 | "Full dataset tensor: (52912, 28, 28)\n", 517 | "Mean: -0.125928\n", 518 | "Standard deviation: 0.429618\n", 519 | 
"Pickling notMNIST_large/G.pickle.\n", 520 | "notMNIST_large/G\n", 521 | "Full dataset tensor: (52912, 28, 28)\n", 522 | "Mean: -0.0947753\n", 523 | "Standard deviation: 0.421783\n", 524 | "Pickling notMNIST_large/H.pickle.\n", 525 | "notMNIST_large/H\n", 526 | "Full dataset tensor: (52912, 28, 28)\n", 527 | "Mean: -0.0687492\n", 528 | "Standard deviation: 0.430445\n", 529 | "Pickling notMNIST_large/I.pickle.\n", 530 | "notMNIST_large/I\n", 531 | "Full dataset tensor: (52912, 28, 28)\n", 532 | "Mean: 0.0307373\n", 533 | "Standard deviation: 0.449686\n", 534 | "Pickling notMNIST_large/J.pickle.\n", 535 | "notMNIST_large/J\n", 536 | "Full dataset tensor: (52911, 28, 28)\n", 537 | "Mean: -0.153468\n", 538 | "Standard deviation: 0.397116\n", 539 | "Pickling notMNIST_small/A.pickle.\n", 540 | "notMNIST_small/A\n", 541 | "Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping.\n", 542 | "Full dataset tensor: (1873, 28, 28)\n", 543 | "Mean: -0.132516\n", 544 | "Standard deviation: 0.445818\n", 545 | "Pickling notMNIST_small/B.pickle.\n", 546 | "notMNIST_small/B\n", 547 | "Full dataset tensor: (1873, 28, 28)\n", 548 | "Mean: 0.00535639\n", 549 | "Standard deviation: 0.457048\n", 550 | "Pickling notMNIST_small/C.pickle.\n", 551 | "notMNIST_small/C\n", 552 | "Full dataset tensor: (1873, 28, 28)\n", 553 | "Mean: -0.141488\n", 554 | "Standard deviation: 0.441056\n", 555 | "Pickling notMNIST_small/D.pickle.\n", 556 | "notMNIST_small/D\n", 557 | "Full dataset tensor: (1873, 28, 28)\n", 558 | "Mean: -0.0492095\n", 559 | "Standard deviation: 0.46049\n", 560 | "Pickling notMNIST_small/E.pickle.\n", 561 | "notMNIST_small/E\n", 562 | "Full dataset tensor: (1873, 28, 28)\n", 563 | "Mean: -0.0599003\n", 564 | "Standard deviation: 0.456123\n", 565 | "Pickling notMNIST_small/F.pickle.\n", 566 | "notMNIST_small/F\n", 567 | "Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping.\n", 568 | "Full dataset tensor: (1873, 28, 28)\n", 569 | "Mean: -0.118168\n", 570 | "Standard deviation: 0.451113\n", 571 | "Pickling notMNIST_small/G.pickle.\n", 572 | "notMNIST_small/G\n", 573 | "Full dataset tensor: (1872, 28, 28)\n", 574 | "Mean: -0.0925151\n", 575 | "Standard deviation: 0.44846\n", 576 | "Pickling notMNIST_small/H.pickle.\n", 577 | "notMNIST_small/H\n", 578 | "Full dataset tensor: (1872, 28, 28)\n", 579 | "Mean: -0.0586728\n", 580 | "Standard deviation: 0.457382\n", 581 | "Pickling notMNIST_small/I.pickle.\n", 582 | "notMNIST_small/I\n", 583 | "Full dataset tensor: (1872, 28, 28)\n", 584 | "Mean: 0.0526455\n", 585 | "Standard deviation: 0.472764\n", 586 | "Pickling notMNIST_small/J.pickle.\n", 587 | "notMNIST_small/J\n", 588 | "Full dataset tensor: (1872, 28, 28)\n", 589 | "Mean: -0.151675\n", 590 | "Standard deviation: 0.449515\n" 591 | ] 592 | } 593 | ], 594 | "source": [ 595 | "image_size = 28 # Pixel width and height.\n", 596 | "pixel_depth = 255.0 # Number of levels per pixel.\n", 597 | "\n", 598 | "def load_letter(folder, min_num_images):\n", 599 | " \"\"\"Load the data for a single letter label.\"\"\"\n", 600 | " image_files = os.listdir(folder)\n", 601 | " dataset = np.ndarray(shape=(len(image_files), image_size, image_size),\n", 602 | " dtype=np.float32)\n", 603 | " print(folder)\n", 604 | " for image_index, image in enumerate(image_files):\n", 605 | " image_file = os.path.join(folder, image)\n", 606 | " try:\n", 607 | " image_data = 
(ndimage.imread(image_file).astype(float) - \n", 608 | " pixel_depth / 2) / pixel_depth\n", 609 | " if image_data.shape != (image_size, image_size):\n", 610 | " raise Exception('Unexpected image shape: %s' % str(image_data.shape))\n", 611 | " dataset[image_index, :, :] = image_data\n", 612 | " except IOError as e:\n", 613 | " print('Could not read:', image_file, ':', e, '- it\\'s ok, skipping.')\n", 614 | " \n", 615 | " num_images = image_index + 1\n", 616 | " dataset = dataset[0:num_images, :, :]\n", 617 | " if num_images < min_num_images:\n", 618 | " raise Exception('Many fewer images than expected: %d < %d' %\n", 619 | " (num_images, min_num_images))\n", 620 | " \n", 621 | " print('Full dataset tensor:', dataset.shape)\n", 622 | " print('Mean:', np.mean(dataset))\n", 623 | " print('Standard deviation:', np.std(dataset))\n", 624 | " return dataset\n", 625 | " \n", 626 | "def maybe_pickle(data_folders, min_num_images_per_class, force=False):\n", 627 | " dataset_names = []\n", 628 | " for folder in data_folders:\n", 629 | " set_filename = folder + '.pickle'\n", 630 | " dataset_names.append(set_filename)\n", 631 | " if os.path.exists(set_filename) and not force:\n", 632 | " # You may override by setting force=True.\n", 633 | " print('%s already present - Skipping pickling.' % set_filename)\n", 634 | " else:\n", 635 | " print('Pickling %s.' % set_filename)\n", 636 | " dataset = load_letter(folder, min_num_images_per_class)\n", 637 | " try:\n", 638 | " with open(set_filename, 'wb') as f:\n", 639 | " pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)\n", 640 | " except Exception as e:\n", 641 | " print('Unable to save data to', set_filename, ':', e)\n", 642 | " \n", 643 | " return dataset_names\n", 644 | "\n", 645 | "train_datasets = maybe_pickle(train_folders, 45000)\n", 646 | "test_datasets = maybe_pickle(test_folders, 1800)" 647 | ] 648 | }, 649 | { 650 | "cell_type": "markdown", 651 | "metadata": { 652 | "colab_type": "text", 653 | "id": "vUdbskYE2d87" 654 | }, 655 | "source": [ 656 | "---\n", 657 | "Problem 2\n", 658 | "---------\n", 659 | "\n", 660 | "Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. 
Hint: you can use matplotlib.pyplot.\n", 661 | "\n", 662 | "---" 663 | ] 664 | }, 665 | { 666 | "cell_type": "code", 667 | "execution_count": 7, 668 | "metadata": { 669 | "collapsed": false 670 | }, 671 | "outputs": [ 672 | { 673 | "data": { 674 | "text/plain": [ 675 | "" 676 | ] 677 | }, 678 | "execution_count": 7, 679 | "metadata": {}, 680 | "output_type": "execute_result" 681 | }, 682 | { 683 | "data": { 684 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAP4AAAD8CAYAAABXXhlaAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvUuMLEua5/UzM3fzR2TGOTfrVnbpTHP7jsRDYoFmNRtA\ntARCLJBm1xrYgFgi9g2rETtgyw40QiDBgFiMYIHQwKJGGrGZFasZDQzTA923p07VzZMZGeEPe7Iw\n9wgPT4/IzPOoW1U3/kd2zMLDw90i0v/2/77PXiLGyAUXXPDjgvyhK3DBBRf8+nEh/gUX/AhxIf4F\nF/wIcSH+BRf8CHEh/gUX/AhxIf4FF/wI8UnEF0L8G0KIvy+E+AdCiD/+XJW64IILvizEx/bjCyEk\n8A+AfxX4Dvi7wF+NMf792XmXgQIXXPADIcYolo5nn3DNvwz8XzHGfwwghPjvgb8C/P2np/61Sfnn\nwB9+wm2/NH7OpX6fgp/zsvqJIc3LI+KQ5mUNrIZ09aT8VrV8q/9sSH86Kf8Zf+PDP+Sv3VruDSeT\nCaA4TnJWHpOY5RIIgAPsidy8XWG+/Rrz7U8nKb3+P//G/8Ff/OM/ovkTS/snlmaf3L7sNpHnazji\n3z/563+Kqf8XgP9v8vpPh2MXXHDBbzg+RfEvuOAjsWh9vhARKQNSeoS0SGmQUiGkREq4ki1V1lBk\nHVlmEJklSofD4yP0Iam6DeAi+AghHuyJ4zs9tTnGFEiqeer9+WcP3zwiiEgikoDCo/BkeCSBDDeU\n02tJRBAQizWcX33p9TI+hfh/Bnwzef37w7EF/HxSLj/hlr8OfPtDV+AZfPtDV+AZfPvK88VCOU5e\nH5elCuS5ReuOXEdy7ch1R64bcl3wRrZ8Hb9nHT9QxQ0q7vCxp4uOv1RKHgw8Wtg56BwYnxqAEJ/W\nZqxJmL2OHEg/lqfvBZ6Sf/qtEuk92d4JMEg6fv9f+paSDo/D4rFDo6AIC+SfXx1SyO3/Xv6ZZ/gU\n4v9d4J8WQvwB8OfAXwX+reVT//ATbvPrxrc/dAWewbc/dAWewbcf8Zm5jz9vAA6QMqILS1lFqtpR\n1T1lrajqjKrOuBYtPzHfs7YfKM2GzDQE29EZyz+vE/Ebl1LrkwXgwkD8eKjJXN2Z1OgU4eXk/Dn5\nD83XQenBIbAIDIqeb//l36cLLY6AGVJGQA3KfxrjHf4Z4J+dHP9fTn7io4kfY/RCiP8A+Fuk7/zX\nY4x/72Ovd8GPDUvm/inCH5RfykCuE+mv1oLrteBqDVdrkZJoud7dc918oNo9opodftfRWcejDwib\nlL710PlB8QOE8LRWU8KORJ4q/bwBGD8TZvnxt0mmu8IjcEgsCoOnJ9ABETOknki2dwvioPhidsVT\n5fP4JB8/xvi/Av/cp1zjgh8zBMsR/en7MH2gE/ED1cpzvQ68uQm8vfG8vUnlFS3lwyPFw4Yie0Sx\nI9ierrXJpjeDn++h9wd/P8Tl2oTZsTnR51hS/Ok3SD0AEYFH4ojYCc1bINIDPZCTCKqO6jRvmsb8\ndb3ml+DeBb8hONWldwwpI1pbqtpytTa8vbH85Nbw9W3K69ghqgaZ7ZA0SNcQ2o5OOIwLRDsE9mZp\naurP7z7q7Fg+l06dc/iWSfGPO/4M0AEFAD2CDkGOIEOgkEgE4mxQ9HXkvxD/gt8qJMW3VHXH1ZuW\ntzcdX9+23L7r+L13LVXscHmPo8e7Dtf06bWweBfxJpE8jBH9cHgNpw3p0c+f+/9LCCz7+SNSoC4p\n/ujji4HuEClQaCQ5kgyFIiL3IwXmNfsBTP0LLvg8WDKylx9iIQNaO6q643q94+3Njp/cbvm9dzve\nfbOlDB0tjsY62tbRbixGO3rhaFzEmYVbzG41J/c0zRsGeKr2pwJ76dpxH9VXeCQWSY9EowZTPxE/\nI9+TXg2/zikn42LqX/AbD7GQZpCAlCBByDi8Trlae9S1IFtF8sqjS0OhO8psRyUfKehwwpOJgCRA\n9PgYsMFjQsSG5RpM00tM+SULIEzeP9GmDF8vDH33DjUE9zJ61EDHggxNICfb+/jJ2D/X5Jy62zIu\nxL/gC2Gpe+5UDvuHVoHIJWiBGNK0rK4l4ieWuOrxWY51in4raX8FOwLOBdrvAt37gLkL2E0kNJFo\nQCyQnqc1eKV2vg7T7rwMS4Ylx5DRkQ99BiV6IH4cfHyBRJ256nPOx1NciH/BF8DSGPxz0fsREaSA\nQiAriajVkFJZ1hK5knDVE640XuVYm9FvBS2w6wLWePr3gf59wNxF3Cbim0g0yZEfjeWlpufXgTG4\nNyp+jkEPPr0e7IeCQEEcgnsShZoN4HnOJnkeF+Jf8IVwypAeccJElYO61wqxzpBDEuscuc5QpUTk\nLTErcFmOcYruUdJ2sLsP2C4pvbmLmLuAGxSfQfGfm5zyegq9HpK4J77G7oN5xUDtEtCISXAvmwzg\nmfYxzGt8If4FPzjmhB/nscFxmOy4ARAyJuKvVCL9jUbeaNRNnnItEa4hugLvB1O/EzQ+snWRovW4\nTcQ+JLW3E8UfTf0lLFHmSzQAx6a+I8eikZRAOdyxYCS+IiMbgoBL4/Xnv+PLa3wh/gVfAHPzfjqR\nFZbDX8MxyaD4ErnOUTcaeVugbouUZxKxLYnbAr/Nsd3g429htwu4XcA3SeV9E/G7SGjY+/jTiTXw\nlCanGoDPh5H4YSC+RJNIXw39ASVyH9nPyJH4oe9/XivB6Ub0PC7Ev+ALYYn009HtY6fXtINsiODr\nwad/kxRf3Raod1VKQiK+L4lofJdj3eDj/wr09wH36IkGoolPcjEZlvvS7rkv86uMii/IgWLw60s8\nESbdefkQ+feTIbvzms5+vxfiQvwLPjNO+fbTASjTgbDLii9HH/8m3xM/+6ZGRgFUxK7A3+dYe4jq\nZ38asJuQTPowmPZDYixzWvHPdY59robg2NQHTUQPpK9wJMXP0OTk6Anx51OFluY1XIh/wW8sJsov\nxvJAORERKkPKjExKcinIZSSXgVxatDSUoaeKBu0smXGIzhF3Hv8YMA8BNnFxlZxpNP85enwukp+6\njiAi
Y0AFUAFyH8l9pHAeHyS5t+TekQWPCh4ZA+KJ4h8PC3ptzS/Ev+AzYN5nv9RDPpr3MhFeZEM+\nJFIupSIPiqJXlFtP+dBR/spS5DtKMsqwo/zuPeX77ynvHig3O4qmIzMWGeKTuy6FE5dqtjRIh4Xz\n51hyC55zFUQA4SPSR5QNKAOqB9VBFgKqDygbkDYgfUT4ZMGcr8X+6s/UOOFC/As+AecG6YwYH8qJ\neS9kIr5UIBTIQ1koQRYChfFUO8/q3rLKAis8tQ2UYUf+/o78/R367oF8syVvWjJjESEc3fUU6U/F\nv19L+vlnX9JIiBgHNyQiXUTalLI+kHWQRY/qA9IM77uYGrQXBx4uUf0LvijOjcY7ZXrGgfQiEV1q\nkHnKVSpLGclCR9H31DvLdd5zTcfadVw3PWXYIu8eUHcPyLsH5GaLajqksYuqOI99T4+/VOk/h+l/\npMkxKb5wEelGxRdJ9UNAGY8ySfGFjymdVfzpHS6Kf8EXx6kRelMsUErIQeU1qAJUuU9CebIARW+p\nd54rWt7aR942j7zdPFKGLXGzhYctcbODzQ6ajmjsYTWNyR3nHV7TmkzPWyL/0rlLn1n6/Mme9Uga\nQRhA+oi0oExAGUHWRbIQyPrBzLepYRAhpsbiSWU+3s+/EP+Cj8CST39m/P2cSkIkE1/loCpQNWQ1\nqBopLXmwFKah3nqubceb9pGbzff8pLijCI/4psM3bcp3La5p8cbiQ3i2w2s89py5Py8vXfOjEEfF\nB+kC0gqkFahekHUCFZKpr0xI7x8p/rx2S43sRfEv+KJYGoo776IbMVX7gfh7xS8T6bNryK4QqicL\nDUUvqa3nuu14IzfcyO/5qfwFRdhgjB2SmZQtIcQ9mefNzxIdzqn2KZyj1ktiA4Khq9En8isHysRk\n6neQxTBYAAFlkzsgltfwWrjzS/osEi7Ev+AzYd4AnDF8R8Ufia9qyK4gf4OkIwsPFFZRB8+Vb3kb\nNtz47/lp+HMKv6ENgTYEuhBoQ0SEQAgBF8LR1NhzMwNeG4k/h1ONzeL1xrYvRKQDaePex896Uhde\nfzDzpUvR/9M+/lIY83lciH/BZ8YzqiMiUqRlsmXmkZlDaofUFqkN17FnbTuuQsvKt6xMQ2V3VGZL\naR7R4REPwxq1x/vHnJtyO6+hmLRRcTw4IAIxHnImr2enHuHFMYKQ1F44EBaEAdlHZAsyRGQXkX1E\nGBD7xf+nFztn5l8U/4IvgnO0Wnr0j81/hSBXDp116FyiS48uenSxQxf3XNFy03/HT+R7rsUdJRtU\nbPDO0ImAI61O15NWq/McL4pxjvyRyRofY+eCTDOBx2PEw5Jc+3xYnsuHwzWem2t4KliYgntDxcfl\n9vrhSzXD+9Mv6GZf8sk3mocvL4p/wWfD0sN06rE/dU6iihRQSE+VddTaUxc9dbmjrnLqKmdFy7V6\nz7V4z5o7yrBBuYagEvEViROGY+KPj/yz8+0HkmcSMjWkSRnAebDDstvOH14jTq/GO/8VTvr7I/Hd\nkEbit0MKHIg//ZLnfIfFvovzuBD/gmewNEhniqUH7nTfvhQCLR115lnnPetCsK4E65VgXUMdW0px\nRxnvKMMdpd+gbIOXho6AIHFhTI5jxX9uvr0QoATkCnQ2pEkZwLhJEol/RIgLa++fGjG/FIPf46WK\nbzhW/Cemw3PjEk/jQvwLzuAU6c9p3fycY+9biohWjlXmWWvHTeG5qRw3tefmylPFFsUGFR6QfoOy\nG2TWEGRSfDhwYW4Fv8QJEYCSifhFBmU+pKEM0FnoJXRiqP1Aei8OPv8pLBH+pKk/rq49Kv5I/JZl\n4i/e7fWBPbgQ/4KTeMlw3ClOKf6YUgMghR8Uv2edd9wUHbdlx23dc3vVUcaWEBq8b/C2IfQ7fJYU\n34qw58A8TU39s8G10dQfiF/lUOtDIoKW0AykZ0J6y2n//VQfxuLPNFf8bkgtT338uT+zeMHXDd6B\nC/EveBbPkf6UZztV+sN8OSkCWnnqrOWN3nJTbLmttryrt7xbbSliS+8MnTV0vaHThi4zWHUI7p0i\n3bwz8ZQTogTkMpn4VQ4rDVdFSkTIZkofPFiZPre0Vv6ZjsuTUf0nPv45U3/qzyzi9R2RF+JfcAZL\nvjqc17ula4zEV0jsoPgd63zLTfGB2+qed/U931x9oIgtGxt4NIHHLkAesFnAy0BHwLAcvZ+mU4o/\ndtvNTf2VhusC1sNGzkfmvQfrIBep23BqdZ/y68/696ei+i1pz6y5qf+s4n8cLsS/YAGnzPyl8XBP\nn0Yhh7Xz5LScUlkGyitHWVnKsqPSLZXaUskNVbxHx44+QhchixNzG/b99wfH4UQUX7Cf6TtPOoes\nkqhKIisJlSBUEl9JbJU69Z2KOBnwBHwIBBeJalzR4ynmij8/flSe+/hTxR+Jf9SdJ5KfEQTE6d/g\n01qBC/EvOIHnesWXtXU/BF9HlA4oDZkeX0feFI5V6Sgqhyw9IQuYEGnayAbIHWw30G6ha8B24Ifw\nvYyHDSRPhhfFMPFvMtt3mvJCUlQaUefYOqepNL7OaSvNQ50TAnSPhja3dNLQYmm9pTcGJyyBcGZr\nrOO6iBPlI8Ufg3s57JfO7wT0InUpWAleQpgOVZre5Zy1dRoX4l/wDJbIv+ThDmfLiCoiugrkdRxS\nIK8leS1Y545aObT0KBkI0mNCoOkiGwO5hWabUt8m4gcDeFDxqas7r5EUifBZPkk65XkOspSIOkes\nKmxd4eqKpq6QdYWoq+TP5y1WtlharG8xpsW24KQnEJ714c9q8qj4Ux//sF3OQfGNACMHxZcQJcSR\n/NO7T5uUS3Dvgk/CKe95KWZ+/JgLCZmO5DUU60ixjpRrQbEOFGvBdWZZOYd2Huk80QWMizQmsnGR\nrE9K3zeD4vdJ8aM/KP6pcOJY9dHq0CXoIuVFkcrUEldr3KrGrq5wq2tsfY1bXWFX13gbiXJLiI9E\n/0gwithCyBxBdOfvzbLzc3RsycefjjlGHBTfCnCj4h/iJIeLnG6An8MnEV8I8SfAw1ALG2P8y59y\nvQt+0zD1nKdR+jB7/6A8QiazPl9FyjVUN4LqBuobqG7gSjnqnadoPLLxhCZgTKBpI1kDalB52w/5\naOrPiD995KcUkDKZ9JmGvICyhrKCqk7lWEualcZdVdjVNc3qK5qrtzSrtzSrr3A2IvmA9AXSZGn8\nfOGRWYcU54cHLfUgwAIl5z7+fOXxjmVTP07NgvHkpdWKn8enKn4A/jDG+OETr3PBbxzOKb7ksJTW\neG566KSMKM1e8asbuLqNrG7h6hZqYanvHfreoQgEE7Ah0nQRHkA14E0iuzcQ7MHUl8NzPT7q80D3\nGLUXKpn2ukxkr1ewukrJrSTuKqe5qrBX1zRXb7m/+nqfbB/RriA3GbqFfOvQRU+e7dBSHnF0+kst\nlc+a+qPiTy84tqcj6Y1Ipr4bFX809cPkA0t9Gc/jU4k/Daxe8
DuBpcd6nk4/cKPi6zpSvInUN5HV\nbWT9LqUKR64dGo/sPWEbMCEQ27TrjdpCnAzNi5OpeDIe7jpG90fsG4LBx1c6mfZllQh/tYbrNZhr\nSXulEdcV7uqa3fVXfLj6ml9e/4z3Vz/DdoHKKKoOqq2jqnuqYkeV5UghyThu7sZfYP6rncQ8qi9m\nxyJD0G9Qezsx9fc+/lJo83UNwKcSPwL/mxDCA/9FjPG//MTrXfAbged8/FPvixTc0ymoV64D1U3g\n6jayfhd4802kDBaJQxiP2A5RfR+xHYgHkJthR+xJUmHI49Ous3lwLy4p/lUi/fot9GvJw3WOuK6x\n62t212+5v/4p769/xp9e/wVMG7hu4WrruX7ouap2BF0hMk0uxOLMPDHLx/qcRCSRfixPGwIYVJ6D\n2u9N/VH157/Ai5udPT6V+P9ijPHPhRA/JTUAfy/G+HeenvbzSfnbIV3w2wmRZF2QQuhCsl8QTkSo\nPLJ0qMKT6YDOIzpzFMpRSU8xdFLHaIjeEp3DW0/sA7ED0R2C3FN9WyLXFFPiBynxSuIyicslVktM\nITGlpCvf0JVvaco37Iq3bPVbNvotD/ot9/orjPPE/BGye1RWkamCQuY4oZ7sXneObicbgZgGBkWR\nRgQGDlN+fQAXBd4LvJcEL4leEoOCmJH6/DKe3nlsPf4R8CdnanXAJxE/xvjnQ/5LIcTfBP4ysED8\nP/yU21zwxXBKt+YqMtLKD2F7AVmWxrBm4jhfe6iHReJdhNYh7i2i6BF0CN8Sv9vC+wZx18PGQOPA\neAhx8a5hVqN5d9qUWCEqTMxpooaocUHTBc02aO6Dpgtv+HP3M97b3+P7/ic85G9pshW9KvEiI7YQ\nW0nsJLFP5nZ04kk/4imHaFr36XtzSyXENN/fRTAhufO9GwbsBYENEhcULmaEmBNiDughOZ7+IhL4\nixyL6t/mFD6a+EKIGpAxxq0QYgX868B//LHXu+DXjbmGntLVaexcJJXP5GQ+qzoqi5VDVDuEigjv\nEE1MxKdF9DuEa+B9SvGuG4hv09M/WV7qCVEmtZpPzjmO8EsMBcQKF2u6WLOLNTqk1Pk3/NJ9zS/N\n19zlP2HTvWEnr+hliScjtoHYjMSXRCNmI+fOB/emx06Z+yPpfUjW/H7kboQecSB+VPiQ4WNOHDbb\ngoKn3QCvD/J9iuL/HvA3hRBxuM5/G2P8W59wvQt+bTil7s+RPw7RO5GIXuo0y6XUUOk08L2yUEdQ\nFnwL7eDQmg7xuEXYHdx1h/QKxZ9H8xcVH4mJGkdNF9fIuEaFNXJInX/DB/eWO/sVH/qveJBv2Ikr\nelHiYg6NI4zEN4JoBdGJZJ4vcGrplxvrPqXhWI4xpZH8NsyJDyZIbJTYoPAxI8SMGHOIaQPtwx3G\n5m8e6nweH038GOM/Av7Sx37+gh8apwJ0c8wGiIh4UPxKw6qAVXlI2iTSZx14lRS/t4jHFrItwmxh\n08ODSaTfE/9Y8ad3Hx9xOH7c56RPryUOTYgrQlwT4g0x3hDiDSHc0Pk3PLprNvaaR3XNRl6xY0UX\nS7zPkK0aFF+kSfl2CLT5p7/Nks001kPwlPzTeo5mvgtgI5g4jNsBTBTYoPaKH2JO2Cu+nv0qw7Zk\nrwzyXUbu/WhxKmoPT0eDTXIZkz+/n9NawXUN6zrlWQ++A79FeJVkzFuE7xD+EdFvk2nfuJTvUh5N\ninTN/fixPCf+KT/fI7FRY2KNiWtMvMGEW2y4xYRbOv+GxlW0tqIRFS01TazoQ4n3ObQy+fht8vGj\nFUQvTs6OW7KPzhnco38fmJj6IhE/E4OpHwU2JlPfRUWIeVL8I+JPuwKWRhecx4X4P0rMH9OR9CfM\n+2kSI/GzZOKvykT6N9fw9gpkB+0W0RTQK0QbEY1FtC2i2SK6RzA+Ed34IbLlEYOpz0INRlN/3hAs\nqX7y8TVNrGniG5pwQxNvacM7mvCOLrzBuBwrNIYcEzUmaIzL8TZDdurYx7cisTOIoxud686bm/fz\nhiDGCW1jCuxlgv16gj1yT3xPNpj7o39fcPi05zDe93XDaS7E/9FirvRTxZ/POJ+YliImH79Qg6lf\nJqV/ewU/eUNSpHvoNcJLaCLi3sF9h7jfQrsZ9qofHN15mmFuysNT1Z++F5D0aBpqNnHNJt6wCbc8\nhndswjd0/g1BSAKCEFOXWfCSYCUhk2SdJDRq7+OPo+fG9fbmv+Cp8inV32v1YOpbEunHIfs9AoPA\nciB+ICeST0z9+QLjF1P/gmfxkodjPGc6YGfsMlJIIZEirZ8nZUAqj1QOqQxVNFxhqL2htAbd9WRN\nj3rs4b6HzryqJrBAoHEpbDkE2g8L/BAKgaoksVD4LMNITRsLHl3JfV/Siyp1h08j9ZMUnUzvDSlG\nkYJ6c9PiEzAPy+0bAMYFRAUOieOY+IiB+HFcrG++q8DLcSH+jx7HnWHLwb5DrmJO7hXaerTp0B3o\nxqB1g87vKWPD1fY7Vu17rro7VmZDZRsybxAxLPJnyU+GZbUX8niT3aM8B11EnPaYwtIVPUXWkscd\nqt8geEgKXigohtFwQoEa8kwlJk5HEM136pjVaR7Qm/+i537xKflH/U6kF3ghCSiiUESypPZ74lv2\ng3nGiTvxovgXPMGSITrFEhWnvuNkzbyoKIKkcp7adNSdpW4a6lxRZ5IyNhS795TNe4rujsJsKFxD\nHgxw2OJqXpOl6ML8deRA/Kw6pHxS1nnESE8nLI3oKURLHrYo84iwA/FdDjEHkR0m6gvSmlxLpF8Q\n1CXSn6v30rGR9NMwXepAkHhkcklERhQZUUxM/ThM4h/H7keZfphXWCQX4v/O45QnuvSYzv19RXpE\nsn1ZAtoHahtY95Z1G1jnnrUKrIWnjDvU9g7V3KG6O1Sf1sVX3kA87Gb7XHBsqV8hkqol80Ty4hr0\nmK7S6zyLdM7TWMvW9WjXkNsdqn9EuIfkUIcCRJG26Nbh0KJkWWLhKcVfwLzrbumXXfrMKcX3iJSE\nJAhFEBlBDObMXvE1hH6o6NTneTkuxP+dxjl6jZjr0TTgN44P10OeI2NA+56VtaxNx03XcZN13Iie\nm9hRsCNuN8TmgdhuiGZDdA3BG2IMTybzLjkWYVajo/Kg+HmVCF++heorKL+C6i3kKtI2nm1jqZqO\nomnJzUD8pk6BSVGlbWrzACXDlNcszQKak/6M+3yuv35J7afvTQcjTcN0SfEFQUi8UASpkuIfET/9\nLYhqWJJrqOQryH8h/u8sXkv6OUbij91IKZfRob2ldp5133GjHrmVj9zGDbfhkSLusG2DaRps12DN\nDmMbbDCYM4o/7VOAp8tLHCm+Tjtr6+tE+vqnsPoa6q8hF5Htg2d1b6joKUxLFgfib0vSVjoT0jsJ\nMSPtWT3U7hT5F3j1UtLPy6d9fA4+/oT0UeoZ8TMIWYpNXBT/gqc4Rfop5sb1kuJXQImMBu0baut5\nozpu5IZb7njnv+ed
u6NgS9MZms7QdoamN7TOEL3BxrQhxpTgU9JPj88JNSW+mpj65VeJ9Nc/g6uf\ngSay0Y4Vlsr2FLsxuFciHnUivh5Iv5LgMggaREjL+goO3s0J0sfjl09ej8dO/dJjPpJ+XLY7zcQV\neDEx9aUiygxG8qMhjLP0hhbKX6L6FxzhaVQ+4Zw+jY/xKHk5SfFLoEZGiQ6S2oVk6vPIrf+ed/af\n8I35BZpHNiati78xAWUC0QasD7TxeKHKaS2nij92HC4GyxYUf/V1Iv2b3wcdA/d4amOpdkNUP+xQ\nRiO2WVLGEqgl9Bk4DbEcFH/21c+Y+0vkXyqfej1X/PEWSf3H4N7o4w+m/kh8xuDk0BshZGrQhHhx\ngO9C/N9JLLX+c+N6rvKTITAyIKVHSo+QDikdUlqkNKxlz7XquFINK7njii2r8EhtH6jCB3TcYSz0\nNi2VnTvIPMjwtPl5rvZz85/htRASISRRSoKUOJWSySQmlFi5wlHiQp7mtbtIMA76HmIHfQnWgLPg\nXZoYHwfnYm7Sn6nsktIvnbN0bDrDcDrFZpwW4KXAK0FQkqgUUSlQo8KrNJDBTXz7i6l/wTGWnuKp\np31cliqic0uuBVoHcm3RuiPXmlxr3oiWn8Zf8ibcUYcHVNgRYkcXHI8+knnYOmg89MOIXB8hxsNi\nmUuDgyMHAiz1L4znZVGBzbGdpt1p4kZjPmh2peY+1zSh5Ltf1rz/UHO3qdnsapquxlpNCPL4plPm\nLeWnxgV/AqYWz9K8uv2iO4MLH/IUgtivwwGHbYIlT1fyuSj+BQeckrGnT4mUAV1YqipQ1Zaqbqlq\ntU9r0XLT3/HWfKAyG7J+SzAdvbE82oCysPNpDk437Cvvw+GuS3Gy8Zmd9+sv9fPLIBG2wHYVcVdj\nNjW7oibLapSoaWLBL3+l+eUHzYdNweNO03Ya4zQhyqfRwnmEbV7+jKRf+r7j7UbsV9RW4PMUfgh5\n6sGL4/ycDPZ7iU0v+ApciP87i3Pm/tKAnXRMykCuE+mv15HrYZHKq7FMy9XugavdPfVuQ7bbEejo\nrGXjI3Iu0BAgAAAgAElEQVQg/JjssKRUjIeRAU/vesinDcN8+JAARJAEq3FdjdmtiQ9rYrYmyDUx\nrmlCwd2d5O6D5G4j2ewkTS8xVibFn67PvRRWH02Ppcn/nwnTdsfPjjlSrM6rieKXHObnRNJMnjnp\nXxnfuxD/dxpz3Vx6H6ZPtZQRrR3VynG99ry9cXx14/nqxvH2xrOipbjfou+36HxLxo7gOvrWEX1a\nVcKGZOLbuanPU6LD0377cVW56YLSYw9biBJjNaZbYXZrTHZDL28w4QZjb9jFgs19YPMQ2GwCmybQ\ndAFjA2E60+YU+ecO+Bck/3Qq1Lhg+dTUHxU/FgP5q8nJ04s4ngZDnsGF+D8aLDUAJ0x9banrnuu1\n4aubnp/e9nx92/P1raGOHaJqIG8RtAjbEtqOTlr6EAn2sKzUPh+IM66LP+VcnKVp4zASfxiuQga4\nIDEuKX6zXbMTN+zCLVt7y667pYkFu62h2RqaR0OzS12LxhlCMOfN/CVT/xV+80swvfX4WkyOuYmp\nP5r4Ia0klog/YqzbfILeC3Eh/gVHSKa+pao7rt80fHXT8PVtw8/epVTGHpsbLAZrDbY12I3BSov1\ngeDYLy81zmoby6OKjw/9krsNx8G9+dhBGSQ7q7FdTSvesIk3fLC33HfvuN+9Yxc1pm0xTYNph9Q1\nGAshusMNlsg/N/W/QHBvvD2z32H0+aemvs+OFT/WkwuMdR6n9V2If8FTLJn8y0/yQfE7rtc7vrp5\n5OvbR372bsPvf/NIEXp2eBrr2bWB3cbTF55eeHY+4uxA2jiJysenA3Tm5v00yDVX/On4QaJEWI0V\nNU1Y82Bv+L675Ze7d7zX37BDE/oNwWwI5pFgMoKB4BwhdMcVmPepjab+UnDvC6j+nKeCSXBv8O+D\nhlAOil9P6re04eYrcCH+7xxOjYCfQAJSIvaMTLmQEbXOyK4F2SqiK4cuDaVuKbOGSj6i6bEiYkRE\nDo9wiBEbI32IuHB4Dkf/fKyVnJRHzEcSBCBKcZQ4ykucLLGypJclTajYmoqNq7jvqrSstjPghs33\nXAZuGN029nW/NLg3Jf8JvKQv/9xn56+DSFMHwhDc2y+8k8ZPHSKA8+21z4VyFnAh/m8tzo3FXxoq\nMzxmCkQuEHopQXYtED/piKsWl+VYp+i2iuZXgkegcJHmu0j7PtLfpW2vfAPRpFGvS03OPIS49MDv\noQQhVwStcFohtUIMS3dHrWjFmj7UmKCxQeJ9wAdDCA2ETYqIuUfwOwhtmsUWh+12iYn88470cZC8\n5UAsx4vN/aXvMx576VDeI0yDG1Pij8E9MyTNRfF/XJj3ds/LpxBBCkQhkJVE1ApZS0Qth1yRrQTy\nqiVeFXiVY2xGt5U0CB47MCaRPhEf7AZ8E/fEn/fTn6rRnEf7RkEKQqHwVY6rc0SdQ50T65xQ53Ri\nTd/XmF5je4kzAd8bQt8Q3SYpvN8NaSB+mBB/vNmpGTL7PjW+SF/+s87WtDtjPlViVHxD2lF3jHhe\ngns/JiyZ9M+PHhdyUPdaIdcKtc6Qa4VcZ8h1RlaCyAvIND7LMS6je5TsOsH2PmK6pPT9HTPFj0+I\nf0rtznJICqJW+DpHrAtYF8R1QVgX+HVByzXdrqbfaexW4naBgCHaBvxmMOvbgfQLij9WYB7UW1L8\nV0T2n2sXlv4y03EMRyc+p/gdhwmTo6l/If6PBXPCT//y006iWQMgI0IL5Eom0t/kqEnKNEhXEp3G\n+8HU7xSNFzw60MOutvaBlG94YuqfGp23NGDniZksBUEr/CpPpL+pCDcV7qZC3VR0rOjvK8x9jskE\nDo+3PUHt0h5zVg0qP00D8WM8DtZNu/HmxJ++dyLAN315jnPTYcjz32GR/GN3xjSqOSV+y4H4F1P/\nx4S5eT+fQjZ9SmfhM8mg+BK5zhLZbzXqtkh5BmJbErcFfptjusHU3wr0LlLsIq5J5r1vBov6hab+\nc6SHA/Gpc+K6wN9UyNvVPrWxpq9qTKbTSrQu4FtDkM1AYJkUPtjjnBco/tTUf0bxl4J6c1Iv4Vny\nzxX/eHJkOrEZjl1M/R8jlkg/H9L1VGfFqPi1Qr5RSelvC/J3Bdm7kkyA/L4gonFdjnWJ+LtfCbLv\nwTwmkgeTzPto2CdOBPemWCL9EVEG4sc6J7wp4KZC3K4Q767h3TVdqOhyjUFjncS1Ab8xRAkxuDQC\nJvrjxJhPiD8nv+Wg+GM78cJ+/LPqPXlvPHeJ/EcnLvVjThV/HMI7mvpLG+g+gwvxf+twyref9pRP\np7wsK76cKX72riT/phrWoiiJncbf5xibovrZrwTyT0Fvki8vBlKIeeL5cONJ0pMUH60I9cHU53ZF\nfHcN37yhDSU9kt5KbCtxm4AvDEE6CN2g2lPbPB6/PteVt2TqP0P+qfK/pGvvn
Nm/xyni10NdRuJf\nFP+CAybKL8ZyBJGeXKEUUmUoqdBSoFVES4+WjkIaymAoo0U7izIO0XnizuMeA+YhEjfLk2fmzsa5\naIMQw2dEKu+XtQe8FgQt8XmGz3J8pvGqxKsKL2taShoZ6UQKblsiPgZC9GlK/UtmqS1158378c9E\n9V9C8BHnejUWzX1BWldDRUQeIY+IIiDKgKgCIgQoA+hI1DGtGqTihfi/m5j32S95l+OTPK7GkrFf\nlWWShJJkQaKNpNx5qvuesrJUeUOFpAwNxXe/pHj/geJuQ7HZoZuOzNj00J2o3bn+6qMg2PBgZ+PC\nMbPcrwQmk/RR0vcKu82wdxm9ztMOOT5n+12keR9p7yL9JmKbiDeRGF7R7zYfuTf18edq/8opr3Oc\nI//4/pGPLyJCRaQKyNwjtUcVjqy0uCCh8ETtiLmHLBBlJMoXfu8BF+L/RmNO+PkxODwyE/NeSJDD\nQoxSpbJMyzQJBSp4CuOpdp7VvWWVe1Z4VtZThh3Z+zvy93fkdxvyzY686ZL6z7axXqrt3HSd+/RS\npKpkWVpQRmWTcg6uhiZPO9g4I4lbhdGKloydyWl8zu59pHkf6O4CZhNxTcCbsLDN1QkynDP34Tiw\ndyKi/xIs/fVOVWd6npCJ+CJLxFfao0qHKh1ZlITCEbUn5oGYBVABRCSKl1fyQvzfWMyJvuQ5P/GQ\n2a+/JhX7LWaU3pelimSxRfcd5c6yyjuu6bi2Heu2pfQ75N0D8m6DuntAbnaopkNOFP8lFuUssrAv\nj4qvMtAacp1ynaeyrYA87Rbb9ZLwqLBktCbncZezczndXaC9S8TvNx7bDMHGvUVyJtw2c/+PzPql\nsfofSf5Tf72l3+j4g0MAVgZkFpB5GBTfoyqLChIxED9knpgFgoog46vapmeJL4T468C/Cfwixvgv\nDMe+Av4H4A+APwH+KMb48Ir7XvAiLJH9lOJPciGTqS912jRClZCVoEqE8mQhoo2l2nlqeq7tI2/b\nR95uNpRhC5sdPAz5ZgdNB8YOpvRxjc5F8ecciwyxgEHxtYaigLKEcshNIXCZoIsSaRRxq7A2o9lm\nPBY5W6cxG0//4DEbj9mQuhdNTAPdX9JTfk7xR+K/cKz+OZwLcs7jBEe3EAuKXziy0uGCQhSOoD3k\nnqACQg2m/mf28f8r4D8H/pvJsf8Q+N9jjP+ZEOKPgf9oOHbBZ8GST/+SnvGhPCr+uA61qtOytFmN\nUJYsGLRpKAmsbMe63fJ2c8eN/p4iPBKajtB0+KYj7Dp80xOMJZzw8ec1n6v9yLFICuKJoWq5hqqE\nujqkPodOCLZRonpJNAq7y2hFzqPIebQ5rhHYRmAbcDtmPv4Lyf8aH/8j1X4sLxF/brMdB/ciQk0V\n3yGLZOqrKBGlR2if9gbIAlEGhIjPjiGY4lnixxj/jhDiD2aH/wrwrwzl/xr4ORfif2YsaemSFw1H\nVBMj8UfFLxPp82vIrhCqR8UdupdUNrBqe67FI2/lHTfyFxR+gzMWu08OayzOOGw4bU7OH+4p+Y/4\nMwb3BsUvB+Jfr+BqBZ2CrRVoK5BGEZzC2IzWZjy6nEeT440YEkOKBBOJwU9qc6KzbN4avcTH/wjV\nP/XXG6twqoZCxENUP5sE90qHqixZkPjCpQBfnkx9odJn+DX4+Lcxxl8AxBj/iRDi9iOvc8GLMX+E\nlnR1SGKU1QnxsyvI3yBFR+bv0V5Res/Kd1z7R974O278Lyj8A32I9CHskwiRGAIuhCfP/6m++qWa\nBSaKPxK/gLpOpH9zDRrBfSPQRqKMJO4UtlG0Tc7jTrMxOTEwpFSvGCQxCGKYNowvJP9rfPxPwFID\ncLKGYojoj6Z+cWzqU7ghqp+Ce1EFwitID58vuPfMXX8+KX87pAs+DqcMOrHPpASpYtq3PnfJVNQW\nqQ1X9FybnivfsfIttWmozI7SbCn6R3TYPhnNem58yDzuteTXHyUBUUiCknglcXlKVktMITFxhemv\nMNT0vqI3BX2b0z8q+kdBbybf9YmuntyG4/h3m6r9a3z8M0/53Gdfqt0S4Zf+mkJEhEh7G2TKkylH\nnlny3OCDQGQ2bQOmHFF6gkimfnT/D/APT1dygo8l/i+EEL8XY/yFEOJnwPvzp//hR97mx4jnNPTc\nuRElBLn06KxDa4nWAV326HKHLu9ZxZav1Hd8Jd+zFndUcUMWGqIzGJEUfZzuPbV4x7udmoAz8mVe\nyyklExSOnBaNEBqPpheardTcC00rar4Tb3gv1tyJN2zEigaNEYKAG642H10zv9OSRMfj4pz8H+Hj\nn4soTPFShy29F5EEFIEMT4Ylx6AxFPRAxGKQWMTwF4oEAgGZ/UUC/9Tkan974Q4JLyX+/G/9PwP/\nLvCfAv8O8D+98DoXLGKJ6CdjvrP3j3OJpFCeKuup80Bd9NTljrrOqauMVWxZyfdcifdcxTsqvyFz\nDVEm4nvGEXEH4s+ptVSzcOL4tIZy+N+KAiEqvKjpRc1W1GhRk8uaNla8FzXvRc2dqNmImkZoDIKw\nb1rmcjxSbzpXYWT2rEbz4MN8hh68ysefk3+Opcj+kiVwHBAMSDwZjhyHxuLo8XRARNADhoglpPmJ\niFf6Ii/pzvvvSJL9EyHE/wv8NeA/Af5HIcS/B/xj4I9eddcLJpjHgOdYDAFN8uMZ8FIItPTUmWed\n96wLWFeCdQXrlaCODYW4Q8c7Cn9H4TZk9kB8wfFktbmmPrd81qLpenRc4tCkjsQ1QkySXNPGijuZ\ncydy7tBsyGnIJ8SfsvFU7cb35eScmam/FNx7hY8/Jfr8e88bgbktskT2w3lpSTNFQOGOFN/TD+eN\ng5UdAY8nIAmvIv9Lovr/9om3/rUX3+WCEzhF+vkjcYpK85HyEikCWjpWyrHWjpvCcVM6bmrHzcpR\n0SLjBukfEG6DtBtkNhCf8OS5n1JtukLuvGbnzNxpOY7EFyu8WOPFDV6m5MQNHSUbIXiQsBGCjYBG\niIH40/2iln6X8fcYa7k0WYljFZ9PzX2Fj79E/nM4pf5PzxsV35Pj8FgCPXFQ/EhPwOCxONyrSQ+X\nkXs/IJaIfu7RWaLWSPrDtq5SMCh+zzpvuSk6bsuW27rj9qqljC3eN3jX4GyD73d41eDVwdSf82Ku\nqXMeLCndkmkL4JE4oemo6cWaXtzQiVt6cUsvb2ljSSMsjXA0wrETjgaHEW4g/jTiwOwOI8HnujrT\n2FOKP7Yrr4jqz8k/x6ng3lKeysuKHzBE+qEqhoDF43B4FP7zK/4FXxrPkT7OykvBrMOatlIEtPLU\nWcebfMuNfuS2euRd/ci71SNFbOmcobOGtjd0uaHLDE4aenHQ1KUanDLvp+nUAz4mj8Si6UTNljfs\nxA1bcctWvGMn3tFSYGSLES1GdBiGMn4gvp9871PknjdFk5ov+fhT1YenscNn+vHnTtgp++w1iq8G\nxQ84IgboEYPiBywOi8V+FOnhQvwf
GKfa/nOaOn/MpsTPkiEtw6D4O26Ke27LD7yr7/hm9QEdGx5t\n4LEPPBYBkQfcMPor6chpOo1pPiJ+KqBTSs6/2Rjcc2haah7Fmntxw4O45V6+415+Qxs1QTwSxGbI\nk28f6AYf386+82iDTNcjiJNjC83Wc4r/ihV251iKxIzlpYZwfu6o+HKi+BELGCQdEXA47BD4y3AX\nxf/twSkzf66pS/oxTuJgyKcpUJWB6spSVYaybKl0Q6W2VGJDFe/RscUE6OJhGjfxuFdrPs9+icRL\nnWX7hkAKomSYiifSZEEpkFJAoQkrjSs1Rhd0qmRLycZX3NuKLuhh73qddpXwKo3Bj+NdwqQG5x70\nM27TKR9/bDeW5uPDi8l/DssKf1w5OSh+HBo6gUHSowZTP3n9HvORZj5ciP8D4oT/efT+3MwHoSDL\nY5q4odPgvEwHlA4orXhb9KxKS1FZVOmImccGT9NGNkRyB9sNNFvomrTnhLcQHYh4vDfDkjE9dzSe\nPG5KQC4RWoJO+XGqEVqntfKLCLlD0IPZweMDeA3bLewaaNPkIJwn7bw5r9UUR00PT6U6Pj31Jd15\nHzlyb+kjp9qPuZWQVD915wkcEoOiJxtMfUuyznIi2dDnf1H83zoskf/EA0tSeVUEdAV5DbqO5LVA\n1568lqxzw5UyFNKiZBrZZUKg7QIbA5lNpG+20LVgujTePfpE/DnZlxbPXCT8CAkUEiqFqDOoM0St\n9rnIKoQoEEIhiAjhIHYIswOzAZtD06TUddCbtO1umBJfzmoyZTKzY7PfcG7mTxX/EIF89fLa53Cq\nEXjarI+kTzdNpLcoDIGeMBA/hfkgJ6IG1+CMfbOIC/F/EJzynudDTp/GjKUEpSN5HSjXkXIdKNeC\nYg3lWnCd9aycoXAW5TzReawLNCaycZGsT0o/JtODs6eJf0rxT0KKpPZ1BuscMUtS1QirwSqEAawD\n20O/S7tz9HlS+q4bFN+kc16k+GN+ahJ9PGRTxZ+3I5+o+EuEntfwHMbgnsQN/r0lTrrzetKchnEF\nboWYuGQvawIuxP9BMQ99zcNnU9Knp1JIyHREryLlGuqb43Sleq52hrKxqMYRGo8xgaaNqAbUoPJm\n2Fpub+ovEB+OST9/veikpP5ExCpLZL8p4EYjbjTipkDIGrHTiJ2CbUTsHMJ0ydTf5dDmSeWNgd6m\nNJr6J6MOp4Kg8zTBPLi3RPzPYO5Pa3OqplMI4mCye8TEx08b5SXid0hyBDmCDIlCICbh05fgQvwf\nDOcUfxwq8zScJmRE6YiuI+U6Ut/A9W3kakgrYSjvLcW9Q+GIxmNDoOki8QFkk0x7Z4/zOfE5kZ96\n9vfkH1bJpVawzhPpb8tDEhXig0Z8kOkz1sFuUPxHCU2WTHvrEuGtS6+PFH8JL1H8yaEp6eeXWCL9\nJ+Ic+edI/npA4hCT4N7YnadRaCQ5igw5/FNDg3FR/N9QLAXx5unEiDPYK35eB8o3gfomcnUbePMu\nsH4XqOjJtSHDonpH3DpM8MQ2YDcRsU2BvOhTHvyhLCNPHp1T5ZPfTAKDqS/eJMUXtyXiXYV4VyOo\nELpAoBAmIrbJx8dI2AbYZonk+xSSfx8iqavglLpPy1PyT8+dnXZqGOIzq+y+BEt3fkk5Kf7B1FeT\n4J4ciF+QoVHkZCgUiuyi+L8deM7HP+1lC5ki+LoOlGvP6sZzdRtYv/N89U2gDGkCB8bCdojq+4Dt\nIjyA2CRlP0phUp7U8LnaL2Lw8cXg4+8V/12N+KZGhAqBBpNMfXKLiBJhImwtbDIO620vpKPGcE6f\ncyb+LMA3n1U0PWW+wu5nDu6dKo9II/f8vh9fYcjoUeQk4udocjIiGSARSCSXqP5vNUSSdcGw8Lw8\nYqmoHLJ0qMKSaU+uA0VmKZWjkpaCNg12iT3BG4JzBOsJfcB3EdEdr4mveBonP4V95FmkpIYqHvGy\nGIifK0SWEzONy0qCqnByRUdBT4aNGTao5L47TzAW+gj9dCTBvKananfOn3/G3F/6ghPCzxvJ12B+\n+ks+Pkb1x7H645DdnIxsoGtBQBOH7jyBQg49AS/HhfhfFEuG8lJIbHwS/WDLi2HNaTGUxaG89lC3\noDqiD8TGEu89seiIdETfEL9riO9b4l1P3Fhi44hmMJlnWLJiz6nTuLhPNl8TfyjHlcDnw2NrctxO\n4+8KfF7iqWl8wf13is17SXOn6DYS0yi8kZPlsZeIPA14ngnaPYepL7N0eVJ5agXNm5/nzPhPwRjc\nS/PxHTmWtGFYRo4CEvHTtnqJ9AqFILso/m8GlkJjS4b0VGZEktBMgs5SkEwrKCbllYNaErOYwvEt\nxAdHpCf2O6LbEd93xPcd3PWwMdA4MP6pws1qcSr4NM2lSF2KWX5YC3+6Ln5YQa8lHYpgctxW0+UF\nPRWdrWlcweN7weN7ye5O0G0EphE4My6dNY8ynDPdl8ovxPSj0xm8JNIzkn6B/OPdxEL5c2Acuaf2\nxFcUKPRA7bSzoSBHkqGQw+i91+BC/C+C5zrDpufA0VM4Kr5WUGmocij1oVxZYhXSSBzfEpsIOGLf\nEbc7otkS73q46/eKn4i/rPhzLIXJpq/3y/lloIvJuvjDGvm+FmxziY+K3mSJ+JRsbcV2W7NzBbs7\n9qndRMzQ03C8IcYSvabDdae1fSX2PsukPPX5Z+a9HBPHxP9yiv90IY4CSTHcp0CgB9JnZCjySTfg\ny3Ah/hfDsz3eA2YPsYhJ8YsskX1VwKpM6aoEbUBZUC3Rq6T4xhEfe6LaEc0ONob4YGBjU3mB+KfC\nYEuG9DTtd8LJh3Xxy2Fd/CHZQuC1pCcDk+O2BZ0peNyW3OuarStoN5HuIdBuAt0mYpqAM3Fh+e4p\nvUZZ/ox0m/70w59HMPPrD28dTQk6FR74HBin5ab5+BadwqGUxIH4chLVz/fj9V+DC/G/GE5F7eGs\nMS1j8ue1Sgq/qmBdp3RdQ96Da4m+AJelLaqdA9cR/ZbYb5NPPyR2dmLqx/3dXhomm/eIT1fJ3a+L\nXx+SyaAXkh0KYXKc0bSiYEvFB2oebYFp/CHtArbxOOMne98tSTIsh+I/Ak8CF0+PT4k/V/yZZ/BZ\nTf3Dmnuj4gtyknlfkhZKSf34GTk52X6SzmWs/m8A5kG9kfQnzPtpEnEw9QfFvyoT6d9ew9srUB3s\nttAUxF4lU79xxKYnNjtityUaDyYQjU9BPROG4N7zpJ8b1k/IP3Q0ZHky9cuB+NcruLqCXgl2RpJZ\nBSbDGU1nSx5NxQdbszEFzji8cbhhvf5xbfzDuvjz3+jUsU9oBOaXnpJ+zAeyz318OCb/5/Tv0/2T\nqZ9CdqCJFARK/KD4ifQ5ejIt93U1uBD/i2Gu9FPFPzOjfSR+oZJvvxqJfwU/eUPaMP0eTJGmrLYQ\n7x3xviN+2BHb7bDefNwPfEnbR8eTPv45M//JCndTH18fiH91BW/W0CK4R5KbLCn+TtNtCx53J
R92\nNQ99QQyGEOx+LfwYkpl/CO5Nf5tpLU+5Sx/ZAMxv9f+z9y6htixbn9cvIvIxZ8615tp7nXvWvd+2\n6tTXsClS2NRGlSDYEQQb1VDEF2JHELShVqdARNDGhw+wU2hhCYJip7QjUo0PUbBh4RPtVunn8d59\nz1l7rfnIR7yGjcicM2eunHOtvc/+7j37MSCIyJw5MyMz4x//MUZGjBhdfvr2psCH02FWH0EP6a8r\nI1XfkyMUREoCC3wP/JyCov/G7w/Ova+M/3uVuU94544ZPg6NXEYqQyuDVirFx9cxJePRWVq//kos\nVbAsnKVoLVlt0VsLDxZp7cUaTbengB/vHx877r6U0ojWRJ1i4wejcZnGZhrLio5ruriidRVtu6Cp\nC5ptTr0xNHb4KDadHfOcD2Ra4/HdzOkx0zuak0k3J6mLOwbDSLPfUogT+k9ox0F9Uwvho4JfIiZC\nFiELQh4ihQ+EaMiDJwueLAZMjGh5P8cefAX+R5I51R6eWn9jjpifB2ckJw85hROKrqNoFUXtKco9\nRf7AQmpWu+9Z1W9Ztfesug1LV5OHFI5yuOo5hfjcKIJx7ebuwvRlgyFKjo0F+1gQQ0EXCna+4MEV\nNFLxvb/hbXjFfbhhEytqKbAyjos/Dt49HRM7rd25bmpOwZ7rwuZkTq9JA/cHNVsT+5FxQoFQAGWf\nxrE5x2d8P/favCgkLUceUtJOMFYwXcS0YGJMZZd+0/1x73vxr8D/STLXQM/BanzcaZy8cdKiKYNm\n6YTKdlSNoyprqlxTGc1C9pS7txT1W8rmntJuKAfgS5wF/VRJnuPWuVrOfZTUohEpcbJEQoUNFXtf\nkfuKzFXUUvHWp3QfVgn4MU/AlyGM7QCbc4tizD3fcU2n4P8QK/sp6Iek+uAWpg92kSO9cy0Bfxye\nbFyjj2LrDxZfFLQXlBO0i5hOY9qIkQR8bWP63fcdhch7Xfwr8D9Y1AtzmP1uhOGoPB4VSS1QhEDl\nPevOsm49631gbTxrFShlh9ndY/b3mPYe3W0wvsb0H8Ln3v1cjcYyrd007NbpkF6NSIGNFV1cQ1gj\nfg1ujbg1tSy5dwX3Puc+FGxCTh3HjD9drO4S8Odqeg70YxheYv1zbD8Evwj9kNlI1rN+0QN/YPxx\nLecm+f1UUSKooFBe0D6ircZ0kaxTmBgS6K2gXDxoBiq+n6nxFfg/SeY49ByXThvpAPyxElmgJVKE\nhpVzrG3HbdNwaxpuVc2tNJSyR/YbYv2INBui3SCuTg6zEfAvdUNztZu7s6GWw/fr5JHQWClwcYWL\na2y4xfk+uVtqWbDxiseg2ATFJipqUVihB/5TsJ0H/pzX4X0Y/7kOYOq6TGyvCD3jC3nP+uk7+hH4\n4zNMp/P/JBESiKOgA2gHxka0VZhWjVT9mFR9L+gPmEH4FfgfJJes5rnXP260A3cOwC9JTWoJLNDi\nKYKncjXrruPWbLlTG+7kkbv4SCk7bFPj6hrb1rhuj/U1LlisxJPPS3Oq+lCLuRoOtRvkGLv3mHvp\nGV8q6rCmDrfU4Y7a9UkW1N6nFDz76Kmjx8o4Lv6T0QGcgn+o0dRmP2fEPKfun7vbeVX/OcYf/3O8\nqOB5wLAAACAASURBVOhHkcHHGwTlE/C1E0yXgJ/FQdXvTYED478f8r8C/4Nlzhs9B/5xPm7cA5RK\nEugrYIUWSxFrKifcdB23ased3PMm/pY3/gdK2VK3lrqz1K2lsZbaWSRY3BnGP+czn9ZsnAYVf2D8\nwShhpOrX8YbHcMujv+PRv+HRvaGWEusarG+xoUkptlgJvY0/XQ1nXH7O8zB3/CV1/5ycU/VT1Juj\njS8928th5FzJKegzntr8P0UU9M49hQ49o1uF6SQxvkSMlaTuu97Oj/KV8X9/csmbPwersaq/BFbA\nFVpaipBTeWFtLbfsuIv3vPG/4Tv7PYVs2NjI1kY2fRIXcSHSToA/Bf8LRhGc3M1g2w8eiAKIPeO7\nWLGPax7CLT/6O37wb/jRfcdeCqLfEsOGGLbEqIkSiNL2jG9najb3/KYy7TjPgX/6n7ny+HxjhT11\nAEevfmL8YeTc2Ks/ePbHUf4/igh9jIQj4yevPon1Y0QPXv3BuRfoGf/l8hX4P0mmfHrJPj2m9G0+\n9MmjlUNpi9aWtW5Zm4ZrU3Nl9lyxZRU2VPaRZXhHITtsH4ou95B5MAF0PA+ncT6Uz+kjAn1cfE3U\nCtGK2KegNT6rcGVFVyxp8iW1rthSpbj4dkktRR8gswCfp0FGQfcnnlPpL5Vf8uzn7vJSB3HesTdm\n/ONS1UKu5GCUlRwBn8lRI/oYbH+QSAKzB+VAWVAd6EbQUdCtoDpJ+w8fRxQpIMLL5CvwP1imD/kS\n8I8N0OhIXti0dn0u5IWjyFuKYkteLLhRDd/Kb3gVf6CSd2Rxh0hDFy3bIGQBdh7qAG0AF/twdD1T\nzAXVmFOgz/KD0cTc4IsMigwpMkKR4QtDVmQ05oZaX9Ooik7nWAVBO2JsELtJYO92YJsUydM7iLFX\nJwbzZlyr59yP02c8sPvc/+eAPr7rqcx3AoqIVoJW6Tt+ppLKn6ve5JHe79EDfxq96CfJWAFJC+gc\n42zW/e9tv+9kOMQ5g25evgL/g2QO9HPHPG1s2kSKwlEthWXlqJYNy37t+uUyY60bXtsfeGV/oLIP\n5HZLtA2tdWxtRLsE+ton4NuQwtKNDYjh6pfs+nH5ZJ9WhDKDZUmsCkJV4qoCUxXoqqTVa+qwpvVL\nOp/hAnjvCKFBuj4uvq1Tcn3s7hATG4nh/SfaTJn6Euinx0/vbnzn55lfqYhWEaMEo3rgKyh64GfS\ns70cJ++oc5d6Xxk7EAbLqAOaPkV64CuwCpyCoPpb+Qr836HMNT54anemstaRorAsK8f6Gq6vFeu1\n4rovX6uG1f6Bq/odVf1Att8SaeicYxsiqgf8kGwEH3vG52hrzvHoJSgcjtGKWGRIVaDWS/x6iRql\nRl1Tt2uaZolt8kTqjSe6Bum2KZbeIXZ3B95DCCPGP8fC49q9pCN9zqk6Pdc5G//pd/wj8CNGSwqC\npEngp2f82DN+/MjAhxcwvkr7LOB74B9U/Y8EfKXUfwT8I8BvROTv7ff9FeCfB972h/1lEflv3uPW\nPhO5ZGPOO50S8ANVFbheB25fB16/9rx+HXj9OrCipXjcUjzuKLItGTvEN3SNJQYBl8BuYx+Feqzq\nczn49Dnb/uQ4rZEiQ1Ylsl7C7RVyewW3K+T2ilatqDdXtJsl3SZL2qZ1RKlTpNxGJ7B7d0yhV/XF\njK469S6ce77TZ3np2Q/Hxclvc86/KesLg4NPkdR8rQWjI5nu7XydbP08JuBkg2k1hOk6cxfvJWNV\nf3AmDIw/AL/p91nVq/pqpOq/zM34Esb/a8B/APz1yf4/EpE/etFVPmt5zn12KlpHitxSLS3r647X\nrzu+/bbj228t337bUUkLywbyXrfzNbFpaJWjDZHo
EtB9HOUD48vTUBWDTJs4PIUBHBk/VgVxXRFv\nr4h360NqWFH/OITTytPSdnVv43cRWp0YPvQVCyHZ+Cer4Ayte07lngP59JlecmPCpXUJJnc7um6f\nerYfVP1Mp5SnpQCTqq/6MIghhU7QCtTUxfChMmfjt31qGNn4Kv0+qPofm/FF5L9XSv25mZ8+qiPz\n05NL7DN1MB1bRGJ8R1XVrNc1t69r7r7d8wd/UPMHv6pZSIvNLBaHdRbbOOzWYrXDhoj3CeRDiqPy\nOCwUk/JUzrbPHvi+Kgk3S/ztFf5uTXjzGv/mNa0saXJNg6ZzGldDyDxBImI7aNQIS2oyqX9g/DHA\nxwecY/Y5wJ/Lx+eaQ+IF0PeOPaUiamD8AfymBz/HQdbDasMfje0HmbPx55x7ViVV3/M7de79i0qp\nfxL4n4B/RUQef8K5PlGZA/9QPuPcO9j4DdfXW17fbvj22w1/8KsNf/bPbihjx47Izkd2TSRuI20R\naVVkHyLO9WeXkQNvXOYp4KdN/RwUIKn6A/Ddeom7XeHubnBvXmO/+4Y2LqiJtC7Q1RH3EPDGpbBZ\nXehXeeo/ckk/BEhG5ZPnM6jkY6A+95yn+85pXOfS9Ek87QCSV39k42sS6M3p7AozOPh6xv/oqv7Y\nxm/6C5+o+v0xfxqMf0b+Q+DfEBFRSv2bwB8B/9z5w/94VP7DPn2qcu7hjphKkxaW0Bo0KC2H8LRm\nbciuIa8ixdJTlh1l0bDI9yzNhoIO26uSuj9tjOAFut62H8fCH6cXa5uaFBa7/6MencSvFeHaoFYG\nWWb9OvYFXVbS6iUtJY1ytCisOJyAj4EYXB8CDE4H+Y6f2zj/05CXnnsK9vFEWw/Ko3RAmYg2gs4i\nJhOM6Z+97l8nnCxK8lHknI3f0gNfJefe4NH3Ko2TiBof/w7wJy+6zAcBX0R+O9r8q8B/ffkff/FD\nLvMzkHPq/BT8A9QiGIXKQRUKXShUYVAFh5RfCfqbEq4LQp5hQ0a7N+x/VGxQFA52v4b6t9C+A7uF\n0IDY1Lims/incsl9FQFlQB/ql+qU6pn2+WuN+kYRV+Cy5EQIu4D9IdDiaLym/d5j3wbcfcRtIqEW\n4kmU3Dm7fZi/NjD93ESd8YSd597LnAN1rO+cS0P9poDv6VXZfoC8B+MhD8mFn0tKcByjO77dj2Xj\nw1PGH+z8jFMb32rwuh8gZcjU3w38PaMT/c2zl3gp8E/amVLqVyLy637zHwP+jxee5xOSOVVyrjxI\neuNKgyoVeqnRlUJXQ57KeSXoVYlcFYQsx/qMZqfZo9m0KYhu/VtofgvdO3Bb8A1EC0yAz0wthprM\nqfcCieFLRbYEU6k+HctupYlX4K6gM4CLhF3E4mlaT20N9m2ge+ux9wHfA18svZ05bvnTb/YDQqZA\nn07ceU7GKHvqRzlv1Ix/G655GIfHAfjKJdBnIaUiQiHDZIWnVsPHnJc77ZMGVX8YIigk0Hc94zt9\nYPxkYr0M0i/5nPefkSj7G6XU/w38FeAfVEr9+b6Kfxv4F97v7n7uco7dn+NZAa3RRQK7WRvM2qDX\n+lDOF4I2JWQ5weTYYBLjd4rNg6LooLtPbN++A7uBUIO4oy1/DvyX2H5ISoMuEtDzdUrZ+li2C43L\nFW0GZmD8bcC2gfbB0bQadx+x9wF7Hw/Aj7aP7ffEjo6TWsEp6GVm+7l3M3YATt/N3LnnvBrjGXkD\n+A0wML47Mn4uCfjDDJ3hkmNL4eXm9WUZqjtV9cexWgdV/8D4BuJ4HuXz8hKv/j8+s/uvvejsn7Rc\ncg6NZdKotCT1edWD/TYju80OeV4IxpXgC4LPcS6jbTV7r1m49BXPbRPg3fbI+OdU/Tm9Ywyv8b4D\n4xdgVgnoxa2ivFUUt5ryVtEVitYrCg86CHghtAEXPI33NI3BbyLuMYHeb+Sg6vNE1Y/PbJ9TxS/J\nJRX/nPNu7mnMqfp6hvFjz/gRylH9xqD/qLN0OFX1B4fOUd8efc5TR1U/mo/L+F+uzIF+HBd/jkUE\npZO9nBhfk91mZHc52V1OfleQmYjelciuwO8ybJvR7A27nSLfKooafJ9C0+c1RMchysol0I9rNwsr\nnWx5U0G+TqBf3GkWd5rlnSLLNPVOke/A7ATaXtXfBdq9o9lrQp3A7msh7GXGxh/XZlD359T+OaPk\npe9myC+p+ufOO0btgLABXTM2ft6DfsGplRA4naXzsRh/sEDMqGrDb3BU9cc2fhyHcHtevgJ/VqZO\nvbGCPZYZqA2MX2n0Tc/4dznFm5L8TUGuIvqHFG0ntDnWG9qdYf+DxvwIxTbZ82IT2MUetwfGH9fw\nUo0mNTth/KxS5DeK8jaBvnqjqN5ojNJsf0wrt5hWRs49T/Ojp9kaok1AP+b0qv4csKfPcmqQzOXP\nyRTwz4F/KlMbf0TZyvWM79IInbGNP56MP/QXGR+X8afKyNSsgKNH/8D4/edS+Qr8jyBTph8z/lhl\nHfL0xpTWI8Y3Sb2/K8jflOTfleQiaEqkLQiPOTZkNHuN/lHB/6soNgwDyNInoqHcTxd/Sfu6xPii\nVc/4ybYvbhXLO8XqjWb1ncaIZoEib0E/COJGwP8TR73REOnj4fOkfFqLcW3mfjtX+0tyDvCM8lkj\nZ7JvrKePETZi/Mwnth8Yf1D1B9Dn6fCPPi93DPyhyoNyIv1+37O9mzL+V1X/A2XKUHPSj5FTQ8Mz\nyfOGgDFok5FpQ64VhRFKHSi1Y6EVi2gpxZIHh7Fpxk2sI34ndI9C3Dz9Nj8un5NLMDuBigKMRvIM\nKQ2xyghXGX5t8K8yfFzgH1b4RYUvlnizwEmJ8yWuK/Bdns6k+jMfPmg/pw2N9/1UOfel5cz1ZXx9\nDWoILZJxdIj1xnSuYalhoaHUoyl5nC6DMOdhfU/wzz6NvroSQfq5TcNaKCH2EQOCInpFDBoJBokZ\nSAZyjJP0nHwFPjCv2sNT9TGSGo4iLQivejd5nyuFyjRZVORWUe4jy4eO5dKyzBUVijLWlN//QPn2\nHeX9hmJTk9cdxvonUVSm7qqpojw+7rk7O7ZZTeyXYmz7BZc9JR0FNSW1WvJjds1DccV2ccW+uqK7\nusbfXCGvryArOAxTG+eM8jkVXp6r6fvIBOxqCv7p9fpcJL2nAcUqe1rOFBTLtD5YUaSlzPIsve+o\nToMDP/fx4Mzu54wQGQNdpTE6Nhw/6bugcdEQoiFIRpQckXHQ1uflCwf+HLtPu+6Z16YM6Az0kPdJ\nmbS8lHhy6ylrz/LRs8o9Kzwr51nEPfnbB/K3D2T3W/LNnrxuMdahRqvFngP9OSt5rqZzXKjQCBmO\nEqHCs6SjYs+SnIpaV/xorngoVmwXK+rqiu56hb9ZEXdXUORHwOshZ9QJDJWZgn6uhh8qM3c21jjG\noJfRNQ/AN33HbY7lIdeAWYJZ9J8/8rRemNKnwJ8D/wV5wSHHbmoEfK/6uTiAFbAoXFT4oPHREGIC\nfjwAf/H
MVZJ8wcCfNp5zOZwwxzDWVRswRd84jrnKIkYacttQ7i3LrGNFw7VruG4aFmGPud9g7jfo\n+w1ms8fUHXqG8Ycrn3NdzeXTu3vqDFREMhwLPBUd1yiu+nRNo1b8mK14KCq2i4q6WtFdVbj1Cqkr\nKPOjet8PTT4pD7UZg33coj+KqNHrmQH/cJ2hAzjpeAZtbdwB6NE+gAqkD6YtOYd5B4Mv8AOA/5xM\nO/Aox9mXh3GF0jN+1PioCTE7Mj4FIl8Z/4XynrbiAD2tepYvIVskdjALyBZoEzAi5NYmVZ+Old9x\nXW+42WxYxB1qs4fHPWpTp1S3YN1ZVf8c058D/xj0apKkV/XTMowrItdEbgjcEHlFo654MBWP+ZJt\nWVGvlnTXFb5ZIl0Fi6w3h9Uo4H4PfsOkdxoB8KOBfpAZ8I9VpCnoZfR0hs770ImPylHALyEswPdx\nA33We88vqPoTmVoA0/d1zkIYz7pMNj04SckKWFE4Ubho8GKIMSPGr6r+C2XOph8Df84q63PVs77O\nEtObBWQVZCvIKpRxmGjJbU2JsHQdq3rHdfGOm+JHFnFLrNt+WesW2adytB6JkalMQT+UzzWkqaEy\n/SYRUfie8S0Vlmscr7B8g+WWRl2zzRbsiiXbxZK6WtBeLfHdguiWUPXAN2puBbCn4BsD8KNq+hOw\njx/O+PrQo2l0kFI92NWJfwatwQfoltAtoCuhy9PY5ahHgS+4CP6xmfYhEiVNzIq6Z3zVAz/2jC8K\nLxovhhDz3sYf4gB/VfWfkSkXwjz4Z1qSUhPgryC/gvwaZTqM7MmtofSRZWNZqR3X6h03+i1l2BCs\nJ1jXp7ROfLCOEOUEwIyuPIencx3A9M4U40UfNEKOo6SlouGahlc0fEPDt9RqTWNK6mJBvVjQVCXd\n9QLvFkgowWank++MOnWO62nl5LRyH0vGbH/utQ3Xn77KYR6tVpMy4Dzsl1AvYF+AyiFkabDMeMzP\n3NwiuQz6c2bZ9JiDqt879xypz8lV7+ATjRdN4FTV/8r4HyQnninOtCIOdr7O+pEwy8T4+TUUNyga\nTHggd4YyRJahY+X3XId33PgEfBdjnwQXY3LqDWvET2rAaJv52jxh/PGxA9MPmnjsbXxLScOKHWu2\nvGbHN+y4o1Y3WFPQ5SXdosRWJZ0rcKFnFGeOgfazScpHFbjUO/1UueSWeU6vhnnfxGHgnoeySiYc\nJcQ8xREcnHvnGP+CnOvMz/1NpO9j5Bhn4zhRT+FQOAye3qvPGPhfGf8DZQz+6T7gsG59mqetc48u\nPLp06MKywnLddVyFllVoqLqahd1TdnsKu6MIu8PZx6M+x/M8pur6WC6B/kXYEtL3YacIXZoH4PZp\n+m/3CHYBbi+EOkIbUNZjvCaPCRlx8HxPKzvdd6mCP0Uu+WDnrj33/7n69/tL1ZLTYcSixCEhEHzE\neaFzIA6sS4qB7yOMDVGQpjLtjOfez/RvB8ZnBH6O4E8RtVP0/6A0URmEDFHD0h+j7/gXYgR8gcB/\nDlbT405fjVGaXAeKrKPIdxRFoCgtxWJPsXigouFGf88r9ZYbuaeKG4pQo7TFE1GchkMfN4hzwJ9j\n++dkGBk/PVeIgliPqhvMZkd2/0i5KHCZSePYyj1+l+P3KblR2e9yYjCnqv60PNfKPybw4SnYp/te\nYgfNjY7SsLYbqu33FNu36N09cbvB7mrqvWXTRvIOthb2DhoHnU9BT4e4h+MqKJ6Cf1o9eB78p9OJ\nFEENoO+Brw2iMo4Dk3ppzz/CLwT4c0A/B63xb0+hqNGUOrI0HVUWqIqOarGnWuYpUVOpt1TyllW8\npwobctcDXyVVfrwy/FhTnA7HnavhS/A03j/t9GOMiHWousVs9uSLkiIzaf6JD5hiS6gzfJOlvM6P\n5SZHgp536o2X1T0H/LlH/b4yB/ppee7642uPQT/pCFZux3L/lnz/Fl3fE/cb7L6mri2bJpJZ2A/A\n9yna2CHS8aSqc2w//o2Z/eMqT0F/mESsFEErotb9ikcJ/BzA38uXDfxzrWOQc5bxvA6rlaLQgcpE\n1nnHuoB1CeulYl3BkppC7snjPUW4J3cbiuzI+APgp6vDj6921vZjvj0zU54DvZCAj3XoAfiZSXNP\nfEC1lixfENqM0GbE1vRlQ+hSfgJ8PZNPvZDM5B9Dpq/yJb3k+Ngz6v7S11TNPUVzj2ruCe2GrqnZ\nN4nxjU2Ab+aAL8fLTQ3GufdzSeWfzh88ML4iMb5WRKMT+E0PfDMB/gX5zIH/EnqAeXt+rAcek1aR\nQntWmWedO24Lz+3Cc7tw3FaeBQ0qblDhEeU2KLtBmRp6xodTFe7clc9h5jksTVlmOgM+qfqJ8XVm\nyIDCB2g71K4hzwqiNSl1hmD1cdsaYlRzj+VUZb5kzH5Mxr+079KDmrPv+/2L0FF1G4ruEd1tiN0G\na2vqzpJ1EeMS2Lse9Afgx/nbnfMYXTLdpprcNFyIhwPbB62JmUFMAr1kX4HP84bgVM61kLFP3KCV\np9CRynSss4bbouGubLhbNtxVDQsavK/xrsbbGp/v8abG98Cffvads/EvgfwSpqbbg50/7gxOVH0g\n70GvdzXmYUcwGeI00WvEa+KkLKKe2shjdWWuInOV/FjyEnfNc5bcaF8RHZWrKVyNdjXR7bGupnYW\nXET3QB8WMxnKc6r+3KWHfXPq//T3ORs/2fc9+E1ifckMkhnI+qHjL5DPGPiDPAf6c/3tuDUfwxpp\nYq/qd9zkO26LLXflhjfLLW+qLaXUNM7SWkvTWZrM0maW0Kv6w0zLOU31qSvxvKk8rfmlOxrOHSEN\nErIOBRgfoLXoXUNW5ORFlhxFQaeptqFPUaV9QR3PPceal9wmz+1/H7nUf7/0emd4wcRAGSxFsOhg\nicFig4Vg8SGierU+TPORqv/cpaesfu73KdsbRuDvVX0xmpil2ZbkX4HfyxT05yyvc/x7yvaQoVVI\njJ91rPM9t8UDd4t73izu+a66p5A6rV3fRbZtROeRYCKtjngVsTw1L8fj6QeADrUZq+2XWGIq4+7r\n5D8D4/uAaS26XxI7JQVKJVYXQNRxZeuhPH6MnNn+FKW/ByWClnhIUSJWIl4irSR9XuAwLmiaDzJn\n549/e64Dn7L9SQBwNSxd3qv6mUFykwL/f9nAP6fmTylpjmPph22nBS7VJFWLwPLKsVx2LMuGKt+z\nNFuWasNSHihig5UUtCWLoAWUHBvGAOox4M/haXpHl0D/YuwJqBBR4cJH3g8572ckc4629wmiOwX9\npQ5gWn7Wxlcp7kbQipgpYq6QIsUOkNKA+eIj8EzBPwU+nEIulZURsjySFQFTCFkhmCJtZ4Xnpmi4\nXrQsFpZs4ZDM42Kk6SKbjZAH2G2g3kPbpAVjgwMJHNa2O6cVTx1x0zs55/H/EHBecnV+iWAfyzmf\nyiXgfuzrz9n4mn6ekE6jiEMOUoAsUmLBixH9GQMfLhuhYzm+Qq2F
rIwpFkMV+xQoKkVRadZ5w7Xu\nWKqOrA/R5GKgbiMbC5lNoK93Cfi2nQc+zIN/KF/yR83tfx+54Nu6eM4voUOYMu5cGo47x+Qfqw5j\n0A/vKnn1U6StmEMsQEqQJbDkpQF4Pnfgw2Xgz6v5WREpKmG5hsVaneRXWcu1a1l4S+Yd4jzOB5pW\n2Hgh66CtE+jbJi0V7yfAn8rU0zB3B+N87reXyhT0c+D/kjWAMejG+dhMmx7/nBn2ITIHesUp4w/A\njwsS6Fd8Bf5ToJ+z8cfbybY3hVCsYLEWVrfC6pY+F650y1XdsdxbTG2R2uNsoO4ieg+mB7vtwLXz\nqv7xai8H/bl9HwLScw75ua7xS+sEpiq2mpSnx35s0M/Z+OMO4GDjG4hZz/aLnvEr0nK+L5DPFPjn\n7Ps5yJza+VpDVghFFVmuI6vbyPoucn0nrO8ilWooH1oW7zoyHNik6jdtJG4EXSege9vnDoI9Bf65\nbwnjmswB7jm1/KWgfG6OzU8596cuY4fa3HsYj7QcjoeP2wGMQa84dSx6lWKCxOypqi8VL52V+7kC\nf5A5xRZO++rTV6a0kBWRsgosbwKr28D1XeDVm5SWtJiiw2Ax1iE7j5NAbCN2C2wTyIcUA4hPZS3H\nq58L0P1c7S+p5e/D9jPD1J9E8v3Qa3zKMgb91JN/zls/bH8smQJ/vD8wYvwB+APjr/gK/MsurHF+\nCv4B+EUVWKw9V7ee9Z3n1RvH7XeeRWwRWsRa4tYheWJ82wqyEdhwsnTyoSxHxo8nV3xqQz6Nk/cU\nmM/Z5ZeeypmJaWcdj+97jU9Z5lgWzgPxT0vm/ArCSNUfe/VLko1f8dLp+J8j8M8pxeNDdP+BXaWy\nokeloJYBtQBTRrICsiJQZI7SWJa6o6TF0xKkQ6Ilek9wgWAjvhVoLw9l/yl3NS3/FCC+lMW/JNDD\nvHr/MeSlXwGm9v10f1C9R9+AnDC+QCWwfFl39IkD/xwc5hx5ADHFVTMKMpNiqGfDtk7b1x5Z1Wlw\nRIjE2hIeA6Hs8DSYUBO+b4hvW+TeIhsHdQCbhnPOfZabvsipTT+u9dydwdPzTm3K92H84dpDuV8t\n4ITRvmTGn86Kmwuh/xI5b1A+79Q9935EpVh8h/UzSkEWcvTqL19Wt08Y+HPK7lx5kP4RK0mgL7M0\nxLHIUuq3ZeWgUkgWkeiIjSI+BAKW0NUEXxPftoS3HfHeEjcOqX2KhBhl1oswVROn34PP3REzx8zt\nex/H0vBkhvpNQf8l2vVjGTv34qQ8fW8v6Qim4B/2jfNL/xnkBPwaxIDkwiG47sD41WfN+OfY/Zyi\nNnrUSid2Lwwsi9O0KGBpkSoimSWGhthAIBI6i982ZLYm3veg7xlf6oDYiOqDZU6vOgXVJdBPy9Nz\nzcn72ppTZ+G5aD3TenwJHcD0c940nXPGTt/BnCPw3PHPdeyDRFLYP9EgmSTWL0AWcvTqrz4S8JVS\nfwb468Av+2v/VRH595VSr4H/HPhzwN8G/pKIPL7oqh9Fps33OdD3oqVfJimDZQmrBVwtjnnRIcYh\npiGGjNgogg2ErSWYGm/3yMYRH10C/cYfVH3iebaeeoFnanYWdFPt4ZI89/tcN3mp2/wSwD6V6eCd\nOTX/OdYefpuCf/qfS6w/t++g6vc2vhSSHHpLSaBfXajQSF7C+B74l0Xkf1FKXQF/Syn13wL/DPA3\nReTfUUr9q8C/DvxrL7vsx5A50I+Hx8z1wb1rPdNJtV8WCezrCtarlOcpAqX4EvFZWgLaB4K3eN+Q\ndTVS+8TytYd9OFH1p1cc8kvq+EvA9RJWeF91nwv59PgvRaYq/FyCl4F3+O2cU+99VP2DadYzfuxt\nfClByt7Gr+TjMb6I/Br4dV/eKaX+L+DPAP8o8Bf6w/4T4I/5nQF/yo0D6C/x7QB8Row/AH8Fr6/g\n1TViWmS/ReqS2PWMvw+E2uL3Nb7d9ysbRKTPhzSshHOu25nW/iXs+hwjvLQBTuWcdvElgfyczKnz\nP+V5v68pNvxnzhErjBlf0ki9wau/4uMBfyxKqT8E/jzwPwK/FJHfQOoclFJ373Ouny5Tph8z/rnh\nMf082THjr5aJ6V9fwzc3QIHwQLSLpOrXivgQCe8s4V1DaOrE7JEE9D4Na9kPNbvUYJ5j3JeCq1uk\n2gAAIABJREFU/X3UzpfIV9Af5UO1qT+tOhz8RQpEy8GrL4WMvuN/XFUfgF7N/y+Bf6ln/vd4Nn88\nKv9hnz5UXsJP405BODUBcjQZWuljfHwT0dqjjWUhliuxVMGycJa8TWPy1dYij5bYuPe2gy/Z+XPn\n0f1Hf9X3Z2o0ukYURPrQyigCOm33ZUGhJq9iuv1Vhnei+vLorYigGQXj6MtK+qcsHNa3O6xjP9r+\nndRbk+JDmJjWdig8pvCo//W/Q/1vf2tyj/PyIuArpTIS6P9TEfkb/e7fKKV+KSK/UUr9Cnh7/gx/\n8SWXuVSDSfmcv3T46DE3IDWVjWTkoaBwUHSWotlR7B1FUVPkDyykptp+T7V/S9XcU9lNisEWLFqe\nzs+a64bGtXquk3ji4NNpgR6dpxWah5WadZ5W7IpGY8mxqsCS41WOJ8eqHEtOQPd3Kz3gZbI9d/3P\nketl5tlL/xRSlKGhHEdlI5E8OopoKaJD93kRLXl0qBCJAYLvUzjNx+B/qQn1Pg4+pdKyp1oFMu3J\njKPQltJ0VH//30f2D/+Fw7G7f/ffOnvNlzL+fwz8nyLy7432/VfAPw3828A/BfyNmf/9RJl7dFPf\n85zioTmu9GBOyloMZVAsHVSdo2o81b6myhSV0ZSyp9i+pdi/pWzuKboNxQXgT2v0HITO/T7chdIJ\n5NkS8j5lozzkmkYVKLUgsAS1xKklrVpQqyUe0/O+oIijcto+dTSpJ+XPQ2T0PuRkXwJ6ryWJ6nON\n9Puy6FmGBuVrstBgQkPhG6oAVQgoF3E2TcJyo4Tt52XIcN2n+blWe87bf84pqFTEqIjRnlw7CmMp\nTMfCNOjsOC93d+EJveRz3j8A/BPA/66U+p/7evxlEuD/C6XUPwv8HeAvPXeu95Pn/M5zj3F4XEOc\nvAIOywqlshYogqdynnVnWTcpTPZae9bKU8oOvbtH7+/RzT2626BdjY6nwL/k/b7E9nP7T16sPi7J\nV1xDeQ3FVcrLa/CFRqmcoJd06hpRV3h1Rauu2OsrLHmv/MvICIiHfYn1jrUdyp8P+OVwZ3O5oAii\nCWKIYlIZQ+z3FcGC35G7LQu3xfiM0pHai+/QHXQtdH2QFd1HuooxzcSEy59LmZSnHcAcjY33KwSt\nIloHMpWAnxtLaVpc1p4A/5K8xKv/P5BQNCf/0Iuu8sEy119e8tyPjxmWE+oXP+wXFNQSKULNqgf+\nbVZza2puqbmVmlJ2xP2GuH8kNimuenQ1MVjihPGnL3bYN+2tXwqnwX7TOeRVAv7iFSxfweJ1yt1C\nE3RBp5cYdYXoVzh1Q6t
fsVOvsKroQZ6sfzMpD8/rKSd+LuA/mjYcNB0OZRmWl5aszw1eMkJfLn1H\nZh9YuhKxBm2hsJ6V7VhbhWmgqSHLR6APCfRKc5jdo86kYy2Px73PZz2lBKUEM6j6ulf1sw6ftejs\nkw62OWfTX/J7T733Y8YvSdOW0vQlLb5n/DoBX++4Uw/cyQN34ZGSLbauU2prbJfiqttgsRJno7CM\nazmt/SWZ9cZPGH/xGqpfwOoXsPoG7Epjdc5eL9E98L3+hlZ/w15/Q6sWJA4bUsQQDvtU35yO1v8p\nVD4P4B8Bf+rrEKJovGQpxZS7Qzln6RsWbYnrDNKB6QJl21J1e9adJtv3oO/9xQPobQdq9OjmQD92\nMZ/zUs39zuQYTcSocKLqe9PhTYPJPvlgm3N95Rz4p6AfHtuY8ZfAFbBCi6UINZWDm85yy547eceb\n8JY39i0FW5rWUreWuuuT6+OqXwD+uNbjWj1n0z/ZHgG/7Bm/+gVc/RKufwXdlabWOaVZYvR1Ynz9\nDa3+JTvzSxq1HBZQJiM8KQ8OrlPwn3YEn7qoE/CfpigaF3Oc5LiY4STHH7Zzgttz3RhsC7EJ6Laj\naPZUTcG61eSLp+q97frVq9Rw/XFdnrqYpzrqc+CHcXsSlOqdeyqQmyPwQ9biP33gT2XcAUwf1Rj0\nw7FTxr8CrtHSUoRHKgdrZbmNO+7CPW/cb/iu/RMKNmxsZOMiWxvJbAQf8aGPq36hZh/ixX1SHjn3\nimtYvk5sf/UrWP9d0K0VG1NQmCVGXyHmBq9/QWt+yU6/odYrDJ5sNgUUccKJU4789CX5NZ6CXhMJ\nYrCxwMUcGwus5LhYpHLMiXbHqxpcHZCmw9Q7ynrJqs5Z14qiD211YPoW2iJ1BucYfzrK5BzYp3Lu\nN60Eo+NB1c91svEla/DmZZO/PwHgP2clySRFtE7OD618+j6vHVpZtLbc6I61abg2NVd6zxU7VmFL\n1T2y9A8UssV66By0HjIPxoOOZ9dZ/KC7GQoylPu8yCDLFDrTKKMQowhG47XCakWnV3S6olMVra7o\ndEWrl32qaPTyLOgz/Azwn1rDn7roGdAPXziCGCw5jvQ51Epxsp2bQJcvceUCLwsCC6IqEVOi8hKl\nLMoJqhVUI6hSUDkoIzAZ2jLH/B8ipw5A6f01AaM8uXJE3SG6A90SPh/gv58YHclzS1E0FDkUuafI\n234t+wduVMMv5f/jdfwtK3lHLhsk1nRi2YZIFmHXz7tpJyuhzsXFf8kLnXpu08Ch+ZQvNKXJUDHD\ndhn7XYZ/l1HnGQ9kNI9rfq1v+I1e8aMpedCGvRZabfFmT1BpKM+wsLKkuYVEPKG38Y/OvFPH3ucC\n/FP7ftwRQJCAixEnERc9XjwhZgSxxJgR4x4fWlxwdEFoomZPwS5fstHXFAF2daBeRtoy0BUBl0WC\nDoiaLoP6krqel3MOQNU7bDM8EYdggQ5FO/pge1k+O+BrHSkLx3JZUy0c1bKlWu6oFmn9+mvV8tq9\n5ZX9LSv3jtxuEdfQWcc2RnQfV6MO0PbD8EM/OmustsHTHn0qcyq9UgngwxqHuenLfTILhTIZKpa4\nriTsCuq8RFGiXEm9vOIH/Yof1Iofdcmj1ux0pNUOr2uiCngCQhyBPhJ7B59GRoDnCfg/B3mqlcmh\nHEXjJOAl4KMhiEuf9qJBJCOqmqAbrHF0OtKY5FPZ5ks25RWFwH7v2C89TemxucNlnmDou9SXPcWX\nfPWZM2wT40vvt/E96C2aDkND+JKBX+SWaulYX2nWVyrl1ym/omHVvGNV31PV78iaLVLXdN6yDYLy\nienbmHIbwcs88C/JnOsR0p+1TusbDjFAiryPA5IDpcbrDBdKXLvEbSs8Fc4tcXXFvljxoK95p654\np0setWGnIq22OFUTlOtBn1LsoT981nvqFlVP6/iJi0JNtDJ1BJFovGiCeEJfjmL6QTyGmDX4ssUt\nHO1CaBaa/SJnly15XFxRKkWztdQLS1taukLjMwg6Ih+gME3/ojiN5Pv0nRxVfcEDDo3F0GG+dMYv\nCs9qEVlfBW5fRW5f9flNZEVDvt2Sbzbk2ZacDdE3tK0lhoj444S7YRnkqap/SeZY/oQHRow/TBBc\nFLDMUx5LRW0yfCxxXcWea2p/xb6+Yv94zS6r2KplSrpkpwx7JbTK4lVNUAY5QH4YxJO4f6zcn6vn\npy4DxAfoH7eO3B9FEWQYyDOM5Ev7Y2Hx6wZ77eiuI43W7MuCbb6kqq4ptaatWrqloS0VNgeXRaIO\niHoedJeOGDv95soctocxGR6FSxGfacloX/wOP0/g55ZqaVlfW25fddx9Y7n7heXuG8uSFu5rJKsR\nGsTXSFvTKUcbI9EloHtJKv6QDy/BMA+US+WpAmh64Jc92FclVGXKfabxOqcegO+ueNfc8KBveKdv\n2OqKRuU0ZDQqp1aaRkVaZXEq9kN1Bi/20dbVB9CfNo3p1qcO/iPXqxH4T92y/eTKBPRRWYC4dIS2\nxXlHp4Wm1OwlZ5cvWVaRMtPYymAXGleCzQVnPMGYFwH/ct2P+RTwx2PkMCVL4dG9jR/pUnt+4Rv8\njIFfc3NVc/uq5u4XNW9+mdJCWrrM0uGw3tK1lm5n6bSlCxHfT7QYUuRYHhbDGI8WmJYHmX5rGPbN\nMf6qTGEBrhdgjaIOGSqUOFexD9e8ize8Dbf8JtyykSVOgQWcUsnCUxGnLB57EkYrjGzb0+E54+Zx\nWuvPA/jHD2jH7SP4BSHKYJELMirHK48PHVY7ujLSXGlqCraZUFaKsjD4SuOX4MtIKAI+swQ9dLPv\nU9f57ee+HCU/TUDjERz0zj2h5aV62+cJ/CIBf3295fbVhrtvNrz55Ybv3mwopWNLZOsj2yYStpE2\nj3QqsA0R55NKf/Lg5WkvPOXO5x734fjBxh8YP09sf72AmwpaFA9dBq7EdUv27RXvulf8pr3l/2m/\n5TEsiHgET1SeyJBcvz+NNVAnzUXNNMlzXohPH/qXA5wnK1r6KHoJ8n1ZhPgq4LXHlZ72SmhsYvwy\n1+RVziJkxAriIhLLQMwdMcuJ+v0Yf84xPOc0Hh8zdNqqDwGqehtf0aHoSK3nZcuf/wyB/4Kv5MNc\ndZ1QlMqJkrO1IbuCfBUplo6ybFkUNYtsx9I8UoSOTqUbH+z1IOAEut62Hwb8zjWZS3IJMicvVWnE\naILRhFzjc40rNLbUdLKi9WsartmHa3buim1T8VhXPOyXbHxJ6uGHKwY4NF7PEA1ELjanObB/6oAf\nZNw1z/02XXz6NKauqEh8HfGN4CzYYGhJfpeyyAlRQeGR3ELWISYDbRD1YasmnAP7+bY2jElIjK9x\nKCyaFk2OerIUyLz8noF/7oPYFPSjl2hA5QpdKFShUAWoYthOgDe3JVzn+DzDxoym0ezeKR5RFB62\nv4H6R2gfwe4gNCAWVHy68MXp0KBjs5oGYJxranN3oTAEldOqAq0Loi7oTME+K3jI
Chqp+N7c8Na8\n4l7fsFEVtSqwShGV78+YvtEfIw0NLGdmrjr3fD9XtofTZzHutoft8b3OE0zs/SRHThUcQgc9o6YZ\nn6qf7j32IbyvlX/6qe6479ybSHcn/RwMjznx6ufonz/w5xjpUr/X26haoUvQS42uhqQO5Xwp6KpA\nqoKQ59hoaGrDDsVjC4VNoN//AO0D2C34FqJ7qtLz5OrzYJ8D/jm+1UoTVEmnl8R+5N3eVOSmIssq\nGql4m1W8NRX3ZsVGV9Q6x6KIeI4sP472PjT2jPnFnKd3MtfUxvmnLIMLdg70U+CPPTRHRTp17Mdh\nUIlTk56VYhoUaDIUpk/64E14qUyf/iU9ZfyfYbr18B0/w5FhyejIyH7ujD/nxhhvn7v9pM6rIoHc\nrM0ope2ijBhdgikIumf8WrNvNRutKDpoHhLo24fE+L4BcaSQfDwF/iV+nAP++G6eDvPVBFUQVEWn\n12DWKLOGbI3K1tSy5N4U3Juce12w0fmR8Q+q/DiNr8Qon/NAzIF97s4+ZZkaaNNOAC6xffo1+cwT\n46dke0s6qdl5nzI0Bt2/5ZdNj5mX8Zs59/sgx5F7gRxHir2UkaPR+Bdd7/fM+Of8l2Pwn+ZKJxVf\nrxLQs9sMc5uR9SnPBe1KxOZ4l2OdoWkNO6coHRRNYnm7O+a+6Rk/nqr5cxAZtuc89mPgn/qWjymi\n8arAqxVer/H6Fm9u8VlKjSzYZIpHo9gYxUYraq2wih7441oM+fgqc7Wae5ZnTKlPXsbPYo75heNz\nerqUyBCn6NSK1r1CrUnf0PM+ZZj+Gooh3uHLZe4tPK9JHkfuGfxhlkGBpkBhPk3gT9lqJteSYtIN\njH+bkd/l5Hc52V1ObgS9LWBbELYZXdfb+FtNtlUU+wT0Q2ohtInxB1V/TuYAfonth/K0+dEzfqcr\nGr2mNbc05o7W3NFkdzSyoDY+Je3Za0+tPFb53safdi9zHedzBslTz8PnIy+x8VMXfI71h2FP4WBF\nm8OgWEUko0DIkJGNrz/wGc61m+f+MVb1E/ANJYoFYF4I6d+zqj+Ux/w4lQnMdO/MqxTmJjF+fpeT\nvykp3hTkSjC/LRFyQpdjQ7Lxs3cK/VvIdxBtAnp0IDblceTce871dYnt5+5qaHYGiEoTKGhVxU7f\nsNW3bM0d2+wN2+wNDSU2a7CmxZoGqxusarEq9IwfOG3QU71i8FyPa3lpycfPie3hZTb+qQ42bnfp\nCalh/ltvRZvekjZoIpEcIYfextcYfursxkttaXp3w8i9BPzE9AtggZC90OD4mTD+dMbyOC7+2O0h\nKC098I+qfnZXULwpKL5bUIigKaAr8I9HG1/da+TXkG84xMBX8bRMPOXMaX4J8EN5aiqMm14GeDRR\nFXSqYq/XPJhb3pk77s0b3uXfUUtBNFui2RD1lqg1UQcibQ98z7EbGZtG433jld7iZN/nzPbwvI0/\nvKVz/iVGqv4xsoElIyMnTWtO8RtV7+BL3cHHfY7nOoDxtNwEfHWIOrEgkp+0wPPyewD+9GHPiU56\nN6rPTZ8LKjPoLCMzhtxAYYSFCZTas9CWRewoxZEHh3EeukBoIm4vdBsIm/M8ee6RzXHlpbJoBVoh\nfRqXY7nArxbY5YK2WFBnS3Z6yUaWPPglTSwgWAhFWicp6rSKwqEGYVTbIZz4tBaX5HNj+I8r6Snr\nQ7CyxPg5VnKMFCiJpCVqj869IbTpc0/2Yz35Ida/kUAWFXlUFAHKEMnDzwr4Ux485zfvG7JWoMxh\ngM5hLqtKjj2DIveKohaWG8vyB0uVKyoUZawpv/+B4u0Dxf2WcrMnrzuMdYclroYrjuF07lvCSzyt\nJ/81CskN8bAEt0GKtB0Lg81vsMUKX5T4whByIYpFbI3sNhBy2O2grqFpwTrwoZ8pNGcSDaw+rvFY\ntZ9b+vFzlrn7m+pr40Wwp2bQEJfYHGLzWUmuMy0FSgQoUJKjJTn4hoVMnvoKnn6y+xiiRNAxYkJK\nmffkLn2qzu3PFvgw/xmlf2FKQBvQGZgh75M2qIIUecQ7ysazfPSscs8VjpXzLMKe7O0D+dt3ZPdb\nsk1NXrcY61OQtMkVX/L9dKjxOTn5r1ZImRGXOVQpSVUQq5xQ5dhsjWOFU2U/oSamYAq2BrcBmyXQ\n1zW0LXQW3Dngj1X5aY3mbPsvAfhzDs6xDM9lviM8ZfzE9lpytKSYCCAoKdFSYEiRevOe9afnmYJ+\nzg/03ncngoqCjoKJkSwE8j5SVAL/y876ewL+VMaWcj8e1xjIirR8TFYcyqqIGNWQOaFsLItNx4qG\nK9ewrhvKsMPcb9D325RvavSE8acvZZy/zwua5RatEttXObJeENclar1Ar0vUusSaa5xd4V2Bt4bg\nItHaI/C7LAG+aVNuLXif1ug7Af74DsbDdod959LnLpfe1iUt6NgC0hTd5NE30rvP0kL0iW2lxJCT\nSU5ORpBxjJ+nV1UzOZPye92hkJb3CgrjIXMD6OXnBvwX82V/V6pn+RLyxTFlC1TuMUrIvaWshSWW\nyu24rjesHzeUYYfa7FGPe9jUqM0e6g41Yfwx0Gdq8exdTF/a4f9aJ9V+VcC6hNsKdbvs8wqrV7hd\nhd+X+L0m7IRoLdHWyD6DxiSwW5sC/1l3ZPwnnohzvuA5l+TccZ+jnDqD5zu+ORU/yTChOaAJksbF\nIQVIiUgJEjEUZFKQk+PFHMKcTOUc6H8K+BVH1k+q/sD2Qm71zw3404dyTgWT/mO6Tip+ViTAFxXk\nKygqVO4wypH7PUUTWbqOVb3nKnvHOvuRRdgS6w6pW6Ruifu+bD0xPm30l2BwyQU57TAOzU2rZNf3\njM9tBXcr5O4K7q6wqsLdL3HvyhTp3kbivmf8HVDrBHTvU+78jI0/Bfk50E/zzx308DyMpjb+U/Cn\npbUS4yNpXL5QEnvGz6QkJ8dJ1o+f04e1+OauNtUifxL4RQ42vo6QeSHzQu4ihdMU9vlTwO8F+C9g\npoHxD8BfHdaRUqbDyJ7cZ5QuspCOley4lgdu+C2l3xCsI1iPt+6kLFFmrz4n0xcxx/aztptWSJEl\nu/6mRG6XyN0V8maNvFljZYnL00KX3mriLvbOPZCdh51KII+S8hCP2zJ48qfP7Jw6/9K7/Zxk+qae\nY/zTAJkHxhdNmoSTIVIQpSDIAkTIpcBKQUmOJy3FNWb8OW3wUvt57zsUQUd6xldkPpI7RWHVz5nx\np7bokPf7lSQP/gD8bJnWkyqvYXGDUg3GPpBbQ2mFpbNUdseVfcfavmXhH7FRUiTVPrcxIlGIMZ5c\n8ZyaPx1eNC2P8yn4Rauk6lc5cb0g3lbEuyvimzXxu1fYuMCh8E4R9orwTnrnnoe9gq1KzH4uMQf+\n6dj9LwXkczJW9efmWcJ5j770v2pU79WPkhMlJ0iJkRIlkUJKSukZX45e/THjnzUFedpu3kvVlwH4\nggnqoOpnTpFbyH9ejD+Vc7eenFbpK16
/bn3u0YVHlw5dWlZYrkPHFS2r0FB1NYumpmz3FM2O3O9P\n4BBIQzjC4ezHK03lHFymx+oegyp9oocRLuNC4UuNLwwxzwh5js8KvFng9ZKGklYJnURsFHwQgo9E\nF5FOoBuPLpgORJk69Z5j/C9RBsCPWwGcmkjnn9WwhHYUDVEj0SAhQ4ec6AtM9PiQ4UNGiIYY00q7\nMhNpc2wGTvefU/fnDOCpqAgqgAqCdqAtmE4wXVre6yXyOwL+dKro+JH8/+2dX4gs2V3HP7/6193V\nc/veOwmT5WZdjeRZgqIvCaIIIfgS8UFDRFQk+GBU0AclL4vig/oQCEIejBESUUQFTXxRAyIhBk2i\niUbdGCHuYlx3J2T2Ts90V9WpOufnw6nqrqmp/nPXe3sGbn3h0Keqq6t+faq+9ftzzvmd9gCUgFAC\nYrEkQUESXpJEliQyJPGCZPSQVDPu2pe5V51ytzwjDeck4RIRQ1WPnypZz1jXztW6HYl9WrxP0tXx\nAlL3NEroex4lqj9DsDPIJ5AH4CpgKVQPhTyBHMissHhZyU6F4kwxc6iWfsjwupn6yNxuw4H0Twx1\nU6sVtBIwAWoCXB5AXqfhzkPUBGgZoFXgj1XZ2Px9FmWX6Fesxp7fdOXDQivrln+4luxtPtwQ8aGf\nan66wyhwTIKCNLSkYUEaL0iT2BeWpNUpaXTKNPLEj4M18ZWraSraHlzfaIJt29fMeOrwQ9PTGNfe\nSLzero4gGIOGUFaCLqF6zd+XhREWFWSnQnaq5GeCmSvVEqwR1Gnrqm0pusQeiP7EoKAqiBW0CqAM\nUBNCEeAybwG4ImgRX8DK7jQIVy+xl8bXzucKXeLnQIYn/p7YSXwReRb4OPCm+pK/o6q/LSLPA+8D\nTutDP6Cqf9l/lu7IMunU1yUQSMSSBo5ZWDCLYBbDLBZmI5joksScEcdnJNEZcTgnaYgv7koypW6q\nim5n2D7oNn4QeO0expCMW2XkP8sJ6ASqALIKdOHvT26EywtYlJCfQX7WaHyhXKqfOOS6V24/Dm2t\n3++bDnhMqDU+lUApuCJA8hBpNH4R4kyIlt4qUCt+WPWWxPrbfPxtx/Z+2WRZq1hr/MdN/Pr0v6iq\nXxKRI+AfReRT9XcfVNUP7j5Fr6fCmoprvzbAkUjFNKyYhSXHUeVLUnKcVIw1Q5I5Ys6RcO5LsITA\na3y4HrppX3FfbGp8rU39KIHRGMbp1WJGUEaQBX5wBUuojJBfwGUkXJZey5tzMHNWpr69RvxGinZA\nKujs71oBA/7fUDyJG1O/DMAEUASQeeJf0fhWUCc+E3PPqRrs6hXaW9vvMvX3fAx2El9VXwFeqeuX\nIvIC8Oae/7MFXY0fturNtg9gBVQkgSOtF7c8jjJO4rokGWPNqOIlVVSXcEEVLKlqjd8diNnn48P+\nb9ruzVDxGj9KvIYfp5DegekRpEdQxJA5uLR+sU01Qukgt8KlgwsD1VKolkq1FMqFbvDxG4nbEnSj\n+d36gMcBVU98qTy5MQFaNBo/8D5+sfbxWWn81jnYHC/aRPq2bdc+/kq9j/htU39Pl+ORfHwR+Tbg\nbcA/AO8A3i8iPw58AfglVT3v/2Vbmia62tb4QS1KSCCuNvUL7oaXHEcXnMRzHiQXPBhdMNIlWWHI\nY0MWGbLQkAcGW/v4Tf6RPv99lz+17a272t9o/HhN/OkRHN2FOzMf1LvMYZRDaEBzn/Ajz+EyEy4L\nwRrFGf9pDat6v8bftG8g/BOBen9d3drUVxMgeYBmoV9ma6Xx/TG6w8dvE77Z3lfj92Kbqf+4iV+b\n+X8K/EKt+T8M/Jqqqoj8OvBB4Kc3S7o6U6feTlMREWBJpNH4C46jh5xEZzyIz3guOSPRJRexYx45\nLiJHEDps4MjFZ5c3XJ9tHXSuvK2x+/ZdKR2NP5l6TX9nBnfvQwI8xM+UCitBF1DNhfwcFudwkfuH\nSuslXLRTrl65u921VwYf/4nACVjWpn7hNf4qql8EuDra3wT3NkX1++7avhqfzu9WG9tM/f1ybe5H\nfBGJ8KT/fVX9BICqfqN1yEeAv9h8hr9t1d+CBG8lCBzSU9JRRXpUMpkUTEYZabQgDS+YMGdiH5Jo\nhrGQOIgcBPWYn6Y9+tJPtht/U0Dl2n/ecHwoEEiASIBKgAt8fvwqDDBRgHETDEcYm2LKMSZPKBYR\nxUVIcQ4mp3PGTZ06DXZp/W3HDbiKPRy8FbHWGh8jfnxFJr7PPg/8vlVwjyu6bZOZ33d3tymh3lf6\nFlP/06fw2Ve2/P0W9tX4vwf8u6p+aCWwyDO1/w/ww8C/bv75961qEgpRrESJrSfcNfWKKAm5m2Tc\nSTImo4JoZCCuKNWSFY45SlzBRT1dPc/8XBZbglo2rl8P/Y3aV29+s6nEGhK4mKpKyEuf6cdkCctR\nwsMkYaljXl4ecZpNOSuOmJdTltUE4yKcCteng3YNwW31vcI/A1bo6tt2fUO7NV81ubWbvNqNH614\nDVuwHjDSF0nmKqF3mfa09u28o42p39b4GXzvMbzzmfVhv/rZzafYpzvv7cCPAV8WkS/Wcn0AeK+I\nvK0W40XgZ3adC/yIvGjkSCbKKHUkqSVJhVEakKTCLM65Q85ECmLxSxmUasmMY15CZDzpF4t61mpR\nz1qtid8laoMuTTYR/3q4sZPESQMCO8JWEzKTYoqURZ4SLlPCKGWpY06XI07zMWfFiHm5PKdbAAAL\nR0lEQVQ5YlmNMDbGzxHqOg+rlu5r/Q2tOJB+N/pIv8ma6uxvNGpDrsaUboifUy9eSP9IsR4pHvUl\nsPVvteVriB/j2Vzud5p9ovp/xzoM38aGPvvtkACixDFKYTLzZTyDyUyYzOBOmDErcyZlQVQaKCtK\nY1kax7xUogKyOliWZzXxa40P1/152G5KQT/xm3BjU5qceaIBziXYKqUsZ7hihstm2GiGC2YsdMzZ\nMuQsCzkrQuYmZGlDjAtr4veNp28/AtuwybQfXgCb0aVYs2/L4fsQv8CTrruoUeeqfRJs+myO3ar1\nG/naGj9nraFu15DdNYJAiRIlmSqTmTI9VqbHjqNjXz8KcqaLnMmiIFqUsPDEz4wjXChBTXZTrKet\nV+V1jd9F37t+m9ZviB7XJak/nQYUNsFUUwozI8+PKcJj8uCYgmMWjJgvlfNcmRfKvFSWlWKcq8cV\nbrqtu6QeSP/o2KZXN2j7Ph+6HTnvavySqyPFeq6+SZJdGn9vH78hvbA3ow9OfAm01viOycxydOyY\nnTjunFhmJ46p5Ixeyxm9ZogxYEqq2sd3FyBLT/R2sSVoM7Gvda1dhl0f8eFqB2OMz2A6wpPfElDW\nGj8rZ1wWdXpsOeGCExY6YpmVLDPDsihZmJJlZTC2xOmVznr6Ox3b0m7r2e3bHnAd+5K+Vd+l8bs+\nfo/G7169e7VHMfev7e+a+u1sbHsu53MDxPemfpJWTO5apscVsxPLvQcV9x9UTMgJkpyQgsAY9NIH
\n92yhFJcKF167q/UJddT6bdcK7jVUaTfmJtO+7xFoa/x16mJfjAYsauLn5i7z4JgzOeFMH3DmHnCp\nI0yxxORLTJFhyiWmyjBOcc06XSu7pP1Y9Nkqm/TAgO3ouk19btSGduzzoRtzOuGqxm+b+lvs8z7C\nt+ubzP2t8jUvpaCz/7YSP1hpfMtkVnJ0XDE7Kbn/oOQNz5WMXY4lx5kCd1ni4ooSizNe4+ucJtN2\nPTd5XUc3B/T2rTc3oKvxx0BKE9xLsDYlL2fM5ZhvcsKr7gGvVM9xSYIr5jhzgTNznAlxleJcidOM\n649B87mrw3Hr4zDgGvrI31fvbDfN3KfxG/+58fH30PibpNpE/g1SretdU186+/acjHIY4qejVr1C\nJko48t15cWJJ4pJRWDAOCsZklORUWlA5g7MltrRUxlEVihZrYjafu/LiN9j3BeCCwCfNDMT30weC\nDQKqQCiDKSaakocpyzBlIVMuXMp5mfLQTVhoAqWBsoAyhioCG9a58XcF7x4pvjtgJ/o0fd8xHXRN\n/XbkvDH1u1H9Hh9/H6n2kfDa7+vBXs56N9cClULl/HijfXAY4p/cW9enJUwXaCw4Z3GZwT10uMRg\nybA2w72c4U4L3JnBzStYVmAcbMiZ1+xtjGjt7G+waQRVu65hgI0jiiSGJMIlEVUSUyQRWRKTBXd5\n6O5xoUcsNCF3gtEK6zLU1nnxq8t65k29/rZa0LYt0R1l0JZq0Ow3iq6p342cw2biH0q8muSVg9JB\nUa1DEHuup3EDxB8X6BG42KJqcLlgzx2WEmsyqnKJPS1wpwV6VqLzEl1aMM4vc8V2b21TAK+NbZ06\nGgjVKIbJCE3HVOmIIh0RpWPidEQmM87NfS7MlKUZkRnBGEtlMtRcQBl60ldLvyKnM3VQQlmnztqE\nTc7H8CI4KPqCe82o8raP3x7Ac6DbpLpOx1g6MPaqJ1LtaT4cnvhRhk4sGhucy3CZ4LDYwmAvMqxZ\n4s5Kr+3PDDqvVsSnsxJOG/uaTNtGTIE386skwqVjqllKMJsSzFKCWYrMpuQc8XB5l/niiMUiIV8G\nGCpsmaPV3BPf5q1ifCqeKxq/+w82vbq2+aUDngi6Pn7TXdZeqvAGNX4jonX1iGLWBkmkfqTxPjg4\n8VViCAwqGepiXC4447DzEhvkWJPh5iXuvMLNvcZnRfzd2n7T/97UpXLtHEGATWKq6QidTeH4Dno8\nqz/veOI/nHBxnrKIE3IEU1ZUWeZnaZVSk70eS+zKtcZfEb8dzun+g31slgFPDH2mfpv0jY/fF9w7\nhMZnrfErtw4/hAphcJs1vg3RMkPLC1wZ4Qrxq8mUBltm2GKJW1p06TW9LmzL1N+dHrtL/l095dd+\nHwg2ibHpGDtLsccz7Mk97Ml97Mk9ck25mMRcxBELYrJSMJmlkgy1lb8Tar2Wdxa0Wvv4V/Li9xG8\nb/8mSQc8MTRj7yvW0WNY34amK68ZvLOjO+9xYmXqu3r8kEKhfrJaEOw9YvcGiG8CuLxEL8eoibzG\nv3TYy5JqkRFlS9Q4nHFoXWg+e4J7cLXN290j+w6EvXKu2tQv0zHm7pTy+A7lyX3MgzdQPngjuY5Z\nxH4G5KJUP2x4XmGlwtnM3wltF7euX+t07XbkDOb9jaOr8aWzH66O2rsBU99pLZ7zkwQDV68zK372\n6D44PPGXoDxEizHOxWgmuHOL/abBnmXYZeYXvljNV1/Xm+DermDeJo2/S9sDuECokhiTjshnU/Lj\nGfnJPfIHbyR/7oTCjckw5GVJnhnyeYlJSqqgRKvSR1yQWrvL9foqL3771dRINATybgUaH79L+ibL\nS9Uphwzu0TL1ZT2GR8SLsS+hHzX35OvD0WRV7ItfgCTxC2aoT2SguXqT/rzEnZfoReW3M4sWzmtR\nu27Vrm/ejX/vQ6FNveovVRYNA6o4ohz5F0BxNCGfpWT3jljem5LfmVBME8pxRJUINvSLYuDyuhT4\npXGa2UNtB1A65VHx4uv4zSHx4k0LsAVf231I8+A05n5jynfJ3s7m+pj8+//a87iVeM0LoI7wN917\nTdmGwxC/BffCZw59yUfCS9WeKUxuDC/etAA78OJNC7AFexD/BrEv8R8HDk78AQMG3DwG4g8Y8BRC\nVJ9sREJEhmjVgAE3BO1b1I8DEH/AgAG3D4OpP2DAU4iB+AMGPIU4GPFF5F0i8hUR+aqI/PKhrrsv\nRORFEflnEfmiiHzuFsjzURF5VUT+pbXvvoj8tYj8h4j8lYjcvWXyPS8iXxeRf6rLu25QvmdF5G9E\n5N9E5Msi8vP1/lvRhj3y/Vy9/yBteBAfX0QC4KvADwAvA58H3qOqX3niF98TIvI14LtU9bWblgVA\nRN4BXAIfV9XvqPf9JvBNVf2t+uV5X1V/5RbJ9zxwsd9Cqk8WIvIM8Ex7sVfg3cBPcQvacIt8P8oB\n2vBQGv97gP9U1ZdUtQT+CP8nbxP2SeJzMKjqZ4DuS+jdwMfq+seAHzqoUC1skA9e33DExw5VfUVV\nv1TXL4EXgGe5JW24Qb5HXIz29eNQD/qbgf9ubX+d9Z+8LVDgUyLyeRF5300LswEnqvoq0KxifHLD\n8vTh/SLyJRH53Zt0RdpoLfb698CbblsbdhajhQO04a3RcLcAb1fV7wR+EPjZ2pS97bhtfbEfBr5d\nVd+GX1r9Npj8VxZ7pT+Hy42hR76DtOGhiP8/wHOt7WfrfbcGqvq/9ec3gD/Duye3Da+KyJtg5SOe\n3rA8V6Cq39B10OgjwHffpDx9i71yi9pw02K0h2jDQxH/88BbReRbRSQB3gN88kDX3gkRSes3LyIy\nBd7J1kVAD4buFL5PAj9Z138C+ET3BwfGFflqIjXYsZDqQXBtsVduVxv2Lkbb+v6JteHBRu7V3RIf\nwr9sPqqqv3GQC+8BEXkLXssrfkrzH9y0fCLyh/hlht8AvAo8D/w58CfAtwAvAT+iqg9vkXzfj/dV\nVwupNv70Dcj3duDTwJdZT7b9APA54I+54TbcIt97OUAbDkN2Bwx4CjEE9wYMeAoxEH/AgKcQA/EH\nDHgKMRB/wICnEAPxBwx4CjEQf8CApxAD8QcMeAoxEH/AgKcQ/wduIlVXFOdT/QAAAABJRU5ErkJg\ngg==\n", 685 | "text/plain": [ 686 | "" 687 | ] 688 | }, 689 | "metadata": {}, 690 | "output_type": "display_data" 691 | } 692 | ], 693 | "source": [ 694 | "A_list = pickle.load(open(\"notMNIST_large/A.pickle\", \"rb\"))\n", 695 | "random_letter = random.choice(A_list)\n", 696 | "%matplotlib inline\n", 697 | "plt.imshow(random_letter)" 698 | ] 699 | }, 
700 | { 701 | "cell_type": "markdown", 702 | "metadata": { 703 | "colab_type": "text", 704 | "id": "cYznx5jUwzoO" 705 | }, 706 | "source": [ 707 | "---\n", 708 | "Problem 3\n", 709 | "---------\n", 710 | "Another check: we expect the data to be balanced across classes. Verify that.\n", 711 | "\n", 712 | "---" 713 | ] 714 | }, 715 | { 716 | "cell_type": "code", 717 | "execution_count": 8, 718 | "metadata": { 719 | "collapsed": false 720 | }, 721 | "outputs": [ 722 | { 723 | "name": "stdout", 724 | "output_type": "stream", 725 | "text": [ 726 | "A train data count : 52912\n", 727 | "A test data count : 1873\n", 728 | "B train data count : 52912\n", 729 | "B test data count : 1873\n", 730 | "C train data count : 52912\n", 731 | "C test data count : 1873\n", 732 | "D train data count : 52912\n", 733 | "D test data count : 1873\n", 734 | "E train data count : 52912\n", 735 | "E test data count : 1873\n", 736 | "F train data count : 52912\n", 737 | "F test data count : 1873\n", 738 | "G train data count : 52912\n", 739 | "G test data count : 1872\n", 740 | "H train data count : 52912\n", 741 | "H test data count : 1872\n", 742 | "I train data count : 52912\n", 743 | "I test data count : 1872\n", 744 | "J train data count : 52911\n", 745 | "J test data count : 1872\n" 746 | ] 747 | } 748 | ], 749 | "source": [ 750 | "letters = [chr(ord('A') + i) for i in range(0,10) ]\n", 751 | "for letter in letters:\n", 752 | " letter_train_data = pickle.load(open('notMNIST_large/' + letter + '.pickle', \"rb\"))\n", 753 | " print(letter + \" train data count : \" + str(len(letter_train_data)) )\n", 754 | " letter_test_data = pickle.load(open('notMNIST_small/' + letter + '.pickle', \"rb\"))\n", 755 | " print(letter + \" test data count : \" + str(len(letter_test_data)) )\n", 756 | " " 757 | ] 758 | }, 759 | { 760 | "cell_type": "markdown", 761 | "metadata": { 762 | "colab_type": "text", 763 | "id": "LA7M7K22ynCt" 764 | }, 765 | "source": [ 766 | "Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.\n", 767 | "\n", 768 | "Also create a validation dataset for hyperparameter tuning." 
769 | ] 770 | }, 771 | { 772 | "cell_type": "code", 773 | "execution_count": 9, 774 | "metadata": { 775 | "cellView": "both", 776 | "colab": { 777 | "autoexec": { 778 | "startup": false, 779 | "wait_interval": 0 780 | }, 781 | "output_extras": [ 782 | { 783 | "item_id": 1 784 | } 785 | ] 786 | }, 787 | "colab_type": "code", 788 | "collapsed": false, 789 | "executionInfo": { 790 | "elapsed": 411281, 791 | "status": "ok", 792 | "timestamp": 1444485897869, 793 | "user": { 794 | "color": "#1FA15D", 795 | "displayName": "Vincent Vanhoucke", 796 | "isAnonymous": false, 797 | "isMe": true, 798 | "permissionId": "05076109866853157986", 799 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 800 | "sessionId": "2a0a5e044bb03b66", 801 | "userId": "102167687554210253930" 802 | }, 803 | "user_tz": 420 804 | }, 805 | "id": "s3mWgZLpyuzq", 806 | "outputId": "8af66da6-902d-4719-bedc-7c9fb7ae7948" 807 | }, 808 | "outputs": [ 809 | { 810 | "name": "stdout", 811 | "output_type": "stream", 812 | "text": [ 813 | "Training: (200000, 28, 28) (200000,)\n", 814 | "Validation: (10000, 28, 28) (10000,)\n", 815 | "Testing: (10000, 28, 28) (10000,)\n" 816 | ] 817 | } 818 | ], 819 | "source": [ 820 | "def make_arrays(nb_rows, img_size):\n", 821 | " if nb_rows:\n", 822 | " dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)\n", 823 | " labels = np.ndarray(nb_rows, dtype=np.int32)\n", 824 | " else:\n", 825 | " dataset, labels = None, None\n", 826 | " return dataset, labels\n", 827 | "\n", 828 | "def merge_datasets(pickle_files, train_size, valid_size=0):\n", 829 | " num_classes = len(pickle_files)\n", 830 | " valid_dataset, valid_labels = make_arrays(valid_size, image_size)\n", 831 | " train_dataset, train_labels = make_arrays(train_size, image_size)\n", 832 | " vsize_per_class = valid_size // num_classes\n", 833 | " tsize_per_class = train_size // num_classes\n", 834 | " \n", 835 | " start_v, start_t = 0, 0\n", 836 | " end_v, end_t = vsize_per_class, tsize_per_class\n", 837 | " end_l = vsize_per_class+tsize_per_class\n", 838 | " for label, pickle_file in enumerate(pickle_files): \n", 839 | " try:\n", 840 | " with open(pickle_file, 'rb') as f:\n", 841 | " letter_set = pickle.load(f)\n", 842 | " # let's shuffle the letters to have random validation and training set\n", 843 | " np.random.shuffle(letter_set)\n", 844 | " if valid_dataset is not None:\n", 845 | " valid_letter = letter_set[:vsize_per_class, :, :]\n", 846 | " valid_dataset[start_v:end_v, :, :] = valid_letter\n", 847 | " valid_labels[start_v:end_v] = label\n", 848 | " start_v += vsize_per_class\n", 849 | " end_v += vsize_per_class\n", 850 | " \n", 851 | " train_letter = letter_set[vsize_per_class:end_l, :, :]\n", 852 | " train_dataset[start_t:end_t, :, :] = train_letter\n", 853 | " train_labels[start_t:end_t] = label\n", 854 | " start_t += tsize_per_class\n", 855 | " end_t += tsize_per_class\n", 856 | " except Exception as e:\n", 857 | " print('Unable to process data from', pickle_file, ':', e)\n", 858 | " raise\n", 859 | " \n", 860 | " return valid_dataset, valid_labels, train_dataset, train_labels\n", 861 | " \n", 862 | " \n", 863 | "train_size = 200000\n", 864 | "valid_size = 10000\n", 865 | "test_size = 10000\n", 866 | "\n", 867 | "valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(\n", 868 | " train_datasets, train_size, valid_size)\n", 869 | "_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)\n", 870 | "\n", 871 | 
"print('Training:', train_dataset.shape, train_labels.shape)\n", 872 | "print('Validation:', valid_dataset.shape, valid_labels.shape)\n", 873 | "print('Testing:', test_dataset.shape, test_labels.shape)" 874 | ] 875 | }, 876 | { 877 | "cell_type": "markdown", 878 | "metadata": { 879 | "colab_type": "text", 880 | "id": "GPTCnjIcyuKN" 881 | }, 882 | "source": [ 883 | "Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match." 884 | ] 885 | }, 886 | { 887 | "cell_type": "code", 888 | "execution_count": 10, 889 | "metadata": { 890 | "cellView": "both", 891 | "colab": { 892 | "autoexec": { 893 | "startup": false, 894 | "wait_interval": 0 895 | } 896 | }, 897 | "colab_type": "code", 898 | "collapsed": false, 899 | "id": "6WZ2l2tN2zOL" 900 | }, 901 | "outputs": [], 902 | "source": [ 903 | "def randomize(dataset, labels):\n", 904 | " permutation = np.random.permutation(labels.shape[0])\n", 905 | " shuffled_dataset = dataset[permutation,:,:]\n", 906 | " shuffled_labels = labels[permutation]\n", 907 | " return shuffled_dataset, shuffled_labels\n", 908 | "train_dataset, train_labels = randomize(train_dataset, train_labels)\n", 909 | "test_dataset, test_labels = randomize(test_dataset, test_labels)\n", 910 | "valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)" 911 | ] 912 | }, 913 | { 914 | "cell_type": "markdown", 915 | "metadata": { 916 | "colab_type": "text", 917 | "id": "puDUTe6t6USl" 918 | }, 919 | "source": [ 920 | "---\n", 921 | "Problem 4\n", 922 | "---------\n", 923 | "Convince yourself that the data is still good after shuffling!\n", 924 | "\n", 925 | "---" 926 | ] 927 | }, 928 | { 929 | "cell_type": "markdown", 930 | "metadata": { 931 | "colab_type": "text", 932 | "id": "tIQJaJuwg5Hw" 933 | }, 934 | "source": [ 935 | "Finally, let's save the data for later reuse:" 936 | ] 937 | }, 938 | { 939 | "cell_type": "code", 940 | "execution_count": 13, 941 | "metadata": { 942 | "cellView": "both", 943 | "colab": { 944 | "autoexec": { 945 | "startup": false, 946 | "wait_interval": 0 947 | } 948 | }, 949 | "colab_type": "code", 950 | "collapsed": true, 951 | "id": "QiR_rETzem6C" 952 | }, 953 | "outputs": [], 954 | "source": [ 955 | "pickle_file = 'notMNIST.pickle'\n", 956 | "\n", 957 | "try:\n", 958 | " f = open(pickle_file, 'wb')\n", 959 | " save = {\n", 960 | " 'train_dataset': train_dataset,\n", 961 | " 'train_labels': train_labels,\n", 962 | " 'valid_dataset': valid_dataset,\n", 963 | " 'valid_labels': valid_labels,\n", 964 | " 'test_dataset': test_dataset,\n", 965 | " 'test_labels': test_labels,\n", 966 | " }\n", 967 | " pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n", 968 | " f.close()\n", 969 | "except Exception as e:\n", 970 | " print('Unable to save data to', pickle_file, ':', e)\n", 971 | " raise" 972 | ] 973 | }, 974 | { 975 | "cell_type": "code", 976 | "execution_count": 14, 977 | "metadata": { 978 | "cellView": "both", 979 | "colab": { 980 | "autoexec": { 981 | "startup": false, 982 | "wait_interval": 0 983 | }, 984 | "output_extras": [ 985 | { 986 | "item_id": 1 987 | } 988 | ] 989 | }, 990 | "colab_type": "code", 991 | "collapsed": false, 992 | "executionInfo": { 993 | "elapsed": 413065, 994 | "status": "ok", 995 | "timestamp": 1444485899688, 996 | "user": { 997 | "color": "#1FA15D", 998 | "displayName": "Vincent Vanhoucke", 999 | "isAnonymous": false, 1000 | "isMe": true, 1001 | "permissionId": "05076109866853157986", 1002 | "photoUrl": 
"//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 1003 | "sessionId": "2a0a5e044bb03b66", 1004 | "userId": "102167687554210253930" 1005 | }, 1006 | "user_tz": 420 1007 | }, 1008 | "id": "hQbLjrW_iT39", 1009 | "outputId": "b440efc6-5ee1-4cbc-d02d-93db44ebd956" 1010 | }, 1011 | "outputs": [ 1012 | { 1013 | "name": "stdout", 1014 | "output_type": "stream", 1015 | "text": [ 1016 | "Compressed pickle size: 690800441\n" 1017 | ] 1018 | } 1019 | ], 1020 | "source": [ 1021 | "statinfo = os.stat(pickle_file)\n", 1022 | "print('Compressed pickle size:', statinfo.st_size)" 1023 | ] 1024 | }, 1025 | { 1026 | "cell_type": "markdown", 1027 | "metadata": { 1028 | "colab_type": "text", 1029 | "id": "gE_cRAQB33lk" 1030 | }, 1031 | "source": [ 1032 | "---\n", 1033 | "Problem 5\n", 1034 | "---------\n", 1035 | "\n", 1036 | "By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.\n", 1037 | "Measure how much overlap there is between training, validation and test samples.\n", 1038 | "\n", 1039 | "Optional questions:\n", 1040 | "- What about near duplicates between datasets? (images that are almost identical)\n", 1041 | "- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.\n", 1042 | "---" 1043 | ] 1044 | }, 1045 | { 1046 | "cell_type": "code", 1047 | "execution_count": 15, 1048 | "metadata": { 1049 | "collapsed": false 1050 | }, 1051 | "outputs": [ 1052 | { 1053 | "name": "stdout", 1054 | "output_type": "stream", 1055 | "text": [ 1056 | "188\n", 1057 | "3435\n", 1058 | "3428\n" 1059 | ] 1060 | } 1061 | ], 1062 | "source": [ 1063 | "all_data = pickle.load(open('notMNIST.pickle', 'rb'))\n", 1064 | "\n", 1065 | "def count_duplicates(dataset1, dataset2):\n", 1066 | " hashes = [hashlib.sha1(x).hexdigest() for x in dataset1]\n", 1067 | " dup_indices = []\n", 1068 | " for i in range(0, len(dataset2)):\n", 1069 | " if hashlib.sha1(dataset2[i]).hexdigest() in hashes:\n", 1070 | " dup_indices.append(i)\n", 1071 | " return len(dup_indices)\n", 1072 | "\n", 1073 | "\n", 1074 | "print(count_duplicates(all_data['test_dataset'], all_data['valid_dataset']))\n", 1075 | "print(count_duplicates(all_data['valid_dataset'], all_data['train_dataset']))\n", 1076 | "print(count_duplicates(all_data['test_dataset'], all_data['train_dataset']))\n", 1077 | "\n", 1078 | "\n", 1079 | " \n", 1080 | " " 1081 | ] 1082 | }, 1083 | { 1084 | "cell_type": "markdown", 1085 | "metadata": { 1086 | "colab_type": "text", 1087 | "id": "L8oww1s4JMQx" 1088 | }, 1089 | "source": [ 1090 | "---\n", 1091 | "Problem 6\n", 1092 | "---------\n", 1093 | "\n", 1094 | "Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.\n", 1095 | "\n", 1096 | "Train a simple model on this data using 50, 100, 1000 and 5000 training samples. 
Hint: you can use the LogisticRegression model from sklearn.linear_model.\n", 1097 | "\n", 1098 | "Optional question: train an off-the-shelf model on all the data!\n", 1099 | "\n", 1100 | "---" 1101 | ] 1102 | }, 1103 | { 1104 | "cell_type": "code", 1105 | "execution_count": 35, 1106 | "metadata": { 1107 | "collapsed": false 1108 | }, 1109 | "outputs": [], 1110 | "source": [ 1111 | "train_dataset = all_data['train_dataset']\n", 1112 | "train_labels = all_data['train_labels']\n", 1113 | "test_dataset = all_data['test_dataset']\n", 1114 | "test_labels = all_data['test_labels']\n", 1115 | "\n", 1116 | "\n" 1117 | ] 1118 | }, 1119 | { 1120 | "cell_type": "code", 1121 | "execution_count": 38, 1122 | "metadata": { 1123 | "collapsed": false 1124 | }, 1125 | "outputs": [ 1126 | { 1127 | "name": "stdout", 1128 | "output_type": "stream", 1129 | "text": [ 1130 | "100 trainsamples score: 0.7657\n", 1131 | "1000 trainsamples score: 0.8349\n", 1132 | "5000 trainsamples score: 0.8489\n", 1133 | "10000 trainsamples score: 0.8644\n", 1134 | "all trainsamples score: 0.8935\n" 1135 | ] 1136 | } 1137 | ], 1138 | "source": [ 1139 | "\n", 1140 | "def get_score(train_dataset, train_labels, test_dataset, test_labels):\n", 1141 | " model = LogisticRegression()\n", 1142 | " train_flatten_dataset = np.array([x.flatten() for x in train_dataset])\n", 1143 | " test_flatten_dataset = np.array([x.flatten() for x in test_dataset])\n", 1144 | " model.fit(train_flatten_dataset, train_labels)\n", 1145 | "\n", 1146 | " return model.score([x.flatten() for x in test_dataset], test_labels)\n", 1147 | "\n", 1148 | "print(\"100 trainsamples score: \" + str(get_score(train_dataset[:100], train_labels[:100], test_dataset, test_labels)))\n", 1149 | "print(\"1000 trainsamples score: \" + str(get_score(train_dataset[:1000], train_labels[:1000], test_dataset, test_labels)))\n", 1150 | "print(\"5000 trainsamples score: \" + str(get_score(train_dataset[:5000], train_labels[:5000], test_dataset, test_labels)))\n", 1151 | "print(\"10000 trainsamples score: \" + str(get_score(train_dataset[:10000], train_labels[:10000], test_dataset, test_labels)))\n", 1152 | "print(\"all trainsamples score: \" + str(get_score(train_dataset, train_labels, test_dataset, test_labels)))\n", 1153 | "\n" 1154 | ] 1155 | }, 1156 | { 1157 | "cell_type": "code", 1158 | "execution_count": null, 1159 | "metadata": { 1160 | "collapsed": true 1161 | }, 1162 | "outputs": [], 1163 | "source": [] 1164 | } 1165 | ], 1166 | "metadata": { 1167 | "colab": { 1168 | "default_view": {}, 1169 | "name": "1_notmnist.ipynb", 1170 | "provenance": [], 1171 | "version": "0.3.2", 1172 | "views": {} 1173 | }, 1174 | "kernelspec": { 1175 | "display_name": "Python 2", 1176 | "language": "python", 1177 | "name": "python2" 1178 | }, 1179 | "language_info": { 1180 | "codemirror_mode": { 1181 | "name": "ipython", 1182 | "version": 2 1183 | }, 1184 | "file_extension": ".py", 1185 | "mimetype": "text/x-python", 1186 | "name": "python", 1187 | "nbconvert_exporter": "python", 1188 | "pygments_lexer": "ipython2", 1189 | "version": "2.7.6" 1190 | } 1191 | }, 1192 | "nbformat": 4, 1193 | "nbformat_minor": 0 1194 | } 1195 | -------------------------------------------------------------------------------- /2_fullyconnected.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab_type": "text", 7 | "id": "kR-4eNdK6lYS" 8 | }, 9 | "source": [ 10 | "Deep Learning\n", 11 | "=============\n", 12 
| "\n", 13 | "Assignment 2\n", 14 | "------------\n", 15 | "\n", 16 | "Previously in `1_notmnist.ipynb`, we created a pickle with formatted datasets for training, development and testing on the [notMNIST dataset](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html).\n", 17 | "\n", 18 | "The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow." 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": 1, 24 | "metadata": { 25 | "cellView": "both", 26 | "colab": { 27 | "autoexec": { 28 | "startup": false, 29 | "wait_interval": 0 30 | } 31 | }, 32 | "colab_type": "code", 33 | "collapsed": true, 34 | "id": "JLpLa8Jt7Vu4" 35 | }, 36 | "outputs": [], 37 | "source": [ 38 | "# These are all the modules we'll be using later. Make sure you can import them\n", 39 | "# before proceeding further.\n", 40 | "from __future__ import print_function\n", 41 | "import numpy as np\n", 42 | "import tensorflow as tf\n", 43 | "from six.moves import cPickle as pickle\n", 44 | "from six.moves import range" 45 | ] 46 | }, 47 | { 48 | "cell_type": "code", 49 | "execution_count": 2, 50 | "metadata": { 51 | "collapsed": false 52 | }, 53 | "outputs": [ 54 | { 55 | "name": "stdout", 56 | "output_type": "stream", 57 | "text": [ 58 | "[[1 2]\n", 59 | " [4 5]\n", 60 | " [7 8]]\n", 61 | "Tensor(\"truncated_normal:0\", shape=(50, 7), dtype=float32)\n", 62 | "[1 2 3 4 0 0 0]\n" 63 | ] 64 | } 65 | ], 66 | "source": [ 67 | "a = np.array([[1,2,3,10],[4,5,6, 11],[7,8,9,12]])\n", 68 | "print(a[:,:2])\n", 69 | "print (tf.truncated_normal([10 * 5, 7]))\n", 70 | "\n", 71 | "graph = tf.Graph()\n", 72 | "\n", 73 | "x = tf.nn.relu([1,2,3,4, -1,-3,-5])\n", 74 | "\n", 75 | "model = tf.initialize_all_variables()\n", 76 | "\n", 77 | "with tf.Session() as session:\n", 78 | " print(session.run(x))\n" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": { 84 | "colab_type": "text", 85 | "id": "1HrCK6e17WzV" 86 | }, 87 | "source": [ 88 | "First reload the data we generated in `1_notmnist.ipynb`." 
89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": 3, 94 | "metadata": { 95 | "cellView": "both", 96 | "colab": { 97 | "autoexec": { 98 | "startup": false, 99 | "wait_interval": 0 100 | }, 101 | "output_extras": [ 102 | { 103 | "item_id": 1 104 | } 105 | ] 106 | }, 107 | "colab_type": "code", 108 | "collapsed": false, 109 | "executionInfo": { 110 | "elapsed": 19456, 111 | "status": "ok", 112 | "timestamp": 1449847956073, 113 | "user": { 114 | "color": "", 115 | "displayName": "", 116 | "isAnonymous": false, 117 | "isMe": true, 118 | "permissionId": "", 119 | "photoUrl": "", 120 | "sessionId": "0", 121 | "userId": "" 122 | }, 123 | "user_tz": 480 124 | }, 125 | "id": "y3-cj1bpmuxc", 126 | "outputId": "0ddb1607-1fc4-4ddb-de28-6c7ab7fb0c33" 127 | }, 128 | "outputs": [ 129 | { 130 | "name": "stdout", 131 | "output_type": "stream", 132 | "text": [ 133 | "Training set (200000, 28, 28) (200000,)\n", 134 | "Validation set (10000, 28, 28) (10000,)\n", 135 | "Test set (10000, 28, 28) (10000,)\n" 136 | ] 137 | } 138 | ], 139 | "source": [ 140 | "pickle_file = 'notMNIST.pickle'\n", 141 | "\n", 142 | "with open(pickle_file, 'rb') as f:\n", 143 | " save = pickle.load(f)\n", 144 | " train_dataset = save['train_dataset']\n", 145 | " train_labels = save['train_labels']\n", 146 | " valid_dataset = save['valid_dataset']\n", 147 | " valid_labels = save['valid_labels']\n", 148 | " test_dataset = save['test_dataset']\n", 149 | " test_labels = save['test_labels']\n", 150 | " del save # hint to help gc free up memory\n", 151 | " print('Training set', train_dataset.shape, train_labels.shape)\n", 152 | " print('Validation set', valid_dataset.shape, valid_labels.shape)\n", 153 | " print('Test set', test_dataset.shape, test_labels.shape)" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": { 159 | "colab_type": "text", 160 | "id": "L7aHrm6nGDMB" 161 | }, 162 | "source": [ 163 | "Reformat into a shape that's more adapted to the models we're going to train:\n", 164 | "- data as a flat matrix,\n", 165 | "- labels as float 1-hot encodings." 
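Note: the `reformat` cell that follows builds the float 1-hot labels with a NumPy broadcasting comparison rather than an explicit loop. A minimal standalone sketch of that idiom, assuming the same `num_labels = 10` convention (the label values below are made-up examples):

import numpy as np

num_labels = 10
labels = np.array([0, 3, 9], dtype=np.int32)  # example class indices
# labels[:, None] has shape (3, 1); comparing it with np.arange(num_labels)
# (shape (10,)) broadcasts to a (3, 10) boolean matrix with one True per row.
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(one_hot.shape)  # (3, 10)
print(one_hot[1])     # the row for label 3: 1.0 at index 3, 0.0 elsewhere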
166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": 4, 171 | "metadata": { 172 | "cellView": "both", 173 | "colab": { 174 | "autoexec": { 175 | "startup": false, 176 | "wait_interval": 0 177 | }, 178 | "output_extras": [ 179 | { 180 | "item_id": 1 181 | } 182 | ] 183 | }, 184 | "colab_type": "code", 185 | "collapsed": false, 186 | "executionInfo": { 187 | "elapsed": 19723, 188 | "status": "ok", 189 | "timestamp": 1449847956364, 190 | "user": { 191 | "color": "", 192 | "displayName": "", 193 | "isAnonymous": false, 194 | "isMe": true, 195 | "permissionId": "", 196 | "photoUrl": "", 197 | "sessionId": "0", 198 | "userId": "" 199 | }, 200 | "user_tz": 480 201 | }, 202 | "id": "IRSyYiIIGIzS", 203 | "outputId": "2ba0fc75-1487-4ace-a562-cf81cae82793" 204 | }, 205 | "outputs": [ 206 | { 207 | "name": "stdout", 208 | "output_type": "stream", 209 | "text": [ 210 | "Training set (200000, 784) (200000, 10)\n", 211 | "Validation set (10000, 784) (10000, 10)\n", 212 | "Test set (10000, 784) (10000, 10)\n" 213 | ] 214 | } 215 | ], 216 | "source": [ 217 | "image_size = 28\n", 218 | "num_labels = 10\n", 219 | "\n", 220 | "def reformat(dataset, labels):\n", 221 | " dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n", 222 | " # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]\n", 223 | " labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n", 224 | " return dataset, labels\n", 225 | "train_dataset, train_labels = reformat(train_dataset, train_labels)\n", 226 | "valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n", 227 | "test_dataset, test_labels = reformat(test_dataset, test_labels)\n", 228 | "print('Training set', train_dataset.shape, train_labels.shape)\n", 229 | "print('Validation set', valid_dataset.shape, valid_labels.shape)\n", 230 | "print('Test set', test_dataset.shape, test_labels.shape)" 231 | ] 232 | }, 233 | { 234 | "cell_type": "markdown", 235 | "metadata": { 236 | "colab_type": "text", 237 | "id": "nCLVqyQ5vPPH" 238 | }, 239 | "source": [ 240 | "We're first going to train a multinomial logistic regression using simple gradient descent.\n", 241 | "\n", 242 | "TensorFlow works like this:\n", 243 | "* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:\n", 244 | "\n", 245 | " with graph.as_default():\n", 246 | " ...\n", 247 | "\n", 248 | "* Then you can run the operations on this graph as many times as you want by calling `session.run()`, providing it outputs to fetch from the graph that get returned. 
This runtime operation is all contained in the block below:\n", 249 | "\n", 250 | "      with tf.Session(graph=graph) as session:\n", 251 | "          ...\n", 252 | "\n", 253 | "Let's load all the data into TensorFlow and build the computation graph corresponding to our training:" 254 | ] 255 | }, 256 | { 257 | "cell_type": "code", 258 | "execution_count": 5, 259 | "metadata": { 260 | "cellView": "both", 261 | "colab": { 262 | "autoexec": { 263 | "startup": false, 264 | "wait_interval": 0 265 | } 266 | }, 267 | "colab_type": "code", 268 | "collapsed": true, 269 | "id": "Nfv39qvtvOl_" 270 | }, 271 | "outputs": [], 272 | "source": [ 273 | "# With gradient descent training, even this much data is prohibitive.\n", 274 | "# Subset the training data for faster turnaround.\n", 275 | "train_subset = 10000\n", 276 | "\n", 277 | "graph = tf.Graph()\n", 278 | "with graph.as_default():\n", 279 | "\n", 280 | "  # Input data.\n", 281 | "  # Load the training, validation and test data into constants that are\n", 282 | "  # attached to the graph.\n", 283 | "  tf_train_dataset = tf.constant(train_dataset[:train_subset, :])\n", 284 | "  tf_train_labels = tf.constant(train_labels[:train_subset])\n", 285 | "  tf_valid_dataset = tf.constant(valid_dataset)\n", 286 | "  tf_test_dataset = tf.constant(test_dataset)\n", 287 | "  \n", 288 | "  # Variables.\n", 289 | "  # These are the parameters that we are going to be training. The weight\n", 290 | "  # matrix will be initialized using random values following a (truncated)\n", 291 | "  # normal distribution. The biases get initialized to zero.\n", 292 | "  weights = tf.Variable(\n", 293 | "    tf.truncated_normal([image_size * image_size, num_labels]))\n", 294 | "  biases = tf.Variable(tf.zeros([num_labels]))\n", 295 | "  \n", 296 | "  # Training computation.\n", 297 | "  # We multiply the inputs with the weight matrix, and add biases. We compute\n", 298 | "  # the softmax and cross-entropy (it's one operation in TensorFlow, because\n", 299 | "  # it's very common, and it can be optimized). 
We take the average of this\n", 300 | " # cross-entropy across all training examples: that's our loss.\n", 301 | " logits = tf.matmul(tf_train_dataset, weights) + biases\n", 302 | " loss = tf.reduce_mean(\n", 303 | " tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n", 304 | " \n", 305 | " # Optimizer.\n", 306 | " # We are going to find the minimum of this loss using gradient descent.\n", 307 | " optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n", 308 | " \n", 309 | " # Predictions for the training, validation, and test data.\n", 310 | " # These are not part of training, but merely here so that we can report\n", 311 | " # accuracy figures as we train.\n", 312 | " train_prediction = tf.nn.softmax(logits)\n", 313 | " valid_prediction = tf.nn.softmax(\n", 314 | " tf.matmul(tf_valid_dataset, weights) + biases)\n", 315 | " test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)" 316 | ] 317 | }, 318 | { 319 | "cell_type": "markdown", 320 | "metadata": { 321 | "colab_type": "text", 322 | "id": "KQcL4uqISHjP" 323 | }, 324 | "source": [ 325 | "Let's run this computation and iterate:" 326 | ] 327 | }, 328 | { 329 | "cell_type": "code", 330 | "execution_count": 6, 331 | "metadata": { 332 | "cellView": "both", 333 | "colab": { 334 | "autoexec": { 335 | "startup": false, 336 | "wait_interval": 0 337 | }, 338 | "output_extras": [ 339 | { 340 | "item_id": 9 341 | } 342 | ] 343 | }, 344 | "colab_type": "code", 345 | "collapsed": false, 346 | "executionInfo": { 347 | "elapsed": 57454, 348 | "status": "ok", 349 | "timestamp": 1449847994134, 350 | "user": { 351 | "color": "", 352 | "displayName": "", 353 | "isAnonymous": false, 354 | "isMe": true, 355 | "permissionId": "", 356 | "photoUrl": "", 357 | "sessionId": "0", 358 | "userId": "" 359 | }, 360 | "user_tz": 480 361 | }, 362 | "id": "z2cjdenH869W", 363 | "outputId": "4c037ba1-b526-4d8e-e632-91e2a0333267" 364 | }, 365 | "outputs": [ 366 | { 367 | "name": "stdout", 368 | "output_type": "stream", 369 | "text": [ 370 | "Initialized\n", 371 | "Loss at step 0: 16.690657\n", 372 | "Training accuracy: 14.4%\n", 373 | "Validation accuracy: 16.8%\n", 374 | "Loss at step 100: 2.285818\n", 375 | "Training accuracy: 72.0%\n", 376 | "Validation accuracy: 70.4%\n", 377 | "Loss at step 200: 1.815837\n", 378 | "Training accuracy: 75.3%\n", 379 | "Validation accuracy: 73.0%\n", 380 | "Loss at step 300: 1.564086\n", 381 | "Training accuracy: 76.8%\n", 382 | "Validation accuracy: 73.6%\n", 383 | "Loss at step 400: 1.397563\n", 384 | "Training accuracy: 77.5%\n", 385 | "Validation accuracy: 74.1%\n", 386 | "Loss at step 500: 1.275849\n", 387 | "Training accuracy: 78.3%\n", 388 | "Validation accuracy: 74.6%\n", 389 | "Loss at step 600: 1.181371\n", 390 | "Training accuracy: 78.8%\n", 391 | "Validation accuracy: 74.7%\n", 392 | "Loss at step 700: 1.104932\n", 393 | "Training accuracy: 79.3%\n", 394 | "Validation accuracy: 74.9%\n", 395 | "Loss at step 800: 1.041311\n", 396 | "Training accuracy: 79.6%\n", 397 | "Validation accuracy: 75.0%\n", 398 | "Test accuracy: 82.9%\n" 399 | ] 400 | } 401 | ], 402 | "source": [ 403 | "num_steps = 801\n", 404 | "\n", 405 | "def accuracy(predictions, labels):\n", 406 | " return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n", 407 | " / predictions.shape[0])\n", 408 | "\n", 409 | "with tf.Session(graph=graph) as session:\n", 410 | " # This is a one-time operation which ensures the parameters get initialized as\n", 411 | " # we described in the graph: random weights 
for the matrix, zeros for the\n", 412 | " # biases. \n", 413 | " tf.initialize_all_variables().run()\n", 414 | " print('Initialized')\n", 415 | " for step in range(num_steps):\n", 416 | " # Run the computations. We tell .run() that we want to run the optimizer,\n", 417 | " # and get the loss value and the training predictions returned as numpy\n", 418 | " # arrays.\n", 419 | " _, l, predictions = session.run([optimizer, loss, train_prediction])\n", 420 | " if (step % 100 == 0):\n", 421 | " print('Loss at step %d: %f' % (step, l))\n", 422 | " print('Training accuracy: %.1f%%' % accuracy(\n", 423 | " predictions, train_labels[:train_subset, :]))\n", 424 | " # Calling .eval() on valid_prediction is basically like calling run(), but\n", 425 | " # just to get that one numpy array. Note that it recomputes all its graph\n", 426 | " # dependencies.\n", 427 | " print('Validation accuracy: %.1f%%' % accuracy(\n", 428 | " valid_prediction.eval(), valid_labels))\n", 429 | " print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))" 430 | ] 431 | }, 432 | { 433 | "cell_type": "markdown", 434 | "metadata": { 435 | "colab_type": "text", 436 | "id": "x68f-hxRGm3H" 437 | }, 438 | "source": [ 439 | "Let's now switch to stochastic gradient descent training instead, which is much faster.\n", 440 | "\n", 441 | "The graph will be similar, except that instead of holding all the training data into a constant node, we create a `Placeholder` node which will be fed actual data at every call of `session.run()`." 442 | ] 443 | }, 444 | { 445 | "cell_type": "code", 446 | "execution_count": 7, 447 | "metadata": { 448 | "cellView": "both", 449 | "colab": { 450 | "autoexec": { 451 | "startup": false, 452 | "wait_interval": 0 453 | } 454 | }, 455 | "colab_type": "code", 456 | "collapsed": false, 457 | "id": "qhPMzWYRGrzM" 458 | }, 459 | "outputs": [ 460 | { 461 | "name": "stdout", 462 | "output_type": "stream", 463 | "text": [ 464 | "(10,)\n", 465 | "(128, 10)\n" 466 | ] 467 | } 468 | ], 469 | "source": [ 470 | "batch_size = 128\n", 471 | "\n", 472 | "graph = tf.Graph()\n", 473 | "with graph.as_default():\n", 474 | "\n", 475 | " # Input data. 
For the training data, we use a placeholder that will be fed\n", 476 | " # at run time with a training minibatch.\n", 477 | " tf_train_dataset = tf.placeholder(tf.float32,\n", 478 | " shape=(batch_size, image_size * image_size))\n", 479 | " tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n", 480 | " tf_valid_dataset = tf.constant(valid_dataset)\n", 481 | " tf_test_dataset = tf.constant(test_dataset)\n", 482 | " \n", 483 | " # Variables.\n", 484 | " weights = tf.Variable(\n", 485 | " tf.truncated_normal([image_size * image_size, num_labels]))\n", 486 | " biases = tf.Variable(tf.zeros([num_labels]))\n", 487 | " print(biases.get_shape())\n", 488 | "\n", 489 | " # Training computation.\n", 490 | " logits = tf.matmul(tf_train_dataset, weights) + biases\n", 491 | " print(logits.get_shape())\n", 492 | " loss = tf.reduce_mean(\n", 493 | " tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n", 494 | " \n", 495 | " # Optimizer.\n", 496 | " optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n", 497 | " \n", 498 | " # Predictions for the training, validation, and test data.\n", 499 | " train_prediction = tf.nn.softmax(logits)\n", 500 | " valid_prediction = tf.nn.softmax(\n", 501 | " tf.matmul(tf_valid_dataset, weights) + biases)\n", 502 | " test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)" 503 | ] 504 | }, 505 | { 506 | "cell_type": "markdown", 507 | "metadata": { 508 | "colab_type": "text", 509 | "id": "XmVZESmtG4JH" 510 | }, 511 | "source": [ 512 | "Let's run it:" 513 | ] 514 | }, 515 | { 516 | "cell_type": "code", 517 | "execution_count": 9, 518 | "metadata": { 519 | "cellView": "both", 520 | "colab": { 521 | "autoexec": { 522 | "startup": false, 523 | "wait_interval": 0 524 | }, 525 | "output_extras": [ 526 | { 527 | "item_id": 6 528 | } 529 | ] 530 | }, 531 | "colab_type": "code", 532 | "collapsed": false, 533 | "executionInfo": { 534 | "elapsed": 66292, 535 | "status": "ok", 536 | "timestamp": 1449848003013, 537 | "user": { 538 | "color": "", 539 | "displayName": "", 540 | "isAnonymous": false, 541 | "isMe": true, 542 | "permissionId": "", 543 | "photoUrl": "", 544 | "sessionId": "0", 545 | "userId": "" 546 | }, 547 | "user_tz": 480 548 | }, 549 | "id": "FoF91pknG_YW", 550 | "outputId": "d255c80e-954d-4183-ca1c-c7333ce91d0a" 551 | }, 552 | "outputs": [ 553 | { 554 | "name": "stdout", 555 | "output_type": "stream", 556 | "text": [ 557 | "Initialized\n", 558 | "Minibatch loss at step 0: 15.037764\n", 559 | "Minibatch accuracy: 12.5%\n", 560 | "Validation accuracy: 12.3%\n", 561 | "Minibatch loss at step 500: 2.018972\n", 562 | "Minibatch accuracy: 75.8%\n", 563 | "Validation accuracy: 74.9%\n", 564 | "Minibatch loss at step 1000: 1.472441\n", 565 | "Minibatch accuracy: 72.7%\n", 566 | "Validation accuracy: 75.7%\n", 567 | "Minibatch loss at step 1500: 0.945338\n", 568 | "Minibatch accuracy: 77.3%\n", 569 | "Validation accuracy: 76.9%\n", 570 | "Minibatch loss at step 2000: 1.383193\n", 571 | "Minibatch accuracy: 72.7%\n", 572 | "Validation accuracy: 76.9%\n", 573 | "Minibatch loss at step 2500: 1.220180\n", 574 | "Minibatch accuracy: 74.2%\n", 575 | "Validation accuracy: 77.4%\n", 576 | "Minibatch loss at step 3000: 1.192983\n", 577 | "Minibatch accuracy: 75.0%\n", 578 | "Validation accuracy: 78.3%\n", 579 | "Test accuracy: 86.0%\n" 580 | ] 581 | } 582 | ], 583 | "source": [ 584 | "num_steps = 3001\n", 585 | "\n", 586 | "with tf.Session(graph=graph) as session:\n", 587 | " 
tf.initialize_all_variables().run()\n", 588 | " print(\"Initialized\")\n", 589 | " for step in range(num_steps):\n", 590 | " # Pick an offset within the training data, which has been randomized.\n", 591 | " # Note: we could use better randomization across epochs.\n", 592 | " offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n", 593 | " # Generate a minibatch.\n", 594 | " batch_data = train_dataset[offset:(offset + batch_size), :]\n", 595 | " batch_labels = train_labels[offset:(offset + batch_size), :]\n", 596 | " # Prepare a dictionary telling the session where to feed the minibatch.\n", 597 | " # The key of the dictionary is the placeholder node of the graph to be fed,\n", 598 | " # and the value is the numpy array to feed to it.\n", 599 | " feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n", 600 | " _, l, predictions = session.run(\n", 601 | " [optimizer, loss, train_prediction], feed_dict=feed_dict)\n", 602 | " if (step % 500 == 0):\n", 603 | " print(\"Minibatch loss at step %d: %f\" % (step, l))\n", 604 | " print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n", 605 | " print(\"Validation accuracy: %.1f%%\" % accuracy(\n", 606 | " valid_prediction.eval(), valid_labels))\n", 607 | " print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))\n" 608 | ] 609 | }, 610 | { 611 | "cell_type": "markdown", 612 | "metadata": { 613 | "colab_type": "text", 614 | "id": "7omWxtvLLxik" 615 | }, 616 | "source": [ 617 | "---\n", 618 | "Problem\n", 619 | "-------\n", 620 | "\n", 621 | "Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units [nn.relu()](https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#relu) and 1024 hidden nodes. This model should improve your validation / test accuracy.\n", 622 | "\n", 623 | "---" 624 | ] 625 | }, 626 | { 627 | "cell_type": "code", 628 | "execution_count": 10, 629 | "metadata": { 630 | "collapsed": false 631 | }, 632 | "outputs": [], 633 | "source": [ 634 | "#SGD with relu\n", 635 | "batch_size = 128\n", 636 | "relu_count = 1024 #hidden nodes count\n", 637 | "\n", 638 | "graph = tf.Graph()\n", 639 | "with graph.as_default():\n", 640 | "\n", 641 | " # Input data. For the training data, we use a placeholder that will be fed\n", 642 | " # at run time with a training minibatch.\n", 643 | " tf_train_dataset = tf.placeholder(tf.float32,\n", 644 | " shape=(batch_size, image_size * image_size))\n", 645 | " tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n", 646 | " tf_valid_dataset = tf.constant(valid_dataset)\n", 647 | " tf_test_dataset = tf.constant(test_dataset)\n", 648 | " \n", 649 | "\n", 650 | " # Variables.\n", 651 | " weights_1 = tf.Variable(\n", 652 | " tf.truncated_normal([image_size * image_size, relu_count]))\n", 653 | " biases_1 = tf.Variable(tf.zeros([relu_count]))\n", 654 | " # send relu to final nn layer\n", 655 | " weights_2 = tf.Variable(\n", 656 | " tf.truncated_normal([relu_count, num_labels]))\n", 657 | "\n", 658 | " biases_2 = tf.Variable(tf.zeros([num_labels]))\n", 659 | "\n", 660 | " # Training computation. 
(#layer_1 -> layer_2(relu) -> layer_3)\n", 661 | " logits = tf.matmul( tf.nn.relu(tf.matmul(tf_train_dataset, weights_1) + biases_1), weights_2) + biases_2\n", 662 | "\n", 663 | "\n", 664 | " loss = tf.reduce_mean(\n", 665 | " tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n", 666 | " \n", 667 | " # Optimizer.\n", 668 | " optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n", 669 | " \n", 670 | " # Predictions for the training, validation, and test data.\n", 671 | " train_prediction = tf.nn.softmax(logits)\n", 672 | " valid_prediction = tf.nn.softmax(\n", 673 | " tf.matmul( tf.nn.relu(tf.matmul(tf_valid_dataset, weights_1) + biases_1), weights_2) + biases_2)\n", 674 | " test_prediction = tf.nn.softmax(\n", 675 | " tf.matmul( tf.nn.relu(tf.matmul(tf_test_dataset, weights_1) + biases_1), weights_2) + biases_2)" 676 | ] 677 | }, 678 | { 679 | "cell_type": "code", 680 | "execution_count": 11, 681 | "metadata": { 682 | "collapsed": false 683 | }, 684 | "outputs": [ 685 | { 686 | "name": "stdout", 687 | "output_type": "stream", 688 | "text": [ 689 | "Initialized\n", 690 | "Minibatch loss at step 0: 257.288879\n", 691 | "Minibatch accuracy: 19.5%\n", 692 | "Validation accuracy: 33.9%\n", 693 | "Minibatch loss at step 500: 22.886765\n", 694 | "Minibatch accuracy: 83.6%\n", 695 | "Validation accuracy: 80.1%\n", 696 | "Minibatch loss at step 1000: 10.600883\n", 697 | "Minibatch accuracy: 77.3%\n", 698 | "Validation accuracy: 81.4%\n", 699 | "Minibatch loss at step 1500: 6.771549\n", 700 | "Minibatch accuracy: 81.2%\n", 701 | "Validation accuracy: 81.1%\n", 702 | "Minibatch loss at step 2000: 8.389015\n", 703 | "Minibatch accuracy: 81.2%\n", 704 | "Validation accuracy: 81.6%\n", 705 | "Minibatch loss at step 2500: 5.756484\n", 706 | "Minibatch accuracy: 83.6%\n", 707 | "Validation accuracy: 81.9%\n", 708 | "Minibatch loss at step 3000: 4.881168\n", 709 | "Minibatch accuracy: 80.5%\n", 710 | "Validation accuracy: 81.1%\n", 711 | "Minibatch loss at step 3500: 4.013457\n", 712 | "Minibatch accuracy: 80.5%\n", 713 | "Validation accuracy: 82.6%\n", 714 | "Minibatch loss at step 4000: 0.905417\n", 715 | "Minibatch accuracy: 86.7%\n", 716 | "Validation accuracy: 82.5%\n", 717 | "Minibatch loss at step 4500: 1.373417\n", 718 | "Minibatch accuracy: 87.5%\n", 719 | "Validation accuracy: 82.5%\n", 720 | "Test accuracy: 90.2%\n" 721 | ] 722 | } 723 | ], 724 | "source": [ 725 | "num_steps = 5000\n", 726 | "\n", 727 | "with tf.Session(graph=graph) as session:\n", 728 | " tf.initialize_all_variables().run()\n", 729 | " print(\"Initialized\")\n", 730 | " for step in range(num_steps):\n", 731 | " # Pick an offset within the training data, which has been randomized.\n", 732 | " # Note: we could use better randomization across epochs.\n", 733 | " offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n", 734 | " # Generate a minibatch.\n", 735 | " batch_data = train_dataset[offset:(offset + batch_size), :]\n", 736 | " batch_labels = train_labels[offset:(offset + batch_size), :]\n", 737 | " # Prepare a dictionary telling the session where to feed the minibatch.\n", 738 | " # The key of the dictionary is the placeholder node of the graph to be fed,\n", 739 | " # and the value is the numpy array to feed to it.\n", 740 | " feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n", 741 | " _, l, predictions = session.run(\n", 742 | " [optimizer, loss, train_prediction], feed_dict=feed_dict)\n", 743 | " if (step % 500 == 0):\n", 744 | " print(\"Minibatch 
loss at step %d: %f\" % (step, l))\n", 745 | " print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n", 746 | " print(\"Validation accuracy: %.1f%%\" % accuracy(\n", 747 | " valid_prediction.eval(), valid_labels))\n", 748 | " print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))" 749 | ] 750 | }, 751 | { 752 | "cell_type": "code", 753 | "execution_count": null, 754 | "metadata": { 755 | "collapsed": true 756 | }, 757 | "outputs": [], 758 | "source": [] 759 | } 760 | ], 761 | "metadata": { 762 | "colab": { 763 | "default_view": {}, 764 | "name": "2_fullyconnected.ipynb", 765 | "provenance": [], 766 | "version": "0.3.2", 767 | "views": {} 768 | }, 769 | "kernelspec": { 770 | "display_name": "Python 2", 771 | "language": "python", 772 | "name": "python2" 773 | }, 774 | "language_info": { 775 | "codemirror_mode": { 776 | "name": "ipython", 777 | "version": 2 778 | }, 779 | "file_extension": ".py", 780 | "mimetype": "text/x-python", 781 | "name": "python", 782 | "nbconvert_exporter": "python", 783 | "pygments_lexer": "ipython2", 784 | "version": "2.7.6" 785 | } 786 | }, 787 | "nbformat": 4, 788 | "nbformat_minor": 0 789 | } 790 | -------------------------------------------------------------------------------- /3_regularization.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab_type": "text", 7 | "id": "kR-4eNdK6lYS" 8 | }, 9 | "source": [ 10 | "Deep Learning\n", 11 | "=============\n", 12 | "\n", 13 | "Assignment 3\n", 14 | "------------\n", 15 | "\n", 16 | "Previously in `2_fullyconnected.ipynb`, you trained a logistic regression and a neural network model.\n", 17 | "\n", 18 | "The goal of this assignment is to explore regularization techniques." 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": 0, 24 | "metadata": { 25 | "cellView": "both", 26 | "colab": { 27 | "autoexec": { 28 | "startup": false, 29 | "wait_interval": 0 30 | } 31 | }, 32 | "colab_type": "code", 33 | "collapsed": true, 34 | "id": "JLpLa8Jt7Vu4" 35 | }, 36 | "outputs": [], 37 | "source": [ 38 | "# These are all the modules we'll be using later. Make sure you can import them\n", 39 | "# before proceeding further.\n", 40 | "from __future__ import print_function\n", 41 | "import numpy as np\n", 42 | "import tensorflow as tf\n", 43 | "from six.moves import cPickle as pickle" 44 | ] 45 | }, 46 | { 47 | "cell_type": "markdown", 48 | "metadata": { 49 | "colab_type": "text", 50 | "id": "1HrCK6e17WzV" 51 | }, 52 | "source": [ 53 | "First reload the data we generated in _notmist.ipynb_." 
54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 0, 59 | "metadata": { 60 | "cellView": "both", 61 | "colab": { 62 | "autoexec": { 63 | "startup": false, 64 | "wait_interval": 0 65 | }, 66 | "output_extras": [ 67 | { 68 | "item_id": 1 69 | } 70 | ] 71 | }, 72 | "colab_type": "code", 73 | "collapsed": false, 74 | "executionInfo": { 75 | "elapsed": 11777, 76 | "status": "ok", 77 | "timestamp": 1449849322348, 78 | "user": { 79 | "color": "", 80 | "displayName": "", 81 | "isAnonymous": false, 82 | "isMe": true, 83 | "permissionId": "", 84 | "photoUrl": "", 85 | "sessionId": "0", 86 | "userId": "" 87 | }, 88 | "user_tz": 480 89 | }, 90 | "id": "y3-cj1bpmuxc", 91 | "outputId": "e03576f1-ebbe-4838-c388-f1777bcc9873" 92 | }, 93 | "outputs": [ 94 | { 95 | "name": "stdout", 96 | "output_type": "stream", 97 | "text": [ 98 | "Training set (200000, 28, 28) (200000,)\n", 99 | "Validation set (10000, 28, 28) (10000,)\n", 100 | "Test set (18724, 28, 28) (18724,)\n" 101 | ] 102 | } 103 | ], 104 | "source": [ 105 | "pickle_file = 'notMNIST.pickle'\n", 106 | "\n", 107 | "with open(pickle_file, 'rb') as f:\n", 108 | " save = pickle.load(f)\n", 109 | " train_dataset = save['train_dataset']\n", 110 | " train_labels = save['train_labels']\n", 111 | " valid_dataset = save['valid_dataset']\n", 112 | " valid_labels = save['valid_labels']\n", 113 | " test_dataset = save['test_dataset']\n", 114 | " test_labels = save['test_labels']\n", 115 | " del save # hint to help gc free up memory\n", 116 | " print('Training set', train_dataset.shape, train_labels.shape)\n", 117 | " print('Validation set', valid_dataset.shape, valid_labels.shape)\n", 118 | " print('Test set', test_dataset.shape, test_labels.shape)" 119 | ] 120 | }, 121 | { 122 | "cell_type": "markdown", 123 | "metadata": { 124 | "colab_type": "text", 125 | "id": "L7aHrm6nGDMB" 126 | }, 127 | "source": [ 128 | "Reformat into a shape that's more adapted to the models we're going to train:\n", 129 | "- data as a flat matrix,\n", 130 | "- labels as float 1-hot encodings." 
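The reformat cell that follows builds the 1-hot labels with a NumPy broadcasting comparison; as a minimal illustrative sketch (separate from the notebook cell itself), the same trick in isolation makes the index-to-row mapping explicit:

    import numpy as np

    num_labels = 10
    labels = np.array([2, 0, 9])                     # three example class indices
    # Comparing the row vector 0..9 against a column of labels broadcasts to a
    # (3, 10) boolean array with exactly one True per row, then casts to float32.
    one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    print(one_hot.shape)   # (3, 10)
    print(one_hot[0])      # [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.] -- label 2 is hot at index 2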
131 | ] 132 | }, 133 | { 134 | "cell_type": "code", 135 | "execution_count": 0, 136 | "metadata": { 137 | "cellView": "both", 138 | "colab": { 139 | "autoexec": { 140 | "startup": false, 141 | "wait_interval": 0 142 | }, 143 | "output_extras": [ 144 | { 145 | "item_id": 1 146 | } 147 | ] 148 | }, 149 | "colab_type": "code", 150 | "collapsed": false, 151 | "executionInfo": { 152 | "elapsed": 11728, 153 | "status": "ok", 154 | "timestamp": 1449849322356, 155 | "user": { 156 | "color": "", 157 | "displayName": "", 158 | "isAnonymous": false, 159 | "isMe": true, 160 | "permissionId": "", 161 | "photoUrl": "", 162 | "sessionId": "0", 163 | "userId": "" 164 | }, 165 | "user_tz": 480 166 | }, 167 | "id": "IRSyYiIIGIzS", 168 | "outputId": "3f8996ee-3574-4f44-c953-5c8a04636582" 169 | }, 170 | "outputs": [ 171 | { 172 | "name": "stdout", 173 | "output_type": "stream", 174 | "text": [ 175 | "Training set (200000, 784) (200000, 10)\n", 176 | "Validation set (10000, 784) (10000, 10)\n", 177 | "Test set (18724, 784) (18724, 10)\n" 178 | ] 179 | } 180 | ], 181 | "source": [ 182 | "image_size = 28\n", 183 | "num_labels = 10\n", 184 | "\n", 185 | "def reformat(dataset, labels):\n", 186 | " dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n", 187 | " # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]\n", 188 | " labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n", 189 | " return dataset, labels\n", 190 | "train_dataset, train_labels = reformat(train_dataset, train_labels)\n", 191 | "valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n", 192 | "test_dataset, test_labels = reformat(test_dataset, test_labels)\n", 193 | "print('Training set', train_dataset.shape, train_labels.shape)\n", 194 | "print('Validation set', valid_dataset.shape, valid_labels.shape)\n", 195 | "print('Test set', test_dataset.shape, test_labels.shape)" 196 | ] 197 | }, 198 | { 199 | "cell_type": "code", 200 | "execution_count": 0, 201 | "metadata": { 202 | "cellView": "both", 203 | "colab": { 204 | "autoexec": { 205 | "startup": false, 206 | "wait_interval": 0 207 | } 208 | }, 209 | "colab_type": "code", 210 | "collapsed": true, 211 | "id": "RajPLaL_ZW6w" 212 | }, 213 | "outputs": [], 214 | "source": [ 215 | "def accuracy(predictions, labels):\n", 216 | " return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n", 217 | " / predictions.shape[0])" 218 | ] 219 | }, 220 | { 221 | "cell_type": "markdown", 222 | "metadata": { 223 | "colab_type": "text", 224 | "id": "sgLbUAQ1CW-1" 225 | }, 226 | "source": [ 227 | "---\n", 228 | "Problem 1\n", 229 | "---------\n", 230 | "\n", 231 | "Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor `t` using `nn.l2_loss(t)`. The right amount of regularization should improve your validation / test accuracy.\n", 232 | "\n", 233 | "---" 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "metadata": { 239 | "colab_type": "text", 240 | "id": "na8xX2yHZzNF" 241 | }, 242 | "source": [ 243 | "---\n", 244 | "Problem 2\n", 245 | "---------\n", 246 | "Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. 
What happens?\n", 247 | "\n", 248 | "---" 249 | ] 250 | }, 251 | { 252 | "cell_type": "markdown", 253 | "metadata": { 254 | "colab_type": "text", 255 | "id": "ww3SCBUdlkRc" 256 | }, 257 | "source": [ 258 | "---\n", 259 | "Problem 3\n", 260 | "---------\n", 261 | "Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides `nn.dropout()` for that, but you have to make sure it's only inserted during training.\n", 262 | "\n", 263 | "What happens to our extreme overfitting case?\n", 264 | "\n", 265 | "---" 266 | ] 267 | }, 268 | { 269 | "cell_type": "markdown", 270 | "metadata": { 271 | "colab_type": "text", 272 | "id": "-b1hTz3VWZjw" 273 | }, 274 | "source": [ 275 | "---\n", 276 | "Problem 4\n", 277 | "---------\n", 278 | "\n", 279 | "Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is [97.1%](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html?showComment=1391023266211#c8758720086795711595).\n", 280 | "\n", 281 | "One avenue you can explore is to add multiple layers.\n", 282 | "\n", 283 | "Another one is to use learning rate decay:\n", 284 | "\n", 285 | " global_step = tf.Variable(0) # count the number of steps taken.\n", 286 | " learning_rate = tf.train.exponential_decay(0.5, global_step, ...)\n", 287 | " optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)\n", 288 | " \n", 289 | " ---\n" 290 | ] 291 | } 292 | ], 293 | "metadata": { 294 | "colab": { 295 | "default_view": {}, 296 | "name": "3_regularization.ipynb", 297 | "provenance": [], 298 | "version": "0.3.2", 299 | "views": {} 300 | }, 301 | "kernelspec": { 302 | "display_name": "Python 2", 303 | "language": "python", 304 | "name": "python2" 305 | }, 306 | "language_info": { 307 | "codemirror_mode": { 308 | "name": "ipython", 309 | "version": 2 310 | }, 311 | "file_extension": ".py", 312 | "mimetype": "text/x-python", 313 | "name": "python", 314 | "nbconvert_exporter": "python", 315 | "pygments_lexer": "ipython2", 316 | "version": "2.7.6" 317 | } 318 | }, 319 | "nbformat": 4, 320 | "nbformat_minor": 0 321 | } 322 | -------------------------------------------------------------------------------- /4_convolutions.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "version": "0.3.2", 7 | "views": {}, 8 | "default_view": {}, 9 | "name": "4_convolutions.ipynb", 10 | "provenance": [] 11 | } 12 | }, 13 | "cells": [ 14 | { 15 | "cell_type": "markdown", 16 | "metadata": { 17 | "id": "4embtkV0pNxM", 18 | "colab_type": "text" 19 | }, 20 | "source": [ 21 | "Deep Learning\n", 22 | "=============\n", 23 | "\n", 24 | "Assignment 4\n", 25 | "------------\n", 26 | "\n", 27 | "Previously in `2_fullyconnected.ipynb` and `3_regularization.ipynb`, we trained fully connected networks to classify [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) characters.\n", 28 | "\n", 29 | "The goal of this assignment is make the neural network convolutional." 
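Before the convolutional model below, here is a minimal sketch of how the exercises left open in `3_regularization.ipynb` above could be wired into the one-hidden-layer ReLU network from `2_fullyconnected.ipynb`: an L2 penalty on both weight matrices (Problem 1) and dropout on the hidden layer for the training path only (Problem 3). It follows the same era of TensorFlow API used throughout these notebooks; the `beta` penalty weight and the 0.5 keep probability are illustrative assumptions to be tuned on the validation set, not values taken from the notebooks.

    import tensorflow as tf

    image_size, num_labels = 28, 10
    batch_size, relu_count = 128, 1024
    beta = 1e-3  # assumed L2 strength

    graph = tf.Graph()
    with graph.as_default():
        tf_train_dataset = tf.placeholder(
            tf.float32, shape=(batch_size, image_size * image_size))
        tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))

        weights_1 = tf.Variable(tf.truncated_normal([image_size * image_size, relu_count]))
        biases_1 = tf.Variable(tf.zeros([relu_count]))
        weights_2 = tf.Variable(tf.truncated_normal([relu_count, num_labels]))
        biases_2 = tf.Variable(tf.zeros([num_labels]))

        hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights_1) + biases_1)
        # Problem 3: dropout only on the training computation, never at evaluation.
        hidden_dropped = tf.nn.dropout(hidden, 0.5)
        logits = tf.matmul(hidden_dropped, weights_2) + biases_2

        # Problem 1: add an L2 penalty on the weights to the cross-entropy loss.
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
        loss += beta * (tf.nn.l2_loss(weights_1) + tf.nn.l2_loss(weights_2))

        optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
        # Validation / test predictions would reuse weights_1 and weights_2 with no
        # dropout, exactly like the valid_prediction and test_prediction tensors above.

For Problem 2, feeding only the first few hundred training examples through the usual training loop should push minibatch accuracy toward 100% while validation accuracy stalls; both the L2 term and dropout should narrow that gap.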
30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "metadata": { 35 | "id": "tm2CQN_Cpwj0", 36 | "colab_type": "code", 37 | "colab": { 38 | "autoexec": { 39 | "startup": false, 40 | "wait_interval": 0 41 | } 42 | }, 43 | "cellView": "both" 44 | }, 45 | "source": [ 46 | "# These are all the modules we'll be using later. Make sure you can import them\n", 47 | "# before proceeding further.\n", 48 | "from __future__ import print_function\n", 49 | "import numpy as np\n", 50 | "import tensorflow as tf\n", 51 | "from six.moves import cPickle as pickle\n", 52 | "from six.moves import range" 53 | ], 54 | "outputs": [], 55 | "execution_count": 0 56 | }, 57 | { 58 | "cell_type": "code", 59 | "metadata": { 60 | "id": "y3-cj1bpmuxc", 61 | "colab_type": "code", 62 | "colab": { 63 | "autoexec": { 64 | "startup": false, 65 | "wait_interval": 0 66 | }, 67 | "output_extras": [ 68 | { 69 | "item_id": 1 70 | } 71 | ] 72 | }, 73 | "cellView": "both", 74 | "executionInfo": { 75 | "elapsed": 11948, 76 | "status": "ok", 77 | "timestamp": 1446658914837, 78 | "user": { 79 | "color": "", 80 | "displayName": "", 81 | "isAnonymous": false, 82 | "isMe": true, 83 | "permissionId": "", 84 | "photoUrl": "", 85 | "sessionId": "0", 86 | "userId": "" 87 | }, 88 | "user_tz": 480 89 | }, 90 | "outputId": "016b1a51-0290-4b08-efdb-8c95ffc3cd01" 91 | }, 92 | "source": [ 93 | "pickle_file = 'notMNIST.pickle'\n", 94 | "\n", 95 | "with open(pickle_file, 'rb') as f:\n", 96 | " save = pickle.load(f)\n", 97 | " train_dataset = save['train_dataset']\n", 98 | " train_labels = save['train_labels']\n", 99 | " valid_dataset = save['valid_dataset']\n", 100 | " valid_labels = save['valid_labels']\n", 101 | " test_dataset = save['test_dataset']\n", 102 | " test_labels = save['test_labels']\n", 103 | " del save # hint to help gc free up memory\n", 104 | " print('Training set', train_dataset.shape, train_labels.shape)\n", 105 | " print('Validation set', valid_dataset.shape, valid_labels.shape)\n", 106 | " print('Test set', test_dataset.shape, test_labels.shape)" 107 | ], 108 | "outputs": [ 109 | { 110 | "output_type": "stream", 111 | "text": [ 112 | "Training set (200000, 28, 28) (200000,)\n", 113 | "Validation set (10000, 28, 28) (10000,)\n", 114 | "Test set (18724, 28, 28) (18724,)\n" 115 | ], 116 | "name": "stdout" 117 | } 118 | ], 119 | "execution_count": 0 120 | }, 121 | { 122 | "cell_type": "markdown", 123 | "metadata": { 124 | "id": "L7aHrm6nGDMB", 125 | "colab_type": "text" 126 | }, 127 | "source": [ 128 | "Reformat into a TensorFlow-friendly shape:\n", 129 | "- convolutions need the image data formatted as a cube (width by height by #channels)\n", 130 | "- labels as float 1-hot encodings." 
131 | ] 132 | }, 133 | { 134 | "cell_type": "code", 135 | "metadata": { 136 | "id": "IRSyYiIIGIzS", 137 | "colab_type": "code", 138 | "colab": { 139 | "autoexec": { 140 | "startup": false, 141 | "wait_interval": 0 142 | }, 143 | "output_extras": [ 144 | { 145 | "item_id": 1 146 | } 147 | ] 148 | }, 149 | "cellView": "both", 150 | "executionInfo": { 151 | "elapsed": 11952, 152 | "status": "ok", 153 | "timestamp": 1446658914857, 154 | "user": { 155 | "color": "", 156 | "displayName": "", 157 | "isAnonymous": false, 158 | "isMe": true, 159 | "permissionId": "", 160 | "photoUrl": "", 161 | "sessionId": "0", 162 | "userId": "" 163 | }, 164 | "user_tz": 480 165 | }, 166 | "outputId": "650a208c-8359-4852-f4f5-8bf10e80ef6c" 167 | }, 168 | "source": [ 169 | "image_size = 28\n", 170 | "num_labels = 10\n", 171 | "num_channels = 1 # grayscale\n", 172 | "\n", 173 | "import numpy as np\n", 174 | "\n", 175 | "def reformat(dataset, labels):\n", 176 | " dataset = dataset.reshape(\n", 177 | " (-1, image_size, image_size, num_channels)).astype(np.float32)\n", 178 | " labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n", 179 | " return dataset, labels\n", 180 | "train_dataset, train_labels = reformat(train_dataset, train_labels)\n", 181 | "valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n", 182 | "test_dataset, test_labels = reformat(test_dataset, test_labels)\n", 183 | "print('Training set', train_dataset.shape, train_labels.shape)\n", 184 | "print('Validation set', valid_dataset.shape, valid_labels.shape)\n", 185 | "print('Test set', test_dataset.shape, test_labels.shape)" 186 | ], 187 | "outputs": [ 188 | { 189 | "output_type": "stream", 190 | "text": [ 191 | "Training set (200000, 28, 28, 1) (200000, 10)\n", 192 | "Validation set (10000, 28, 28, 1) (10000, 10)\n", 193 | "Test set (18724, 28, 28, 1) (18724, 10)\n" 194 | ], 195 | "name": "stdout" 196 | } 197 | ], 198 | "execution_count": 0 199 | }, 200 | { 201 | "cell_type": "code", 202 | "metadata": { 203 | "id": "AgQDIREv02p1", 204 | "colab_type": "code", 205 | "colab": { 206 | "autoexec": { 207 | "startup": false, 208 | "wait_interval": 0 209 | } 210 | }, 211 | "cellView": "both" 212 | }, 213 | "source": [ 214 | "def accuracy(predictions, labels):\n", 215 | " return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n", 216 | " / predictions.shape[0])" 217 | ], 218 | "outputs": [], 219 | "execution_count": 0 220 | }, 221 | { 222 | "cell_type": "markdown", 223 | "metadata": { 224 | "id": "5rhgjmROXu2O", 225 | "colab_type": "text" 226 | }, 227 | "source": [ 228 | "Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes." 
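The description above hides a bit of shape arithmetic that the variable definitions below rely on; as an illustrative sketch (separate from the cell that follows), here is how a 28x28x1 image flows through the two stride-2 convolutions, and how the max-pooling alternative requested in Problem 1 further down keeps the same geometry:

    image_size, num_channels, depth, num_hidden, num_labels = 28, 1, 16, 64, 10

    # Each 'SAME'-padded convolution with stride 2 halves the spatial size:
    #   28x28x1 -> conv(stride 2) -> 14x14x16 -> conv(stride 2) -> 7x7x16
    flattened = (image_size // 4) * (image_size // 4) * depth
    print(flattened)  # 784, the input width of layer3_weights in the cell below

    # Problem 1 further down swaps the stride-2 convolutions for stride-1
    # convolutions followed by 2x2 max pooling of stride 2, roughly:
    #   conv = tf.nn.conv2d(data, layer1_weights, [1, 1, 1, 1], padding='SAME')
    #   hidden = tf.nn.max_pool(tf.nn.relu(conv + layer1_biases),
    #                           ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # which keeps the 28 -> 14 -> 7 downsampling and hence the same flattened size.
    # (Left as comments here because layer1_weights only exists in the next cell.)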
229 | ] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "metadata": { 234 | "id": "IZYv70SvvOan", 235 | "colab_type": "code", 236 | "colab": { 237 | "autoexec": { 238 | "startup": false, 239 | "wait_interval": 0 240 | } 241 | }, 242 | "cellView": "both" 243 | }, 244 | "source": [ 245 | "batch_size = 16\n", 246 | "patch_size = 5\n", 247 | "depth = 16\n", 248 | "num_hidden = 64\n", 249 | "\n", 250 | "graph = tf.Graph()\n", 251 | "\n", 252 | "with graph.as_default():\n", 253 | "\n", 254 | " # Input data.\n", 255 | " tf_train_dataset = tf.placeholder(\n", 256 | " tf.float32, shape=(batch_size, image_size, image_size, num_channels))\n", 257 | " tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n", 258 | " tf_valid_dataset = tf.constant(valid_dataset)\n", 259 | " tf_test_dataset = tf.constant(test_dataset)\n", 260 | " \n", 261 | " # Variables.\n", 262 | " layer1_weights = tf.Variable(tf.truncated_normal(\n", 263 | " [patch_size, patch_size, num_channels, depth], stddev=0.1))\n", 264 | " layer1_biases = tf.Variable(tf.zeros([depth]))\n", 265 | " layer2_weights = tf.Variable(tf.truncated_normal(\n", 266 | " [patch_size, patch_size, depth, depth], stddev=0.1))\n", 267 | " layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))\n", 268 | " layer3_weights = tf.Variable(tf.truncated_normal(\n", 269 | " [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))\n", 270 | " layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))\n", 271 | " layer4_weights = tf.Variable(tf.truncated_normal(\n", 272 | " [num_hidden, num_labels], stddev=0.1))\n", 273 | " layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))\n", 274 | " \n", 275 | " # Model.\n", 276 | " def model(data):\n", 277 | " conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')\n", 278 | " hidden = tf.nn.relu(conv + layer1_biases)\n", 279 | " conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')\n", 280 | " hidden = tf.nn.relu(conv + layer2_biases)\n", 281 | " shape = hidden.get_shape().as_list()\n", 282 | " reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])\n", 283 | " hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)\n", 284 | " return tf.matmul(hidden, layer4_weights) + layer4_biases\n", 285 | " \n", 286 | " # Training computation.\n", 287 | " logits = model(tf_train_dataset)\n", 288 | " loss = tf.reduce_mean(\n", 289 | " tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n", 290 | " \n", 291 | " # Optimizer.\n", 292 | " optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)\n", 293 | " \n", 294 | " # Predictions for the training, validation, and test data.\n", 295 | " train_prediction = tf.nn.softmax(logits)\n", 296 | " valid_prediction = tf.nn.softmax(model(tf_valid_dataset))\n", 297 | " test_prediction = tf.nn.softmax(model(tf_test_dataset))" 298 | ], 299 | "outputs": [], 300 | "execution_count": 0 301 | }, 302 | { 303 | "cell_type": "code", 304 | "metadata": { 305 | "id": "noKFb2UovVFR", 306 | "colab_type": "code", 307 | "colab": { 308 | "autoexec": { 309 | "startup": false, 310 | "wait_interval": 0 311 | }, 312 | "output_extras": [ 313 | { 314 | "item_id": 37 315 | } 316 | ] 317 | }, 318 | "cellView": "both", 319 | "executionInfo": { 320 | "elapsed": 63292, 321 | "status": "ok", 322 | "timestamp": 1446658966251, 323 | "user": { 324 | "color": "", 325 | "displayName": "", 326 | "isAnonymous": false, 327 | "isMe": true, 328 | "permissionId": "", 329 | "photoUrl": "", 330 
| "sessionId": "0", 331 | "userId": "" 332 | }, 333 | "user_tz": 480 334 | }, 335 | "outputId": "28941338-2ef9-4088-8bd1-44295661e628" 336 | }, 337 | "source": [ 338 | "num_steps = 1001\n", 339 | "\n", 340 | "with tf.Session(graph=graph) as session:\n", 341 | " tf.initialize_all_variables().run()\n", 342 | " print('Initialized')\n", 343 | " for step in range(num_steps):\n", 344 | " offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n", 345 | " batch_data = train_dataset[offset:(offset + batch_size), :, :, :]\n", 346 | " batch_labels = train_labels[offset:(offset + batch_size), :]\n", 347 | " feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n", 348 | " _, l, predictions = session.run(\n", 349 | " [optimizer, loss, train_prediction], feed_dict=feed_dict)\n", 350 | " if (step % 50 == 0):\n", 351 | " print('Minibatch loss at step %d: %f' % (step, l))\n", 352 | " print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))\n", 353 | " print('Validation accuracy: %.1f%%' % accuracy(\n", 354 | " valid_prediction.eval(), valid_labels))\n", 355 | " print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))" 356 | ], 357 | "outputs": [ 358 | { 359 | "output_type": "stream", 360 | "text": [ 361 | "Initialized\n", 362 | "Minibatch loss at step 0 : 3.51275\n", 363 | "Minibatch accuracy: 6.2%\n", 364 | "Validation accuracy: 12.8%\n", 365 | "Minibatch loss at step 50 : 1.48703\n", 366 | "Minibatch accuracy: 43.8%\n", 367 | "Validation accuracy: 50.4%\n", 368 | "Minibatch loss at step 100 : 1.04377\n", 369 | "Minibatch accuracy: 68.8%\n", 370 | "Validation accuracy: 67.4%\n", 371 | "Minibatch loss at step 150 : 0.601682\n", 372 | "Minibatch accuracy: 68.8%\n", 373 | "Validation accuracy: 73.0%\n", 374 | "Minibatch loss at step 200 : 0.898649\n", 375 | "Minibatch accuracy: 75.0%\n", 376 | "Validation accuracy: 77.8%\n", 377 | "Minibatch loss at step 250 : 1.3637\n", 378 | "Minibatch accuracy: 56.2%\n", 379 | "Validation accuracy: 75.4%\n", 380 | "Minibatch loss at step 300 : 1.41968\n", 381 | "Minibatch accuracy: 62.5%\n", 382 | "Validation accuracy: 76.0%\n", 383 | "Minibatch loss at step 350 : 0.300648\n", 384 | "Minibatch accuracy: 81.2%\n", 385 | "Validation accuracy: 80.2%\n", 386 | "Minibatch loss at step 400 : 1.32092\n", 387 | "Minibatch accuracy: 56.2%\n", 388 | "Validation accuracy: 80.4%\n", 389 | "Minibatch loss at step 450 : 0.556701\n", 390 | "Minibatch accuracy: 81.2%\n", 391 | "Validation accuracy: 79.4%\n", 392 | "Minibatch loss at step 500 : 1.65595\n", 393 | "Minibatch accuracy: 43.8%\n", 394 | "Validation accuracy: 79.6%\n", 395 | "Minibatch loss at step 550 : 1.06995\n", 396 | "Minibatch accuracy: 75.0%\n", 397 | "Validation accuracy: 81.2%\n", 398 | "Minibatch loss at step 600 : 0.223684\n", 399 | "Minibatch accuracy: 100.0%\n", 400 | "Validation accuracy: 82.3%\n", 401 | "Minibatch loss at step 650 : 0.619602\n", 402 | "Minibatch accuracy: 87.5%\n", 403 | "Validation accuracy: 81.8%\n", 404 | "Minibatch loss at step 700 : 0.812091\n", 405 | "Minibatch accuracy: 75.0%\n", 406 | "Validation accuracy: 82.4%\n", 407 | "Minibatch loss at step 750 : 0.276302\n", 408 | "Minibatch accuracy: 87.5%\n", 409 | "Validation accuracy: 82.3%\n", 410 | "Minibatch loss at step 800 : 0.450241\n", 411 | "Minibatch accuracy: 81.2%\n", 412 | "Validation accuracy: 82.3%\n", 413 | "Minibatch loss at step 850 : 0.137139\n", 414 | "Minibatch accuracy: 93.8%\n", 415 | "Validation accuracy: 82.3%\n", 416 | "Minibatch loss at step 900 : 
0.52664\n", 417 | "Minibatch accuracy: 75.0%\n", 418 | "Validation accuracy: 82.2%\n", 419 | "Minibatch loss at step 950 : 0.623835\n", 420 | "Minibatch accuracy: 87.5%\n", 421 | "Validation accuracy: 82.1%\n", 422 | "Minibatch loss at step 1000 : 0.243114\n", 423 | "Minibatch accuracy: 93.8%\n", 424 | "Validation accuracy: 82.9%\n", 425 | "Test accuracy: 90.0%\n" 426 | ], 427 | "name": "stdout" 428 | } 429 | ], 430 | "execution_count": 0 431 | }, 432 | { 433 | "cell_type": "markdown", 434 | "metadata": { 435 | "id": "KedKkn4EutIK", 436 | "colab_type": "text" 437 | }, 438 | "source": [ 439 | "---\n", 440 | "Problem 1\n", 441 | "---------\n", 442 | "\n", 443 | "The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides by a max pooling operation (`nn.max_pool()`) of stride 2 and kernel size 2.\n", 444 | "\n", 445 | "---" 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": { 451 | "id": "klf21gpbAgb-", 452 | "colab_type": "text" 453 | }, 454 | "source": [ 455 | "---\n", 456 | "Problem 2\n", 457 | "---------\n", 458 | "\n", 459 | "Try to get the best performance you can using a convolutional net. Look for example at the classic [LeNet5](http://yann.lecun.com/exdb/lenet/) architecture, adding Dropout, and/or adding learning rate decay.\n", 460 | "\n", 461 | "---" 462 | ] 463 | } 464 | ] 465 | } 466 | -------------------------------------------------------------------------------- /6_lstm.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "version": "0.3.2", 7 | "views": {}, 8 | "default_view": {}, 9 | "name": "6_lstm.ipynb", 10 | "provenance": [] 11 | } 12 | }, 13 | "cells": [ 14 | { 15 | "cell_type": "markdown", 16 | "metadata": { 17 | "id": "8tQJd2YSCfWR", 18 | "colab_type": "text" 19 | }, 20 | "source": [ 21 | "" 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "metadata": { 27 | "id": "D7tqLMoKF6uq", 28 | "colab_type": "text" 29 | }, 30 | "source": [ 31 | "Deep Learning\n", 32 | "=============\n", 33 | "\n", 34 | "Assignment 6\n", 35 | "------------\n", 36 | "\n", 37 | "After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data." 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "metadata": { 43 | "id": "MvEblsgEXxrd", 44 | "colab_type": "code", 45 | "colab": { 46 | "autoexec": { 47 | "startup": false, 48 | "wait_interval": 0 49 | } 50 | }, 51 | "cellView": "both" 52 | }, 53 | "source": [ 54 | "# These are all the modules we'll be using later. 
Make sure you can import them\n", 55 | "# before proceeding further.\n", 56 | "from __future__ import print_function\n", 57 | "import os\n", 58 | "import numpy as np\n", 59 | "import random\n", 60 | "import string\n", 61 | "import tensorflow as tf\n", 62 | "import zipfile\n", 63 | "from six.moves import range\n", 64 | "from six.moves.urllib.request import urlretrieve" 65 | ], 66 | "outputs": [], 67 | "execution_count": 0 68 | }, 69 | { 70 | "cell_type": "code", 71 | "metadata": { 72 | "id": "RJ-o3UBUFtCw", 73 | "colab_type": "code", 74 | "colab": { 75 | "autoexec": { 76 | "startup": false, 77 | "wait_interval": 0 78 | }, 79 | "output_extras": [ 80 | { 81 | "item_id": 1 82 | } 83 | ] 84 | }, 85 | "cellView": "both", 86 | "executionInfo": { 87 | "elapsed": 5993, 88 | "status": "ok", 89 | "timestamp": 1445965582896, 90 | "user": { 91 | "color": "#1FA15D", 92 | "displayName": "Vincent Vanhoucke", 93 | "isAnonymous": false, 94 | "isMe": true, 95 | "permissionId": "05076109866853157986", 96 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 97 | "sessionId": "6f6f07b359200c46", 98 | "userId": "102167687554210253930" 99 | }, 100 | "user_tz": 420 101 | }, 102 | "outputId": "d530534e-0791-4a94-ca6d-1c8f1b908a9e" 103 | }, 104 | "source": [ 105 | "url = 'http://mattmahoney.net/dc/'\n", 106 | "\n", 107 | "def maybe_download(filename, expected_bytes):\n", 108 | " \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n", 109 | " if not os.path.exists(filename):\n", 110 | " filename, _ = urlretrieve(url + filename, filename)\n", 111 | " statinfo = os.stat(filename)\n", 112 | " if statinfo.st_size == expected_bytes:\n", 113 | " print('Found and verified %s' % filename)\n", 114 | " else:\n", 115 | " print(statinfo.st_size)\n", 116 | " raise Exception(\n", 117 | " 'Failed to verify ' + filename + '. 
Can you get to it with a browser?')\n", 118 | " return filename\n", 119 | "\n", 120 | "filename = maybe_download('text8.zip', 31344016)" 121 | ], 122 | "outputs": [ 123 | { 124 | "output_type": "stream", 125 | "text": [ 126 | "Found and verified text8.zip\n" 127 | ], 128 | "name": "stdout" 129 | } 130 | ], 131 | "execution_count": 0 132 | }, 133 | { 134 | "cell_type": "code", 135 | "metadata": { 136 | "id": "Mvf09fjugFU_", 137 | "colab_type": "code", 138 | "colab": { 139 | "autoexec": { 140 | "startup": false, 141 | "wait_interval": 0 142 | }, 143 | "output_extras": [ 144 | { 145 | "item_id": 1 146 | } 147 | ] 148 | }, 149 | "cellView": "both", 150 | "executionInfo": { 151 | "elapsed": 5982, 152 | "status": "ok", 153 | "timestamp": 1445965582916, 154 | "user": { 155 | "color": "#1FA15D", 156 | "displayName": "Vincent Vanhoucke", 157 | "isAnonymous": false, 158 | "isMe": true, 159 | "permissionId": "05076109866853157986", 160 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 161 | "sessionId": "6f6f07b359200c46", 162 | "userId": "102167687554210253930" 163 | }, 164 | "user_tz": 420 165 | }, 166 | "outputId": "8f75db58-3862-404b-a0c3-799380597390" 167 | }, 168 | "source": [ 169 | "def read_data(filename):\n", 170 | " f = zipfile.ZipFile(filename)\n", 171 | " for name in f.namelist():\n", 172 | " return tf.compat.as_str(f.read(name))\n", 173 | " f.close()\n", 174 | " \n", 175 | "text = read_data(filename)\n", 176 | "print('Data size %d' % len(text))" 177 | ], 178 | "outputs": [ 179 | { 180 | "output_type": "stream", 181 | "text": [ 182 | "Data size 100000000\n" 183 | ], 184 | "name": "stdout" 185 | } 186 | ], 187 | "execution_count": 0 188 | }, 189 | { 190 | "cell_type": "markdown", 191 | "metadata": { 192 | "id": "ga2CYACE-ghb", 193 | "colab_type": "text" 194 | }, 195 | "source": [ 196 | "Create a small validation set." 
197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "metadata": { 202 | "id": "w-oBpfFG-j43", 203 | "colab_type": "code", 204 | "colab": { 205 | "autoexec": { 206 | "startup": false, 207 | "wait_interval": 0 208 | }, 209 | "output_extras": [ 210 | { 211 | "item_id": 1 212 | } 213 | ] 214 | }, 215 | "cellView": "both", 216 | "executionInfo": { 217 | "elapsed": 6184, 218 | "status": "ok", 219 | "timestamp": 1445965583138, 220 | "user": { 221 | "color": "#1FA15D", 222 | "displayName": "Vincent Vanhoucke", 223 | "isAnonymous": false, 224 | "isMe": true, 225 | "permissionId": "05076109866853157986", 226 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 227 | "sessionId": "6f6f07b359200c46", 228 | "userId": "102167687554210253930" 229 | }, 230 | "user_tz": 420 231 | }, 232 | "outputId": "bdb96002-d021-4379-f6de-a977924f0d02" 233 | }, 234 | "source": [ 235 | "valid_size = 1000\n", 236 | "valid_text = text[:valid_size]\n", 237 | "train_text = text[valid_size:]\n", 238 | "train_size = len(train_text)\n", 239 | "print(train_size, train_text[:64])\n", 240 | "print(valid_size, valid_text[:64])" 241 | ], 242 | "outputs": [ 243 | { 244 | "output_type": "stream", 245 | "text": [ 246 | "99999000 ons anarchists advocate social relations based upon voluntary as\n", 247 | "1000 anarchism originated as a term of abuse first used against earl\n" 248 | ], 249 | "name": "stdout" 250 | } 251 | ], 252 | "execution_count": 0 253 | }, 254 | { 255 | "cell_type": "markdown", 256 | "metadata": { 257 | "id": "Zdw6i4F8glpp", 258 | "colab_type": "text" 259 | }, 260 | "source": [ 261 | "Utility functions to map characters to vocabulary IDs and back." 262 | ] 263 | }, 264 | { 265 | "cell_type": "code", 266 | "metadata": { 267 | "id": "gAL1EECXeZsD", 268 | "colab_type": "code", 269 | "colab": { 270 | "autoexec": { 271 | "startup": false, 272 | "wait_interval": 0 273 | }, 274 | "output_extras": [ 275 | { 276 | "item_id": 1 277 | } 278 | ] 279 | }, 280 | "cellView": "both", 281 | "executionInfo": { 282 | "elapsed": 6276, 283 | "status": "ok", 284 | "timestamp": 1445965583249, 285 | "user": { 286 | "color": "#1FA15D", 287 | "displayName": "Vincent Vanhoucke", 288 | "isAnonymous": false, 289 | "isMe": true, 290 | "permissionId": "05076109866853157986", 291 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 292 | "sessionId": "6f6f07b359200c46", 293 | "userId": "102167687554210253930" 294 | }, 295 | "user_tz": 420 296 | }, 297 | "outputId": "88fc9032-feb9-45ff-a9a0-a26759cc1f2e" 298 | }, 299 | "source": [ 300 | "vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' '\n", 301 | "first_letter = ord(string.ascii_lowercase[0])\n", 302 | "\n", 303 | "def char2id(char):\n", 304 | " if char in string.ascii_lowercase:\n", 305 | " return ord(char) - first_letter + 1\n", 306 | " elif char == ' ':\n", 307 | " return 0\n", 308 | " else:\n", 309 | " print('Unexpected character: %s' % char)\n", 310 | " return 0\n", 311 | " \n", 312 | "def id2char(dictid):\n", 313 | " if dictid > 0:\n", 314 | " return chr(dictid + first_letter - 1)\n", 315 | " else:\n", 316 | " return ' '\n", 317 | "\n", 318 | "print(char2id('a'), char2id('z'), char2id(' '), char2id('\u00ef'))\n", 319 | "print(id2char(1), id2char(26), id2char(0))" 320 | ], 321 | "outputs": [ 322 | { 323 | "output_type": "stream", 324 | "text": [ 325 | "1 26 0 Unexpected character: \u00ef\n", 326 | "0\n", 327 | "a z \n" 328 | ], 329 | "name": "stdout" 330 
| } 331 | ], 332 | "execution_count": 0 333 | }, 334 | { 335 | "cell_type": "markdown", 336 | "metadata": { 337 | "id": "lFwoyygOmWsL", 338 | "colab_type": "text" 339 | }, 340 | "source": [ 341 | "Function to generate a training batch for the LSTM model." 342 | ] 343 | }, 344 | { 345 | "cell_type": "code", 346 | "metadata": { 347 | "id": "d9wMtjy5hCj9", 348 | "colab_type": "code", 349 | "colab": { 350 | "autoexec": { 351 | "startup": false, 352 | "wait_interval": 0 353 | }, 354 | "output_extras": [ 355 | { 356 | "item_id": 1 357 | } 358 | ] 359 | }, 360 | "cellView": "both", 361 | "executionInfo": { 362 | "elapsed": 6473, 363 | "status": "ok", 364 | "timestamp": 1445965583467, 365 | "user": { 366 | "color": "#1FA15D", 367 | "displayName": "Vincent Vanhoucke", 368 | "isAnonymous": false, 369 | "isMe": true, 370 | "permissionId": "05076109866853157986", 371 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 372 | "sessionId": "6f6f07b359200c46", 373 | "userId": "102167687554210253930" 374 | }, 375 | "user_tz": 420 376 | }, 377 | "outputId": "3dd79c80-454a-4be0-8b71-4a4a357b3367" 378 | }, 379 | "source": [ 380 | "batch_size=64\n", 381 | "num_unrollings=10\n", 382 | "\n", 383 | "class BatchGenerator(object):\n", 384 | " def __init__(self, text, batch_size, num_unrollings):\n", 385 | " self._text = text\n", 386 | " self._text_size = len(text)\n", 387 | " self._batch_size = batch_size\n", 388 | " self._num_unrollings = num_unrollings\n", 389 | " segment = self._text_size // batch_size\n", 390 | " self._cursor = [ offset * segment for offset in range(batch_size)]\n", 391 | " self._last_batch = self._next_batch()\n", 392 | " \n", 393 | " def _next_batch(self):\n", 394 | " \"\"\"Generate a single batch from the current cursor position in the data.\"\"\"\n", 395 | " batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float)\n", 396 | " for b in range(self._batch_size):\n", 397 | " batch[b, char2id(self._text[self._cursor[b]])] = 1.0\n", 398 | " self._cursor[b] = (self._cursor[b] + 1) % self._text_size\n", 399 | " return batch\n", 400 | " \n", 401 | " def next(self):\n", 402 | " \"\"\"Generate the next array of batches from the data. 
The array consists of\n", 403 | " the last batch of the previous array, followed by num_unrollings new ones.\n", 404 | " \"\"\"\n", 405 | " batches = [self._last_batch]\n", 406 | " for step in range(self._num_unrollings):\n", 407 | " batches.append(self._next_batch())\n", 408 | " self._last_batch = batches[-1]\n", 409 | " return batches\n", 410 | "\n", 411 | "def characters(probabilities):\n", 412 | " \"\"\"Turn a 1-hot encoding or a probability distribution over the possible\n", 413 | " characters back into its (most likely) character representation.\"\"\"\n", 414 | " return [id2char(c) for c in np.argmax(probabilities, 1)]\n", 415 | "\n", 416 | "def batches2string(batches):\n", 417 | " \"\"\"Convert a sequence of batches back into their (most likely) string\n", 418 | " representation.\"\"\"\n", 419 | " s = [''] * batches[0].shape[0]\n", 420 | " for b in batches:\n", 421 | " s = [''.join(x) for x in zip(s, characters(b))]\n", 422 | " return s\n", 423 | "\n", 424 | "train_batches = BatchGenerator(train_text, batch_size, num_unrollings)\n", 425 | "valid_batches = BatchGenerator(valid_text, 1, 1)\n", 426 | "\n", 427 | "print(batches2string(train_batches.next()))\n", 428 | "print(batches2string(train_batches.next()))\n", 429 | "print(batches2string(valid_batches.next()))\n", 430 | "print(batches2string(valid_batches.next()))" 431 | ], 432 | "outputs": [ 433 | { 434 | "output_type": "stream", 435 | "text": [ 436 | "['ons anarchi', 'when milita', 'lleria arch', ' abbeys and', 'married urr', 'hel and ric', 'y and litur', 'ay opened f', 'tion from t', 'migration t', 'new york ot', 'he boeing s', 'e listed wi', 'eber has pr', 'o be made t', 'yer who rec', 'ore signifi', 'a fierce cr', ' two six ei', 'aristotle s', 'ity can be ', ' and intrac', 'tion of the', 'dy to pass ', 'f certain d', 'at it will ', 'e convince ', 'ent told hi', 'ampaign and', 'rver side s', 'ious texts ', 'o capitaliz', 'a duplicate', 'gh ann es d', 'ine january', 'ross zero t', 'cal theorie', 'ast instanc', ' dimensiona', 'most holy m', 't s support', 'u is still ', 'e oscillati', 'o eight sub', 'of italy la', 's the tower', 'klahoma pre', 'erprise lin', 'ws becomes ', 'et in a naz', 'the fabian ', 'etchy to re', ' sharman ne', 'ised empero', 'ting in pol', 'd neo latin', 'th risky ri', 'encyclopedi', 'fense the a', 'duating fro', 'treet grid ', 'ations more', 'appeal of d', 'si have mad']\n", 437 | "['ists advoca', 'ary governm', 'hes nationa', 'd monasteri', 'raca prince', 'chard baer ', 'rgical lang', 'for passeng', 'the nationa', 'took place ', 'ther well k', 'seven six s', 'ith a gloss', 'robably bee', 'to recogniz', 'ceived the ', 'icant than ', 'ritic of th', 'ight in sig', 's uncaused ', ' lost as in', 'cellular ic', 'e size of t', ' him a stic', 'drugs confu', ' take to co', ' the priest', 'im to name ', 'd barred at', 'standard fo', ' such as es', 'ze on the g', 'e of the or', 'd hiver one', 'y eight mar', 'the lead ch', 'es classica', 'ce the non ', 'al analysis', 'mormons bel', 't or at lea', ' disagreed ', 'ing system ', 'btypes base', 'anguages th', 'r commissio', 'ess one nin', 'nux suse li', ' the first ', 'zi concentr', ' society ne', 'elatively s', 'etworks sha', 'or hirohito', 'litical ini', 'n most of t', 'iskerdoo ri', 'ic overview', 'air compone', 'om acnm acc', ' centerline', 'e than any ', 'devotional ', 'de such dev']\n", 438 | "[' a']\n", 439 | "['an']\n" 440 | ], 441 | "name": "stdout" 442 | } 443 | ], 444 | "execution_count": 0 445 | }, 446 | { 447 | "cell_type": "code", 448 | "metadata": { 449 | 
"id": "KyVd8FxT5QBc", 450 | "colab_type": "code", 451 | "colab": { 452 | "autoexec": { 453 | "startup": false, 454 | "wait_interval": 0 455 | } 456 | }, 457 | "cellView": "both" 458 | }, 459 | "source": [ 460 | "def logprob(predictions, labels):\n", 461 | " \"\"\"Log-probability of the true labels in a predicted batch.\"\"\"\n", 462 | " predictions[predictions < 1e-10] = 1e-10\n", 463 | " return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0]\n", 464 | "\n", 465 | "def sample_distribution(distribution):\n", 466 | " \"\"\"Sample one element from a distribution assumed to be an array of normalized\n", 467 | " probabilities.\n", 468 | " \"\"\"\n", 469 | " r = random.uniform(0, 1)\n", 470 | " s = 0\n", 471 | " for i in range(len(distribution)):\n", 472 | " s += distribution[i]\n", 473 | " if s >= r:\n", 474 | " return i\n", 475 | " return len(distribution) - 1\n", 476 | "\n", 477 | "def sample(prediction):\n", 478 | " \"\"\"Turn a (column) prediction into 1-hot encoded samples.\"\"\"\n", 479 | " p = np.zeros(shape=[1, vocabulary_size], dtype=np.float)\n", 480 | " p[0, sample_distribution(prediction[0])] = 1.0\n", 481 | " return p\n", 482 | "\n", 483 | "def random_distribution():\n", 484 | " \"\"\"Generate a random column of probabilities.\"\"\"\n", 485 | " b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size])\n", 486 | " return b/np.sum(b, 1)[:,None]" 487 | ], 488 | "outputs": [], 489 | "execution_count": 0 490 | }, 491 | { 492 | "cell_type": "markdown", 493 | "metadata": { 494 | "id": "K8f67YXaDr4C", 495 | "colab_type": "text" 496 | }, 497 | "source": [ 498 | "Simple LSTM Model." 499 | ] 500 | }, 501 | { 502 | "cell_type": "code", 503 | "metadata": { 504 | "id": "Q5rxZK6RDuGe", 505 | "colab_type": "code", 506 | "colab": { 507 | "autoexec": { 508 | "startup": false, 509 | "wait_interval": 0 510 | } 511 | }, 512 | "cellView": "both" 513 | }, 514 | "source": [ 515 | "num_nodes = 64\n", 516 | "\n", 517 | "graph = tf.Graph()\n", 518 | "with graph.as_default():\n", 519 | " \n", 520 | " # Parameters:\n", 521 | " # Input gate: input, previous output, and bias.\n", 522 | " ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))\n", 523 | " im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))\n", 524 | " ib = tf.Variable(tf.zeros([1, num_nodes]))\n", 525 | " # Forget gate: input, previous output, and bias.\n", 526 | " fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))\n", 527 | " fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))\n", 528 | " fb = tf.Variable(tf.zeros([1, num_nodes]))\n", 529 | " # Memory cell: input, state and bias. 
\n", 530 | " cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))\n", 531 | " cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))\n", 532 | " cb = tf.Variable(tf.zeros([1, num_nodes]))\n", 533 | " # Output gate: input, previous output, and bias.\n", 534 | " ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))\n", 535 | " om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))\n", 536 | " ob = tf.Variable(tf.zeros([1, num_nodes]))\n", 537 | " # Variables saving state across unrollings.\n", 538 | " saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)\n", 539 | " saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)\n", 540 | " # Classifier weights and biases.\n", 541 | " w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1))\n", 542 | " b = tf.Variable(tf.zeros([vocabulary_size]))\n", 543 | " \n", 544 | " # Definition of the cell computation.\n", 545 | " def lstm_cell(i, o, state):\n", 546 | " \"\"\"Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf\n", 547 | " Note that in this formulation, we omit the various connections between the\n", 548 | " previous state and the gates.\"\"\"\n", 549 | " input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib)\n", 550 | " forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb)\n", 551 | " update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb\n", 552 | " state = forget_gate * state + input_gate * tf.tanh(update)\n", 553 | " output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob)\n", 554 | " return output_gate * tf.tanh(state), state\n", 555 | "\n", 556 | " # Input data.\n", 557 | " train_data = list()\n", 558 | " for _ in range(num_unrollings + 1):\n", 559 | " train_data.append(\n", 560 | " tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size]))\n", 561 | " train_inputs = train_data[:num_unrollings]\n", 562 | " train_labels = train_data[1:] # labels are inputs shifted by one time step.\n", 563 | "\n", 564 | " # Unrolled LSTM loop.\n", 565 | " outputs = list()\n", 566 | " output = saved_output\n", 567 | " state = saved_state\n", 568 | " for i in train_inputs:\n", 569 | " output, state = lstm_cell(i, output, state)\n", 570 | " outputs.append(output)\n", 571 | "\n", 572 | " # State saving across unrollings.\n", 573 | " with tf.control_dependencies([saved_output.assign(output),\n", 574 | " saved_state.assign(state)]):\n", 575 | " # Classifier.\n", 576 | " logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b)\n", 577 | " loss = tf.reduce_mean(\n", 578 | " tf.nn.softmax_cross_entropy_with_logits(\n", 579 | " logits, tf.concat(0, train_labels)))\n", 580 | "\n", 581 | " # Optimizer.\n", 582 | " global_step = tf.Variable(0)\n", 583 | " learning_rate = tf.train.exponential_decay(\n", 584 | " 10.0, global_step, 5000, 0.1, staircase=True)\n", 585 | " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", 586 | " gradients, v = zip(*optimizer.compute_gradients(loss))\n", 587 | " gradients, _ = tf.clip_by_global_norm(gradients, 1.25)\n", 588 | " optimizer = optimizer.apply_gradients(\n", 589 | " zip(gradients, v), global_step=global_step)\n", 590 | "\n", 591 | " # Predictions.\n", 592 | " train_prediction = tf.nn.softmax(logits)\n", 593 | " \n", 594 | " # Sampling and validation eval: batch 1, no unrolling.\n", 595 | " sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size])\n", 596 | " saved_sample_output = tf.Variable(tf.zeros([1, 
num_nodes]))\n", 597 | " saved_sample_state = tf.Variable(tf.zeros([1, num_nodes]))\n", 598 | " reset_sample_state = tf.group(\n", 599 | " saved_sample_output.assign(tf.zeros([1, num_nodes])),\n", 600 | " saved_sample_state.assign(tf.zeros([1, num_nodes])))\n", 601 | " sample_output, sample_state = lstm_cell(\n", 602 | " sample_input, saved_sample_output, saved_sample_state)\n", 603 | " with tf.control_dependencies([saved_sample_output.assign(sample_output),\n", 604 | " saved_sample_state.assign(sample_state)]):\n", 605 | " sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b))" 606 | ], 607 | "outputs": [], 608 | "execution_count": 0 609 | }, 610 | { 611 | "cell_type": "code", 612 | "metadata": { 613 | "id": "RD9zQCZTEaEm", 614 | "colab_type": "code", 615 | "colab": { 616 | "autoexec": { 617 | "startup": false, 618 | "wait_interval": 0 619 | }, 620 | "output_extras": [ 621 | { 622 | "item_id": 41 623 | }, 624 | { 625 | "item_id": 80 626 | }, 627 | { 628 | "item_id": 126 629 | }, 630 | { 631 | "item_id": 144 632 | } 633 | ] 634 | }, 635 | "cellView": "both", 636 | "executionInfo": { 637 | "elapsed": 199909, 638 | "status": "ok", 639 | "timestamp": 1445965877333, 640 | "user": { 641 | "color": "#1FA15D", 642 | "displayName": "Vincent Vanhoucke", 643 | "isAnonymous": false, 644 | "isMe": true, 645 | "permissionId": "05076109866853157986", 646 | "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", 647 | "sessionId": "6f6f07b359200c46", 648 | "userId": "102167687554210253930" 649 | }, 650 | "user_tz": 420 651 | }, 652 | "outputId": "5e868466-2532-4545-ce35-b403cf5d9de6" 653 | }, 654 | "source": [ 655 | "num_steps = 7001\n", 656 | "summary_frequency = 100\n", 657 | "\n", 658 | "with tf.Session(graph=graph) as session:\n", 659 | " tf.initialize_all_variables().run()\n", 660 | " print('Initialized')\n", 661 | " mean_loss = 0\n", 662 | " for step in range(num_steps):\n", 663 | " batches = train_batches.next()\n", 664 | " feed_dict = dict()\n", 665 | " for i in range(num_unrollings + 1):\n", 666 | " feed_dict[train_data[i]] = batches[i]\n", 667 | " _, l, predictions, lr = session.run(\n", 668 | " [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict)\n", 669 | " mean_loss += l\n", 670 | " if step % summary_frequency == 0:\n", 671 | " if step > 0:\n", 672 | " mean_loss = mean_loss / summary_frequency\n", 673 | " # The mean loss is an estimate of the loss over the last few batches.\n", 674 | " print(\n", 675 | " 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr))\n", 676 | " mean_loss = 0\n", 677 | " labels = np.concatenate(list(batches)[1:])\n", 678 | " print('Minibatch perplexity: %.2f' % float(\n", 679 | " np.exp(logprob(predictions, labels))))\n", 680 | " if step % (summary_frequency * 10) == 0:\n", 681 | " # Generate some samples.\n", 682 | " print('=' * 80)\n", 683 | " for _ in range(5):\n", 684 | " feed = sample(random_distribution())\n", 685 | " sentence = characters(feed)[0]\n", 686 | " reset_sample_state.run()\n", 687 | " for _ in range(79):\n", 688 | " prediction = sample_prediction.eval({sample_input: feed})\n", 689 | " feed = sample(prediction)\n", 690 | " sentence += characters(feed)[0]\n", 691 | " print(sentence)\n", 692 | " print('=' * 80)\n", 693 | " # Measure validation set perplexity.\n", 694 | " reset_sample_state.run()\n", 695 | " valid_logprob = 0\n", 696 | " for _ in range(valid_size):\n", 697 | " b = valid_batches.next()\n", 698 | " predictions = 
sample_prediction.eval({sample_input: b[0]})\n", 699 | " valid_logprob = valid_logprob + logprob(predictions, b[1])\n", 700 | " print('Validation set perplexity: %.2f' % float(np.exp(\n", 701 | " valid_logprob / valid_size)))" 702 | ], 703 | "outputs": [ 704 | { 705 | "output_type": "stream", 706 | "text": [ 707 | "Initialized\n", 708 | "Average loss at step 0 : 3.29904174805 learning rate: 10.0\n", 709 | "Minibatch perplexity: 27.09\n", 710 | "================================================================================\n", 711 | "srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh\n", 712 | "lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o\n", 713 | "meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet\n", 714 | "unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw\n", 715 | "ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj\n", 716 | "================================================================================\n", 717 | "Validation set perplexity: 19.99\n", 718 | "Average loss at step 100 : 2.59553678274 learning rate: 10.0\n", 719 | "Minibatch perplexity: 9.57\n", 720 | "Validation set perplexity: 10.60\n", 721 | "Average loss at step 200 : 2.24747137785 learning rate: 10.0\n", 722 | "Minibatch perplexity: 7.68\n", 723 | "Validation set perplexity: 8.84\n", 724 | "Average loss at step 300 : 2.09438110709 learning rate: 10.0\n", 725 | "Minibatch perplexity: 7.41\n", 726 | "Validation set perplexity: 8.13\n", 727 | "Average loss at step 400 : 1.99440989017 learning rate: 10.0\n", 728 | "Minibatch perplexity: 6.46\n", 729 | "Validation set perplexity: 7.58\n", 730 | "Average loss at step 500 : 1.9320810616 learning rate: 10.0\n", 731 | "Minibatch perplexity: 6.30\n", 732 | "Validation set perplexity: 6.88\n", 733 | "Average loss at step 600 : 1.90935629249 learning rate: 10.0\n", 734 | "Minibatch perplexity: 7.21\n", 735 | "Validation set perplexity: 6.91\n", 736 | "Average loss at step 700 : 1.85583009005 learning rate: 10.0\n", 737 | "Minibatch perplexity: 6.13\n", 738 | "Validation set perplexity: 6.60\n", 739 | "Average loss at step 800 : 1.82152368546 learning rate: 10.0\n", 740 | "Minibatch perplexity: 6.01\n", 741 | "Validation set perplexity: 6.37\n", 742 | "Average loss at step 900 : 1.83169809818 learning rate: 10.0\n", 743 | "Minibatch perplexity: 7.20\n", 744 | "Validation set perplexity: 6.23\n", 745 | "Average loss at step 1000 : 1.82217029214 learning rate: 10.0\n", 746 | "Minibatch perplexity: 6.73\n", 747 | "================================================================================\n", 748 | "le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co\n", 749 | "le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes\n", 750 | "hian andoris ret the ecause bistory l pidect one eight five lack du that the ses\n", 751 | "aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in \n", 752 | "mer miter y sught esfectur of the upission vain is werms is vul ugher compted by\n", 753 | "================================================================================\n", 754 | "Validation set perplexity: 6.07\n", 755 | "Average loss at step 1100 : 1.77301145077 learning rate: 10.0\n", 756 | "Minibatch perplexity: 6.03\n", 757 | "Validation set perplexity: 5.89\n", 758 | "Average loss at step 1200 : 1.75306463003 learning rate: 10.0\n", 759 | "Minibatch 
perplexity: 6.50\n", 760 | "Validation set perplexity: 5.61\n", 761 | "Average loss at step 1300 : 1.72937195778 learning rate: 10.0\n", 762 | "Minibatch perplexity: 5.00\n", 763 | "Validation set perplexity: 5.60\n", 764 | "Average loss at step 1400 : 1.74773373723 learning rate: 10.0\n", 765 | "Minibatch perplexity: 6.48\n", 766 | "Validation set perplexity: 5.66\n", 767 | "Average loss at step 1500 : 1.7368799901 learning rate: 10.0\n", 768 | "Minibatch perplexity: 5.22\n", 769 | "Validation set perplexity: 5.44\n", 770 | "Average loss at step 1600 : 1.74528762937 learning rate: 10.0\n", 771 | "Minibatch perplexity: 5.85\n", 772 | "Validation set perplexity: 5.33\n", 773 | "Average loss at step 1700 : 1.70881183743 learning rate: 10.0\n", 774 | "Minibatch perplexity: 5.33\n", 775 | "Validation set perplexity: 5.56\n", 776 | "Average loss at step 1800 : 1.67776108027 learning rate: 10.0\n", 777 | "Minibatch perplexity: 5.33\n", 778 | "Validation set perplexity: 5.29\n", 779 | "Average loss at step 1900 : 1.64935536742 learning rate: 10.0\n", 780 | "Minibatch perplexity: 5.29\n", 781 | "Validation set perplexity: 5.15\n", 782 | "Average loss at step" 783 | ], 784 | "name": "stdout" 785 | }, 786 | { 787 | "output_type": "stream", 788 | "text": [ 789 | " 2000 : 1.69528644681 learning rate: 10.0\n", 790 | "Minibatch perplexity: 5.13\n", 791 | "================================================================================\n", 792 | "vers soqually have one five landwing to docial page kagan lower with ther batern\n", 793 | "ctor son alfortmandd tethre k skin the known purated to prooust caraying the fit\n", 794 | "je in beverb is the sournction bainedy wesce tu sture artualle lines digra forme\n", 795 | "m rousively haldio ourso ond anvary was for the seven solies hild buil s to te\n", 796 | "zall for is it is one nine eight eight one neval to the kime typer oene where he\n", 797 | "================================================================================\n", 798 | "Validation set perplexity: 5.25\n", 799 | "Average loss at step 2100 : 1.68808053017 learning rate: 10.0\n", 800 | "Minibatch perplexity: 5.17\n", 801 | "Validation set perplexity: 5.01\n", 802 | "Average loss at step 2200 : 1.68322490931 learning rate: 10.0\n", 803 | "Minibatch perplexity: 5.09\n", 804 | "Validation set perplexity: 5.15\n", 805 | "Average loss at step 2300 : 1.64465074301 learning rate: 10.0\n", 806 | "Minibatch perplexity: 5.51\n", 807 | "Validation set perplexity: 5.00\n", 808 | "Average loss at step 2400 : 1.66408578038 learning rate: 10.0\n", 809 | "Minibatch perplexity: 5.86\n", 810 | "Validation set perplexity: 4.80\n", 811 | "Average loss at step 2500 : 1.68515402555 learning rate: 10.0\n", 812 | "Minibatch perplexity: 5.75\n", 813 | "Validation set perplexity: 4.82\n", 814 | "Average loss at step 2600 : 1.65405208349 learning rate: 10.0\n", 815 | "Minibatch perplexity: 5.38\n", 816 | "Validation set perplexity: 4.85\n", 817 | "Average loss at step 2700 : 1.65706222177 learning rate: 10.0\n", 818 | "Minibatch perplexity: 5.46\n", 819 | "Validation set perplexity: 4.78\n", 820 | "Average loss at step 2800 : 1.65204829812 learning rate: 10.0\n", 821 | "Minibatch perplexity: 5.06\n", 822 | "Validation set perplexity: 4.64\n", 823 | "Average loss at step 2900 : 1.65107253551 learning rate: 10.0\n", 824 | "Minibatch perplexity: 5.00\n", 825 | "Validation set perplexity: 4.61\n", 826 | "Average loss at step 3000 : 1.6495274055 learning rate: 10.0\n", 827 | "Minibatch perplexity: 4.53\n", 828 | 
"================================================================================\n", 829 | "ject covered in belo one six six to finsh that all di rozial sime it a the lapse\n", 830 | "ble which the pullic bocades record r to sile dric two one four nine seven six f\n", 831 | " originally ame the playa ishaps the stotchational in a p dstambly name which as\n", 832 | "ore volum to bay riwer foreal in nuily operety can and auscham frooripm however \n", 833 | "kan traogey was lacous revision the mott coupofiteditey the trando insended frop\n", 834 | "================================================================================\n", 835 | "Validation set perplexity: 4.76\n", 836 | "Average loss at step 3100 : 1.63705502152 learning rate: 10.0\n", 837 | "Minibatch perplexity: 5.50\n", 838 | "Validation set perplexity: 4.76\n", 839 | "Average loss at step 3200 : 1.64740695596 learning rate: 10.0\n", 840 | "Minibatch perplexity: 4.84\n", 841 | "Validation set perplexity: 4.67\n", 842 | "Average loss at step 3300 : 1.64711504817 learning rate: 10.0\n", 843 | "Minibatch perplexity: 5.39\n", 844 | "Validation set perplexity: 4.57\n", 845 | "Average loss at step 3400 : 1.67113256454 learning rate: 10.0\n", 846 | "Minibatch perplexity: 5.56\n", 847 | "Validation set perplexity: 4.71\n", 848 | "Average loss at step 3500 : 1.65637169957 learning rate: 10.0\n", 849 | "Minibatch perplexity: 5.03\n", 850 | "Validation set perplexity: 4.80\n", 851 | "Average loss at step 3600 : 1.66601825476 learning rate: 10.0\n", 852 | "Minibatch perplexity: 4.63\n", 853 | "Validation set perplexity: 4.52\n", 854 | "Average loss at step 3700 : 1.65021387935 learning rate: 10.0\n", 855 | "Minibatch perplexity: 5.50\n", 856 | "Validation set perplexity: 4.56\n", 857 | "Average loss at step 3800 : 1.64481814981 learning rate: 10.0\n", 858 | "Minibatch perplexity: 4.60\n", 859 | "Validation set perplexity: 4.54\n", 860 | "Average loss at step 3900 : 1.642069453 learning rate: 10.0\n", 861 | "Minibatch perplexity: 4.91\n", 862 | "Validation set perplexity: 4.54\n", 863 | "Average loss at step 4000 : 1.65179730773 learning rate: 10.0\n", 864 | "Minibatch perplexity: 4.77\n", 865 | "================================================================================\n", 866 | "k s rasbonish roctes the nignese at heacle was sito of beho anarchys and with ro\n", 867 | "jusar two sue wletaus of chistical in causations d ow trancic bruthing ha laters\n", 868 | "de and speacy pulted yoftret worksy zeatlating to eight d had to ie bue seven si" 869 | ], 870 | "name": "stdout" 871 | }, 872 | { 873 | "output_type": "stream", 874 | "text": [ 875 | "\n", 876 | "s fiction of the feelly constive suq flanch earlied curauking bjoventation agent\n", 877 | "quen s playing it calana our seopity also atbellisionaly comexing the revideve i\n", 878 | "================================================================================\n", 879 | "Validation set perplexity: 4.58\n", 880 | "Average loss at step 4100 : 1.63794238806 learning rate: 10.0\n", 881 | "Minibatch perplexity: 5.47\n", 882 | "Validation set perplexity: 4.79\n", 883 | "Average loss at step 4200 : 1.63822438836 learning rate: 10.0\n", 884 | "Minibatch perplexity: 5.30\n", 885 | "Validation set perplexity: 4.54\n", 886 | "Average loss at step 4300 : 1.61844664574 learning rate: 10.0\n", 887 | "Minibatch perplexity: 4.69\n", 888 | "Validation set perplexity: 4.54\n", 889 | "Average loss at step 4400 : 1.61255454302 learning rate: 10.0\n", 890 | "Minibatch perplexity: 4.67\n", 891 | 
"Validation set perplexity: 4.54\n", 892 | "Average loss at step 4500 : 1.61543365479 learning rate: 10.0\n", 893 | "Minibatch perplexity: 4.83\n", 894 | "Validation set perplexity: 4.69\n", 895 | "Average loss at step 4600 : 1.61607327104 learning rate: 10.0\n", 896 | "Minibatch perplexity: 5.18\n", 897 | "Validation set perplexity: 4.64\n", 898 | "Average loss at step 4700 : 1.62757282495 learning rate: 10.0\n", 899 | "Minibatch perplexity: 4.24\n", 900 | "Validation set perplexity: 4.66\n", 901 | "Average loss at step 4800 : 1.63222063541 learning rate: 10.0\n", 902 | "Minibatch perplexity: 5.30\n", 903 | "Validation set perplexity: 4.53\n", 904 | "Average loss at step 4900 : 1.63678096652 learning rate: 10.0\n", 905 | "Minibatch perplexity: 5.43\n", 906 | "Validation set perplexity: 4.64\n", 907 | "Average loss at step 5000 : 1.610340662 learning rate: 1.0\n", 908 | "Minibatch perplexity: 5.10\n", 909 | "================================================================================\n", 910 | "in b one onarbs revieds the kimiluge that fondhtic fnoto cre one nine zero zero \n", 911 | " of is it of marking panzia t had wap ironicaghni relly deah the omber b h menba\n", 912 | "ong messified it his the likdings ara subpore the a fames distaled self this int\n", 913 | "y advante authors the end languarle meit common tacing bevolitione and eight one\n", 914 | "zes that materly difild inllaring the fusts not panition assertian causecist bas\n", 915 | "================================================================================\n", 916 | "Validation set perplexity: 4.69\n", 917 | "Average loss at step 5100 : 1.60593637228 learning rate: 1.0\n", 918 | "Minibatch perplexity: 4.69\n", 919 | "Validation set perplexity: 4.47\n", 920 | "Average loss at step 5200 : 1.58993269444 learning rate: 1.0\n", 921 | "Minibatch perplexity: 4.65\n", 922 | "Validation set perplexity: 4.39\n", 923 | "Average loss at step 5300 : 1.57930587292 learning rate: 1.0\n", 924 | "Minibatch perplexity: 5.11\n", 925 | "Validation set perplexity: 4.39\n", 926 | "Average loss at step 5400 : 1.58022856832 learning rate: 1.0\n", 927 | "Minibatch perplexity: 5.19\n", 928 | "Validation set perplexity: 4.37\n", 929 | "Average loss at step 5500 : 1.56654450059 learning rate: 1.0\n", 930 | "Minibatch perplexity: 4.69\n", 931 | "Validation set perplexity: 4.33\n", 932 | "Average loss at step 5600 : 1.58013380885 learning rate: 1.0\n", 933 | "Minibatch perplexity: 5.13\n", 934 | "Validation set perplexity: 4.35\n", 935 | "Average loss at step 5700 : 1.56974959254 learning rate: 1.0\n", 936 | "Minibatch perplexity: 5.00\n", 937 | "Validation set perplexity: 4.34\n", 938 | "Average loss at step 5800 : 1.5839582932 learning rate: 1.0\n", 939 | "Minibatch perplexity: 4.88\n", 940 | "Validation set perplexity: 4.31\n", 941 | "Average loss at step 5900 : 1.57129439116 learning rate: 1.0\n", 942 | "Minibatch perplexity: 4.66\n", 943 | "Validation set perplexity: 4.32\n", 944 | "Average loss at step 6000 : 1.55144061089 learning rate: 1.0\n", 945 | "Minibatch perplexity: 4.55\n", 946 | "================================================================================\n", 947 | "utic clositical poopy stribe addi nixe one nine one zero zero eight zero b ha ex\n", 948 | "zerns b one internequiption of the secordy way anti proble akoping have fictiona\n", 949 | "phare united from has poporarly cities book ins sweden emperor a sass in origina\n", 950 | "quulk destrebinist and zeilazar and on low and by in science over country weilti\n", 951 | 
"x are holivia work missincis ons in the gages to starsle histon one icelanctrotu\n", 952 | "================================================================================\n", 953 | "Validation set perplexity: 4.30\n", 954 | "Average loss at step 6100 : 1.56450940847 learning rate: 1.0\n", 955 | "Minibatch perplexity: 4.77\n", 956 | "Validation set perplexity: 4.27" 957 | ], 958 | "name": "stdout" 959 | }, 960 | { 961 | "output_type": "stream", 962 | "text": [ 963 | "\n", 964 | "Average loss at step 6200 : 1.53433164835 learning rate: 1.0\n", 965 | "Minibatch perplexity: 4.77\n", 966 | "Validation set perplexity: 4.27\n", 967 | "Average loss at step 6300 : 1.54773445129 learning rate: 1.0\n", 968 | "Minibatch perplexity: 4.76\n", 969 | "Validation set perplexity: 4.25\n", 970 | "Average loss at step 6400 : 1.54021131516 learning rate: 1.0\n", 971 | "Minibatch perplexity: 4.56\n", 972 | "Validation set perplexity: 4.24\n", 973 | "Average loss at step 6500 : 1.56153374553 learning rate: 1.0\n", 974 | "Minibatch perplexity: 5.43\n", 975 | "Validation set perplexity: 4.27\n", 976 | "Average loss at step 6600 : 1.59556478739 learning rate: 1.0\n", 977 | "Minibatch perplexity: 4.92\n", 978 | "Validation set perplexity: 4.28\n", 979 | "Average loss at step 6700 : 1.58076951623 learning rate: 1.0\n", 980 | "Minibatch perplexity: 4.77\n", 981 | "Validation set perplexity: 4.30\n", 982 | "Average loss at step 6800 : 1.6070714438 learning rate: 1.0\n", 983 | "Minibatch perplexity: 4.98\n", 984 | "Validation set perplexity: 4.28\n", 985 | "Average loss at step 6900 : 1.58413293839 learning rate: 1.0\n", 986 | "Minibatch perplexity: 4.61\n", 987 | "Validation set perplexity: 4.29\n", 988 | "Average loss at step 7000 : 1.57905534983 learning rate: 1.0\n", 989 | "Minibatch perplexity: 5.08\n", 990 | "================================================================================\n", 991 | "jague are officiencinels ored by film voon higherise haik one nine on the iffirc\n", 992 | "oshe provision that manned treatists on smalle bodariturmeristing the girto in s\n", 993 | "kis would softwenn mustapultmine truativersakys bersyim by s of confound esc bub\n", 994 | "ry of the using one four six blain ira mannom marencies g with fextificallise re\n", 995 | " one son vit even an conderouss to person romer i a lebapter at obiding are iuse\n", 996 | "================================================================================\n", 997 | "Validation set perplexity: 4.25\n" 998 | ], 999 | "name": "stdout" 1000 | } 1001 | ], 1002 | "execution_count": 0 1003 | }, 1004 | { 1005 | "cell_type": "markdown", 1006 | "metadata": { 1007 | "id": "pl4vtmFfa5nn", 1008 | "colab_type": "text" 1009 | }, 1010 | "source": [ 1011 | "---\n", 1012 | "Problem 1\n", 1013 | "---------\n", 1014 | "\n", 1015 | "You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.\n", 1016 | "\n", 1017 | "---" 1018 | ] 1019 | }, 1020 | { 1021 | "cell_type": "markdown", 1022 | "metadata": { 1023 | "id": "4eErTCTybtph", 1024 | "colab_type": "text" 1025 | }, 1026 | "source": [ 1027 | "---\n", 1028 | "Problem 2\n", 1029 | "---------\n", 1030 | "\n", 1031 | "We want to train a LSTM over bigrams, that is pairs of consecutive characters like 'ab' instead of single characters like 'a'. 
Since the number of possible bigrams is large, feeding them directly to the LSTM using 1-hot encodings will lead to a very sparse representation that is very wasteful computationally.\n", 1032 | "\n", 1033 | "a- Introduce an embedding lookup on the inputs, and feed the embeddings to the LSTM cell instead of the inputs themselves.\n", 1034 | "\n", 1035 | "b- Write a bigram-based LSTM, modeled on the character LSTM above.\n", 1036 | "\n", 1037 | "c- Introduce Dropout. For best practices on how to use Dropout in LSTMs, refer to this [article](http://arxiv.org/abs/1409.2329).\n", 1038 | "\n", 1039 | "---" 1040 | ] 1041 | }, 1042 | { 1043 | "cell_type": "markdown", 1044 | "metadata": { 1045 | "id": "Y5tapX3kpcqZ", 1046 | "colab_type": "text" 1047 | }, 1048 | "source": [ 1049 | "---\n", 1050 | "Problem 3\n", 1051 | "---------\n", 1052 | "\n", 1053 | "(difficult!)\n", 1054 | "\n", 1055 | "Write a sequence-to-sequence LSTM which mirrors all the words in a sentence. For example, if your input is:\n", 1056 | "\n", 1057 | " the quick brown fox\n", 1058 | " \n", 1059 | "the model should attempt to output:\n", 1060 | "\n", 1061 | " eht kciuq nworb xof\n", 1062 | " \n", 1063 | "Refer to the lecture on how to put together a sequence-to-sequence model, as well as [this article](http://arxiv.org/abs/1409.3215) for best practices.\n", 1064 | "\n", 1065 | "---" 1066 | ] 1067 | } 1068 | ] 1069 | } -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # udacity-deep-learning 2 | Assignment solutions for https://www.udacity.com/course/deep-learning--ud730 3 | 4 | # Steps 5 | Install Docker: 6 | https://docs.docker.com/engine/installation/ 7 | 8 | Clone this repository: git clone git@github.com:sdurgi17/udacity-deep-learning.git 9 | 10 | Run the Docker container from the Google Container Registry, mounting the cloned repository at /notebooks: 11 | docker run -it -p 8888:8888 -v path_to(udacity-deep-learning):/notebooks b.gcr.io/tensorflow-udacity/assignments:0.5.0 12 | 13 | 14 | 15 | --------------------------------------------------------------------------------
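The notebook above ends with three open problems; the sketches below are hedged starting points, not the notebook's own solutions. For Problem 1, the point is that the four per-gate input matrices and the four per-gate recurrent matrices can each be concatenated into one matrix that is four times wider, so the cell needs only a single matmul per stream. This sketch is written against TensorFlow 2.x rather than the TF 0.5.0 image referenced in the README, and the sizes (`vocabulary_size = 27`, `num_nodes = 64`) and variable names are assumptions for illustration.

```python
import tensorflow as tf  # sketch assumes TF 2.x, not the TF 0.5.0 course image

vocabulary_size = 27  # assumed: 26 letters plus space, as in the character model
num_nodes = 64        # assumed hidden size

# One input->gates matrix and one recurrent->gates matrix, each covering all
# four gates, plus a fused bias, in place of the separate per-gate variables.
x_all = tf.Variable(tf.random.truncated_normal([vocabulary_size, 4 * num_nodes], stddev=0.1))
m_all = tf.Variable(tf.random.truncated_normal([num_nodes, 4 * num_nodes], stddev=0.1))
b_all = tf.Variable(tf.zeros([1, 4 * num_nodes]))

def lstm_cell(i, o, state):
    """One matmul per stream; slices recover the four gate pre-activations."""
    acts = tf.matmul(i, x_all) + tf.matmul(o, m_all) + b_all
    input_gate = tf.sigmoid(acts[:, :num_nodes])
    forget_gate = tf.sigmoid(acts[:, num_nodes:2 * num_nodes])
    update = acts[:, 2 * num_nodes:3 * num_nodes]
    output_gate = tf.sigmoid(acts[:, 3 * num_nodes:])
    state = forget_gate * state + input_gate * tf.tanh(update)
    return output_gate * tf.tanh(state), state
```

The slicing order (input, forget, update, output) is only this sketch's convention; any fixed ordering works as long as it is used consistently.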
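For Problem 2, a minimal sketch of the input side, assuming each bigram is encoded as a single integer id (`first * 27 + second`): the id is looked up in a trainable embedding, and dropout is applied only to this non-recurrent input connection, which is the practice recommended in the article linked from the problem. `embedding_size`, `keep_prob`, and the function name `bigram_input` are arbitrary choices, not anything defined in the notebook.

```python
import tensorflow as tf  # same TF 2.x assumption as the Problem 1 sketch

vocabulary_size = 27                      # assumed character vocabulary size
bigram_vocabulary = vocabulary_size ** 2  # one id per ordered character pair
embedding_size = 128                      # arbitrary embedding width
keep_prob = 0.8                           # arbitrary dropout keep probability

embeddings = tf.Variable(
    tf.random.uniform([bigram_vocabulary, embedding_size], -1.0, 1.0))

def bigram_input(first_ids, second_ids, training=True):
    """Map two [batch] tensors of character ids to dropped-out bigram embeddings."""
    bigram_ids = first_ids * vocabulary_size + second_ids
    embedded = tf.nn.embedding_lookup(embeddings, bigram_ids)
    if training:
        # Per the linked article, dropout goes on the non-recurrent input
        # connection only, never on the state carried between unrollings.
        embedded = tf.nn.dropout(embedded, rate=1.0 - keep_prob)
    return embedded  # [batch, embedding_size], fed to the LSTM cell instead of a one-hot vector
```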
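For Problem 3, the full sequence-to-sequence model is beyond a short sketch, but the target mapping the problem describes (each word reversed in place, with word order preserved) can be pinned down in a couple of lines; the helper name `mirror_words` is hypothetical.

```python
def mirror_words(sentence):
    """Reverse each word in place; this is the output the seq2seq model should learn."""
    return ' '.join(word[::-1] for word in sentence.split(' '))

assert mirror_words('the quick brown fox') == 'eht kciuq nworb xof'
```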