├── .gitignore ├── README.md ├── figures └── augmented_image_acc_100_epochs.png ├── installation-instructions.md ├── lessons ├── 01-Vanilla-Neural-Networks.ipynb ├── 02-Vanilla-Convolutional-Neural-Network-Comparison.ipynb └── 03-Convolutional-Neural-Networks.ipynb └── solutions ├── 01-Vanilla-Neural-Networks_solutions.ipynb └── 03-Convolutional-Neural-Networks_solutions.ipynb /.gitignore: -------------------------------------------------------------------------------- 1 | .ipynb_checkpoints 2 | */.ipynb_checkpoints/* 3 | .DS_Store 4 | .AppleDouble 5 | .LSOverride 6 | data/ 7 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Deep Learning in Python 2 | 3 | This is the repository for D-Lab’s six-hour Introduction to Deep Learning in Python workshop. [View the associated slides here](https://docs.google.com/presentation/d/1NQBDrjkM5ZdabDQFxd5_EqjXA33gt9N0-uI9viVTs6A/edit?usp=sharing). 4 | 5 | ## Objectives 6 | 7 | Convey the basics of deep learning in Python using keras on image datasets. Students are empowered with a general grasp of deep learning, example code that they can modify, a working computational environment, and resources for further study. 8 | 9 | ## Content outline 10 | 11 | * Installation 12 | * Jupyter Notbook 13 | * Keras and Tensorflow 14 | * Helper packages 15 | * What is “deep” learning? 16 | * Understanding the dataset 17 | * Dataset splitting: training, test, cross-validation 18 | * Defining moving parts of a deep learning model 19 | * Understanding a loss function, activation function, and metrics 20 | * Performance evaluation 21 | * Part 1 22 | * MNIST 0-9 hand-written digit example 23 | * Feed Forward (Vanilla) Neural Networks 24 | * Part 2 25 | * CIFAR10 - 10 image type classification 26 | * Convolutional Neural Networks 27 | 28 | ## Prerequisites 29 | 30 | This is an advanced level workshop. Participants should be intermediate Python users and have had some prior exposure to machine learning. 31 | 32 | We assume the following background: 33 | 34 | * D-Lab's [Python Machine Learning Fundamentals](https://github.com/dlab-berkeley/Python-Machine-Learning-Fundamentals) (6 hours) 35 | * Or, comparable experience/training, assuming familiarity with: 36 | * Basic Python syntax 37 | * Train/validation/test splitting 38 | * Dataset cleaning 39 | * Overfitting / underfitting / generalization 40 | * Hyperparameter customization 41 | * Basic linear algebra (vector, matrix, etc.) 42 | * Basic statistics (linear regression) 43 | 44 | If you are not comfortable installing packages, writing your own Python code, and using Jupyter Notebooks, this will not be a good workshop for you. 45 | 46 | 47 | ## Getting Started 48 | 49 | 1. Datahub - If you are a UC Berkeley student, use D-Lab's Datahub (highly recommended), click this link: [![Datahub](https://img.shields.io/badge/launch-datahub-blue)](https://dlab.datahub.berkeley.edu/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fdlab-berkeley%2FPython-Deep-Learning&urlpath=tree%2FPython-Deep-Learning%2F&branch=main) 50 | 51 | 2. 
Google Colab - If you are not a UC Berkeley student, use the following links to open up each Jupyter Notebook in a Google Colab session: 52 | 53 | - 01-Vanilla-Neural-Networks.ipynb [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dlab-berkeley/Python-Deep-Learning/blob/main/lessons/01-Vanilla-Neural-Networks.ipynb) 54 | - 02-Vanilla-Convolutional-Neural-Network-Comparison.ipynb [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dlab-berkeley/Python-Deep-Learning/blob/main/lessons/02-Vanilla-Convolutional-Neural-Network-Comparison.ipynb) 55 | - 03-Convolutional-Neural-Networks.ipynb [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dlab-berkeley/Python-Deep-Learning/blob/main/lessons/03-Convolutional-Neural-Networks.ipynb) 56 | 57 | 58 | 3. If you would like to install packages locally on your computer, see [the installation instructions](https://github.com/dlab-berkeley/Python-Deep-Learning/blob/development/installation-instructions.md). 59 | 60 | - This process can take about 30-60 minutes, so be sure to try and do this before class! 61 | 62 | ## Resources 63 | 64 | * Massive open online courses 65 | * [fast.ai - Practical Deep Learning for Coders](https://course.fast.ai/) 66 | * [Kaggle Deep Learning](https://www.kaggle.com/learn/deep-learning) 67 | * [Google Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/) 68 | * [See this](https://developers.google.com/machine-learning/crash-course/fitter/graph) sweet interactive learning rate tool 69 | * [Google seedbank examples](https://tools.google.com/seedbank/seeds) 70 | * [DeepLearning.ai](https://www.deeplearning.ai/) 71 | 72 | * Workshops 73 | * [Nvidia's Modeling Time Series Data with Recurrent Neural Networks in Keras](https://courses.nvidia.com/courses/course-v1:DLI+L-HX-05+V1/about) 74 | 75 | * Stanford 76 | * CS 20 - [Tensorflow for Deep Learning Research](http://web.stanford.edu/class/cs20si/syllabus.html) 77 | * CS 230 - [Deep Learning](http://cs230.stanford.edu/) 78 | * CS 231n - [Neural Networks for Visual Recognition](http://cs231n.github.io/) 79 | * CS 224n - [Natural Language Processing with Deep Learning](http://web.stanford.edu/class/cs224n/) 80 | 81 | * Berkeley 82 | * Machine Learning at Berkeley [ML@B](https://ml.berkeley.edu/) 83 | * CS [189/289A](https://people.eecs.berkeley.edu/~jrs/189/) 84 | 85 | * UToronto CSC 321 - [Intro to Deep Learning](http://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/) 86 | 87 | 88 | * Books 89 | * F. Chollet and J.J. Allaire - [Deep Learning with Python](https://tanthiamhuat.files.wordpress.com/2018/03/deeplearningwithpython.pdf) 90 | * Charniak E - [Introduction to Deep Learning](https://mitpress.mit.edu/books/introduction-deep-learning) 91 | * I. Goodfellow, Y. Bengio, A. Courville - [www.deeplearningbook.org](https://www.deeplearningbook.org/) 92 | * Zhang et al. 
- [Dive into Deep Learning](http://en.diveintodeeplearning.org/) 93 | 94 | # Contributors 95 | * Sean Perez 96 | * Pratik Sachdeva 97 | -------------------------------------------------------------------------------- /figures/augmented_image_acc_100_epochs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dlab-berkeley/Python-Deep-Learning-Legacy/a27dd41663e3e6c7b08a01a162101b69d86a5e8f/figures/augmented_image_acc_100_epochs.png -------------------------------------------------------------------------------- /installation-instructions.md: -------------------------------------------------------------------------------- 1 | # Installation Instructions - Python Deep Learning 2 | 3 | In order to join the workshop, please go through the following software installation steps at least a day in advance. 4 | 5 | If you run into any errors please send an email to Sean Perez () for help and include: 6 | 7 | 1. Which step you’re running, 8 | 2. The error message(s) (screenshot preferred), and 9 | 3. Your operating system + Python version + Jupyter Notebook version. 10 | 11 | It is important to get everything working before the training because we will not likely have sufficient time to investigate fixes during the training itself. 12 | 13 | ## Software Requirements 14 | 15 | There are two options to get started on this workshop: 16 | 17 | ### 1. Use D-Lab's datahub (highly recommended). 18 | - If you have a berkeley.edu email account, you likely have access to dlab's datahub! 19 | - Click this link [![Datahub](https://img.shields.io/badge/launch-datahub-blue)](https://dlab.datahub.berkeley.edu/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fdlab-berkeley%2FPython-Deep-Learning&urlpath=tree%2FPython-Deep-Learning%2F&branch=main) 20 | - That's it. You are ready! 21 | 22 | - When you want to refer to the workshop in the future, just find the folder Python-Deep-Learning within your datahub environment at 23 | 24 | 25 | ### 2. Google Colab (recommended). 26 | - This option is for attendees without a berkeley.edu email account. 27 | - Simply open each notebook using the following links: 28 | - 01-Vanilla-Neural-Networks.ipynb [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dlab-berkeley/Python-Deep-Learning/blob/main/lessons/01-Vanilla-Neural-Networks.ipynb) 29 | - 02-Vanilla-Convolutional-Neural-Network-Comparison.ipynb [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dlab-berkeley/Python-Deep-Learning/blob/main/lessons/02-Vanilla-Convolutional-Neural-Network-Comparison.ipynb) 30 | - 03-Convolutional-Neural-Networks.ipynb [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dlab-berkeley/Python-Deep-Learning/blob/main/lessons/03-Convolutional-Neural-Networks.ipynb) 31 | 32 | 33 | ### 3. Install keras locally 34 | - This will take approximately 30-60 minutes. 35 | 36 | #### Windows 37 | - Download the Python-Deep-Learning repository here: 38 | - Download Anaconda here: 39 | - Open the Anaconda Navigaor and launch the Powershell Prompt. 
40 | - Execute the following lines of code one at a time (note that this may take a while...): 41 | ``` 42 | conda create --name pydeeplearning python=3.9 keras=2.6 matplotlib=3.4 numpy=1.21 jupyterlab ipykernel 43 | ``` 44 | _Creates a conda environment called pydeeplearning with only the essential packages for our workshop. You will need to reply `y` to allow installation._ 45 | 46 | 47 | ``` 48 | conda activate pydeeplearning 49 | ``` 50 | _Activates the environment._ 51 | 52 | 53 | ``` 54 | python -m ipykernel install --user --name pydeeplearning 55 | ``` 56 | _Creates a kernel so that we can use this python environment in our Jupyter notebook._ 57 | 58 | 59 | ``` 60 | conda install -c conda-forge tensorflow 61 | ``` 62 | _Installs tensorflow, the backend for Keras. This may take a while!_ 63 | 64 | 65 | - Launch Jupyter Notebook, navigate to the repository you downloaded, and open up your notebook. 66 | - In the toolbar below the Jupyter logo go to Kernel > change Kernel > pydeeplearning 67 | - You are set to go! 68 | 69 | #### Mac (intel chip - See below for M1 chip) 70 | - Download the Python-Deep-Learning repository here: 71 | - Download Anaconda here: 72 | - Open your terminal and input the following commands: 73 | 74 | ``` 75 | conda 76 | ``` 77 | _Check to see if the conda command is recogenized and shows argument options._ 78 | 79 | __If you get the error message: `command not found: conda`__ 80 | _Try running the following:_ 81 | ``` 82 | /opt/anaconda3/bin/conda init zsh 83 | ``` 84 | _Check to see if the conda command now shows argument options._ 85 | 86 | 87 | ``` 88 | conda create -n pydeeplearning python=3.9 89 | ``` 90 | _Creates a conda environment called pydeeplearning_ 91 | 92 | ``` 93 | conda activate pydeeplearning 94 | ``` 95 | _Activates the environment._ 96 | 97 | ``` 98 | python -m pip install tensorflow 99 | ``` 100 | _Installs tensorflow._ 101 | 102 | ``` 103 | python -m pip install matplotlib 104 | ``` 105 | _Installs matplotlib for plotting._ 106 | 107 | ``` 108 | python -m pip install jupyterlab 109 | ``` 110 | _Installs jupyter notebook station._ 111 | 112 | ``` 113 | python -m ipykernel install --user --name pydeeplearning 114 | ``` 115 | _Creates a kernel so that we can use this python environment in our Jupyter notebook._ 116 | 117 | 118 | - Launch Jupyter Notebook, navigate to the repository you downloaded, and open up your notebook. 119 | - In the toolbar below the Jupyter logo go to Kernel > change Kernel > pydeeplearning 120 | - You are set to go! 121 | 122 | #### Mac (ARM chip) 123 | - I suggest following this guide: 124 | - This process is quite difficult, but if successful allows you to use GPUs! 125 | 126 | Why do we need to need to do all this conda stuff? 127 | 128 | Conda is a package manager, which we can use to deal with versioning conflicts and dependencies for tensorflow/keras! In order for keras/tensorflow to work smoothly, we create a new isolated conda environment and only install our essential packages. 
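If you want to confirm that the environment works before class, a quick sanity check is to run the cell below from a Jupyter notebook using the `pydeeplearning` kernel (this only assumes the packages installed in the steps above; the exact version numbers printed will vary):

```python
# Sanity check: if these imports succeed and versions print, the setup is working.
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib

print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
print("NumPy:", np.__version__)
print("Matplotlib:", matplotlib.__version__)
```

If the imports fail, re-check that the notebook is using the `pydeeplearning` kernel rather than the default Python kernel.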
129 | -------------------------------------------------------------------------------- /lessons/01-Vanilla-Neural-Networks.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "3bd23324", 6 | "metadata": {}, 7 | "source": [ 8 | "# Neural Networks in Keras with MNIST\n", 9 | "\n", 10 | "**Time**\n", 11 | "- Teaching: 2 hours\n", 12 | "- Challenges: 15 minutes\n", 13 | "\n", 14 | "**Questions**\n", 15 | "- \"How do we load and manipulate input data for deep learning?\"\n", 16 | "- \"How can we use keras to design custom neural networks?\"\n", 17 | "- \"How do we decide neural network architectures that perform best on MNIST?\"\n", 18 | "- \"How do we evaluate our models and test their ability to generalize?\"\n", 19 | "\n", 20 | "\n", 21 | "**Learning Objectives**\n", 22 | "- \"Understand and wrangle input to neural networks.\"\n", 23 | "- \"The ability to implement, troubleshoot, and modify your own neural networks.\"\n", 24 | "\n", 25 | "* * * * *" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "id": "d232252d", 31 | "metadata": {}, 32 | "source": [ 33 | "## Installation\n", 34 | "\n", 35 | "If your browser url adress is: \n", 36 | "\n", 37 | "`https://dlab.datahub.berkeley.edu/user/YOURNAME`\n", 38 | "\n", 39 | "Then you are good to go! You are in the dlab's datahub which already has the packages you need to get started along with 8GB of memory.\n", 40 | "\n", 41 | "If your browswer url address is:\n", 42 | "\n", 43 | "`https://colab.research.google.com/github/dlab-berkeley/Python-Deep-Learning/blob/main/lessons/01-Vanilla-Neural-Networks.ipynb`\n", 44 | "\n", 45 | "You are also good to go! You are in google colab and have access to ~12GB of memory and access to GPUs!\n", 46 | "\n", 47 | "If you want to work locally on your computer, you should have already followed the installation instructions at \n", 48 | "\n", 49 | "If you haven't yet installed keras and tensorflow locally, I highly suggest using the datahub or google colab." 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "id": "bc5d0fda", 55 | "metadata": {}, 56 | "source": [ 57 | "## Import packages\n", 58 | "In order to simplify the syntax we will import the specific functions we need from keras modules.\n", 59 | "\n", 60 | "Note you can simply import keras, then call each function from the module, for example:\n", 61 | "\n", 62 | "`from tensorflow import keras`\n", 63 | "\n", 64 | "`keras.dataset.mnist()`" 65 | ] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "execution_count": null, 70 | "id": "96f813f1", 71 | "metadata": { 72 | "scrolled": true 73 | }, 74 | "outputs": [], 75 | "source": [ 76 | "from tensorflow.keras.datasets import mnist\n", 77 | "from tensorflow.keras.models import Sequential\n", 78 | "from tensorflow.keras.layers import Dense, Dropout\n", 79 | "from tensorflow.keras.optimizers import RMSprop\n", 80 | "from tensorflow.keras.utils import to_categorical\n", 81 | "\n", 82 | "# Note you may get a warning about CUDA and GPU set up\n", 83 | "# You can ignore these for now" 84 | ] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "id": "c0e1b58c", 89 | "metadata": {}, 90 | "source": [ 91 | "We also need to import packages to help us with visualizing and manipulating out data. Remember, this is half the battle!" 
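As a side note on the module-style import shown a couple of cells above: the MNIST loader is exposed as a module with a `load_data()` function, so the module-style call looks like the sketch below (a minimal example, equivalent to the `mnist.load_data()` call used later in this notebook):

```python
# Module-style access: import the keras namespace once,
# then reach the dataset loader through keras.datasets.
from tensorflow import keras

(x_train_full, y_train_full), (x_test_full, y_test_full) = keras.datasets.mnist.load_data()
```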
92 | ] 93 | }, 94 | { 95 | "cell_type": "code", 96 | "execution_count": null, 97 | "id": "c9d96314", 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "import numpy as np\n", 102 | "import matplotlib.pyplot as plt" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "id": "253cf87a", 108 | "metadata": {}, 109 | "source": [ 110 | "### Keras and MNIST\n", 111 | "\n", 112 | "Keras, Tensorflow, and MNIST, oh my! \n", 113 | "\n", 114 | "#### Keras\n", 115 | "A deep-learning framework developed by google with a user-friendly API built for researchers to quickly prototype models.\n", 116 | "\n", 117 | "#### Tensorflow\n", 118 | "A backend engine for Keras.\n", 119 | "\n", 120 | "#### MNIST\n", 121 | "A dataset of 60,000 training images and 10,000 test images of handwritten digits. It is often considered the \"hello world\" of deep learning." 122 | ] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "id": "0263315d", 127 | "metadata": {}, 128 | "source": [ 129 | "#### Problem Statement\n", 130 | "\n", 131 | "You work for a bank and they need to automate the processing of reading mobile check deposits. \n", 132 | "\n", 133 | "They have overworked staff because they must look at each photo of the checks and manually input the check number, account number, deposit amount, etc.\n", 134 | "\n", 135 | "Can we make their life easier!?" 136 | ] 137 | }, 138 | { 139 | "cell_type": "markdown", 140 | "id": "bacec302", 141 | "metadata": {}, 142 | "source": [ 143 | "" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "id": "b8e62029", 149 | "metadata": {}, 150 | "source": [ 151 | "#### Our Solution\n", 152 | "\n", 153 | "We will build a model that can correctly identify numbers from a picture.\n", 154 | "\n", 155 | "As a first pass, we want to build a model that can take as input an image of a single handwritten digit, and predict the correct target digit.\n", 156 | "\n", 157 | "Let's dive right in and build a neural network model with keras that is able identify handwritten digits! " 158 | ] 159 | }, 160 | { 161 | "cell_type": "markdown", 162 | "id": "e676450f", 163 | "metadata": {}, 164 | "source": [ 165 | "## Input Data " 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "id": "f32bfe53", 171 | "metadata": {}, 172 | "source": [ 173 | "They say 80% of data analysis is data cleaning.\n", 174 | "\n", 175 | "Getting data in the right format is a non-trivial and critical task when building deep learning models!" 
176 | ] 177 | }, 178 | { 179 | "cell_type": "markdown", 180 | "id": "a83c42b7", 181 | "metadata": {}, 182 | "source": [ 183 | "### Reading in the data" 184 | ] 185 | }, 186 | { 187 | "cell_type": "code", 188 | "execution_count": null, 189 | "id": "3aac41f3", 190 | "metadata": {}, 191 | "outputs": [], 192 | "source": [ 193 | "def get_mnist_data(subset=True):\n", 194 | " \"\"\"\n", 195 | " Returns the MNIST dataset as a tuple:\n", 196 | " (x_train, y_train, x_val, y_val, x_test, y_test)\n", 197 | " \n", 198 | " When subset=TRUE:\n", 199 | " Returns only a subset of the mnist dataset.\n", 200 | " Especially important to use if you are on datahub and only have 1-2GB of memory.\n", 201 | " \"\"\"\n", 202 | " \n", 203 | " if subset:\n", 204 | " N_TRAIN = 5000\n", 205 | " N_VALIDATION = 1000\n", 206 | " N_TEST = 1000\n", 207 | " else:\n", 208 | " N_TRAIN = 48000\n", 209 | " N_VALIDATION = 12000\n", 210 | " N_TEST = 10000\n", 211 | " \n", 212 | " (x_train_and_val, y_train_and_val), (x_test, y_test) = mnist.load_data()\n", 213 | " \n", 214 | " x_train = x_train_and_val[:N_TRAIN,:,:]\n", 215 | " y_train = y_train_and_val[:N_TRAIN]\n", 216 | " \n", 217 | " x_val = x_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION,:,:]\n", 218 | " y_val = y_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION]\n", 219 | " \n", 220 | " x_test = x_test[:N_TEST]\n", 221 | " y_test = y_test[:N_TEST]\n", 222 | " \n", 223 | " return x_train, y_train, x_val, y_val, x_test, y_test\n", 224 | " \n", 225 | " " 226 | ] 227 | }, 228 | { 229 | "cell_type": "code", 230 | "execution_count": null, 231 | "id": "8be26764", 232 | "metadata": {}, 233 | "outputs": [], 234 | "source": [ 235 | "# Load the data\n", 236 | "# Set subset=False if you want to use the full dataset!\n", 237 | "# Note that this will require 2+ GB of memory and will make training take longer\n", 238 | "\n", 239 | "x_train, y_train, x_val, y_val, x_test, y_test = get_mnist_data(subset=True)" 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "id": "296c8cda", 245 | "metadata": {}, 246 | "source": [ 247 | "### Understanding the data" 248 | ] 249 | }, 250 | { 251 | "cell_type": "code", 252 | "execution_count": null, 253 | "id": "e9aeca89", 254 | "metadata": {}, 255 | "outputs": [], 256 | "source": [ 257 | "def data_summary(data):\n", 258 | " \"\"\"\n", 259 | " Takes a list of our data partitions and returns the shape.\n", 260 | " \"\"\"\n", 261 | " \n", 262 | " for i, data_partition in enumerate(data):\n", 263 | " if i == 0:\n", 264 | " print(\"Training Data\")\n", 265 | " elif i == 2:\n", 266 | " print()\n", 267 | " print(\"Validation Data\")\n", 268 | " elif i == 4:\n", 269 | " print()\n", 270 | " print(\"Testing Data\")\n", 271 | "\n", 272 | " print(f\"Shape: {data_partition.shape}\")\n", 273 | " \n", 274 | " " 275 | ] 276 | }, 277 | { 278 | "cell_type": "code", 279 | "execution_count": null, 280 | "id": "4409be4b", 281 | "metadata": {}, 282 | "outputs": [], 283 | "source": [ 284 | "data_summary([x_train, y_train, x_val, y_val, x_test, y_test])" 285 | ] 286 | }, 287 | { 288 | "cell_type": "markdown", 289 | "id": "8428fa7a", 290 | "metadata": {}, 291 | "source": [ 292 | "## Challenge 1: Understanding the Input Data\n", 293 | "\n", 294 | "1. Why do we split our data into train, validation, and test sets?\n", 295 | "2. What is the shape of our input data partitions?\n", 296 | "3. What is the type of the data?\n", 297 | "\n", 298 | "**BONUS:**\n", 299 | "\n", 300 | "4. What is the distribution of the target classes within the data, is it balanced?" 
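If the bonus question has you stuck, one possible starting point (certainly not the only one) is NumPy's `unique` with counts:

```python
# Hint for the bonus: count how many training examples there are of each digit.
digits, counts = np.unique(y_train, return_counts=True)
for digit, count in zip(digits, counts):
    print(f"Digit {digit}: {count} examples")
```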
301 | ] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": null, 306 | "id": "37dbbfef", 307 | "metadata": {}, 308 | "outputs": [], 309 | "source": [] 310 | }, 311 | { 312 | "cell_type": "markdown", 313 | "id": "e7aa18bb", 314 | "metadata": {}, 315 | "source": [ 316 | "### Understanding and visualizing the images" 317 | ] 318 | }, 319 | { 320 | "cell_type": "markdown", 321 | "id": "1c364eb7", 322 | "metadata": {}, 323 | "source": [ 324 | "Let's extract just one example from the training data" 325 | ] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "execution_count": null, 330 | "id": "c0fa4f5f", 331 | "metadata": {}, 332 | "outputs": [], 333 | "source": [ 334 | "one_image = x_train[0, :, :]" 335 | ] 336 | }, 337 | { 338 | "cell_type": "markdown", 339 | "id": "a3b014c0", 340 | "metadata": {}, 341 | "source": [ 342 | "Let's look inside and see how it's stored!" 343 | ] 344 | }, 345 | { 346 | "cell_type": "code", 347 | "execution_count": null, 348 | "id": "014bcabf", 349 | "metadata": {}, 350 | "outputs": [], 351 | "source": [ 352 | "one_image" 353 | ] 354 | }, 355 | { 356 | "cell_type": "markdown", 357 | "id": "1f74e383", 358 | "metadata": {}, 359 | "source": [ 360 | "Not very helpful to see the data in it's raw format \n", 361 | "\n", 362 | "Instead, let's utilize the shape attribute and matplotlib to help us visualize this image data!" 363 | ] 364 | }, 365 | { 366 | "cell_type": "code", 367 | "execution_count": null, 368 | "id": "597c087d", 369 | "metadata": {}, 370 | "outputs": [], 371 | "source": [ 372 | "one_image.shape" 373 | ] 374 | }, 375 | { 376 | "cell_type": "code", 377 | "execution_count": null, 378 | "id": "4206f70f", 379 | "metadata": { 380 | "scrolled": true 381 | }, 382 | "outputs": [], 383 | "source": [ 384 | "plt.imshow(one_image, cmap=plt.cm.binary);" 385 | ] 386 | }, 387 | { 388 | "cell_type": "markdown", 389 | "id": "f08b9c5a", 390 | "metadata": {}, 391 | "source": [ 392 | "Which dimension represent rows vs. columns in our image?\n", 393 | "\n", 394 | "Let's find out through a test by grabbing all of the first dimension, and half of the second.\n", 395 | "\n", 396 | "If the image is cutoff column-wise, we know the dimensions are \\[ row pixel, column pixel \\]" 397 | ] 398 | }, 399 | { 400 | "cell_type": "code", 401 | "execution_count": null, 402 | "id": "b702ff40", 403 | "metadata": {}, 404 | "outputs": [], 405 | "source": [ 406 | "one_image_first_dimension = one_image[:,0:14]" 407 | ] 408 | }, 409 | { 410 | "cell_type": "code", 411 | "execution_count": null, 412 | "id": "0c5e564c", 413 | "metadata": {}, 414 | "outputs": [], 415 | "source": [ 416 | "plt.imshow(one_image_first_dimension, cmap=plt.cm.binary);" 417 | ] 418 | }, 419 | { 420 | "cell_type": "markdown", 421 | "id": "bcd6cff4", 422 | "metadata": {}, 423 | "source": [ 424 | "So now we feel solid with our input data format!\n", 425 | "\n", 426 | "The input data has 3 dimensions \\[ index for image, pixel row, pixel column \\]" 427 | ] 428 | }, 429 | { 430 | "cell_type": "markdown", 431 | "id": "89753677", 432 | "metadata": {}, 433 | "source": [ 434 | "### Building a Neural Network in Keras" 435 | ] 436 | }, 437 | { 438 | "cell_type": "markdown", 439 | "id": "002b2bd9", 440 | "metadata": {}, 441 | "source": [ 442 | "First we define our neural network architecture." 
443 | ] 444 | }, 445 | { 446 | "cell_type": "code", 447 | "execution_count": null, 448 | "id": "e6db3efd", 449 | "metadata": {}, 450 | "outputs": [], 451 | "source": [ 452 | "first_network = Sequential()\n", 453 | "\n", 454 | "first_network.add(Dense(units = 64,\n", 455 | " activation= \"relu\",\n", 456 | " input_shape=(28*28,)))" 457 | ] 458 | }, 459 | { 460 | "cell_type": "markdown", 461 | "id": "70a2acc3", 462 | "metadata": {}, 463 | "source": [ 464 | "Let's review the python code we used to initialize this neural network." 465 | ] 466 | }, 467 | { 468 | "cell_type": "markdown", 469 | "id": "3943a74f", 470 | "metadata": {}, 471 | "source": [ 472 | "#### Hidden Layer\n", 473 | "\n", 474 | "`first_network = Sequential()`\n", 475 | "* This line initializes our model object.\n", 476 | "\n", 477 | "`first_network.add(Dense(...))`\n", 478 | "* The add method adds a layer to our model object, specifically a dense layer.\n", 479 | "\n", 480 | "`units = 64`\n", 481 | "* Within our dense layer, we specify the number of units or *neurons* we want in our layer.\n", 482 | "\n", 483 | "`activation= \"relu\"`\n", 484 | "* We also specify the activation function we want to use, in this case we use the Rectified Linear Unit (relu).\n", 485 | "* This simple activation unit returns x if x > 0, or 0 if x <= 0. \n", 486 | "* A critical component to add non-linearity to our network.\n", 487 | "\n", 488 | "`input_shape=(28*28,)`\n", 489 | "* In our first layer must always define the input shape. \n", 490 | "* We pass a single dimensional vector of size 28 \\* 28 = 784\n", 491 | "* This represents the number of pixels in a single dimension.\n", 492 | "* We add the comma to pass in our input shape as a tuple.\n", 493 | "* Subsequent layers we add be able to infer the input size from the previous layers of the model.\n" 494 | ] 495 | }, 496 | { 497 | "cell_type": "code", 498 | "execution_count": null, 499 | "id": "2d8e8f49", 500 | "metadata": {}, 501 | "outputs": [], 502 | "source": [ 503 | "first_network.add(Dense(units = 10,\n", 504 | " activation = \"softmax\"))" 505 | ] 506 | }, 507 | { 508 | "cell_type": "markdown", 509 | "id": "f5d1a9ba", 510 | "metadata": {}, 511 | "source": [ 512 | "#### Output Layer\n", 513 | "`first_network.add(Dense(...))`\n", 514 | "* We add our output layer using the same syntax as our hidden layer.\n", 515 | "\n", 516 | "`units = 10`\n", 517 | "* We give this layer 10 units or *neurons* corresponding to the number of classes we are attempting to classify.\n", 518 | "\n", 519 | "`activation = \"softmax\"`\n", 520 | "* We use the softmax activation function for our output layer which gives us back a probability for each class." 521 | ] 522 | }, 523 | { 524 | "cell_type": "markdown", 525 | "id": "850bd067", 526 | "metadata": {}, 527 | "source": [ 528 | "We just built our first neural network! It's a simple neural network with a single \"hidden\" layer, not very *deep*.\n", 529 | "\n", 530 | "We can check out a summary of our model layout with the method `model_object.summary()`" 531 | ] 532 | }, 533 | { 534 | "cell_type": "code", 535 | "execution_count": null, 536 | "id": "614a18a9", 537 | "metadata": { 538 | "scrolled": true 539 | }, 540 | "outputs": [], 541 | "source": [ 542 | "first_network.summary()" 543 | ] 544 | }, 545 | { 546 | "cell_type": "markdown", 547 | "id": "75f95153", 548 | "metadata": {}, 549 | "source": [ 550 | "Even this TINY neural network has over 50,000 parameters!\n", 551 | "\n", 552 | "Next we need to give the model:\n", 553 | "1. 
An optimizer strategy\n", 554 | "2. A way to calculate loss\n", 555 | "3. The metric evaluate during training." 556 | ] 557 | }, 558 | { 559 | "cell_type": "code", 560 | "execution_count": null, 561 | "id": "13ac7c26", 562 | "metadata": {}, 563 | "outputs": [], 564 | "source": [ 565 | "first_network.compile(optimizer = 'rmsprop', \n", 566 | " loss = 'categorical_crossentropy',\n", 567 | " metrics = ['accuracy'])" 568 | ] 569 | }, 570 | { 571 | "cell_type": "markdown", 572 | "id": "712fdc57", 573 | "metadata": {}, 574 | "source": [ 575 | "Lastly, we train the model using the `model_name.fit()` method" 576 | ] 577 | }, 578 | { 579 | "cell_type": "markdown", 580 | "id": "8f703738", 581 | "metadata": {}, 582 | "source": [ 583 | "If you run the code below.... you would find that the model would fail to train. I do not suggest running it to save memory!" 584 | ] 585 | }, 586 | { 587 | "cell_type": "code", 588 | "execution_count": null, 589 | "id": "2cf0ffcf", 590 | "metadata": {}, 591 | "outputs": [], 592 | "source": [ 593 | "#history = first_network.fit(x_train, y_train, epochs=5, batch_size=128, validation_data=(x_val, y_val))" 594 | ] 595 | }, 596 | { 597 | "cell_type": "markdown", 598 | "id": "eab9fda6", 599 | "metadata": {}, 600 | "source": [ 601 | "Remember that half the battle with model training is understanding what format we need the data in!\n", 602 | "\n", 603 | "We won't go over this in too much detail, but in a snapshot we will:\n", 604 | "1. Flatten the pixel dimensions from (28,28) into a single dimension (784)\n", 605 | "2. Change pixel data type from integer with pixel values \\[0, 225\\] to a float values \\[0,1\\].\n", 606 | "3. Expand the y targets from a single dimension with values \\[0:9\\] to 10 dimensions each representing a target class. \n", 607 | " - In each row, the target class column will have a value of 1, while the other columns will have a value of 0\n", 608 | " - This is a common technique to reformat categorical targets known as [One Hot Encoding](https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/).\n", 609 | " " 610 | ] 611 | }, 612 | { 613 | "cell_type": "code", 614 | "execution_count": null, 615 | "id": "c94a3824", 616 | "metadata": {}, 617 | "outputs": [], 618 | "source": [ 619 | "def transform_data(xdata, ydata):\n", 620 | " \"\"\"\n", 621 | " Transforms image data:\n", 622 | " 1. Flattens pixel dimensions from 2 -> 1\n", 623 | " 2. 
Scales pixel values between [0,1]\n", 624 | " Transforms target data (ydata):\n", 625 | " - Formats targets as one hot encoded columns\n", 626 | " \"\"\"\n", 627 | " \n", 628 | " x = {}\n", 629 | " for name, partition in zip([\"x_train\", \"x_val\", \"x_test\"],xdata):\n", 630 | " flatten = partition.reshape((partition.shape[0], 28 * 28))\n", 631 | " scaled = flatten.astype('float32') / 255\n", 632 | " x[name] = scaled\n", 633 | " \n", 634 | " y = {}\n", 635 | " for name, partition in zip([\"y_train\", \"y_val\", \"y_test\"],ydata):\n", 636 | " y[name] = to_categorical(partition)\n", 637 | " \n", 638 | " return x['x_train'], y['y_train'], x['x_val'], y['y_val'], x['x_test'], y['y_test']" 639 | ] 640 | }, 641 | { 642 | "cell_type": "code", 643 | "execution_count": null, 644 | "id": "a333742d", 645 | "metadata": {}, 646 | "outputs": [], 647 | "source": [ 648 | "x_train_trans, y_train_trans, x_val_trans, y_val_trans, x_test_trans, y_test_trans = transform_data([x_train, x_val, x_test],\n", 649 | " [y_train, y_val, y_test])" 650 | ] 651 | }, 652 | { 653 | "cell_type": "code", 654 | "execution_count": null, 655 | "id": "fc7b74c2", 656 | "metadata": {}, 657 | "outputs": [], 658 | "source": [ 659 | "data_summary([x_train_trans, y_train_trans, x_val_trans, y_val_trans, x_test_trans, y_test_trans])" 660 | ] 661 | }, 662 | { 663 | "cell_type": "markdown", 664 | "id": "a7e48488", 665 | "metadata": {}, 666 | "source": [ 667 | "We have succesfully flattened our input images!" 668 | ] 669 | }, 670 | { 671 | "cell_type": "code", 672 | "execution_count": null, 673 | "id": "79f845ce", 674 | "metadata": {}, 675 | "outputs": [], 676 | "source": [ 677 | "history = first_network.fit(x_train_trans, \n", 678 | " y_train_trans, \n", 679 | " epochs=5, \n", 680 | " batch_size=128, \n", 681 | " validation_data=(x_val_trans, y_val_trans), )" 682 | ] 683 | }, 684 | { 685 | "cell_type": "markdown", 686 | "id": "1221b829", 687 | "metadata": {}, 688 | "source": [ 689 | "### Visualizing the accuracy" 690 | ] 691 | }, 692 | { 693 | "cell_type": "code", 694 | "execution_count": null, 695 | "id": "fcc7af9f", 696 | "metadata": {}, 697 | "outputs": [], 698 | "source": [ 699 | "def plot_epoch_accuracy(history_dict):\n", 700 | " \"\"\"\n", 701 | " Plots the training and validation accuracy of a neural network.\n", 702 | " \"\"\"\n", 703 | " \n", 704 | " acc = history_dict['accuracy']\n", 705 | " val_acc = history_dict['val_accuracy']\n", 706 | " epochs = range(1, len(acc) + 1)\n", 707 | " plt.plot(epochs, acc, color = 'navy', alpha = 0.8, label='Training Accuracy')\n", 708 | " plt.plot(epochs, val_acc, color = 'green', label='Validation Accuracy')\n", 709 | " plt.title('Training and validation Accuracy')\n", 710 | " plt.xlabel('Epochs')\n", 711 | " plt.ylabel('Accuracy')\n", 712 | " plt.legend()\n", 713 | " return plt.show()" 714 | ] 715 | }, 716 | { 717 | "cell_type": "code", 718 | "execution_count": null, 719 | "id": "9d6ea650", 720 | "metadata": {}, 721 | "outputs": [], 722 | "source": [ 723 | "plot_epoch_accuracy(history.history)" 724 | ] 725 | }, 726 | { 727 | "cell_type": "markdown", 728 | "id": "e6b7b7e7", 729 | "metadata": {}, 730 | "source": [ 731 | "Not too shabby for a *tiny* neural network!\n", 732 | "\n" 733 | ] 734 | }, 735 | { 736 | "cell_type": "markdown", 737 | "id": "07dd5123", 738 | "metadata": {}, 739 | "source": [ 740 | "Let's build a second network that has more neurons in the hidden layer (previously: 64, now: 512) and see how effects our classification performance." 
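Before we do, it is worth seeing where the "over 50,000 parameters" figure from `first_network.summary()` comes from, and how it scales with the hidden layer width. A dense layer has (inputs × units) weights plus one bias per unit, so a quick back-of-the-envelope check looks like this (plain arithmetic, nothing Keras-specific):

```python
# Dense layer parameters = (number of inputs * number of units) + units (one bias per unit)
hidden_64 = 28 * 28 * 64 + 64      # 50,240 weights and biases in the 64-unit hidden layer
output_64 = 64 * 10 + 10           # 650 in its output layer
print(hidden_64 + output_64)       # 50,890 total, matching first_network.summary()

hidden_512 = 28 * 28 * 512 + 512   # 401,920 in a 512-unit hidden layer
output_512 = 512 * 10 + 10         # 5,130 in its output layer
print(hidden_512 + output_512)     # 407,050 total for the wider network
```

So widening the hidden layer from 64 to 512 units makes the model roughly eight times larger.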
741 | ] 742 | }, 743 | { 744 | "cell_type": "code", 745 | "execution_count": null, 746 | "id": "9048fbcc", 747 | "metadata": {}, 748 | "outputs": [], 749 | "source": [ 750 | "second_network = Sequential()\n", 751 | "second_network.add(Dense(512, activation= \"relu\", input_shape=(28*28,)))\n", 752 | "second_network.add(Dense(10, activation=\"softmax\"))" 753 | ] 754 | }, 755 | { 756 | "cell_type": "code", 757 | "execution_count": null, 758 | "id": "4929d1e5", 759 | "metadata": {}, 760 | "outputs": [], 761 | "source": [ 762 | "second_network.compile(optimizer = 'rmsprop', \n", 763 | " loss = 'categorical_crossentropy',\n", 764 | " metrics = ['accuracy'])" 765 | ] 766 | }, 767 | { 768 | "cell_type": "code", 769 | "execution_count": null, 770 | "id": "c8b65e30", 771 | "metadata": {}, 772 | "outputs": [], 773 | "source": [ 774 | "history_two = second_network.fit(x_train_trans, \n", 775 | " y_train_trans, \n", 776 | " epochs=5, \n", 777 | " batch_size=128, \n", 778 | " validation_data=(x_val_trans, y_val_trans))" 779 | ] 780 | }, 781 | { 782 | "cell_type": "code", 783 | "execution_count": null, 784 | "id": "a3c3dcbb", 785 | "metadata": {}, 786 | "outputs": [], 787 | "source": [ 788 | "plot_epoch_accuracy(history_two.history)" 789 | ] 790 | }, 791 | { 792 | "cell_type": "markdown", 793 | "id": "42e90cc7", 794 | "metadata": {}, 795 | "source": [ 796 | "How does adding more neurons effect our training and validation accuracy?\n", 797 | "\n", 798 | "Let's keep training this model for 10 more epochs." 799 | ] 800 | }, 801 | { 802 | "cell_type": "code", 803 | "execution_count": null, 804 | "id": "2a718300", 805 | "metadata": {}, 806 | "outputs": [], 807 | "source": [ 808 | "history_two_more_epochs = second_network.fit(x_train_trans, \n", 809 | " y_train_trans, \n", 810 | " epochs=10, \n", 811 | " batch_size=128, \n", 812 | " validation_data=(x_val_trans, y_val_trans))" 813 | ] 814 | }, 815 | { 816 | "cell_type": "code", 817 | "execution_count": null, 818 | "id": "1e28a76c", 819 | "metadata": {}, 820 | "outputs": [], 821 | "source": [ 822 | "def combined_epoch_plot(history_dicts):\n", 823 | " \"\"\"\n", 824 | " Combines two history dictionaries for extended epochs.\n", 825 | " \"\"\"\n", 826 | " \n", 827 | " combined_history = {key: [] for key in history_dicts[0].keys()}\n", 828 | " \n", 829 | " for hist in history_dicts:\n", 830 | " for key in combined_history.keys():\n", 831 | " combined_history[key].extend(hist[key])\n", 832 | " \n", 833 | " return plot_epoch_accuracy(combined_history)\n", 834 | " \n", 835 | " " 836 | ] 837 | }, 838 | { 839 | "cell_type": "code", 840 | "execution_count": null, 841 | "id": "695900f1", 842 | "metadata": {}, 843 | "outputs": [], 844 | "source": [ 845 | "combined_epoch_plot([history_two.history, history_two_more_epochs.history])" 846 | ] 847 | }, 848 | { 849 | "cell_type": "markdown", 850 | "id": "9866c866", 851 | "metadata": {}, 852 | "source": [ 853 | "Diagnosing our accuracy plot, it looks like even with more epochs, our model is *still* failing to generalize. \n", 854 | "\n", 855 | "The dunning kruger effect in deep learning." 856 | ] 857 | }, 858 | { 859 | "cell_type": "markdown", 860 | "id": "02cd093e", 861 | "metadata": {}, 862 | "source": [ 863 | "One solution is to give our add an additional hidden layer to extract more features from the data.\n", 864 | "\n", 865 | "Below we add a second dense layer with the same number of neurons as our first dense layer." 
866 | ] 867 | }, 868 | { 869 | "cell_type": "code", 870 | "execution_count": null, 871 | "id": "13b9e1e8", 872 | "metadata": {}, 873 | "outputs": [], 874 | "source": [ 875 | "third_network = Sequential()\n", 876 | "third_network.add(Dense(512, activation= \"relu\", input_shape=(28*28,)))\n", 877 | "third_network.add(Dense(512, activation= \"relu\"))\n", 878 | "third_network.add(Dense(10, activation=\"softmax\"))" 879 | ] 880 | }, 881 | { 882 | "cell_type": "code", 883 | "execution_count": null, 884 | "id": "9dd1076f", 885 | "metadata": {}, 886 | "outputs": [], 887 | "source": [ 888 | "third_network.compile(optimizer = 'rmsprop', \n", 889 | " loss = 'categorical_crossentropy',\n", 890 | " metrics = ['accuracy'])" 891 | ] 892 | }, 893 | { 894 | "cell_type": "code", 895 | "execution_count": null, 896 | "id": "c3e262e0", 897 | "metadata": { 898 | "scrolled": false 899 | }, 900 | "outputs": [], 901 | "source": [ 902 | "history_two_layers = third_network.fit(x_train_trans, \n", 903 | " y_train_trans, \n", 904 | " epochs=10, \n", 905 | " batch_size=128, \n", 906 | " validation_data=(x_val_trans, y_val_trans))" 907 | ] 908 | }, 909 | { 910 | "cell_type": "code", 911 | "execution_count": null, 912 | "id": "250d892f", 913 | "metadata": {}, 914 | "outputs": [], 915 | "source": [ 916 | "plot_epoch_accuracy(history_two_layers.history)" 917 | ] 918 | }, 919 | { 920 | "cell_type": "markdown", 921 | "id": "92ae52cb", 922 | "metadata": {}, 923 | "source": [ 924 | "Our model is still failing to generalize and reach validation accuracy that approaches the training accuracy!\n", 925 | "\n", 926 | "One way we can prevent our model from overfitting to the training data is to add dropout in our hidden layers.\n", 927 | "\n", 928 | "### TLDR for Dropout\n", 929 | "Dropout makes our network a bit \"forgetful\" during training. We get to set a proportion of neurons the network will randomly \"forget\" or \"drop\", which we set as a probability in the `model.add(Dropout(probability_here))` function.\n", 930 | "\n", 931 | "This forces our model to generalize, preventing it from overfitting to the training data. \n", 932 | "\n", 933 | "Check out [Hinton et al. 
2012](https://arxiv.org/abs/1207.0580) describing how dropout helps neural network generalization.\n", 934 | "- *Fun fact: they also use MNIST to test model generalization :)*" 935 | ] 936 | }, 937 | { 938 | "cell_type": "code", 939 | "execution_count": null, 940 | "id": "3ff735c1", 941 | "metadata": {}, 942 | "outputs": [], 943 | "source": [ 944 | "fourth_network = Sequential()\n", 945 | "fourth_network.add(Dense(512, activation= \"relu\", input_shape=(28*28,)))\n", 946 | "fourth_network.add(Dropout(0.5))\n", 947 | "fourth_network.add(Dense(512, activation= \"relu\"))\n", 948 | "fourth_network.add(Dropout(0.5))\n", 949 | "fourth_network.add(Dense(10, activation=\"softmax\"))" 950 | ] 951 | }, 952 | { 953 | "cell_type": "code", 954 | "execution_count": null, 955 | "id": "73f2d243", 956 | "metadata": {}, 957 | "outputs": [], 958 | "source": [ 959 | "fourth_network.compile(optimizer = 'rmsprop', \n", 960 | " loss = 'categorical_crossentropy',\n", 961 | " metrics = ['accuracy'])" 962 | ] 963 | }, 964 | { 965 | "cell_type": "code", 966 | "execution_count": null, 967 | "id": "f8f366fb", 968 | "metadata": {}, 969 | "outputs": [], 970 | "source": [ 971 | "history_dropout = fourth_network.fit(x_train_trans, \n", 972 | " y_train_trans, \n", 973 | " epochs=20, \n", 974 | " batch_size=128, \n", 975 | " validation_data=(x_val_trans, y_val_trans))" 976 | ] 977 | }, 978 | { 979 | "cell_type": "code", 980 | "execution_count": null, 981 | "id": "fe63f132", 982 | "metadata": { 983 | "scrolled": true 984 | }, 985 | "outputs": [], 986 | "source": [ 987 | "plot_epoch_accuracy(history_dropout.history)" 988 | ] 989 | }, 990 | { 991 | "cell_type": "markdown", 992 | "id": "0ad25319", 993 | "metadata": {}, 994 | "source": [ 995 | "After implementing dropout, do you think the model is better able to generalize?" 996 | ] 997 | }, 998 | { 999 | "cell_type": "markdown", 1000 | "id": "a4a24c3b", 1001 | "metadata": {}, 1002 | "source": [ 1003 | "## Challenge 2: Build your own neural network\n", 1004 | "\n", 1005 | "1. Build and compile your own neural network in an object called `my_network`. Feel free to choose your own:\n", 1006 | " - Architecture\n", 1007 | " - Activation Function\n", 1008 | " - Epochs\n", 1009 | " \n", 1010 | "2. Train your model, saving the results to an object called `history_my_network`." 1011 | ] 1012 | }, 1013 | { 1014 | "cell_type": "code", 1015 | "execution_count": null, 1016 | "id": "303e16f0", 1017 | "metadata": {}, 1018 | "outputs": [], 1019 | "source": [] 1020 | }, 1021 | { 1022 | "cell_type": "markdown", 1023 | "id": "ca9a307c", 1024 | "metadata": {}, 1025 | "source": [ 1026 | "Now that we have several models on hand, let's see how they perform on the holdout test set!" 
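A quick aside before evaluating: if you want to keep any of these trained models around between sessions rather than retraining them, Keras models can be written to disk and loaded back later. A minimal sketch (the filename here is just an example):

```python
# Save the trained model to an HDF5 file, then reload it in a later session.
fourth_network.save("mnist_dropout_model.h5")

from tensorflow.keras.models import load_model
restored_network = load_model("mnist_dropout_model.h5")
```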
1027 | ] 1028 | }, 1029 | { 1030 | "cell_type": "code", 1031 | "execution_count": null, 1032 | "id": "e53264eb", 1033 | "metadata": {}, 1034 | "outputs": [], 1035 | "source": [ 1036 | "def get_model_accuracy(model, x_test, y_test):\n", 1037 | " \"\"\"\n", 1038 | " Takes a model and a test set of data.\n", 1039 | " Returns the accuracy.\n", 1040 | " \"\"\"\n", 1041 | " \n", 1042 | " score = model.evaluate(x_test, y_test, verbose=0)\n", 1043 | " \n", 1044 | " accuracy = round(score[1]*100, 1)\n", 1045 | " \n", 1046 | " return accuracy" 1047 | ] 1048 | }, 1049 | { 1050 | "cell_type": "code", 1051 | "execution_count": null, 1052 | "id": "974dc653", 1053 | "metadata": {}, 1054 | "outputs": [], 1055 | "source": [ 1056 | "for i, model in enumerate([first_network, second_network, third_network]):\n", 1057 | " acc = get_model_accuracy(model, x_test_trans, y_test_trans)\n", 1058 | " print(f\"Model {i+1} has an accuracy of {acc}%\")" 1059 | ] 1060 | }, 1061 | { 1062 | "cell_type": "markdown", 1063 | "id": "1bed1dea", 1064 | "metadata": {}, 1065 | "source": [ 1066 | "## Challenge 3: Evaluate your own model\n", 1067 | "\n", 1068 | "Use your own model from challenge 2 to evaluate its general performance.\n", 1069 | "\n", 1070 | "1. Visualize the training and validation accuracy over each epoch.\n", 1071 | "2. Print the accuracy of your model on the test set." 1072 | ] 1073 | }, 1074 | { 1075 | "cell_type": "code", 1076 | "execution_count": null, 1077 | "id": "5916e30d", 1078 | "metadata": {}, 1079 | "outputs": [], 1080 | "source": [] 1081 | }, 1082 | { 1083 | "cell_type": "markdown", 1084 | "id": "4cd1b345", 1085 | "metadata": {}, 1086 | "source": [ 1087 | "### Visualize the test results" 1088 | ] 1089 | }, 1090 | { 1091 | "cell_type": "code", 1092 | "execution_count": null, 1093 | "id": "427945c0", 1094 | "metadata": {}, 1095 | "outputs": [], 1096 | "source": [ 1097 | "def plot_wrong_predictions(model, x_test, y_test):\n", 1098 | " \"\"\"\n", 1099 | " Plots 25 incorrectly predicted images.\n", 1100 | " \"\"\"\n", 1101 | " \n", 1102 | " # Back transform images\n", 1103 | " x_images = x_test.reshape(x_test.shape[0], 28, 28)\n", 1104 | " \n", 1105 | " # Format predictions and targets\n", 1106 | " predictions = model.predict(x_test)\n", 1107 | " predicted = np.argmax(predictions, axis=1)\n", 1108 | " target = np.argmax(y_test, axis = 1)\n", 1109 | " \n", 1110 | " # Get wrong indices\n", 1111 | " wrong_indices = np.where(predicted != target)[0]\n", 1112 | " \n", 1113 | " fig, axes = plt.subplots(5,5, figsize = (25,25))\n", 1114 | " axes = axes.ravel()\n", 1115 | " \n", 1116 | " for ax, index in zip(axes, wrong_indices[:26]):\n", 1117 | " ax.imshow(x_images[index], cmap=plt.cm.binary, interpolation='nearest')\n", 1118 | " ax.set_title(f\"Predicted {predicted[index]}, Actual is {target[index]}\", size = 20)\n", 1119 | " ax.axis('off')\n", 1120 | " \n", 1121 | " return plt.show()" 1122 | ] 1123 | }, 1124 | { 1125 | "cell_type": "code", 1126 | "execution_count": null, 1127 | "id": "d6513237", 1128 | "metadata": {}, 1129 | "outputs": [], 1130 | "source": [ 1131 | "plot_wrong_predictions(fourth_network, x_test_trans, y_test_trans)" 1132 | ] 1133 | }, 1134 | { 1135 | "cell_type": "markdown", 1136 | "id": "b819253a", 1137 | "metadata": {}, 1138 | "source": [ 1139 | "Do you feel our model made reasonable mistakes?" 
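Finally, tying back to the bank-deposit problem statement: in practice the model would be asked to read one digit at a time. A minimal sketch of scoring a single image with the dropout model trained above:

```python
# Predict the digit for one test image.
# model.predict expects a batch, so keep the leading batch dimension with a slice.
single_image = x_test_trans[0:1]                      # shape (1, 784)
probabilities = fourth_network.predict(single_image)  # shape (1, 10)
predicted_digit = np.argmax(probabilities, axis=1)[0]
actual_digit = np.argmax(y_test_trans[0])
print(f"Predicted: {predicted_digit}, actual: {actual_digit}")
```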
1140 | ] 1141 | } 1142 | ], 1143 | "metadata": { 1144 | "kernelspec": { 1145 | "display_name": "Python 3.9.7", 1146 | "language": "python", 1147 | "name": "python3" 1148 | }, 1149 | "language_info": { 1150 | "codemirror_mode": { 1151 | "name": "ipython", 1152 | "version": 3 1153 | }, 1154 | "file_extension": ".py", 1155 | "mimetype": "text/x-python", 1156 | "name": "python", 1157 | "nbconvert_exporter": "python", 1158 | "pygments_lexer": "ipython3", 1159 | "version": "3.9.7" 1160 | }, 1161 | "vscode": { 1162 | "interpreter": { 1163 | "hash": "9179d63e122556c6756b68a7ef958f74b99cc51ba44ce56116c8db8491528148" 1164 | } 1165 | } 1166 | }, 1167 | "nbformat": 4, 1168 | "nbformat_minor": 5 1169 | } 1170 | -------------------------------------------------------------------------------- /lessons/02-Vanilla-Convolutional-Neural-Network-Comparison.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "8a87e8f0", 6 | "metadata": {}, 7 | "source": [ 8 | "# Comparing Performance of Different Types of Neural Networks\n", 9 | "\n", 10 | "**Time**\n", 11 | "- Teaching: 15 minutes\n", 12 | "\n", 13 | "**Questions**\n", 14 | "- \"How do vanilla neural networks compare to convolutional neural networks?\"\n", 15 | "\n", 16 | "* * * * *" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "id": "2e673336", 22 | "metadata": {}, 23 | "source": [ 24 | "### Convolutional Neural Networks\n", 25 | "The neural networks we have created so far are known as *vanilla neural networks* also known as *fully-connected, feed-foward neural networks*. \n", 26 | "\n", 27 | "These have many great usecases, but for problems in computer vision, we often use a different architecture called covolutional neural networks (CNNS).\n", 28 | "\n", 29 | "We will review the the details of how CNNs work in the slides, but for now, let's just compare their efficacy in image classification to vanilla neural nets by:\n", 30 | "1. Training a vanilla neural network architecture we used in the last notebook\n", 31 | "2. Building a convolutional neural network\n", 32 | "3. 
Comparing the classification accuracy between the two models" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "id": "034bb61e", 38 | "metadata": {}, 39 | "source": [ 40 | "### Imports" 41 | ] 42 | }, 43 | { 44 | "cell_type": "code", 45 | "execution_count": null, 46 | "id": "4de327f5", 47 | "metadata": {}, 48 | "outputs": [], 49 | "source": [ 50 | "from keras.datasets import mnist\n", 51 | "from keras.models import Sequential\n", 52 | "from keras.layers import Dense\n", 53 | "from keras.layers import Conv2D\n", 54 | "from keras.layers import MaxPooling2D\n", 55 | "from keras.layers import Flatten\n", 56 | "from tensorflow.keras.utils import to_categorical\n", 57 | "from keras.models import load_model\n", 58 | "\n", 59 | "import os\n", 60 | "import matplotlib.pyplot as plt\n", 61 | "import numpy as np" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "id": "969c2a7e", 67 | "metadata": {}, 68 | "source": [ 69 | "### Input data " 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "id": "c49c530f", 76 | "metadata": {}, 77 | "outputs": [], 78 | "source": [ 79 | "def get_mnist_data(subset=True):\n", 80 | " \"\"\"\n", 81 | " Returns the MNIST dataset as a tuple:\n", 82 | " (x_train, y_train, x_val, y_val, x_test, y_test)\n", 83 | " \n", 84 | " When subset=TRUE:\n", 85 | " Returns only a subset of the mnist dataset.\n", 86 | " Especially important to use if you are on datahub and only have 1-2GB of memory.\n", 87 | " \"\"\"\n", 88 | " \n", 89 | " if subset:\n", 90 | " N_TRAIN = 5000\n", 91 | " N_VALIDATION = 1000\n", 92 | " N_TEST = 1000\n", 93 | " else:\n", 94 | " N_TRAIN = 48000\n", 95 | " N_VALIDATION = 12000\n", 96 | " N_TEST = 10000\n", 97 | " \n", 98 | " (x_train_and_val, y_train_and_val), (x_test, y_test) = mnist.load_data()\n", 99 | " \n", 100 | " x_train = x_train_and_val[:N_TRAIN,:,:]\n", 101 | " y_train = y_train_and_val[:N_TRAIN]\n", 102 | " \n", 103 | " x_val = x_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION,:,:]\n", 104 | " y_val = y_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION]\n", 105 | " \n", 106 | " x_test = x_test[:N_TEST]\n", 107 | " y_test = y_test[:N_TEST]\n", 108 | " \n", 109 | " return x_train, y_train, x_val, y_val, x_test, y_test\n", 110 | " \n", 111 | " " 112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": null, 117 | "id": "7c261f81", 118 | "metadata": {}, 119 | "outputs": [], 120 | "source": [ 121 | "x_train, y_train, x_val, y_val, x_test, y_test = get_mnist_data(subset=True)" 122 | ] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "id": "30230c81", 127 | "metadata": {}, 128 | "source": [ 129 | "### Transformation to one dimension\n", 130 | "\n", 131 | "This is the same transformation we used in the last notebook to flatten our image pixels to one dimension for use in a vanilla neural network." 132 | ] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "execution_count": null, 137 | "id": "49a18d13", 138 | "metadata": {}, 139 | "outputs": [], 140 | "source": [ 141 | "def one_dim_transform_data(xdata, ydata):\n", 142 | " \"\"\"\n", 143 | " Transforms image data:\n", 144 | " 1. Flattens pixel dimensions from 2 -> 1\n", 145 | " 2. 
Scales pixel values between [0,1]\n", 146 | " Transforms target data (ydata):\n", 147 | " - Formats targets as one hot encoded columns\n", 148 | " \"\"\"\n", 149 | " \n", 150 | " x = {}\n", 151 | " for name, partition in zip([\"x_train\", \"x_val\", \"x_test\"],xdata):\n", 152 | " flatten = partition.reshape((partition.shape[0], 28 * 28))\n", 153 | " scaled = flatten.astype('float32') / 255\n", 154 | " x[name] = scaled\n", 155 | " \n", 156 | " y = {}\n", 157 | " for name, partition in zip([\"y_train\", \"y_val\", \"y_test\"],ydata):\n", 158 | " y[name] = to_categorical(partition)\n", 159 | " \n", 160 | " return x['x_train'], y['y_train'], x['x_val'], y['y_val'], x['x_test'], y['y_test']" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": null, 166 | "id": "8fee7250", 167 | "metadata": {}, 168 | "outputs": [], 169 | "source": [ 170 | "x_train_trans, y_train_trans, x_val_trans, y_val_trans, x_test_trans, y_test_trans = one_dim_transform_data([x_train, x_val, x_test],\n", 171 | " [y_train, y_val, y_test])" 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "id": "4808ecb1", 177 | "metadata": {}, 178 | "source": [ 179 | "### Train a vanilla neural network (same as the third model we built in notebook 01)" 180 | ] 181 | }, 182 | { 183 | "cell_type": "code", 184 | "execution_count": null, 185 | "id": "0c002279", 186 | "metadata": {}, 187 | "outputs": [], 188 | "source": [ 189 | "vanilla_model = Sequential()\n", 190 | "vanilla_model.add(Dense(512, activation= \"relu\", input_shape=(28*28,)))\n", 191 | "vanilla_model.add(Dense(512, activation= \"relu\"))\n", 192 | "vanilla_model.add(Dense(10, activation=\"softmax\"))\n", 193 | "\n", 194 | "vanilla_model.compile(optimizer = 'rmsprop', \n", 195 | " loss = 'categorical_crossentropy',\n", 196 | " metrics = ['accuracy'])\n", 197 | "\n", 198 | "vanilla_history = vanilla_model.fit(x_train_trans, \n", 199 | " y_train_trans, \n", 200 | " epochs=10, \n", 201 | " batch_size=128, \n", 202 | " validation_data=(x_val_trans, y_val_trans))\n" 203 | ] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "id": "973e9864", 208 | "metadata": {}, 209 | "source": [ 210 | "### Transformation back to two dimensions\n", 211 | "While vanilla neural networks primarily handle one-dimensional input data, convolutional neural networks work well on multidimensional input data!\n", 212 | "\n", 213 | "We will backtransform our 1-dimensional data into 2-dimensions.\n", 214 | "\n", 215 | "*Note* - We also must add a depth/channel dimension to our data. Color pictures have 3 channels for Red, Green, and Blue while our black and white mnist images only have 1." 
216 | ] 217 | }, 218 | { 219 | "cell_type": "code", 220 | "execution_count": null, 221 | "id": "6f5d0762", 222 | "metadata": {}, 223 | "outputs": [], 224 | "source": [ 225 | "def back_transform_2d(data):\n", 226 | " \"\"\"\n", 227 | " Takes a list of flattened input pixel data.\n", 228 | " Reshapes pixel data from a single vector to two dimensions.\n", 229 | " \"\"\"\n", 230 | " \n", 231 | " two_dimensional_data = []\n", 232 | " \n", 233 | " for d in data:\n", 234 | " # reshape to [index for image, pixel row, pixel column, channels]\n", 235 | " transformed = d.reshape(d.shape[0], 28, 28, 1)\n", 236 | " two_dimensional_data.append(transformed)\n", 237 | " \n", 238 | " return [t for t in two_dimensional_data]" 239 | ] 240 | }, 241 | { 242 | "cell_type": "code", 243 | "execution_count": null, 244 | "id": "51c002f7", 245 | "metadata": {}, 246 | "outputs": [], 247 | "source": [ 248 | "x_train_2d, x_val_2d, x_test_2d = back_transform_2d([x_train_trans, x_val_trans, x_test_trans])" 249 | ] 250 | }, 251 | { 252 | "cell_type": "markdown", 253 | "id": "917387c3", 254 | "metadata": {}, 255 | "source": [ 256 | "Let's confirm that we succesfully reshaped our data back to 28x28 pixels." 257 | ] 258 | }, 259 | { 260 | "cell_type": "code", 261 | "execution_count": null, 262 | "id": "48699c5a", 263 | "metadata": {}, 264 | "outputs": [], 265 | "source": [ 266 | "x_train_trans.shape" 267 | ] 268 | }, 269 | { 270 | "cell_type": "code", 271 | "execution_count": null, 272 | "id": "0d0d3c4d", 273 | "metadata": {}, 274 | "outputs": [], 275 | "source": [ 276 | "x_train_2d.shape" 277 | ] 278 | }, 279 | { 280 | "cell_type": "markdown", 281 | "id": "a08eae10", 282 | "metadata": {}, 283 | "source": [ 284 | "Success!" 285 | ] 286 | }, 287 | { 288 | "cell_type": "markdown", 289 | "id": "9c142d96", 290 | "metadata": {}, 291 | "source": [ 292 | "### Convolutional neural network\n", 293 | "\n", 294 | "Ignore the details regarding implementation for now, just know we are building a rather small convolutional neural network here to compare with our vanilla neural network." 
295 | ] 296 | }, 297 | { 298 | "cell_type": "code", 299 | "execution_count": null, 300 | "id": "102ec69f", 301 | "metadata": {}, 302 | "outputs": [], 303 | "source": [ 304 | "convnet = Sequential()\n", 305 | "\n", 306 | "convnet.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))\n", 307 | "convnet.add(MaxPooling2D((2, 2)))\n", 308 | "convnet.add(Conv2D(64, (3, 3), activation='relu'))\n", 309 | "convnet.add(MaxPooling2D((2, 2)))\n", 310 | "convnet.add(Conv2D(64, (3, 3), activation='relu'))\n", 311 | "\n", 312 | "convnet.add(Flatten())\n", 313 | "convnet.add(Dense(64, activation='relu'))\n", 314 | "convnet.add(Dense(10, activation='softmax'))\n", 315 | "\n", 316 | "convnet.compile(optimizer='rmsprop',\n", 317 | " loss='categorical_crossentropy',\n", 318 | " metrics=['accuracy'])" 319 | ] 320 | }, 321 | { 322 | "cell_type": "markdown", 323 | "id": "81fdf466", 324 | "metadata": {}, 325 | "source": [ 326 | "### Comparing architectures and number of parameters" 327 | ] 328 | }, 329 | { 330 | "cell_type": "code", 331 | "execution_count": null, 332 | "id": "630b1dbe", 333 | "metadata": {}, 334 | "outputs": [], 335 | "source": [ 336 | "vanilla_model.summary()" 337 | ] 338 | }, 339 | { 340 | "cell_type": "code", 341 | "execution_count": null, 342 | "id": "243f4613", 343 | "metadata": {}, 344 | "outputs": [], 345 | "source": [ 346 | "convnet.summary()" 347 | ] 348 | }, 349 | { 350 | "cell_type": "markdown", 351 | "id": "fe1f0b09", 352 | "metadata": {}, 353 | "source": [ 354 | "Notice any interesting differences between the two model architectures?" 355 | ] 356 | }, 357 | { 358 | "cell_type": "markdown", 359 | "id": "07b23d7d", 360 | "metadata": {}, 361 | "source": [ 362 | "### Convolutional Neural Network Training" 363 | ] 364 | }, 365 | { 366 | "cell_type": "code", 367 | "execution_count": null, 368 | "id": "c1e481e3", 369 | "metadata": {}, 370 | "outputs": [], 371 | "source": [ 372 | "convnet_history = convnet.fit(x_train_2d,\n", 373 | " y_train_trans, \n", 374 | " epochs=10,\n", 375 | " batch_size=64,\n", 376 | " validation_data=(x_val_2d, y_val_trans))" 377 | ] 378 | }, 379 | { 380 | "cell_type": "markdown", 381 | "id": "653ad920", 382 | "metadata": {}, 383 | "source": [ 384 | "### Accuracy over epochs" 385 | ] 386 | }, 387 | { 388 | "cell_type": "code", 389 | "execution_count": null, 390 | "id": "b475535c", 391 | "metadata": {}, 392 | "outputs": [], 393 | "source": [ 394 | "def plot_epoch_accuracy(history_dict):\n", 395 | " \"\"\"\n", 396 | " Plots the training and validation accuracy of a neural network.\n", 397 | " \"\"\"\n", 398 | " \n", 399 | " acc = history_dict['accuracy']\n", 400 | " val_acc = history_dict['val_accuracy']\n", 401 | " epochs = range(1, len(acc) + 1)\n", 402 | " plt.plot(epochs, acc, color = 'navy', alpha = 0.8, label='Training Accuracy')\n", 403 | " plt.plot(epochs, val_acc, color = 'green', label='Validation Accuracy')\n", 404 | " plt.title('Training and validation Accuracy')\n", 405 | " plt.xlabel('Epochs')\n", 406 | " plt.ylabel('Accuracy')\n", 407 | " plt.legend()\n", 408 | " return plt.show()" 409 | ] 410 | }, 411 | { 412 | "cell_type": "code", 413 | "execution_count": null, 414 | "id": "daab6643", 415 | "metadata": {}, 416 | "outputs": [], 417 | "source": [ 418 | "plot_epoch_accuracy(convnet_history.history)" 419 | ] 420 | }, 421 | { 422 | "cell_type": "markdown", 423 | "id": "ebbba5a7", 424 | "metadata": {}, 425 | "source": [ 426 | "### Accuracy on the test data\n", 427 | "So how well does our CNN perform on the test data?" 
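Before answering that, it is worth putting a number on the difference you may have spotted in the two summaries above: convolutional layers reuse a small filter across the whole image, so they need far fewer parameters than dense layers that connect every pixel to every unit. The Conv2D counts can be reproduced by hand, following the (kernel height × kernel width × input channels + 1 bias) × filters rule:

```python
# Parameter counts for the three Conv2D layers defined above.
first_conv = (3 * 3 * 1 + 1) * 32    # 320 parameters
second_conv = (3 * 3 * 32 + 1) * 64  # 18,496 parameters
third_conv = (3 * 3 * 64 + 1) * 64   # 36,928 parameters
print(first_conv, second_conv, third_conv)
```

Compare those with the hundreds of thousands of parameters in the vanilla model's first Dense layer alone.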
428 | ] 429 | }, 430 | { 431 | "cell_type": "code", 432 | "execution_count": null, 433 | "id": "aa409212", 434 | "metadata": {}, 435 | "outputs": [], 436 | "source": [ 437 | "def get_model_accuracy(model, x_test, y_test):\n", 438 | " \"\"\"\n", 439 | " Takes a model and a test set of data.\n", 440 | " Returns the accuracy.\n", 441 | " \"\"\"\n", 442 | " \n", 443 | " score = model.evaluate(x_test, y_test, verbose=0)\n", 444 | " \n", 445 | " accuracy = round(score[1]*100, 1)\n", 446 | " \n", 447 | " return accuracy\n", 448 | "\n" 449 | ] 450 | }, 451 | { 452 | "cell_type": "code", 453 | "execution_count": null, 454 | "id": "69c31c66", 455 | "metadata": {}, 456 | "outputs": [], 457 | "source": [ 458 | "vanilla_accuracy = get_model_accuracy(vanilla_model, x_test_trans, y_test_trans)\n", 459 | "convnet_accuracy = get_model_accuracy(convnet,x_test_2d, y_test_trans)" 460 | ] 461 | }, 462 | { 463 | "cell_type": "code", 464 | "execution_count": null, 465 | "id": "8b81c3ad", 466 | "metadata": {}, 467 | "outputs": [], 468 | "source": [ 469 | "print(f\"Classification accuracy results: \\n\\nVanilla Neural Network: {vanilla_accuracy}%\\nConvolution Neural Network: {convnet_accuracy}%\")" 470 | ] 471 | }, 472 | { 473 | "cell_type": "code", 474 | "execution_count": null, 475 | "id": "08352551", 476 | "metadata": {}, 477 | "outputs": [], 478 | "source": [ 479 | "def plot_wrong_predictions(model, x_test, y_test, title = \"\"):\n", 480 | " \"\"\"\n", 481 | " Plots 16 incorrectly predicted images.\n", 482 | " \"\"\"\n", 483 | " \n", 484 | " # Back transform images\n", 485 | " x_images = x_test.reshape(x_test.shape[0], 28, 28)\n", 486 | " \n", 487 | " # Format predictions and targets\n", 488 | " predictions = model.predict(x_test)\n", 489 | " predicted = np.argmax(predictions, axis=1)\n", 490 | " target = np.argmax(y_test, axis = 1)\n", 491 | " \n", 492 | " # Get wrong indices\n", 493 | " wrong_indices = np.where(predicted != target)[0]\n", 494 | " \n", 495 | " fig, axes = plt.subplots(4,4, figsize = (30,30))\n", 496 | " fig.suptitle(title, fontsize=30)\n", 497 | " \n", 498 | " axes = axes.ravel()\n", 499 | " \n", 500 | " for ax, index in zip(axes, wrong_indices[:17]):\n", 501 | " ax.imshow(x_images[index], cmap=plt.cm.binary, interpolation='nearest')\n", 502 | " ax.set_title(f\"Predicted {predicted[index]}, Actual is {target[index]}\", size = 25)\n", 503 | " ax.axis('off')\n", 504 | " \n", 505 | " return plt.show()" 506 | ] 507 | }, 508 | { 509 | "cell_type": "markdown", 510 | "id": "d0357ddc", 511 | "metadata": {}, 512 | "source": [ 513 | "### Comparing wrong predictions\n", 514 | "\n", 515 | "Let's visualize incorrect wrong predictions between these two models and see if we can get some insight into how reasonable these mistakes are." 
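Before plotting, it can also help to know how many test images each model misclassifies. A small sketch that reuses the same argmax logic as `plot_wrong_predictions` above (the variable names assume the models and test arrays already defined in this notebook):

```python
import numpy as np

for name, model, x in [("Vanilla", vanilla_model, x_test_trans),
                       ("Convnet", convnet, x_test_2d)]:
    predicted = np.argmax(model.predict(x), axis=1)   # predicted class per image
    target = np.argmax(y_test_trans, axis=1)          # true class per image
    n_wrong = int(np.sum(predicted != target))
    print(f"{name}: {n_wrong} of {len(target)} test images misclassified")
```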
516 | ] 517 | }, 518 | { 519 | "cell_type": "code", 520 | "execution_count": null, 521 | "id": "9766aa36", 522 | "metadata": {}, 523 | "outputs": [], 524 | "source": [ 525 | "plot_wrong_predictions(vanilla_model, x_test_trans, y_test_trans, title = \"Wrong Predictions in Vanilla Neural Networks\")" 526 | ] 527 | }, 528 | { 529 | "cell_type": "code", 530 | "execution_count": null, 531 | "id": "9c477ee6", 532 | "metadata": {}, 533 | "outputs": [], 534 | "source": [ 535 | "plot_wrong_predictions(convnet, x_test_2d, y_test_trans, title = \"Wrong Predictions in Convolutional Neural Networks\")" 536 | ] 537 | }, 538 | { 539 | "cell_type": "code", 540 | "execution_count": null, 541 | "id": "3c13dced", 542 | "metadata": {}, 543 | "outputs": [], 544 | "source": [] 545 | } 546 | ], 547 | "metadata": { 548 | "kernelspec": { 549 | "display_name": "Python 3.9.7", 550 | "language": "python", 551 | "name": "python3" 552 | }, 553 | "language_info": { 554 | "codemirror_mode": { 555 | "name": "ipython", 556 | "version": 3 557 | }, 558 | "file_extension": ".py", 559 | "mimetype": "text/x-python", 560 | "name": "python", 561 | "nbconvert_exporter": "python", 562 | "pygments_lexer": "ipython3", 563 | "version": "3.9.7" 564 | }, 565 | "vscode": { 566 | "interpreter": { 567 | "hash": "9179d63e122556c6756b68a7ef958f74b99cc51ba44ce56116c8db8491528148" 568 | } 569 | } 570 | }, 571 | "nbformat": 4, 572 | "nbformat_minor": 5 573 | } 574 | -------------------------------------------------------------------------------- /lessons/03-Convolutional-Neural-Networks.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "d434da15", 6 | "metadata": {}, 7 | "source": [ 8 | "# Convolutional Neural Networks\n", 9 | "\n", 10 | "**Time**\n", 11 | "- Teaching: 1.5 hours\n", 12 | "- Challenges: 30 minutes\n", 13 | "\n", 14 | "**Questions**\n", 15 | "- \"What is CIFAR10 and how does the modeling process and challenges differ from MNIST?\"\n", 16 | "- \"How do convolutional neural networks differ from vanilla neural networks?\"\n", 17 | "- \"How do we write python code to develop convolutional neural networks?\"\n", 18 | "- \"What options do we have to train large models if my personal machine can't handle it?\"\n", 19 | "\n", 20 | "\n", 21 | "**Learning Objectives**\n", 22 | "- \"Understand the process of data preprocessing for deep learning.\"\n", 23 | "- \"Build python functions that help us process and visualize our data.\"\n", 24 | "- \"Take a peek at the machinery underlying convolutional neural networks.\"\n", 25 | "- \"Understand the hardware constrains in deep learning.\"\n", 26 | "\n", 27 | "* * * * *" 28 | ] 29 | }, 30 | { 31 | "cell_type": "markdown", 32 | "id": "f26b2889", 33 | "metadata": {}, 34 | "source": [ 35 | "## Import packages\n", 36 | "\n", 37 | "For this notebook, instead of importing only specific functions, we will also import modules that contain functions.\n", 38 | "\n", 39 | "**Old way:**\n", 40 | "\n", 41 | "`from keras.layers import Dense`\n", 42 | "\n", 43 | "`model.add(Dense(...))`\n", 44 | "\n", 45 | "**New way:**\n", 46 | "\n", 47 | "`from keras import layers`\n", 48 | "\n", 49 | "`model.add(layers.Dense(...))`\n", 50 | "\n", 51 | "But why change it up? I had trouble myself in the past understanding the way modules work, so code would break due simply to the way imports were done. Let's avoid that by getting comfortable with python modules!" 
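A minimal side-by-side sketch of the two import styles; both snippets create the same layer object, the only difference is whether the module name is kept as a prefix:

```python
# Old way: import the class directly
from keras.layers import Dense
layer_a = Dense(64, activation="relu")

# New way: import the module and keep the prefix, so it stays obvious
# where Dense comes from when you read the code later
from keras import layers
layer_b = layers.Dense(64, activation="relu")
```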
52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": null, 57 | "id": "339f88d8", 58 | "metadata": {}, 59 | "outputs": [], 60 | "source": [ 61 | "from keras import layers\n", 62 | "from keras import models\n", 63 | "from keras.datasets import cifar10\n", 64 | "from tensorflow.keras.utils import to_categorical\n", 65 | "from keras.preprocessing.image import ImageDataGenerator" 66 | ] 67 | }, 68 | { 69 | "cell_type": "code", 70 | "execution_count": null, 71 | "id": "f9a5eee2", 72 | "metadata": {}, 73 | "outputs": [], 74 | "source": [ 75 | "import numpy as np\n", 76 | "import matplotlib.pyplot as plt" 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "id": "c1358987", 82 | "metadata": {}, 83 | "source": [ 84 | "### CIFAR10\n", 85 | "\n", 86 | "So we get sidetracked at work and we want to instead build a classifier for animals and vehicles!\n", 87 | "\n", 88 | "We shop around and find an interesting image dataset called [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html).\n", 89 | "\n", 90 | "This dataset consists of:\n", 91 | "- 60,000 total images\n", 92 | "- 10 classes (6,000 images per class)\n", 93 | "\n", 94 | "Example images of each class are shown below:\n", 95 | "\n", 96 | "![CIFAR10 classes](https://maet3608.github.io/nuts-ml/_images/cifar10.png)" 97 | ] 98 | }, 99 | { 100 | "cell_type": "markdown", 101 | "id": "e06b0fa1", 102 | "metadata": {}, 103 | "source": [ 104 | "### Loading the dataset" 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": null, 110 | "id": "bb4074f2", 111 | "metadata": {}, 112 | "outputs": [], 113 | "source": [ 114 | "def load_cifar10(subset = True):\n", 115 | " \"\"\"\n", 116 | " Loads a training, validation, and test set of CIFAR10 images.\n", 117 | " \n", 118 | " When subset=TRUE:\n", 119 | " Returns only a subset of the mnist dataset.\n", 120 | " Especially important to use if you are on datahub and only have 1-2GB of memory.\n", 121 | " \"\"\"\n", 122 | " if subset:\n", 123 | " N_TRAIN = 8000\n", 124 | " N_VALIDATION = 2000\n", 125 | " N_TEST = 2000\n", 126 | " else:\n", 127 | " N_TRAIN = 40000\n", 128 | " N_VALIDATION = 10000\n", 129 | " N_TEST = 10000\n", 130 | " \n", 131 | " (x_train_and_val, y_train_and_val), (x_test, y_test) = cifar10.load_data()\n", 132 | " \n", 133 | " x_train = x_train_and_val[:N_TRAIN,:,:]\n", 134 | " y_train = y_train_and_val[:N_TRAIN]\n", 135 | " \n", 136 | " x_val = x_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION,:,:]\n", 137 | " y_val = y_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION]\n", 138 | " \n", 139 | " x_test = x_test[:N_TEST]\n", 140 | " y_test = y_test[:N_TEST]\n", 141 | " \n", 142 | " return x_train, y_train, x_val, y_val, x_test, y_test\n", 143 | " " 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": null, 149 | "id": "7b14bae8", 150 | "metadata": {}, 151 | "outputs": [], 152 | "source": [ 153 | "x_train, y_train, x_val, y_val, x_test, y_test = load_cifar10()" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "id": "0dbf7c22", 159 | "metadata": {}, 160 | "source": [ 161 | "### Input Data Due Dillegence\n", 162 | "\n", 163 | "We will borrow some of our previous functions in order to get a feel for CIFAR10." 
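One reason `load_cifar10` defaults to `subset=True` is memory: once the pixel values are scaled to float32 later in this notebook, the arrays are four times larger than the original uint8 data. A rough back-of-the-envelope sketch, assuming the full 40,000-image training split versus the default 8,000-image subset:

```python
# 32 x 32 pixels x 3 channels, 4 bytes per float32 value
bytes_per_image = 32 * 32 * 3 * 4

print(f"Full training split: ~{40_000 * bytes_per_image / 1e9:.2f} GB")  # ~0.49 GB
print(f"Default subset:      ~{8_000 * bytes_per_image / 1e6:.0f} MB")   # ~98 MB
```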
164 | ] 165 | }, 166 | { 167 | "cell_type": "code", 168 | "execution_count": null, 169 | "id": "e9206452", 170 | "metadata": {}, 171 | "outputs": [], 172 | "source": [ 173 | "def data_summary(data):\n", 174 | " \"\"\"\n", 175 | " Takes a list of our data partitions and returns the shape.\n", 176 | " \"\"\"\n", 177 | " \n", 178 | " for i, data_partition in enumerate(data):\n", 179 | " if i == 0:\n", 180 | " print(\"Training Data\")\n", 181 | " elif i == 2:\n", 182 | " print()\n", 183 | " print(\"Validation Data\")\n", 184 | " elif i == 4:\n", 185 | " print()\n", 186 | " print(\"Testing Data\")\n", 187 | "\n", 188 | " print(f\"Shape: {data_partition.shape}\")" 189 | ] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": null, 194 | "id": "4efbae49", 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [ 198 | "data_summary([x_train, y_train, x_val, y_val, x_test, y_test])" 199 | ] 200 | }, 201 | { 202 | "cell_type": "code", 203 | "execution_count": null, 204 | "id": "0dd71db1", 205 | "metadata": {}, 206 | "outputs": [], 207 | "source": [ 208 | "one_image = x_train[0]\n", 209 | "one_image.shape" 210 | ] 211 | }, 212 | { 213 | "cell_type": "code", 214 | "execution_count": null, 215 | "id": "23a92346", 216 | "metadata": { 217 | "scrolled": true 218 | }, 219 | "outputs": [], 220 | "source": [ 221 | "plt.imshow(one_image);" 222 | ] 223 | }, 224 | { 225 | "cell_type": "markdown", 226 | "id": "d2c1651f", 227 | "metadata": {}, 228 | "source": [ 229 | "Can you tell what the class of the image above is!?\n", 230 | "\n", 231 | "Let's show more images with their correct class from the CIFAR10 dataset!" 232 | ] 233 | }, 234 | { 235 | "cell_type": "code", 236 | "execution_count": null, 237 | "id": "cda0e803", 238 | "metadata": {}, 239 | "outputs": [], 240 | "source": [ 241 | "def plot_images(x, y, random=False):\n", 242 | " \"\"\"\n", 243 | " Plots 25 images from x data with titles set as y.\n", 244 | " Set random=True if you want random images rather than the first 25.\n", 245 | " \"\"\"\n", 246 | " \n", 247 | " if random:\n", 248 | " indices = np.random.choice(range(y.shape[0]), 25, replace=False)\n", 249 | " \n", 250 | " else:\n", 251 | " indices = np.array(range(25))\n", 252 | " \n", 253 | " fig, axes = plt.subplots(5,5, figsize = (15,15))\n", 254 | " axes = axes.ravel()\n", 255 | " \n", 256 | " for ax, index in zip(axes, indices):\n", 257 | " ax.imshow(x[index])\n", 258 | " ax.set_title(f\"Class: {y[index][0]}\", size=15)\n", 259 | " \n", 260 | " plt.tight_layout()\n", 261 | " \n", 262 | " return plt.show()" 263 | ] 264 | }, 265 | { 266 | "cell_type": "code", 267 | "execution_count": null, 268 | "id": "7274e165", 269 | "metadata": {}, 270 | "outputs": [], 271 | "source": [ 272 | "plot_images(x_train, y_train)" 273 | ] 274 | }, 275 | { 276 | "cell_type": "markdown", 277 | "id": "72e6ea76", 278 | "metadata": {}, 279 | "source": [ 280 | "Oh no, the class labels are just digits!" 281 | ] 282 | }, 283 | { 284 | "cell_type": "markdown", 285 | "id": "4f040bc6", 286 | "metadata": {}, 287 | "source": [ 288 | "## Challenge 1: Translate Classes\n", 289 | "\n", 290 | "Create a function `translate_class()` that uses the correct class name for the target classes (truck, horse, etc..).\n", 291 | "- Use the [keras CIFAR10 documentation](https://keras.io/api/datasets/cifar10/) as a guide to know how the classes are labeled.\n", 292 | "- The function should take a class index [0-9]\n", 293 | "- The function should return the correct corresponding CIFAR10 class category." 
294 | ] 295 | }, 296 | { 297 | "cell_type": "code", 298 | "execution_count": null, 299 | "id": "40be335a", 300 | "metadata": {}, 301 | "outputs": [], 302 | "source": [ 303 | "def translate_class(class_integer):\n", 304 | " # Your code here\n", 305 | " return None" 306 | ] 307 | }, 308 | { 309 | "cell_type": "markdown", 310 | "id": "8f3b995e", 311 | "metadata": {}, 312 | "source": [ 313 | "## Challenge 2: Plotting Image Classes\n", 314 | "\n", 315 | "Create a new function `my_imageplotter()` that reuses code from our `plot_images()` function and incorporates the `translate_class()` function to give us the correct class titles in our images." 316 | ] 317 | }, 318 | { 319 | "cell_type": "code", 320 | "execution_count": null, 321 | "id": "e6d0dcfa", 322 | "metadata": { 323 | "scrolled": true 324 | }, 325 | "outputs": [], 326 | "source": [ 327 | "def my_imageplotter(x, y, random=False):\n", 328 | " \"\"\"\n", 329 | " Plots 25 images from x data with titles set as y.\n", 330 | " Set random=True if you want random images rather than the first 25.\n", 331 | " \"\"\"\n", 332 | " \n", 333 | " # Add code to this function that translates the image class!\n", 334 | " \n", 335 | " if random:\n", 336 | " indices = np.random.choice(range(y.shape[0]), 25, replace=False)\n", 337 | " \n", 338 | " else:\n", 339 | " indices = np.array(range(25))\n", 340 | " \n", 341 | " fig, axes = plt.subplots(5,5, figsize = (15,15))\n", 342 | " axes = axes.ravel()\n", 343 | " \n", 344 | " for ax, index in zip(axes, indices):\n", 345 | " ax.imshow(x[index])\n", 346 | " ax.set_title(f\"Class: {y[index][0]}\", size=15)\n", 347 | " \n", 348 | " plt.tight_layout()\n", 349 | " \n", 350 | " return plt.show()" 351 | ] 352 | }, 353 | { 354 | "cell_type": "markdown", 355 | "id": "97d5d98c", 356 | "metadata": {}, 357 | "source": [ 358 | "Test your function below!" 359 | ] 360 | }, 361 | { 362 | "cell_type": "code", 363 | "execution_count": null, 364 | "id": "ecd13b7f", 365 | "metadata": {}, 366 | "outputs": [], 367 | "source": [ 368 | "my_imageplotter(x_train, y_train)" 369 | ] 370 | }, 371 | { 372 | "cell_type": "markdown", 373 | "id": "e74b3685", 374 | "metadata": {}, 375 | "source": [ 376 | "Let's make sure we have balanced class distributions." 377 | ] 378 | }, 379 | { 380 | "cell_type": "code", 381 | "execution_count": null, 382 | "id": "ef11d82a", 383 | "metadata": {}, 384 | "outputs": [], 385 | "source": [ 386 | "def plot_target_distributions(targets, titles):\n", 387 | " \"\"\"\n", 388 | " Returns the distribution of target classes.\n", 389 | " \"\"\"\n", 390 | " \n", 391 | " fig, axes = plt.subplots(3,1, figsize = (10,10))\n", 392 | " \n", 393 | " for ax, target, title in zip(axes, targets, titles):\n", 394 | " ax.hist(target) \n", 395 | " ax.set_title(f\"{title} Class Distribution\")\n", 396 | " \n", 397 | " return plt.show()" 398 | ] 399 | }, 400 | { 401 | "cell_type": "code", 402 | "execution_count": null, 403 | "id": "336aab80", 404 | "metadata": {}, 405 | "outputs": [], 406 | "source": [ 407 | "plot_target_distributions([y_train, y_val, y_test], [\"Train\", \"Validation\", \"Test\"])" 408 | ] 409 | }, 410 | { 411 | "cell_type": "markdown", 412 | "id": "04bda62d", 413 | "metadata": {}, 414 | "source": [ 415 | "Before writing code to build a convolutional neural network, let's quickly review what the significance a convolutional neural network is, the convolutional layer." 
416 | ] 417 | }, 418 | { 419 | "cell_type": "markdown", 420 | "id": "3a227afc", 421 | "metadata": {}, 422 | "source": [ 423 | "#### Convolutional layers\n", 424 | "As reviewed in our slides, convolutional layers contain *filters* that we *stride* along our image to find matching patterns, producing a *response map*. \n", 425 | "\n", 426 | "![title](https://qph.fs.quoracdn.net/main-qimg-6428cf505ac1e9e1cf462e1ec8fe9a68)\n", 427 | "- The green boxes represent the pixels in our input image.\n", 428 | "- The yellow boxes represent the *filter*.\n", 429 | "- The movement of the filter represents the *stride*.\n", 430 | "- The red boxes represent our *response map*." 431 | ] 432 | }, 433 | { 434 | "cell_type": "markdown", 435 | "id": "43c36700", 436 | "metadata": {}, 437 | "source": [ 438 | "### Data Transformation" 439 | ] 440 | }, 441 | { 442 | "cell_type": "markdown", 443 | "id": "d1dcbed5", 444 | "metadata": {}, 445 | "source": [ 446 | "Notice that the input dimensions of our images in convolutional neural networks do not to be flattened to one dimension. Instead, we can have multidimensional input data which is perfect for our 3-dimensional pixel data (row pixel, column pixel, channel).\n", 447 | "\n", 448 | "Let's scale our pixel values between [0,1] and transform the targets to one hot encoded categorical columns." 449 | ] 450 | }, 451 | { 452 | "cell_type": "code", 453 | "execution_count": null, 454 | "id": "50a73818", 455 | "metadata": {}, 456 | "outputs": [], 457 | "source": [ 458 | "def three_dim_transform(x_data, y_data):\n", 459 | " \"\"\"\n", 460 | " Transforms image data:\n", 461 | " - Scales pixel values between [0,1]\n", 462 | " Transforms target data (ydata):\n", 463 | " - Formats targets as one hot encoded columns\n", 464 | " \"\"\"\n", 465 | " \n", 466 | " x = {}\n", 467 | " for name, partition in zip([\"x_train\", \"x_val\", \"x_test\"],x_data):\n", 468 | " scaled = partition.astype('float32') / 255\n", 469 | " x[name] = scaled\n", 470 | " \n", 471 | " y = {}\n", 472 | " for name, partition in zip([\"y_train\", \"y_val\", \"y_test\"],y_data):\n", 473 | " y[name] = to_categorical(partition)\n", 474 | " \n", 475 | " return x['x_train'], y['y_train'], x['x_val'], y['y_val'], x['x_test'], y['y_test']\n", 476 | " " 477 | ] 478 | }, 479 | { 480 | "cell_type": "code", 481 | "execution_count": null, 482 | "id": "46de6989", 483 | "metadata": {}, 484 | "outputs": [], 485 | "source": [ 486 | "x_train_trans, y_train_trans, x_val_trans, y_val_trans, x_test_trans, y_test_trans = three_dim_transform([x_train, x_val, x_test],\n", 487 | " [y_train, y_val, y_test])" 488 | ] 489 | }, 490 | { 491 | "cell_type": "markdown", 492 | "id": "fdcb2ccd", 493 | "metadata": {}, 494 | "source": [ 495 | "### Convolutional Neural Network Model Syntax" 496 | ] 497 | }, 498 | { 499 | "cell_type": "markdown", 500 | "id": "7fc4f866", 501 | "metadata": {}, 502 | "source": [ 503 | "Let's jump right in and build a convolutional neural network!\n", 504 | "\n", 505 | "We initialize a convolutional neural network the same way we did a vanilla neural network." 506 | ] 507 | }, 508 | { 509 | "cell_type": "code", 510 | "execution_count": null, 511 | "id": "d35c455a", 512 | "metadata": {}, 513 | "outputs": [], 514 | "source": [ 515 | "convnet = models.Sequential()" 516 | ] 517 | }, 518 | { 519 | "cell_type": "markdown", 520 | "id": "73a1a5aa", 521 | "metadata": {}, 522 | "source": [ 523 | "We now add our first convolutional layer. 
\n", 524 | "\n", 525 | "*Note: I use the argument names here to clarify what parameters we are passing into our neural network including default values.*" 526 | ] 527 | }, 528 | { 529 | "cell_type": "code", 530 | "execution_count": null, 531 | "id": "cc10feec", 532 | "metadata": {}, 533 | "outputs": [], 534 | "source": [ 535 | "convnet.add(layers.Conv2D(filters = 32,\n", 536 | " kernel_size = (3, 3),\n", 537 | " strides = (1,1),\n", 538 | " activation= \"relu\",\n", 539 | " input_shape=(32, 32, 3)))" 540 | ] 541 | }, 542 | { 543 | "cell_type": "markdown", 544 | "id": "27738946", 545 | "metadata": {}, 546 | "source": [ 547 | "We define our first convolutional layer using:\n", 548 | "`.add(layers.Conv2D(...))`\n", 549 | "- This adds a convolutional layer to our model object.\n", 550 | "\n", 551 | "`filters = 32`\n", 552 | "- Initializes 32 filters total.\n", 553 | "- Each filter slides along the entire image.\n", 554 | "- Each filter produce a response map.\n", 555 | "\n", 556 | "`kernel_size = (3, 3)`\n", 557 | "- Specifies the \"kernel\" size of the filters.\n", 558 | "- (3 x 3) : (height x width)\n", 559 | " \n", 560 | "`strides = (1,1)`\n", 561 | "- The filter strides by one unit in both the horizontal and vertical dimension.\n", 562 | "- (1 x 1) or (height stride x width stride)\n", 563 | "\n", 564 | "`activation= \"relu\"`\n", 565 | "- Uses the relu (Rectified Linear Unit) activation function on the response map values.\n", 566 | "- This simply causes any negative values to be 0, and other values to stay the same.\n", 567 | "\n", 568 | "`input_shape=(32, 32, 3)`\n", 569 | "- Specifies the shape of our input data\n", 570 | "- 32 pixels wide\n", 571 | "- 32 pixels high\n", 572 | "- 3 channels deep (Red, Green, Blue)" 573 | ] 574 | }, 575 | { 576 | "cell_type": "markdown", 577 | "id": "9ed5d071", 578 | "metadata": {}, 579 | "source": [ 580 | "#### Max Pooling layers\n", 581 | "After a convolutional layer, we can downsample a *reponse map* using a maxpooling layer. This simply takes the maximum value for a given kernel-size sliding along our response map, much like a filter. \n", 582 | "\n", 583 | "Maxpooling has a dual benefit. Not only does it effectively reduce the amount of data we must process, but it also improves our focus on finding **good** matches from our filter, making our model less specific about the location where it was found (location invariance).\n", 584 | "\n", 585 | "![title](https://nico-curti.github.io/NumPyNet/NumPyNet/images/maxpool.gif)\n", 586 | "- The blue boxes represent our *response map*.\n", 587 | "- The purple box represent our kernel size (2 x 2).\n", 588 | "- The yellow box is the output of our maxpooling operation." 589 | ] 590 | }, 591 | { 592 | "cell_type": "code", 593 | "execution_count": null, 594 | "id": "1078b24c", 595 | "metadata": {}, 596 | "outputs": [], 597 | "source": [ 598 | "convnet.add(layers.MaxPooling2D(pool_size=(2, 2),\n", 599 | " strides = None))" 600 | ] 601 | }, 602 | { 603 | "cell_type": "markdown", 604 | "id": "9ee720ea", 605 | "metadata": {}, 606 | "source": [ 607 | "`.add(layers.MaxPooling2D(...))`\n", 608 | "- Adds a maxpooling layer.\n", 609 | "\n", 610 | "`pool_size=(2, 2)`\n", 611 | "- Defines the \"kernel\" size of our maxpooling operation.\n", 612 | "- Will return a single value from this entire window\n", 613 | "\n", 614 | "`strides = None`\n", 615 | "- None makes it so there is no overlap during striding, just continue to inputs not in the previous window." 
616 | ] 617 | }, 618 | { 619 | "cell_type": "markdown", 620 | "id": "f9494618", 621 | "metadata": {}, 622 | "source": [ 623 | "#### Dense layers\n", 624 | "Until this point, the convolutional and maxpooling layers have been performing feature extraction.\n", 625 | "\n", 626 | "In order to classify our images we return to our good old dense layer, the same we used in our vanilla neural networks. \n", 627 | "\n", 628 | "Before we can use our dense layer, we first flatten the data passed through the convolutional and maxpooling layers (which are high dimensional) into a single dimension.\n", 629 | "\n", 630 | "We then use a dense layer that is able to *look* at the all of the features extracted from the convolutional operations, and optimize its weights to correctly associate these features with the correct class.\n", 631 | "\n", 632 | "Finally we have our output layer, which contains the same number of neurons as classes we are predicting. We use a softmax activation function, giving us the probability for each class in our prediction." 633 | ] 634 | }, 635 | { 636 | "cell_type": "code", 637 | "execution_count": null, 638 | "id": "e039adf5", 639 | "metadata": {}, 640 | "outputs": [], 641 | "source": [ 642 | "convnet.add(layers.Flatten())\n", 643 | "convnet.add(layers.Dense(64, activation= \"relu\"))\n", 644 | "convnet.add(layers.Dense(10, activation= \"softmax\"))" 645 | ] 646 | }, 647 | { 648 | "cell_type": "markdown", 649 | "id": "0c022acf", 650 | "metadata": {}, 651 | "source": [ 652 | "#### Compiling\n", 653 | "We now compile our model as we have done previously. Using:\n", 654 | "1. rmsprop optimizer to update weights smoothly\n", 655 | "2. categorical crossentropy loss function\n", 656 | "3. track the accuracy metric" 657 | ] 658 | }, 659 | { 660 | "cell_type": "code", 661 | "execution_count": null, 662 | "id": "7c746cad", 663 | "metadata": {}, 664 | "outputs": [], 665 | "source": [ 666 | "convnet.compile(optimizer= \"rmsprop\",\n", 667 | " loss= \"categorical_crossentropy\", \n", 668 | " metrics= [\"accuracy\"])" 669 | ] 670 | }, 671 | { 672 | "cell_type": "markdown", 673 | "id": "6c4eade4", 674 | "metadata": {}, 675 | "source": [ 676 | "We now use the fit method to train our model for 20 epochs." 
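With the default subset of 8,000 training images and the batch size of 64 used in the fit call below, each epoch corresponds to 125 weight updates. A quick sketch of that arithmetic:

```python
n_train = len(x_train_trans)          # 8,000 images with the default subset
batch_size = 64
updates_per_epoch = n_train // batch_size
print(updates_per_epoch)              # 125 gradient updates per epoch
```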
677 | ] 678 | }, 679 | { 680 | "cell_type": "code", 681 | "execution_count": null, 682 | "id": "50e7f9a2", 683 | "metadata": {}, 684 | "outputs": [], 685 | "source": [ 686 | "convnet_history = convnet.fit(x_train_trans,\n", 687 | " y_train_trans,\n", 688 | " epochs=20,\n", 689 | " batch_size = 64,\n", 690 | " validation_data=(x_val_trans, y_val_trans))" 691 | ] 692 | }, 693 | { 694 | "cell_type": "markdown", 695 | "id": "e5661d28", 696 | "metadata": {}, 697 | "source": [ 698 | "Let's plot the training and validation accuracy through each epoch of our neural network" 699 | ] 700 | }, 701 | { 702 | "cell_type": "code", 703 | "execution_count": null, 704 | "id": "b6a4ac85", 705 | "metadata": {}, 706 | "outputs": [], 707 | "source": [ 708 | "def plot_epoch_accuracy(history_dict):\n", 709 | " \"\"\"\n", 710 | " Plots the training and validation accuracy of a neural network.\n", 711 | " \"\"\"\n", 712 | " \n", 713 | " acc = history_dict['accuracy']\n", 714 | " val_acc = history_dict['val_accuracy']\n", 715 | " epochs = range(1, len(acc) + 1)\n", 716 | " plt.plot(epochs, acc, color = 'navy', alpha = 0.8, label='Training Accuracy')\n", 717 | " plt.plot(epochs, val_acc, color = 'green', label='Validation Accuracy')\n", 718 | " plt.title('Training and validation Accuracy')\n", 719 | " plt.xlabel('Epochs')\n", 720 | " plt.ylabel('Accuracy')\n", 721 | " plt.legend()\n", 722 | " return plt.show()" 723 | ] 724 | }, 725 | { 726 | "cell_type": "code", 727 | "execution_count": null, 728 | "id": "a3f672e2", 729 | "metadata": { 730 | "scrolled": true 731 | }, 732 | "outputs": [], 733 | "source": [ 734 | "plot_epoch_accuracy(convnet_history.history)" 735 | ] 736 | }, 737 | { 738 | "cell_type": "markdown", 739 | "id": "a1eb7834", 740 | "metadata": {}, 741 | "source": [ 742 | "We can see that our Convolutional Neural Network is failing to generalize. While our training accuracy keeps increasing throughout our 20 epochs, our validation accuracy stalls at ~50%." 743 | ] 744 | }, 745 | { 746 | "cell_type": "markdown", 747 | "id": "c11a538c", 748 | "metadata": {}, 749 | "source": [ 750 | "Let's plot these incorrect predictions and see if they make sense to our own image recognition system, our brain!" 
751 | ] 752 | }, 753 | { 754 | "cell_type": "code", 755 | "execution_count": null, 756 | "id": "8ad7045a", 757 | "metadata": {}, 758 | "outputs": [], 759 | "source": [ 760 | "def plot_wrong_predictions(model, x_test, y_test):\n", 761 | " \"\"\"\n", 762 | " Plots 25 incorrectly predicted images.\n", 763 | " \"\"\"\n", 764 | " \n", 765 | " # Format predictions and targets\n", 766 | " predictions = model.predict(x_test)\n", 767 | " predicted = np.argmax(predictions, axis = 1)\n", 768 | " target = np.argmax(y_test, axis = 1)\n", 769 | " \n", 770 | " # Set up subplots\n", 771 | " fig, axes = plt.subplots(5,5, figsize = (25,25))\n", 772 | " axes = axes.ravel()\n", 773 | " \n", 774 | " for ax, index in zip(axes, range(25)):\n", 775 | " ax.imshow(x_test[index], cmap=plt.cm.binary, interpolation='nearest')\n", 776 | " prediction_title = translate_class(predicted[index])\n", 777 | " target_title = translate_class(target[index])\n", 778 | " \n", 779 | " # Color title based on if prediction is correct\n", 780 | " if predicted[index] == target[index]:\n", 781 | " color = \"green\"\n", 782 | " else:\n", 783 | " color = \"red\"\n", 784 | " \n", 785 | " ax.set_title(f\"Predicted {prediction_title}, Actual is {target_title}\", color = color)\n", 786 | " \n", 787 | " return plt.show()" 788 | ] 789 | }, 790 | { 791 | "cell_type": "code", 792 | "execution_count": null, 793 | "id": "99565252", 794 | "metadata": { 795 | "scrolled": false 796 | }, 797 | "outputs": [], 798 | "source": [ 799 | "plot_wrong_predictions(convnet,\n", 800 | " x_test_trans,\n", 801 | " y_test_trans)" 802 | ] 803 | }, 804 | { 805 | "cell_type": "markdown", 806 | "id": "40d370ff", 807 | "metadata": {}, 808 | "source": [ 809 | "Do you see any patterns that this convolutional neural network is able to pick up?" 810 | ] 811 | }, 812 | { 813 | "cell_type": "markdown", 814 | "id": "4d8776f0", 815 | "metadata": {}, 816 | "source": [ 817 | "### Troubleshooting\n", 818 | "\n", 819 | "Perhaps our model is not able to extract enough features for this complex problem. To address this possibility let's build a model with:\n", 820 | "1. More convolutional layers\n", 821 | "2. More neurons in the dense layer" 822 | ] 823 | }, 824 | { 825 | "cell_type": "markdown", 826 | "id": "97702e09", 827 | "metadata": {}, 828 | "source": [ 829 | "## Challenge 3: Create a Convolutional Neural Network\n", 830 | "\n", 831 | "1. Initialize a Convolutional Neural Network called `my_convnet` with the following architecture:\n", 832 | " * Conv2D layer with 32 3x3 filters and relu activation function\n", 833 | " * Maxpooling layer 2x2\n", 834 | " * Conv2D layer with 64 3x3 filters and relu activation function\n", 835 | " * Maxpooling layer 2x2\n", 836 | " * Conv2D layer with 128 3x3 filters and relu activation function\n", 837 | " * Maxpooling layer wtih a pool size of 2x2\n", 838 | " * A flattening layer\n", 839 | " * A dense layer with 512 neurons and relu activation function\n", 840 | " * An output layer with the number of classes and softmax activation function\n", 841 | "\n", 842 | "2. Compile with the model with:\n", 843 | " * rmsprop optimizer\n", 844 | " * categorical crossentropy loss\n", 845 | " * accuracy metric\n", 846 | "\n", 847 | "3. Train the network for 20 epochs.\n", 848 | "\n", 849 | "4. 
Plot the training and validation accuracy through each epoch\n" 850 | ] 851 | }, 852 | { 853 | "cell_type": "code", 854 | "execution_count": null, 855 | "id": "c46cd816", 856 | "metadata": {}, 857 | "outputs": [], 858 | "source": [] 859 | }, 860 | { 861 | "cell_type": "markdown", 862 | "id": "d3c069ea", 863 | "metadata": {}, 864 | "source": [ 865 | "Does expanding the architecture have a significant impact on the validation accuracy of the model?" 866 | ] 867 | }, 868 | { 869 | "cell_type": "markdown", 870 | "id": "38d59bfb", 871 | "metadata": {}, 872 | "source": [ 873 | "### Generalization with Image Augmentation\n", 874 | "\n", 875 | "In our vanilla neural network, we prevented overfitting to the training data by using a dropout layer.\n", 876 | "\n", 877 | "We could do the same for our convolutional neural network, but instead of using dropout, let's use image augmentation. We will feed altered images with a random amount of rotation, shifting, distorted, and flipped! This will make our neural network generalize to different image orientations." 878 | ] 879 | }, 880 | { 881 | "cell_type": "code", 882 | "execution_count": null, 883 | "id": "69246e21", 884 | "metadata": {}, 885 | "outputs": [], 886 | "source": [ 887 | "datagen = ImageDataGenerator(\n", 888 | " rotation_range=40,\n", 889 | " width_shift_range=0.2,\n", 890 | " height_shift_range=0.2,\n", 891 | " shear_range=0.2,\n", 892 | " zoom_range=0.2,\n", 893 | " horizontal_flip=True,\n", 894 | " fill_mode='nearest')" 895 | ] 896 | }, 897 | { 898 | "cell_type": "code", 899 | "execution_count": null, 900 | "id": "546f3cd9", 901 | "metadata": {}, 902 | "outputs": [], 903 | "source": [ 904 | "def plot_image_augmentation(augment_gen, image, n = 4):\n", 905 | " \"\"\"\n", 906 | " Takes an image data generator and an images.\n", 907 | " Returns n plots of augmented images.\n", 908 | " \"\"\"\n", 909 | " image = image.reshape((1,) + image.shape)\n", 910 | " \n", 911 | " i = 0\n", 912 | " \n", 913 | " for batch in augment_gen.flow(image, batch_size = 1):\n", 914 | " plt.figure(i)\n", 915 | " plt.imshow(batch[0])\n", 916 | " \n", 917 | " i += 1\n", 918 | " if i % n == 0:\n", 919 | " break\n", 920 | "\n", 921 | " plt.show()\n", 922 | " " 923 | ] 924 | }, 925 | { 926 | "cell_type": "code", 927 | "execution_count": null, 928 | "id": "d5c79057", 929 | "metadata": {}, 930 | "outputs": [], 931 | "source": [ 932 | "plot_image_augmentation(datagen, x_test_trans[12])" 933 | ] 934 | }, 935 | { 936 | "cell_type": "markdown", 937 | "id": "c262766a", 938 | "metadata": {}, 939 | "source": [ 940 | "Let's reuse the architecture from our first convolutional neural network, then fit it using these augmented images.\n", 941 | "\n", 942 | "*Warning* - This will take a while!" 
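The fit call below also passes `steps_per_epoch`, which caps how many augmented batches count as one epoch. With 8,000 training images and a batch size of 32, a full pass over the data would be 250 batches, so 200 steps covers most (but not all) of the training set each epoch. A quick sketch of that arithmetic:

```python
n_train = len(x_train_trans)      # 8,000 with the default subset
batch_size = 32
full_pass = n_train // batch_size
print(full_pass)                  # 250 batches for one full pass
# the fit call below uses steps_per_epoch=200, i.e. slightly less than one full pass
```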
943 | ] 944 | }, 945 | { 946 | "cell_type": "code", 947 | "execution_count": null, 948 | "id": "39795355", 949 | "metadata": {}, 950 | "outputs": [], 951 | "source": [ 952 | "augmented_image_model = models.Sequential()\n", 953 | "\n", 954 | "augmented_image_model.add(layers.Conv2D(32, (3,3), activation= \"relu\", input_shape = (32, 32, 3)))\n", 955 | "augmented_image_model.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", 956 | "augmented_image_model.add(layers.Conv2D(32, (3,3), activation= \"relu\"))\n", 957 | "\n", 958 | "augmented_image_model.add(layers.Flatten())\n", 959 | "augmented_image_model.add(layers.Dense(64, activation= \"relu\"))\n", 960 | "augmented_image_model.add(layers.Dense(10, activation= \"softmax\"))\n", 961 | "\n", 962 | "augmented_image_model.compile(optimizer= \"rmsprop\",\n", 963 | " loss= \"categorical_crossentropy\",\n", 964 | " metrics= [\"accuracy\"])" 965 | ] 966 | }, 967 | { 968 | "cell_type": "code", 969 | "execution_count": null, 970 | "id": "21295230", 971 | "metadata": {}, 972 | "outputs": [], 973 | "source": [ 974 | "augmented_image_model_history = augmented_image_model.fit(\n", 975 | " datagen.flow(\n", 976 | " x_train_trans,\n", 977 | " y_train_trans,\n", 978 | " batch_size=32),\n", 979 | " validation_data=datagen.flow(\n", 980 | " x_val_trans,\n", 981 | " y_val_trans,\n", 982 | " batch_size=32),\n", 983 | " steps_per_epoch = 200,\n", 984 | " epochs=20)\n" 985 | ] 986 | }, 987 | { 988 | "cell_type": "code", 989 | "execution_count": null, 990 | "id": "e9353f71", 991 | "metadata": { 992 | "scrolled": true 993 | }, 994 | "outputs": [], 995 | "source": [ 996 | "plot_epoch_accuracy(augmented_image_model_history.history)" 997 | ] 998 | }, 999 | { 1000 | "cell_type": "markdown", 1001 | "id": "2fb439a1", 1002 | "metadata": {}, 1003 | "source": [ 1004 | "Even after training a convolutional neural network with image augmentation for 100 epochs, we still don't cross the 60% validation accuracy threshold. 
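For a final comparison it is also worth checking the augmented model on the held-out test set. A minimal sketch, assuming the model object from the training cell above is still in memory:

```python
# evaluate returns the loss followed by the compiled accuracy metric
test_loss, test_acc = augmented_image_model.evaluate(x_test_trans, y_test_trans, verbose=0)
print(f"Test accuracy with augmentation: {test_acc:.1%}")
```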
" 1005 | ] 1006 | }, 1007 | { 1008 | "attachments": { 1009 | "augmented_image_acc_100_epochs.png": { 1010 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAc4AAAEWCAYAAADvi3fyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nOydd1hUR9fAf0OTIkizYcMOWACx9xZ77EZNs7xRU33VmMSYvMZo2pcYk5hEjdFEjcZeYjf2hih2RVAUQUBBKQIK0na+P+7usgu7C1iiSe7veXjYe6fcuXPnzrnnzMwZIaVERUVFRUVFpWRYPekCqKioqKio/J1QBaeKioqKikopUAWnioqKiopKKVAFp4qKioqKSilQBaeKioqKikopUAWnioqKiopKKVAFZwkRQnQUQsQ96XL8mxFCtBNCXHrUcZ8kQoiRQojDjyFfo/YqhAgTQnQsSdwHuNZ8IcT/HjS9SvEIIT4RQiQJIRKedFn+jgghpBCizqPKr8SCUwixXwiRKoQo86gu/lcihFgshPjkMeb/SB/M40IIUVkIsUgIcVMIkSGEiBBCfCyEcNKGSyHEeSGElUGaT4QQi7W/vbVxthXKd5kQYrqJ600VQtzV/t0XQuQbHIeVpuxSykNSyvqPOu7TiBDCXghxRwjR2UTYN0KItaXJT0rZQEq5/xGUq4igl1K+KqWc+bB5F3NNKYQY+riu8TQjhKgOvA34SSkrPY6PLW2/sEkIcUNb196FwssIIX4RQqQLIRKEEJMKhXfR9iWZQoh9QogaFq4VLYTIMugH7gohfniU9/O4KZHg1FZiO0ACfR9jeVQeI0IId+Ao4AC0klI6A88ArkBtg6hewLBismshhGhd3DWllJ9JKctKKcsCrwJHdcdSygYGZROGwvrfjpTyPrAKeNnwvBDCGhgOLHkS5XpCjABSKFQXjxshhM1feT0LVAeSpZS3HkVmZu5LA+wABplJNh2oC9QAOgHvCiF6aPPzBNYD/wPcgRMobdcSzxr0A2WllG+W+kaeJFLKYv+AacARYDawpVDYfuAVg+ORwGGD427AJSANmAsc0MXXxj0CfAPcAaKA1trzscAtYIRBXmWAWcB1IBGYDzhowzoCcShfZreAm8AobdhYIBfIAe4Cm7XnvYB1wG3gGjDe4FoOwGIgFbgIvAPEWagjCdTR/p4OrAGWARnAeaAe8L62bLFAN4O0o4BwbdwoYFyhvN/V3s8N4JVC1zJbJybK+Im2LFbF3Md7QCRgY5Busfa3t0GcfQbplgHTi2lHhdvGfuBTbRvIAupYqgvdMzY4jgYmA+dQ2tcqwL60cYurYxP3UWwZMdEOteEewCYgHTgOzDSsk0LXaa29hqPBuV7afG0eoK66lqRtA1OAq9p8LwIDtOd9gftAPsp7dEd7fjHwiUH6McAVFGG3CfAq1L5eRWlfd4AfAWGhzdRA6dQHAXlAJYMwa2CqQVlPAtW0YQ2AXdoyJAJTzZTVVD29p20n2dp6Nlkfhe433CC8ibZO1xWKNwf4zsx9mqvzrijvhkZb56vMPIOS9I3vAQnAbxbq20b7jLwLnb+BcZ81E1hp0L8GG4Q5acvsY+Ya0Wjbopk+4gjwA8p7GgF0MQj30rapFG0bG1PC9mC23aH0Owe010sCVlnqx6SUJRacV4DXgSAUAVSxUOdnUnACnigdxEDtA/mvNr2h4MxD6QCsUTro69qbKoMidDOAstr432grzR1wBjYDnxs0jjxgBmCL0sFkAm5mXhgrbcVOA+yAWiidT3dt+BfAIe21qgEXKJ3gvA901973UhTB/IG2bGOAawZpe6NofALooC13E21YD5TG3gBwRBFQhtcyWycmyhgCfFzMs5YoX5YnDZ6TKcHpDMRT0Bk/qOC8rr03G23dWKqLjhTt5I6jvEzuKJ3Xqw8Q12Idm7iP4spoqR2uBFajdC4NtXVoUnBq418GXjQ4XgF8W8JymBOcFts2MERbT1bAUOAeUNnUMyz8bgGdUTqfJijv8PfAwULtawuKlaM6ykdrDwv3/z/guPb3eeBtg7B3tOfqa+vAH+XDxBnlg+VtwF573MJMP2Cqns5o68WhBPUxRPsMm2nLUAdF2FfWxnPVxrNB+eAJMnOflq5RuIymnkFJ+sb/0z4Tkx/WBuU0EpyAm/acYb8/GDiv/f0dMK9QPheAQWauEY1lwZkHTER5f4aiCDR3bfhBFAXMHgjQtp/OltpDce0O5Z36QFv39kBbS/2YlCUQnEBbFGHnqT2OACYW6vzMCc6XUUxzujCBom0ZCs5Ig/BGJh5QsraChLYx1TYIa4VWAGkbRxZaLUl77hbQ0swL0wK4Xuhe3wd+1f6OwuCFRvmqKo3g3GUQ9izK16G19thZG9/VTF4bgf9qf/+CgSBEeTGl9r/FOjGRbyRaYVHcfaB0+DEoHxWmBKcNysdUiPb8gwrOGcWkMayLjhTt5AyFypfA/AeIa7aOi3s/zJTRZDtE+TjMxeBLHPgMy4LzQ+BP7W8XFOEY+IB1pROcpW3bZ4B+pp5h4XcLWAR8aRBWVnvP3gbtq61B+GpgioVrRwITDN7PswZhl3TlKpRmOHDaTH76slqop9HFPG/D+tipq3MT8baj1YiAPsDFkrQnE9coXEajZ0DJ+sYcDCwsFq5rSnBW054ztNA8A0QbPPMvCuVzBBhp5hrRaLVlg78xBvd2AwMrBMoH70vacuQDzgZhn1PQN5lsD8W1OxTFZgFQtaTPpyRjSiNQXtwk7fHv2nMlwQtFUAIglVIWnr2XaPA7Sxuv8LmyQHkUbeCkdtLEHRSbfHmDuMlSyjyD40xtWlPUALx0eWnzmwpUNFV2FCFSGgrfQ5KUMt/gGF3ZhBA9hRAhQogUbTl6oWjrpsph+LskdWJIMsqXcLFIKbehPKtxFqItBCoKIZ4tSZ5mMLyf4urCFIazDC09b0txLdVxEUpQRnPtsDxKx1SadvUb0EkI4YXylX9VSnm6hOUwh8W2LYR4WQhxxqBNNSxhvrq89flJKe+itLsqBnFK9MyEEG2AmihaOih9TyMhRID2uBqKWa4w5s6XlMJt0lJ9WLrWEuBF7e8XUZ6lSR6yzkvSD9yWyrj5g3BX+9/F4JwLijVQF+6CMYbhpugvpXQ1+PvZICxeKyt0xKC0Ky8gRUqZUShM17aKe+7m2t27KB8fx4Uy+3y0hTyAYiYHCSEcgOeADtqZVAkoKrS/EMJfG+0eykPTUcng902gqkF+wvC4lCShCJwGBpVdTiqTTkqCLHQci/JFZvjwnKWUvQzKXs0gfvUHLLdFtLOU16GMT1SUUroC21AepK4chnVmWKbS1sluYEApJuF8gPIx4WgqUEqZA3yMMt4hTMUpAfrnUoK6eFxYqmMjHrKMt1HMUCVuV1LKGBSz6osoX
91LHkE5zLZt7WzIn4E3UcxcrihmN12+hd+jwtxA+SjV5eeEYj6NL0G5CjNCe90z2r7nmMF5UN7h2ibSxaIMvZjCUn+lw7BNFlcf5soAigWgsRCiIYrGudxUpBJcw2z5tJSkHyjuuZlFSpmK0mb8DU77A7pZ8WGGYdpnXtsgvLRU0coKHdVR2tUNwF0I4VwoTNe2LD0Ls0gpE6SUY6SUXiiKwlxRzAqJ4jrQ/iiqsR+KuTQAZYLAIQpmuJ0BBgohHLUX+49B+q0oX4j9tTO53sB0Qy0WKaUGpXF9I4SoACCEqCKE6F7CLBIxfpmOAxlCiPeEEA5CCGshREMhRDNt+GrgfSGEmxCiKvDWg5S7BNihjDvcBvKEED1RxnZ1rAZGCSF8hRCOKGM+wAPVyWyUL8Eluuni2vizhRCNC0eWyvKFC1i2MPyGMi7QoyQ3WwzF1cXjwmwdP8oyai0O64Hp2vfFj5JZb5agdKptKOh8H6auLLVtJ5RO9jaAEGIUivajIxGoKoSwM5P3CpS6DNAK98+AY1LK6BKWDe117VE+2sdS0PcEaMv6vLY/WQjMFELU1c7KbiyE8EAZy6oshJgglGUUzkKIFtqszwC9hBDuQohKwIRiilJcfSwEJgshgrRlqKN7t7Qa3loUTfm4lPL6A16jMEbP4BH0jWjT2KO0KYAy2mMdS4EPtW3GB2WexmJt2AagoRBikDbNNOCclDKiNNc3oAIwXghhK4QYgiJztkkpY4Fg4HOhLNdqjCJvlmnTmWsPxd33EO17AMqEOYkyGcssxQnOEShjfte1UjlBSpmAMuPpBW3j/QbFfp6I8oLrv6q05t0hKONJySgC+ATKbLUH4T2UiUohQoh0FA2qpGv1FgF+WlPGRm0n1gflZbyG8tW2ECinjf8xihngGvAnFswsD4PW7DAepTNLBZ5HGeTXhW9HmY23D+29a4N0dVjiOpFSpqDM1MwFjgkhMoA9KIPvV8wU8UOUCQfmyp+P8qKYjVNSiquLx0UJ6vhRlvFNFBNRAkrH82sJ0qxDqd89Usqbj6AcZtu2lPIi8DXKsqVElHkHRwzS7kXRJBKEEEkUQkq5G+XDYx2KllKb4pc2maI/iha1tFDf8wuKubsHyofgau09pKO84w7aunkGZW5BAso4aSdtvr8BZ1HG2f6kmGUTxdWHlHINyszw31FMkxsxfheWaNOY7T9KUOeFMfUMHqZv1JFFgVk2goIhJYCPUMygMSgzUL+SUu7Qlv82yqznT1HaYguKf+abhfE6zg0GYcdQJigmafMcLKVM1oYNR5lncQNFYH+kbXNgpj2U4L6bofSHd1Heof9KKaMsJdBNx/1L0JoI44AXpJT7/rIL/4MQQviiaIFlCo2jqTwi1DpWeVQIxXlBBMoymvQnXZ6nHSHESJTJo22fdFks8dgXnAshugshXLVmm6kodvuQYpKpGCCEGKA1ObmhTCnfrHbojxa1jlUeNVpFYRLKekdVaP6D+Cs8tbRCUfGTUEwn/aWUWZaTqBRiHMqShqsoY86vPdni/CNR61jlkaGdIJOOYjL+6AkXR+UR85eaalVUVFRUVP7uqL5BVVRUVFRUSsHT4sT4ieHp6Sm9vb2fdDFUVFRU/lacPHkySUppztnKP5p/veD09vbmxIkTT7oYKioqKn8rhBCl9ab2j0E11aqoqKioqJQCVXCqqKioqKiUAlVwqqioqKiolIJ//RinKXJzc4mLi+P+/QfdTEDln4a9vT1Vq1bF1tb2SRdFRUXlCaMKThPExcXh7OyMt7c3xk76Vf6NSClJTk4mLi6OmjVrPuniqKioPGGeOlOtEKKHEOKSEOKKEGKKifCRQojbQtm77owQ4hWDsHyD8w/sHPz+/ft4eHioQlMFACEEHh4eqgVCRUUFeMo0TiGENfAjipuqOCBUCLFJu3uAIauklG+ayCJLShlg4vyDlOVRZKPyD0FtDyoqKjqeNo2zOXBFShml3SR5JdDvCZdJRUVF5R9FdPQd5s8/wb17OU+6KH9LnjbBWQVlF28dcdpzhRkkhDgnhFgrhDDcyd5eCHFCCBEihOhv7iJCiLHaeCdu3779iIr+6EhOTiYgIICAgAAqVapElSpV9Mc5OZYb+okTJxg/fnyx12jduvWjKi4AEyZMoEqVKmg0Fvd/VVFReQpYsOAky5efJycn/0kX5W/JU2WqLSGbgRVSymwhxDiUjWI7a8NqSCnjhRC1gL1CiPNSyquFM5BSLgAWADRt2vSp83Lv4eHBmTNnAJg+fTply5Zl8uTJ+vC8vDxsbEw/uqZNm9K0adNirxEcHPxoCgtoNBo2bNhAtWrVOHDgAJ06dSo+0QNg6b5VVFRKxpUrKezaFcWoUQG4uZVkn2eVwjxtGmc8YKhBVtWe0yOlTJZSZmsPFwJBBmHx2v9RwH4g8HEW9q9k5MiRvPrqq7Ro0YJ3332X48eP06pVKwIDA2ndujWXLl0CYP/+/fTp0wdQhO7o0aPp2LEjtWrVYs6cOfr8ypYtq4/fsWNHBg8ejI+PDy+88AK6HXO2bduGj48PQUFBjB8/Xp9vYfbv30+DBg147bXXWLFihf58YmIiAwYMwN/fH39/f72wXrp0KY0bN8bf35+XXnpJf39r1641Wb527drRt29f/Pz8AOjfvz9BQUE0aNCABQsW6NPs2LGDJk2a4O/vT5cuXdBoNNStWxedVUGj0VCnTh2eRiuDisqjIiHhLuHh5tv4ggUncXS05cUXG/+Fpfpn8bR9vocCdYUQNVEE5jDgecMIQojKUsqb2sO+QLj2vBuQqdVEPYE2wJcPW6Cvvw7m0qXkh83GiPr1PXj77dKbSuPi4ggODsba2pr09HQOHTqEjY0Nu3fvZurUqaxbt65ImoiICPbt20dGRgb169fntddeK7IW8fTp04SFheHl5UWbNm04cuQITZs2Zdy4cRw8eJCaNWsyfPhws+VasWIFw4cPp1+/fkydOpXc3FxsbW0ZP348HTp0YMOGDeTn53P37l3CwsL45JNPCA4OxtPTk5SUlGLv+9SpU1y4cEG/FOSXX37B3d2drKwsmjVrxqBBg9BoNIwZM0Zf3pSUFKysrHjxxRdZvnw5EyZMYPfu3fj7+1O+/L/SL7XKvwApJZMm7eTq1VRmzXqGdu1qGIVfupTE3r3XGDs2CBeXMk+olH9/niqNU0qZB7wJ7EQRiKullGFCiBlCiL7aaOOFEGFCiLPAeGCk9rwvcEJ7fh/whYnZuH9rhgwZgrW1NQBpaWkMGTKEhg0bMnHiRMLCwkym6d27N2XKlMHT05MKFSqQmJhYJE7z5s2pWrUqVlZWBAQEEB0dTUREBLVq1dILK3OCMycnh23bttG/f39cXFxo0aIFO3fuBGDv3r289pqyH7S1tTXlypVj7969DBkyBE9PTwDc3d2Lve/mzZsbrZ+cM2cO/v7+tGzZktjYWCIjIwkJCaF9+/b6eLp8R48ezdKlSwFF4I4aNarY66moPE2cOZPAqlUXmDPnGNOm7ePIketm4x4+fJ3Ll5Nxdrbjvfd2c/LkDaPwn346
ibNzGZ5/vtHjLvY/mqdN40RKuQ3YVujcNIPf7wPvm0gXDDzy1vAgmuHjwsnJSf/7f//7H506dWLDhg1ER0fTsWNHk2nKlCn4qrS2tiYvL++B4phj586d3Llzh0aNlKrPzMzEwcHBrFnXHDY2NvqJRRqNxmgSlOF979+/n927d3P06FEcHR3p2LGjxfWV1apVo2LFiuzdu5fjx4+zfPnyUpVLRaUwx4/HEx19h+eea/BQ+WRn5/Hzz6fo1q029ep5FAm/deses2YFs3fvNQBsba0pU8aaPXuusXRpf2rXNv7olFKyaNFpvLyc+fXXfrz22lYmTtzJJ590xspKEBWVysGDMbz+ejPKlrV7qLL/23mqNE6VkpOWlkaVKsqE48WLFz/y/OvXr09UVBTR0dEArFq1ymS8FStWsHDhQqKjo4mOjubatWvs2rWLzMxMunTpwrx58wDIz88nLS2Nzp07s2bNGpKTFfO3zlTr7e3NyZMnAdi0aRO5ubkmr5eWloabmxuOjo5EREQQEhICQMuWLTl48CDXrl0zyhfglVde4cUXXzTS2FVUHgQpJV9/fZSvvgrmxo2Mh8pr586rLF58hhEjNrJy5QX93IKUlCyWLDnD4MGrOXz4Om+80Yw//3yJ4ODRrFv3HE5OtkyZsoesLON3JDT0Bhcu3GLkyAA8PBz54YdeuLraM2nSTiZM2MGcOceoXr0cw4Y1fKhyqzyFGqdKyXj33XcZMWIEn3zyCb17937k+Ts4ODB37lx69OiBk5MTzZo1KxInMzOTHTt2MH/+fP05Jycn2rZty+bNm/nuu+8YO3YsixYtwtramnnz5tGqVSs++OADOnTogLW1NYGBgSxevJgxY8bQr18//P399dc0RY8ePZg/fz6+vr7Ur1+fli1bAlC+fHkWLFjAwIED0Wg0VKhQgV27dgHQt29fRo0apZppVR6aS5eSuXpV+Shbu/Yi48e3eOC8tmy5TNWqLtSq5casWcEcPnydvDwNp07dRKORtGlTjffea4uXl7M+jYeHI5980pk33tjGV18FM21aB33YokWnKF/eiT596gFQoYITS5b058yZBDw9HalYsSyeno5YWanOPB4WofvK+bfStGlTWXgj6/DwcHx9fZ9QiZ4e7t69S9myZZFS8sYbb1C3bl0mTpz4pItVak6cOMHEiRM5dOjQQ+WjtguVr746wvr1ETRpUonw8CS2bXsBe3tF/zgae5SFpxbyc9+fsRKWjXk3bmTQt+8KXn+9GaNGBbB6dRjffXeMypWd6datFl271ipiijVk3rxQFi06zQsvNCIoyIu8PA3vvruLSZNa/WXjl0KIk1LK4te+/QNRNU4Vs/z8888sWbKEnJwcAgMDGTdu3JMuUqn54osvmDdvnjq2+Q8gP19DRkYOrq72T+T6ubn57NhxlQ4davDccw0YO3YzO3ZcoX9/HwDWh6/nlzO/8H6796njXsdiXlu2XEYIQa9edRFCMHRoQ4YMaYAQJXPvOHZsEJGRKSxffp7ly88D4ObmwIABPg9/oyrFogpOFbNMnDjxb6lhGjJlyhSmTCmyV4DK34jMzFw2boxgxYoLJCdnsnbtc0bmy5Jy5UoK1aq5UKZMQben0UiWLj1Lw4YVaNrUy2L6I0diSUu7T58+9QgMrESdOu6sWhVGv371lfCwCwCcjDttUXBKKdm6NZKmTStTqVJZ/fnSmFCtra2YPbs79+7lEBWVSlRUKrVqueHgoG5791egTg5SUVF5ajlwIJqePZcze/ZRKlRwREpYsuRMqfM5fjyeYcPW8uab28jMLJhU88MPx/nhh+O8/fafxMenW8xjy5bLuLs70KpVVa2W2IDIyGSOHo3jf//bx7noSADWHdlnlO7EiRv077+SAweiAWV5SXx8Os8+W7/U91EYJyc7GjWqSL9+PjRqVPGh81MpGargVFFRsUhcXDqDB68mNDS++MiPkPT0bGbOPEiVKs4sWdKfRYv68eyz9di06TK3bt0rcT6ZmbnMnHkQT09Hzp5NZMKEHWRl5bJixXmWLj1Lz551sLISvP/+HnJzTftuTU3N4tCh6/TqVRdra6Xb7NmzLi4uZZgwYQc7d16lTHmlTAcvHdfnk5en4YsvDhMXl87kybtYuvQsW7ZcxtHRllbtKvHOn+9w+55lT1Y5+TlopOoD+mlCFZwqKioWOXMmgejoO0ya9Cfnzxd1oPG4+P77Y6SnZ/Pxxx1p0KACACNHBpCfr2HZsnP6eNnZeYSGxmNuouP33x8jIeEuX375DJ980pkzZxIYMWIjs2eH0LlzTT7+uBPTprXn4sXbfP/9cZN57Nx5lfx8DX361ONMwhn+iPgDe3sbhg9viItLGb757hnSNIoATC0TzZYtlwFl5m109B0+/7wLXbvWZM6cY2zadJkuXWpyLPEIs47OYl14UY9fOvI0edT7vh5fHnloJ2gqjxBVcKqoqAAQG5vGtm2RRQTQ9etpWFkJPDwcGD9+B5GRpXNBmZ6ebVaTM8e5c4ls2BDB8883om7dAucAXl7O9OxZh/Xrw0lNzSI5OZNx47bw2mtbOX68qEZ88uQN1qy5yPDhDWncuCLdutVm5sxOREffwd+/IjNndsLKStCpU02GDm3A77+f15tUdWRm5rJs2Tn8/MpTp447H+z9gNGbRgPwyitN+PPPl6jZ0JZ8mU/1ctXJcUxmwZJgkpIy+emnk7RoUYWuXWvx2WddGDs2CBsbKwYO9OV4vCKkzyeeN1sPofGhxKTFsPfa3lLVn8rjRRWcTyGdOnXSu63T8e233+rd15miY8eO6JbV9OrVizt37hSJM336dGbNmmXx2hs3buTixQJPhdOmTWP37t2lKb5F1O3Hnk62br3M88+vZ9q0fcTGGo/1xcTcoWpVF+bO7Y2Dgw1vvLGNxMS7JcpXo5G8+OJ63nhjG3l5JXvm+fkaPv/8MBUqODF2bFCR8FGjAsnOzuerr4J5+eWNXLmSgq2tNUeOxBrFy8nJZ8aMg1SrVo7XXy9Yh9y9ex3WrBnCjz/2Mpoo9N//tsTHx5OPPtpPXFxBHcybF0pi4j0mT26NlJKQuBBSslJIyUpBCIGVlSA2Xbl2n7qKx6yozAjGjt3MvXs5vP12a4QQCCEYOzaIgwdH0qhRRb3gvHD7gtm62HlV6QdOJ5wu8kETmRxJbFqsqWQqjxlVcD6FDB8+nJUrVxqdW7lypUVH64Zs27YNV1fXB7p2YcE5Y8YMunbt+kB5Fabw9mOPi9K4DPy3k5mZy7Rp+/joo/1UrKg4ndAt8Ndx/Xo61auXw8vLmR9/7EVaWjYrV5rv7A2JiEjixo0MTp26yddfl2wru6VLzxIZmcw777TG0bHoLFFvb1e6dKnJn39e1bqZ60vTppWLCM7Dh68TH5/O5Mmt9GstddSo4WokNAHs7Kz58stnsLISvPvuLu7fzyMs7BYrV4YxeLAvjRtX5ErKFVKylPq5mlKwY6FOgPWupzgjca2XxvXraQwe7EetWm5G17G1tUZKaaRxmjMz77iyA4CkzCTi0uOMwvqt7Mdza58zmU7
l8aIKzqeQwYMHs3XrVr2/1ujoaG7cuEG7du147bXXaNq0KQ0aNOCjjz4ymd7b25ukpCQAPv30U+rVq0fbtm31W4+BskazWbNm+Pv7M2jQIDIzMwkODmbTpk288847BAQEcPXqVaPtvvbs2UNgYCCNGjVi9OjRZGdn66/30Ucf0aRJExo1akRERITJcqnbjz19fP/9MXbsuMK4cUEsWaLs/X71aqo+XKORXL+eRvXq5QCoWdONjh1rsGnT5RJtgnzoUAxWVoIBA3xYs+Yi69eHA4pbud9/P8+OHVeMhMbBgzHMnXuCZ56pRceO3mbzfeut5gwZ4seSJf2pX9+T1q2rERNzx8gN3o5dEdwKWktVv5LXh5eXMzNndiIyMoXPPjvEzJkHKV/ekTffbA5ASFyIPu6VlCv63zqNs2XVlrg7uFO1aSZt2lRj3LiiGjNATFoMtzNv41fej9T7qdy8e7NInJSsFEJvhNKtdjdA0Tp13Lp3i/CkcELiQohMjjR7P3maPDKyH841oEpR1HWcxTBhxwTOJJR++rslAioF8G2Pb82Gu7u707x5c7Zv306/fhEwizoAACAASURBVP1YuXIlzz33HEIIPv30U9zd3cnPz6dLly6cO3eOxo1N76t38uRJVq5cyZkzZ8jLy6NJkyYEBSkv8sCBAxkzZgwAH374IYsWLeKtt96ib9++9OnTh8GDBxvldf/+fUaOHMmePXuoV68eL7/8MvPmzWPChAkAeHp6curUKebOncusWbNYuHBhkfKo24+VnsTEu1hbW+Hh4VCihfGG3L2bQ2ZmLhUqmHZfePv2Pf744xL9+tVnzBilXXh5ORMVlWoUJzs7D2/vAgvGoEF+7NlzjT17oujZs67FMhw4EIO/f0Xef78diYn3+PLLIxw6FENwcBz5+YrpNjg4lqlT2xEdfYepU/fg6+vJRx91tHi/Vaq48N57bfXHbdpU5+uvjxIcHMvgwX5kZeWy/fwBYpvtYnPkJt5s/mbxFWaQ1yuvBPLzz6cAmD27O05OilP0kLgQnGyduJd7j6upxhqno60jbvZu+Ff0Jy43knXf9TR7DZ22OTpgNJN3TeZ84nm8nI3Xke6O2o1Ganin9TvsurqL0zdP07e+sklUcGyB9r78/HKmd5xu8jrv/PkOq8JWcenNSziXKf3aVxXTqBrnU4qhudbQTLt69WqaNGlCYGAgYWFhRmbVwhw6dIgBAwbg6OiIi4sLffv21YdduHCBdu3a0ahRI5YvX252WzIdly5dombNmtSrp/jBHDFiBAcPHtSHDxw4EICgoCC9Y3hD1O3HSs/69eH06bOCHj2W0aXLUsaM2cTZswklSqvRSF57bSu9ei1n1Kg/+P338yQnZxrFWbbsHHl5GkaODNCfq13bzUjjjIlJA9BrnABNm3pRrVo51q0Lt1iGhIS7XL6cTLt2NbCyEnz6aWeqVnXh/PlbDBvWgNWrh/Dqq03Zvv0Ko0b9wcSJOylXzp5vvulexLRaHNWqueDl5UxwsKL5HT58nTT7GACiUqMspn1317tsvbzV6NyYMUH06lWX555rQPv2BXtahsSH0LJqS6o4VzHSOOMy4qjmUg0hBI0rNuZ84nnyNeY18tD4UOys7Xih8QsAnL9VdILQjis7cLN3o5N3J+p71jfSOA9fP0wZ6zK0rd6WZeeWmTT15uTnsPTcUm7evcnso7Mt1oFK6VA1zmKwpBk+Tvr168fEiRM5deoUmZmZBAUFce3aNWbNmkVoaChubm6MHDnS4pZalhg5ciQbN27E39+fxYsXs3///ocqr25rMnPbkqnbj5WO338/z+zZR2nduhqtW1cjKiqVw4evM3XqXtasGWJy7M+QDRvCCQ+/zbPP1uPSpWRmzz7KwoWn+PnnZ6ld2507d+6zbl04PXrUoUoVF326WrXcOHo0jrw8DTY2VsTEKJPMDAWnlZVg4EAfvvvumN5jjSkOHVIEV4cOiuBxdi7D778PwspKYGNjpb+en195PvxwL3l5GhYt6ouHh2Op60sIQZs21diyJZKcnHx27YpCU0H5yLh255rZdDF3Yvgq+Cuupl7Vj0/q7nHGjE5GcTNzMzmbcJYpbaeQp8kzNtWmxVKtXDUA/Cv6k5WXxZWUK9T3NO3k4PiN4wRWCqRS2Up4OXtx4ZbxmLGUkp1Xd/JM7WewtrImsFIgh68f1ocfiT1CsyrN+E/gfxj1xyiOxR+jZdWWRnnsurqLlKwUapSrwayjs3i92euUd3ryVpR/AqrG+ZRStmxZOnXqxOjRo/XaZnp6Ok5OTpQrV47ExES2b99uMY/27duzceNGsrKyyMjIYPPmzfqwjIwMKleuTG5urpGQcHZ2JiOj6JhI/fr1iY6O5soVpbP47bff6NChQ5F45lC3Hys5CxeeYvbso3TuXJOvv+7GsGENmTq1HV980ZVbt+4xd26oxfRpaff58cdQgoIqM21aB37/fRArVgzCzs6a11/fRlxcOitWnCc7O99I2wSoXdudvDwNsbGKpnn9ehr29jaUL28szJ59tj62ttasW6dYPE6dusn48ds5eDBGH+fgwRiqVy9HjRoFZl47O2u90NTRunU11qwZwu+/DzJaelJaWrWqRlZWLkePxnL48HVEJWXNqSWNc9OlTQCE37asPQOcvHGSfJlPy6otqe1Wu8gYZzUXreCs5A/AucRzJvPJ0+Rx4sYJmldRxk0bVmhYROO8cOsCNzJu0L12dwACKwUSmx5LcmYymbmZnLxxkrbV2jLQdyD2Nvb8dva3ItdZcWEFbvZubBq+iczcTD479Fmx96hSMlTB+RQzfPhwzp49qxec/v7+BAYG4uPjw/PPP0+bNm0spm/SpAlDhw7F39+fnj17Gm0NNnPmTFq0aEGbNm3w8SlwDD1s2DC++uorAgMDuXq1YAzH3t6eX3/9lSFDhtCoUSOsrKx49dVXS3Qfuu3HDLc/K7z92L59+2jUqBFBQUFcvHiRBg0a6Lcf8/f3Z9KkSQCMGTOGAwcO4O/vz9GjRy1uP5aXl4evry9Tpkwxuf2Yv78/Q4cO1afp27cvd+/efaJm2osXbzN//gl69arL5593wda2QIA3blyRIUP8WLUqzKIjgh9/DOXu3RzefbeNfpywbl0P5s7tTV6ehtdf38qqVWF06uRdRFvUHevGOWNilIlBhccbXV3t6dq1Jlu2RDJ58p+MHbuZkJA4pk7dQ2RkMpmZuZw4cZN27aobpZsVPMtkJ+/h4UjVqi5FzpeGpk29sLW15ptvQsjOySXJSvk4ikqNMjtrdeOljQBEpkSSm2/6I0yHbmJQiyotqONeh8R7idzNuUtufi43M27qBadfeT+shTVnE8+azCf8djiZuZl6wdmoQiMu3r5oZNrVLUPRTQwKrBwIKBOEQuNDydXk0qZ6G1zKuNCvfj9Wha0iJ7/A+pKZm8nGiI0M9htM44qNGRUwirkn5hJzp+DDRuUhkFL+q/+CgoJkYS5evFjknMo/n9DQUNm2bVuz4Y+iXWg0GnntWqr8448I+dNPJ2R2dp5R+LffHpUtWvws09
Pvm0x/92627NlzmXzuuTUyJyevSHhY2C3ZtOkC+fXXwSbTh4Xdku3a/SKDgn6SERG3i4RnZeXKpk0XyJ9+OiGllLJfvxVyypRdJvM6ffqmDAr6SbZr94tctOiUjI9Plz17LpPPPvu7XLfuogwK+kmePHlDHz8+PV7azLCRVb6uIvM1+aYr6CF57bUtMijoJ9mm3/9JpiMD5gdIpiNv3yt6rymZKdL6Y2tZ45sakunIiNsRFvMeuGqgrP1dbSmllKsvrJZMR565eUZGp0ZLpiMXnFigj9vgxwayz+99TOaz8ORCyXTkpaRLUkopfz39q9GxlFJ2XdpVNpzbUH+cdC9JMh355eEv5acHP5VMRyZnJksppdxyaYtkOnJTxCZ9/FUXVkmmI/dG7ZVSSnn9znVZZmYZ+eL6F6VGo7F4nyUFOCGfgj78SfypGqeKCsr2Y4MGDeLzzz9/bNcIC7tF9+7LGDx4NTNmHGDBgpNs3XpZHy6lZNeuKFq1qoqzcxmTeTg52TFlSluuXk3ht9+MTYEajeTLL4/g5mZv0nEAgJ9feebP78NHH3Wgfn3PIuH29jZUqaLMrM3NzefGjQyj8U1DAgIqMWdOTzZsGMro0YF4eTkza1Y3kpIy+eKLw7i4lMHfv8Dx+MJTC8nT5BGfEW80K/RR0rq1ovXVbKkslRrgMwAwba7dGrmVfJnP5NaTAQhPMm+ulVJyNPaofhxRt/vJlZQr+qUoujFOUMy1ZxNMa5zH44/jau+qz6NRBWXcX+dB6Na9WxyMOag30wJ4OHpQvVx1TiWc4vD1w/iV98PdQZnY1q12NzwdPZkdMpus3CxAMdNWLluZ9jXa68v23xb/Zdm5ZfT6vRfXUs2P+6oUz1MnOIUQPYQQl4QQV4QQRfaDEkKMFELcFkKc0f69YhA2QggRqf0b8deWXOXvzJQpU4iJiaFt27bFR35APlw2j0NN3mbS1EDWrBlCvXoerFlzUW9GvHDhFgkJd3nmmdoW82nfvgadO9dk4cJTRjt6bNlymQsXbvHf/7agbFk7s+n9/Mpb3Jmjdm03oqJSiY/PQKORRmOUhWnduprRZB4/v/J88EE7NBpJ69bV9A7Rc/Nz+enkT7St3hZ7G3tWh622eI8PSufONalevRxl66RiY2VD77rK8IApwfnHpT+oXLYyLzVW1ghbGueMS4/j5t2besFZ2115RldTr+odE+hMtaBMEIpNj+VGxo0ieR2/cZxmXs30m137lfdDIPTjnF8d+Yo8TR6vNHnFKF1gpUBO3jhJcGwwbaoVDNPYWtsys9NMDkQfoO2vbTmXeI5tkdsY2mAo1lYFpv7PunzGdz2+4/D1wzSY24Avj3xZrHlaxTRPleAUQlgDPwI9AT9guBDC1PLlVVLKAO3fQm1ad+AjoAXQHPhICGF6ul8J0HVmKirw8O0hNjGJbfI77jvcpkqTTGrWdGPwYD8uX07m/PlbAOzaFYWtrbXR8gdzTJ7cGhsbK/7v/44gpSQ9PZvvvz+Ov39FevWyvLayOGrXdicmJk3vQcicxmmO3r3r8dVXzxi5udt8eTM3Mm7wTut36FW3F2surrG4XKM0pN1P46N9H5GZm4mXlzPr1w8lNucyPp4++Hgq4/eFNaz7effZHrmdvvX7Us6+HFWcq1jUOHXjmzrB6VLGhfKO5RWNM62oxjnQdyBWwopvjn5jlE9mbibnE8/rxzcBHGwdqONehwu3LpBwN4EfQ3/khUYv6MuuI7BSIJEpkaRlp9G2uvEH3qtNX2XT8E1cSblC0IIgcvJzeL7R80ZxrK2sGd9iPBdfv0i32t2Yvn+6ScGuUjxPleBEEXhXpJRRUsocYCXQr4RpuwO7pJQpUspUYBfQ40EKYW9vT3Jysio8VQBFaCYnJ2Nvb//Aeby58n1yHJQJNxFJimelHj3q4ORkx5o1YWg0kt27FTOtOW1x7cW1eu2mQgUnXn21KcHBsezbF838+SdIS7vPe++1LbWjhMLUquVGfr6GQ4euA6UXnFJK4twPc8uqYHLZ3NC5VC9Xnd51ezO0wVAS7iZw6PqhhyqnjuXnlzPj4AyWnFmiP3c28SyNKzbGyc6JCk4Vimice6/t5V7uPfr7KN6SfMv76p+LKULiQrC3sadxxQJnI3Xc6+hNtS5lXHAp42IUNrzhcOadmEdSZpL+/I4rO8iX+UaCE6BRxUacv3We/zv8f+Tk5/C/9v8rUoYmlZvofxtqnDr61OvDsVeOUdutNo0qNKKpV1OT91KtXDU2DtvI+dfOU8O1+I80laI8bes4qwCGDifjUDTIwgwSQrQHLgMTpZSxZtJWMXURIcRYYCxA9erVi4RXrVqVuLi4f4XLNZWSYW9vT9WqVR8o7ZXkK2xJWUytrPYkuJ7Qd9COjrb06VOX9esj6Ny5Jrdu3eOtt5qbzCM+PZ4ha4bwdqu3mdVNcdQ/dGgDtmy5zOefHyYt7T6DB/tRr96DL+fQoZtZe/BgDG5uDri4mB5vNUWeJo9xm8fxy5lfsLWy5dPOn/Js/WfZc20Pn3b+FGsra3rX7Y2jrSOrw1bT0btjifOWUnIv9x5l7coand8dpWxC8NPJn3i16auk3k8lLj0O/4rKspBabrWKrOX8I+IPnO2c6eStrNX09fRl8ZnFSCmLfHjcz7vPqrBVtK3eFjvrgo+a2u61ORB9gHL25YzMtDqmtpvK8vPL+S7kO2Z2nklSZhKvb32dhhUaGo1fAjQs35CNERuZd2IeL/u/TF2PolYD3czaSmUrUcutlsk68vH04cLrF8jOyy72A0pnblYpPU+b4CwJm4EVUspsIcQ4YAnQuTQZSCkXAAsAmjZtWkSttLW1NfJAo6LyMIxZ+yZorPmwxUy+T5pARHKBZjN4sLK8ZObMg9jZWdOhg7fJPHTLEy7eLvAUZW1txfvvt2X06E2UK1eGV181rWGUFm9vV6ysBOnp2UaTe4rjXs49hq4dytbIrbzX5j0iUyJ5d/e7fH74c2ytbPlP4H8AcLJzok+9Pqy9uJY5PedgY1Wybuj749/z4d4PuTr+qn4hf74mn73X9uLu4M7ZxLOcuHGCuznKzi06wVnTtaaRj1mN1LDp8iZ61OlBGRvlo8DH04eMnAziM+Kp6mL8gfTzyZ+Jz4hn6YClRufruNVh+bnllLMvVyQNKGOXg3wHMef4HN5u/TZjN48l9X4qO1/cqb+ujkYVG6GRGvJlPh+2/9Dk/VdxrkLlspVpV6OdRaFoY2WDjd3fsWv/+/C0mWrjAcNPt6rac3qklMlSymzt4UIgqKRpVVT+anZH7WZ/wk6qX+vL0J6tqO9Zn0tJBc72a9Z0o1kzL9LTs2nTpppZj0C6XTIKj8M1alSRGTM6MmtWt1Jphpaws7PGtVYmV4N+Ir36SfI0xe82k6/Jp/uy7my/sp35vefzRdcvWDtkLT8/+zPZ+dkMbzScimULhPDQBkO5nXmbA9El2yUnNz+XL498SUZOBuvD1+vPn7x5krTsND7v8jmOto4sOLlAv35S54igllstrqdd19/H+cTzJNxN0E8cAkXjhKITh
58TIJx3gcZzF7Y28N57u6isrKYwbjsjo0YyvPPw8x5384Cb+a8v/4v3d7/Pku1LmJI6hZjwmGYurTHGBDd/NtX+xOd7JDASWA9c58drXnZUlbff3kHa8DheK9jEY2Meq/PYa7pfQ/vo9jy+4nGOFh/l9rTbm7GkxhhjwL+TvN/k87kBGAQU+ut6l6P8/FKeffYLDhw4Sa8Jp6nSKq7tcW2dxztCHHyz/zfJOZFDZGhkjQWrjTHGNA9/jqo91yFgQDNeL6AUnSki8+tMAE6fLueXv/yUadNeY8GCbUyc2JtTiTsId4QzuvvoC57n5v43AzCpzyTaRFg3sjHGNDd/9nH+FfDMERcCDMM1g1BQ+tu6v/Hkp0+y68Fd/GtJEe+/v5vbb0/jjjsG061bHOkZj3JVt6su+mrJdT2vY0rqFB4e+XAzldwYY4wvf/ZxZvp8rwQWqOpnfrxeQPv69NdUazXPffkcumUSvXvH89OfXgO4aqMbj27k8XGPX/Q8EaERvH/H+/4urjHGmDr4M3AuBs6oahWAiDhEJFpVSy+USUQmAc8BDuAfqvr7c/bfDzwAVAHFwH2qmi0iPYDtwE73oV+o6v1NeD+NUlBWAMD8jfMZur0fU687u0TYqv2rqNbqC/ZvGmOMCQx+nTkI8G13jAL+70IZRMQBzAUmA2nAbBFJO+ew11R1sKoOA54B/uyzL1dVh7k/ARM0AQpKC2gf3Z4SZwn7Ez9h8OAO3n0r9q4gMjSSq7pd1YIlNMYYcyn8GTgjVbXYs+H+Hn2RPCOBHFXdo6oVwEJghu8BqnrKZzOGs/2oAa2grIBR3UYxJPYqjvdcQf+BZydlX7FvBaO7jyYi9PzrbRpjjAkc/gycJSIywrMhIlcAZRfJ0xU46LN9yJ1Wg4g8ICK5uGqcvqNkeorIRhFZKSJjG170pldQWkBiVCJDSm7GGVXEl8UfedM3H9tszbTGGHOZ8Gcf5w+BN0Tka1zz1HYCZjbFiVV1LjBXRO4A/h9wN3AESFbVAneQfltEBp5TQwVARO4D7gNITk5uiiJdVEGZK3Ae2JZCfP/uPLHiCRZuW8jGoxsBLHAaY8xlwp9z1X4lIv2Bfu6knarqvEi2w0B3n+0U1eeqAAAS3UlEQVRu7rS6LASed1/Pu3SZqq5310j7UnN0r6dsGUAGQHp6ut+bes9UnqHUWUqMI469e05y29j7efnUr4gNj+XaHtcyqusoru5+tb+LYYwxpgn48z3OB4BXVXWbezteRGar6t8vkO0rIFVEeuIKmLOAO845b6qq7nZvTgV2u9PbAydUtUpEegGpwJ4mvakGOlF2AoAzJ8IAuHfU3cy78mfYimvGGHP58Wcf5/dUtcizoaqFwPculEFVK4EHgQ9xvVqySFWzRORpEZnuPuxBEckSkU24li27250+DtjiTl8M3K+qJ5r2lhqmoNT1KkrRkRBCQoRBgzpY0DTGmMuUP/s4HSIiqqrgfdUk/CJ5UNVlwLJz0p7w+f5IHfmWAEsaVWI/8bzDeWxvNb17JxAdHdbCJTLGGNNQ/qxxLgdeF5FviMg3gAXAB368XsDy1DgP7a5kyJAOFznaGGNMIPNnjfNRXCNXPRMRbME1sjboeGqczpMRDBnSsYVLY4wxpjH8uaxYNfAlsA/XxAbX4eq3DDqeGmdoRQyDB1vgNMaYy1mT1zhFpC8w2/3JB14HUNWgfVGxoKyAUI0gIS6O7t3jWro4xhhjGsEfTbU7gNXANFXNARCR//DDdS4bBWUFhDpjGDask42mNcaYy5w/mmpvwTWLzwoRecE9MCioo8WRouNIWTTDhgVlF68xxrQqTR44VfVtVZ0F9AdW4Jp6r4OIPC8iE5v6epeDQwVHCa2IscBpjDGtgD8HB5Wo6muqehOuqfM24hppG3TyigsIr2pD//5JLV0UY4wxjeTP9zi9VLVQVTNU9RvNcb1Ac9JZSMc27QkNbZa/bmOMMX5kP8mbQEVVBWXO86+YVlJSQXlIMckdrJnWGGNaAwucTeCepfcw5bUp5923duNuCKmmb7duzVwqY4wx/mCBswnsLdzLp/s+5eDJg7X2fbHZtZDLwJ4pzV0sY4wxfmCBswmUOEsAeHvH27X2bdjpWtmsS7zNGGSMMa2BBc4mUFxRDMBbO96qkV5ZWc3O/a5aaGJUYrOXyxhjTNOzwNkEPIFz5f6V5Jfme9N37synlJMAJEZb4DTGmNbAAmcTKK4oZnzKeKq1mqU7l3rT168/QmW4qxnXapzGGNM6WOBsJFWlpKKEscljSW6b7G2uzc09wT/+sYHEbiAI7SLbtXBJjTHGNAULnI1UVlmGorSJaMMt/W/ho9yP2HfkGI88spzo6DBGXNOW+Kh4HCGOli6qMcaYJmCBs5E8/Zux4bHcMuAWKqoquOtXf6So6AzPPnsjZXLammmNMaYVCbjAKSKTRGSniOSIyGPn2X+/iGwVkU0iskZE0nz2/cydb6eI3Ngc5fUEzpiwGEZ3H02stGNb5Up+9atrGTCgPQWlBTYwyBhjWpGACpwi4gDmApOBNGC2b2B0e01VB6vqMOAZ4M/uvGnALGAgMAn4u/t8flVS4Rr8ExseiyPEQVLBCIo7ZzFmnGumoIKyAqtxGmNMKxJQgRMYCeSo6h5VrQAWAjN8D1DVUz6bMYC6v88AFqpquaruBXLc5/Mr36baU6fKCcntizOkjM8Pfg5gNU5jjGllAi1wdgV856075E6rQUQeEJFcXDXOh+uT153/PhHJFJHMvLy8RhXYN3Bu23acuPz+OCSUD3I+AFw1zoTIhEZdwxhjTOAItMB5SVR1rqr2xrW+5/9rQP4MVU1X1fT27ds3qizePs7wGLZuPUZYdTTXdLuGZbuXUVFVQXFFsdU4jTGmFQm0wHkY6O6z3c2dVpeFwDcbmLdJ+NY4t2w5Rp8+CUztN4Wtx7ey5dgWwCY/MMaY1iTQAudXQKqI9BSRcFyDfZb6HiAiqT6bU4Hd7u9LgVkiEiEiPYFUYJ2/C+yZ4D06NIZt2/IYPLgDU1JdS4y9uuVVwKbbM8aY1iS0pQvgS1UrReRB4EPAAcxX1SwReRrIVNWlwIMicj3gBAqBu915s0RkEZANVAIPqGqVv8vsqXHmHXZSUlLBkCEdGdg+lW5x3ViwbQFgNU5jjGlNAipwAqjqMmDZOWlP+Hx/5AJ5fwP8xn+lq80TOHdnnwZg8OCOiAiT+0zmhQ0vAFbjNMaY1iTQmmovO8UVxUSHRZO1LY927SLp3j0OgMl9JnuPsRqnMca0HhY4G6mkosQ9MOg4Q4a4apsA3+j1DcJCwgCrcRpjTGtigbORip3FRIfGsH9/EYMHd/Cmx0XEMSZ5DJGhkUSHRbdgCY0xxjSlgOvjvNwUVxTjqIoAYMiQjjX2/Xzsz1l7cG1LFMsYY4yfWOBspOKKYqrPhBESIqSl1ZxM4fpe13N9r+tbqGTGGGP8wZpqG6mkooTKslB6904gKiqspYtjjDHGzyxwNlJxRTFSEU5SUlRLF8UYY0wzsMDZSMUVxWh5GG3aRLR0UYwxxjQDC5yN
VFxRTHV5OG3ahLd0UYwxxjQDC5yNVOIsobos1GqcxhgTJCxwNkJVdRWlzlJwWo3TGGOChQXORih1lgIQUhlhNU5jjAkSFjgbwTPBu6MqwmqcxhgTJCxwNoJnLU6rcRpjTPCwwNkIZ2uckVbjNMaYIGGBsxE8gTOkMtxqnMYYEyQscDaCN3BaH6cxxgQNC5yNUFLh6uN0VEZajdMYY4KEBc5G8NQ4o0NjCA21v0pjjAkGAffTXkQmichOEckRkcfOs/9HIpItIltE5BMRSfHZVyUim9yfpf4uqydwxkXF+vtSxhhjAkRArccpIg5gLnADcAj4SkSWqmq2z2EbgXRVLRWRHwDPADPd+8pUdVhzldcTONtGxTXXJY0xxrSwQKtxjgRyVHWPqlYAC4EZvgeo6gpVLXVvfgF0a+YyepU4S0CFdrFW4zTGmGARaIGzK3DQZ/uQO60u9wAf+GxHikimiHwhIt+sK5OI3Oc+LjMvL6/BhS2uKCa0OpK2cZENPocxxpjLS0A11daHiNwJpAPjfZJTVPWwiPQC/iUiW1U199y8qpoBZACkp6drQ8tQXFGMo9pmDTLGmGASaDXOw0B3n+1u7rQaROR64BfAdFUt96Sr6mH3n3uAT4Hh/ixscUUxIc4IYmPtHU5jjAkWgRY4vwJSRaSniIQDs4Aao2NFZDgwD1fQPO6THi8iEe7vScA1gO+goiZ3urwYsSXFjDEmqARUU62qVorIg8CHgAOYr6pZIvI0kKmqS4E/ArHAGyICcEBVpwMDgHkiUo3rF4LfnzMat8mdLj9NSGUEcXHWVGuMMcEioAIngKouA5adk/aEz/fr68j3OTDYv6Wr6VTZaUKqbJ5aY4wJJoHWVHtZOV1eTEhlpPVxGmNMELHA2QjFzmJbxNoYY4KMBc5GKHWWElIZbn2cxhgTRCxwNkJZVYl7STELnMYYEywscDaQs8qJUytwWB+nMcYEFQucDVTidK3FGVJlEyAYY0wwscDZQJ5FrKMc0YSESAuXxhhjTHOxwNlAniXFYsNtZRRjjAkmFjgbyAKnMcYEJwucDeQJnHGRFjiNMSaYWOBsIM/goLZRcS1cEmOMMc3JAmcDeWqc7WLatHBJjDHGNCcLnA3kCZzxMW1buCTGGGOakwXOBjp15jQAiW0scBpjTDCxwNlAhcWnAEhs066FS2KMMaY5WeBsoBMlJ6E6hIQ4G1VrjDHBxAJnAxWVnsJRFUFcXGRLF8UYY0wzssDZQCfLThFSaWtxGmNMsAm4wCkik0Rkp4jkiMhj59n/IxHJFpEtIvKJiKT47LtbRHa7P3f7s5ynzxTbkmLGGBOEAipwiogDmAtMBtKA2SKSds5hG4F0VR0CLAaecedNAJ4ERgEjgSdFJN5fZT1dXozDapzGGBN0Aipw4gp4Oaq6R1UrgIXADN8DVHWFqpa6N78Aurm/3wh8rKonVLUQ+BiY5K+CljitxmmMMcEo0AJnV+Cgz/Yhd1pd7gE+qG9eEblPRDJFJDMvL69BBS2tLMFRFUFUVGiD8htjjLk8XbY/9UXkTiAdGF/fvKqaAWQApKena0OuPzXkYb46fBQRW4vTGGOCSaDVOA8D3X22u7nTahCR64FfANNVtbw+eZtKQkk/ujLAX6c3xhgToAItcH4FpIpITxEJB2YBS30PEJHhwDxcQfO4z64PgYkiEu8eFDTRneYXp0+XW/+mMcYEoYBqqlXVShF5EFfAcwDzVTVLRJ4GMlV1KfBHIBZ4w91MekBVp6vqCRH5Fa7gC/C0qp7wV1lPn66wEbXGGBOEAipwAqjqMmDZOWlP+Hy//gJ55wPz/Ve6s4YP72Q1TmOMCUIBFzgvFw89NKqli2CMMaYFBFofpzHGGBPQLHAaY4wx9WCB0xhjjKkHC5zGGGNMPVjgNMYYY+rBAqcxxhhTDxY4jTHGmHqwwGmMMcbUg6g2aHGQVkNE8oD99ciSBOT7qTiBKhjvGYLzvoPxniE477ux95yiqu2bqjCXk6APnPUlIpmqmt7S5WhOwXjPEJz3HYz3DMF538F4z03FmmqNMcaYerDAaYwxxtSDBc76y2jpArSAYLxnCM77DsZ7huC872C85yZhfZzGGGNMPViN0xhjjKkHC5zGGGNMPVjgvEQiMklEdopIjog81tLl8RcR6S4iK0QkW0SyROQRd3qCiHwsIrvdf8a3dFmbmog4RGSjiLzn3u4pIl+6n/nrIhLe0mVsaiLSTkQWi8gOEdkuIle39mctIv/h/re9TUQWiEhka3zWIjJfRI6LyDaftPM+W3H5i/v+t4jIiJYreeCzwHkJRMQBzAUmA2nAbBFJa9lS+U0l8J+qmgZcBTzgvtfHgE9UNRX4xL3d2jwCbPfZ/gPwrKr2AQqBe1qkVP71HLBcVfsDQ3Hdf6t91iLSFXgYSFfVQYADmEXrfNb/A0w6J62uZzsZSHV/7gOeb6YyXpYscF6akUCOqu5R1QpgITCjhcvkF6p6RFU3uL+fxvWDtCuu+33JfdhLwDdbpoT+ISLdgKnAP9zbAlwHLHYf0hrvuS0wDngRQFUrVLWIVv6sgVAgSkRCgWjgCK3wWavqKuDEOcl1PdsZwP+qyxdAOxHp3DwlvfxY4Lw0XYGDPtuH3Gmtmoj0AIYDXwIdVfWIe9dRoGMLFctf/gv4KVDt3k4EilS10r3dGp95TyAP+Ke7ifofIhJDK37WqnoY+BNwAFfAPAmsp/U/a4+6nm1Q/oxrKAuc5rxEJBZYAvxQVU/57lPXO0yt5j0mEZkGHFfV9S1dlmYWCowAnlfV4UAJ5zTLtsJnHY+rdtUT6ALEULs5Myi0tmfbnCxwXprDQHef7W7utFZJRMJwBc1XVfVNd/IxT9ON+8/jLVU+P7gGmC4i+3A1w1+Hq++vnbs5D1rnMz8EHFLVL93bi3EF0tb8rK8H9qpqnqo6gTdxPf/W/qw96nq2QfUzrrEscF6ar4BU98i7cFyDCZa2cJn8wt239yKwXVX/7LNrKXC3+/vdwDvNXTZ/UdWfqWo3Ve2B69n+S1W/DawAbnMf1qruGUBVjwIHRaSfO+kbQDat+FnjaqK9SkSi3f/WPffcqp+1j7qe7VLgLvfo2quAkz5NuuYcNnPQJRKRKbj6wRzAfFX9TQsXyS9EZAywGtjK2f6+n+Pq51wEJONahu1bqnruwIPLnohMAH6sqtNEpBeuGmgCsBG4U1XLW7J8TU1EhuEaEBUO7AHm4PqFutU+axF5CpiJawT5RuBeXP15repZi8gCYAKu5cOOAU8Cb3OeZ+v+JeJvuJqtS4E5qprZEuW+HFjgNMYYY+rBmmqNMcaYerDAaYwxxtSDBU5jjDGmHixwGmOMMfVggdMYY4ypBwucxviBiFSJyCafT5NNlC4iPXxXvDDGNK/Qix9ijGmAMlUd1tKFMMY0PatxGtOMRGSfiDwjIltFZJ2I9HGn9xCRf7nXQvxERJLd6R1F5C0R2ez+jHafyiEiL7jXlfxIRKLcxz8srrVUt4jIwha6TWNaNQucxvhH1DlNtTN99p1U1cG4Zmr5L3faX4GXVHU
I8CrwF3f6X4CVqjoU1zyyWe70VGCuqg4EioBb3emPAcPd57nfXzdnTDCzmYOM8QMRKVbV2POk7wOuU9U97sn0j6pqoojkA51V1elOP6KqSSKSB3Tznf7Nvdzbx+7FiBGRR4EwVf21iCwHinFNrfa2qhb7+VaNCTpW4zSm+Wkd3+vDdx7VKs6OV5gKzMVVO/3KZ8UPY0wTscBpTPOb6fPnWvf3z3GtzALwbVwT7QN8AvwAQEQcItK2rpOKSAjQXVVXAI8CbYFatV5jTOPYb6PG+EeUiGzy2V6uqp5XUuJFZAuuWuNsd9pDwD9F5CdAHq5VSgAeATJE5B5cNcsfAHUt9+QAXnEHVwH+oqpFTXZHxhjA+jiNaVbuPs50Vc1v6bIYYxrGmmqNMcaYerAapzHGGFMPVuM0xhhj6sECpzHGGFMPFjiNMcaYerDAaYwxxtSDBU5jjDGmHv4/0yXNA7INfZkAAAAASUVORK5CYII=" 1011 | } 1012 | }, 1013 | "cell_type": "markdown", 1014 | "id": "7d518945", 1015 | "metadata": {}, 1016 | "source": [ 1017 | "![augmented_image_acc_100_epochs](attachment:augmented_image_acc_100_epochs.png)" 1018 | ] 1019 | }, 1020 | { 1021 | "cell_type": "markdown", 1022 | "id": "9abf0003", 1023 | "metadata": {}, 1024 | "source": [ 1025 | "### Results\n", 1026 | "\n", 1027 | "Let's compare the accuracy of our three models." 1028 | ] 1029 | }, 1030 | { 1031 | "cell_type": "code", 1032 | "execution_count": null, 1033 | "id": "c1d71eda", 1034 | "metadata": {}, 1035 | "outputs": [], 1036 | "source": [ 1037 | "def get_model_accuracy(model, x_test, y_test):\n", 1038 | " \"\"\"\n", 1039 | " Takes a model and a test set of data.\n", 1040 | " Returns the accuracy.\n", 1041 | " \"\"\"\n", 1042 | " \n", 1043 | " score = model.evaluate(x_test, y_test, verbose=0)\n", 1044 | " \n", 1045 | " accuracy = round(score[1]*100, 1)\n", 1046 | " \n", 1047 | " return accuracy" 1048 | ] 1049 | }, 1050 | { 1051 | "cell_type": "code", 1052 | "execution_count": null, 1053 | "id": "c19457c6", 1054 | "metadata": {}, 1055 | "outputs": [], 1056 | "source": [ 1057 | "for i, mod in enumerate([convnet, my_convnet, augmented_image_model]):\n", 1058 | " acc = get_model_accuracy(mod, x_test_trans, y_test_trans)\n", 1059 | " print(f\"Model {i+1} has an accuracy of {acc}%\")" 1060 | ] 1061 | }, 1062 | { 1063 | "cell_type": "markdown", 1064 | "id": "ed590a59", 1065 | "metadata": {}, 1066 | "source": [ 1067 | "Do you feel satisfied with these results?" 1068 | ] 1069 | }, 1070 | { 1071 | "cell_type": "markdown", 1072 | "id": "bfecb6d1", 1073 | "metadata": {}, 1074 | "source": [ 1075 | "The convolutional neural networks we built are relatively small compared to those used in applications such as google photos.\n", 1076 | "\n", 1077 | "Let's compare the architecture from the models we built with a popular convolutional neural network called [VGG16](https://arxiv.org/pdf/1409.1556.pdf). 
" 1078 | ] 1079 | }, 1080 | { 1081 | "cell_type": "markdown", 1082 | "id": "0e0dd8b8", 1083 | "metadata": {}, 1084 | "source": [ 1085 | "### First Convolutional Neural Network" 1086 | ] 1087 | }, 1088 | { 1089 | "cell_type": "code", 1090 | "execution_count": null, 1091 | "id": "9e4cbd1f", 1092 | "metadata": {}, 1093 | "outputs": [], 1094 | "source": [ 1095 | "convnet.summary()" 1096 | ] 1097 | }, 1098 | { 1099 | "cell_type": "markdown", 1100 | "id": "ad5ce78f", 1101 | "metadata": {}, 1102 | "source": [ 1103 | "### Second Convolutional Neural Network" 1104 | ] 1105 | }, 1106 | { 1107 | "cell_type": "code", 1108 | "execution_count": null, 1109 | "id": "6fe73384", 1110 | "metadata": {}, 1111 | "outputs": [], 1112 | "source": [ 1113 | "my_convnet.summary()" 1114 | ] 1115 | }, 1116 | { 1117 | "cell_type": "markdown", 1118 | "id": "1fe75db2", 1119 | "metadata": {}, 1120 | "source": [ 1121 | "### VGG16" 1122 | ] 1123 | }, 1124 | { 1125 | "cell_type": "markdown", 1126 | "id": "bacbe639", 1127 | "metadata": {}, 1128 | "source": [ 1129 | "![title](https://alexisbcook.github.io/assets/vgg16_keras.png)" 1130 | ] 1131 | }, 1132 | { 1133 | "cell_type": "markdown", 1134 | "id": "7d391081", 1135 | "metadata": {}, 1136 | "source": [ 1137 | "## Hardware for Deep-Learning" 1138 | ] 1139 | }, 1140 | { 1141 | "cell_type": "markdown", 1142 | "id": "dc37b8fb", 1143 | "metadata": {}, 1144 | "source": [ 1145 | "We are highly restricted in our ability to train larger models due to our hardware. While there are many aspects about hardware that are critical to deep learning, we will focus on one major concept, the Processing Unit.\n", 1146 | "\n", 1147 | "All computers have a Central Processing Unit (CPU) that performs all of the operations your computer needs to do, including running all this python code!\n", 1148 | "\n", 1149 | "For deep-learning, CPUs are not very efficient at running the massive number operations required to train deep neural networks. That's where the Graphical Proccessing Unit (GPU) comes into the picture. \n", 1150 | "\n", 1151 | "GPUs, originally designed to render complex graphics have been repurposed for training neural networks! This key innovation came in 2007 when NVIDIA launched [CUDA](https://developer.nvidia.com/about-cuda), a programming interface for GPUs which allows highly parallelizable computations. Since many neural networks operations are parallelizable, this has become the go-to-way to train large neural network models.\n", 1152 | "\n", 1153 | "*Loosely based on: (Chollet 2018) Deep Learning with Python*" 1154 | ] 1155 | }, 1156 | { 1157 | "cell_type": "markdown", 1158 | "id": "26d4e3cf", 1159 | "metadata": {}, 1160 | "source": [ 1161 | "The code below allows you to see the available hardware (CPUs, GPUs) that can be used in tensorflow/keras." 1162 | ] 1163 | }, 1164 | { 1165 | "cell_type": "code", 1166 | "execution_count": null, 1167 | "id": "7b30621f", 1168 | "metadata": {}, 1169 | "outputs": [], 1170 | "source": [ 1171 | "from tensorflow.python.client import device_lib\n", 1172 | "\n", 1173 | "print(device_lib.list_local_devices())" 1174 | ] 1175 | }, 1176 | { 1177 | "cell_type": "markdown", 1178 | "id": "4ba109c6", 1179 | "metadata": {}, 1180 | "source": [ 1181 | "### Final words\n", 1182 | "\n", 1183 | "So are we doomed? Can only the tech giants create deep neural networks?\n", 1184 | "\n", 1185 | "Well.. sorta, but we do have alternatives, we just need to be creative! 
Here are some options to train your own deep neural networks that may aren't possible on your own machine:\n", 1186 | "1. Use other services\n", 1187 | "\n", 1188 | " - Google Colab\n", 1189 | " - Amazon Web Services (AWS) allows you to use their hardware in \"EC2 instances\" to train your models offerring a [dizzying number of options](https://aws.amazon.com/ec2/instance-types/). Note that this can be expensive and is billed by time!\n", 1190 | " \n", 1191 | "2. Transfer learning\n", 1192 | " - Why build your model from the ground up?\n", 1193 | " - Transfer learning uses an existing model with all it's tuned weights, and retrains just the final portion of the model to a new task." 1194 | ] 1195 | }, 1196 | { 1197 | "cell_type": "code", 1198 | "execution_count": null, 1199 | "id": "2f8b6541", 1200 | "metadata": {}, 1201 | "outputs": [], 1202 | "source": [] 1203 | } 1204 | ], 1205 | "metadata": { 1206 | "kernelspec": { 1207 | "display_name": "Python 3.8.9 64-bit", 1208 | "language": "python", 1209 | "name": "python3" 1210 | }, 1211 | "language_info": { 1212 | "codemirror_mode": { 1213 | "name": "ipython", 1214 | "version": 3 1215 | }, 1216 | "file_extension": ".py", 1217 | "mimetype": "text/x-python", 1218 | "name": "python", 1219 | "nbconvert_exporter": "python", 1220 | "pygments_lexer": "ipython3", 1221 | "version": "3.8.9" 1222 | }, 1223 | "vscode": { 1224 | "interpreter": { 1225 | "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" 1226 | } 1227 | } 1228 | }, 1229 | "nbformat": 4, 1230 | "nbformat_minor": 5 1231 | } 1232 | -------------------------------------------------------------------------------- /solutions/01-Vanilla-Neural-Networks_solutions.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "from tensorflow.keras.datasets import mnist\n", 10 | "from tensorflow.keras.models import Sequential\n", 11 | "from tensorflow.keras.layers import Dense, Dropout\n", 12 | "from tensorflow.keras.optimizers import RMSprop\n", 13 | "from tensorflow.keras.utils import to_categorical\n", 14 | "\n", 15 | "# Note you may get a warning about CUDA and GPU set up\n", 16 | "# You can ignore these for now\n", 17 | "import numpy as np\n", 18 | "import matplotlib.pyplot as plt" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "def get_mnist_data(subset=True):\n", 28 | " \"\"\"\n", 29 | " Returns the MNIST dataset as a tuple:\n", 30 | " (x_train, y_train, x_val, y_val, x_test, y_test)\n", 31 | " \n", 32 | " When subset=TRUE:\n", 33 | " Returns only a subset of the mnist dataset.\n", 34 | " Especially important to use if you are on datahub and only have 1-2GB of memory.\n", 35 | " \"\"\"\n", 36 | " \n", 37 | " if subset:\n", 38 | " N_TRAIN = 5000\n", 39 | " N_VALIDATION = 1000\n", 40 | " N_TEST = 1000\n", 41 | " else:\n", 42 | " N_TRAIN = 48000\n", 43 | " N_VALIDATION = 12000\n", 44 | " N_TEST = 10000\n", 45 | " \n", 46 | " (x_train_and_val, y_train_and_val), (x_test, y_test) = mnist.load_data()\n", 47 | " \n", 48 | " x_train = x_train_and_val[:N_TRAIN,:,:]\n", 49 | " y_train = y_train_and_val[:N_TRAIN]\n", 50 | " \n", 51 | " x_val = x_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION,:,:]\n", 52 | " y_val = y_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION]\n", 53 | " \n", 54 | " x_test = x_test[:N_TEST]\n", 55 | " y_test = 
y_test[:N_TEST]\n", 56 | " \n", 57 | " return x_train, y_train, x_val, y_val, x_test, y_test\n", 58 | "\n", 59 | "# Load the data\n", 60 | "# Set subset=False if you want to use the full dataset!\n", 61 | "# Note that this will require 2+ GB of memory and will make training take longer\n", 62 | "\n", 63 | "x_train, y_train, x_val, y_val, x_test, y_test = get_mnist_data(subset=True)\n", 64 | "\n", 65 | "def transform_data(xdata, ydata):\n", 66 | " \"\"\"\n", 67 | " Transforms image data:\n", 68 | " 1. Flattens pixel dimensions from 2 -> 1\n", 69 | " 2. Scales pixel values between [0,1]\n", 70 | " Transforms target data (ydata):\n", 71 | " - Formats targets as one hot encoded columns\n", 72 | " \"\"\"\n", 73 | " \n", 74 | " x = {}\n", 75 | " for name, partition in zip([\"x_train\", \"x_val\", \"x_test\"],xdata):\n", 76 | " flatten = partition.reshape((partition.shape[0], 28 * 28))\n", 77 | " scaled = flatten.astype('float32') / 255\n", 78 | " x[name] = scaled\n", 79 | " \n", 80 | " y = {}\n", 81 | " for name, partition in zip([\"y_train\", \"y_val\", \"y_test\"],ydata):\n", 82 | " y[name] = to_categorical(partition)\n", 83 | " \n", 84 | " return x['x_train'], y['y_train'], x['x_val'], y['y_val'], x['x_test'], y['y_test']\n", 85 | "\n", 86 | "\n", 87 | "x_train_trans, y_train_trans, x_val_trans, y_val_trans, x_test_trans, y_test_trans = transform_data([x_train, x_val, x_test],\n", 88 | " [y_train, y_val, y_test])" 89 | ] 90 | }, 91 | { 92 | "cell_type": "markdown", 93 | "metadata": {}, 94 | "source": [ 95 | "## Challenge 1: Understanding the Input Data\n", 96 | "\n", 97 | "1. Why do we use split our data into train/validation/test?\n", 98 | "2. What is the shape of our input data partitions?\n", 99 | "3. What is the type of the data?\n", 100 | "\n", 101 | "**BONUS:**\n", 102 | "\n", 103 | "4. What is the distribution of the target classes within the data, is it balanced?" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": null, 109 | "metadata": {}, 110 | "outputs": [], 111 | "source": [ 112 | "# 1.1 Why do we split our data into train, validation, and test sets?\n", 113 | "\n", 114 | "# We need to train our model and avoid overfitting.\n", 115 | "\n", 116 | "# We use the training set to fit the model.\n", 117 | "# We use the validation set to tune the hyperparameters of our model.\n", 118 | "# We use the holdout test set to determine our final performance (generalization). 
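# For example (a sketch, once a fitted Keras model such as `my_network` from Challenge 2 exists),
# the gap between these two scores is exactly what the validation split reveals:
#     train_acc = my_network.evaluate(x_train_trans, y_train_trans, verbose=0)[1]
#     val_acc   = my_network.evaluate(x_val_trans, y_val_trans, verbose=0)[1]
# A training accuracy far above the validation accuracy is the classic sign of overfitting.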
" 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": null, 124 | "metadata": {}, 125 | "outputs": [], 126 | "source": [ 127 | "# 1.2 What is the shape of our input data partitions?\n", 128 | "\n", 129 | "# The training set contains 5000 examples\n", 130 | "# The validation set contains 1000 examples \n", 131 | "# The test set contains 1000 examples\n", 132 | "# The X are 3-dimensional\n", 133 | "# The y are 1 dimensional" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": null, 139 | "metadata": {}, 140 | "outputs": [], 141 | "source": [ 142 | "# 1.3\n", 143 | "[type(partition) for partition in [x_train, y_train, x_val, y_val, x_test, y_test]]\n", 144 | "# All are numpy arrays" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": null, 150 | "metadata": {}, 151 | "outputs": [], 152 | "source": [ 153 | "# BONUS 1.4\n", 154 | "\n", 155 | "# Simply plot the histogram of each y in a different cell, for example\n", 156 | "# plt.hist(y_train)\n", 157 | "# plt.title('Train Class Distribution');\n", 158 | "# repeat for y_val and y_test\n", 159 | " \n", 160 | "# or make subplots\n", 161 | "def plot_target_distributions(targets, titles):\n", 162 | " \"\"\"\n", 163 | " Returns the distribution of target classes.\n", 164 | " \"\"\"\n", 165 | " \n", 166 | " fig, axes = plt.subplots(3,1, figsize = (10,10))\n", 167 | " \n", 168 | " for ax, target, title in zip(axes, targets, titles):\n", 169 | " ax.hist(target) \n", 170 | " ax.set_title(f\"{title} Class Distribution\")\n", 171 | " \n", 172 | " return plt.show()\n", 173 | "\n", 174 | "plot_target_distributions([y_train, y_val, y_test], [\"Train\", \"Validation\", \"Test\"])\n", 175 | " " 176 | ] 177 | }, 178 | { 179 | "cell_type": "markdown", 180 | "metadata": {}, 181 | "source": [ 182 | "## Challenge 2: Build your own neural network\n", 183 | "\n", 184 | "1. Build and compile your own neural network in an object called `my_network`. Feel free to choose your own:\n", 185 | " - Architecture\n", 186 | " - Activation Function\n", 187 | " - Epochs\n", 188 | " \n", 189 | "2. Train your model, saving the results to an object called `history_my_network`." 
190 | ] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "execution_count": null, 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [ 198 | "# 2.1\n", 199 | "# An example network with: 3 dense layers, each with 512 neurons and a dropout of 0.3\n", 200 | "\n", 201 | "my_network = Sequential()\n", 202 | "my_network.add(Dense(512, activation= \"relu\", input_shape=(28*28,)))\n", 203 | "my_network.add(Dropout(0.3))\n", 204 | "my_network.add(Dense(512, activation= \"relu\"))\n", 205 | "my_network.add(Dropout(0.3))\n", 206 | "my_network.add(Dense(512, activation= \"relu\"))\n", 207 | "my_network.add(Dropout(0.3))\n", 208 | "my_network.add(Dense(10, activation=\"softmax\"))\n", 209 | "\n", 210 | "my_network.compile(optimizer = 'rmsprop', \n", 211 | " loss = 'categorical_crossentropy',\n", 212 | " metrics = ['accuracy'])" 213 | ] 214 | }, 215 | { 216 | "cell_type": "code", 217 | "execution_count": null, 218 | "metadata": {}, 219 | "outputs": [], 220 | "source": [ 221 | "# 2.2\n", 222 | "# Train model for 20 epochs with batch size of 128\n", 223 | "history_my_network = my_network.fit(x_train_trans, \n", 224 | " y_train_trans, \n", 225 | " epochs=20, \n", 226 | " batch_size=128, \n", 227 | " validation_data=(x_val_trans, y_val_trans))\n" 228 | ] 229 | }, 230 | { 231 | "cell_type": "markdown", 232 | "metadata": {}, 233 | "source": [ 234 | "## Challenge 3: Evaluate your own model\n", 235 | "\n", 236 | "Use your own model from challenge 2 to evaluate its general performance.\n", 237 | "\n", 238 | "1. Visualize the training and validation accuracy over each epoch.\n", 239 | "2. Print the accuracy of your model on the test set." 240 | ] 241 | }, 242 | { 243 | "cell_type": "code", 244 | "execution_count": null, 245 | "metadata": {}, 246 | "outputs": [], 247 | "source": [ 248 | "def plot_epoch_accuracy(history_dict):\n", 249 | " \"\"\"\n", 250 | " Plots the training and validation accuracy of a neural network.\n", 251 | " \"\"\"\n", 252 | " \n", 253 | " acc = history_dict['accuracy']\n", 254 | " val_acc = history_dict['val_accuracy']\n", 255 | " epochs = range(1, len(acc) + 1)\n", 256 | " plt.plot(epochs, acc, color = 'navy', alpha = 0.8, label='Training Accuracy')\n", 257 | " plt.plot(epochs, val_acc, color = 'green', label='Validation Accuracy')\n", 258 | " plt.title('Training and validation Accuracy')\n", 259 | " plt.xlabel('Epochs')\n", 260 | " plt.ylabel('Accuracy')\n", 261 | " plt.legend()\n", 262 | " return plt.show()\n", 263 | "\n", 264 | "def get_model_accuracy(model, x_test, y_test):\n", 265 | " \"\"\"\n", 266 | " Takes a model and a test set of data.\n", 267 | " Returns the accuracy.\n", 268 | " \"\"\"\n", 269 | " \n", 270 | " score = model.evaluate(x_test, y_test, verbose=0)\n", 271 | " \n", 272 | " accuracy = round(score[1]*100, 1)\n", 273 | " \n", 274 | " return accuracy" 275 | ] 276 | }, 277 | { 278 | "cell_type": "code", 279 | "execution_count": null, 280 | "metadata": {}, 281 | "outputs": [], 282 | "source": [ 283 | "# 3.1 \n", 284 | "plot_epoch_accuracy(history_my_network.history)" 285 | ] 286 | }, 287 | { 288 | "cell_type": "code", 289 | "execution_count": null, 290 | "metadata": {}, 291 | "outputs": [], 292 | "source": [ 293 | "# 3.2\n", 294 | "get_model_accuracy(my_network, x_test_trans, y_test_trans)" 295 | ] 296 | } 297 | ], 298 | "metadata": { 299 | "kernelspec": { 300 | "display_name": "Python 3.9.7", 301 | "language": "python", 302 | "name": "python3" 303 | }, 304 | "language_info": { 305 | "codemirror_mode": { 306 | "name": "ipython", 307 | "version": 3 308 | }, 309 | 
"file_extension": ".py", 310 | "mimetype": "text/x-python", 311 | "name": "python", 312 | "nbconvert_exporter": "python", 313 | "pygments_lexer": "ipython3", 314 | "version": "3.9.7" 315 | }, 316 | "vscode": { 317 | "interpreter": { 318 | "hash": "9179d63e122556c6756b68a7ef958f74b99cc51ba44ce56116c8db8491528148" 319 | } 320 | } 321 | }, 322 | "nbformat": 4, 323 | "nbformat_minor": 2 324 | } 325 | -------------------------------------------------------------------------------- /solutions/03-Convolutional-Neural-Networks_solutions.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "id": "a2cc16a7", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "from keras import layers\n", 11 | "from keras import models\n", 12 | "from keras.datasets import cifar10\n", 13 | "from tensorflow.keras.utils import to_categorical\n", 14 | "from keras.preprocessing.image import ImageDataGenerator\n", 15 | "import numpy as np\n", 16 | "import matplotlib.pyplot as plt" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": null, 22 | "id": "3aab9dd0", 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "def load_cifar10(subset = True):\n", 27 | " \"\"\"\n", 28 | " Loads a training, validation, and test set of CIFAR10 images.\n", 29 | " \n", 30 | " When subset=TRUE:\n", 31 | " Returns only a subset of the mnist dataset.\n", 32 | " Especially important to use if you are on datahub and only have 1-2GB of memory.\n", 33 | " \"\"\"\n", 34 | " if subset:\n", 35 | " N_TRAIN = 8000\n", 36 | " N_VALIDATION = 2000\n", 37 | " N_TEST = 2000\n", 38 | " else:\n", 39 | " N_TRAIN = 40000\n", 40 | " N_VALIDATION = 10000\n", 41 | " N_TEST = 10000\n", 42 | " \n", 43 | " (x_train_and_val, y_train_and_val), (x_test, y_test) = cifar10.load_data()\n", 44 | " \n", 45 | " x_train = x_train_and_val[:N_TRAIN,:,:]\n", 46 | " y_train = y_train_and_val[:N_TRAIN]\n", 47 | " \n", 48 | " x_val = x_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION,:,:]\n", 49 | " y_val = y_train_and_val[N_TRAIN: N_TRAIN + N_VALIDATION]\n", 50 | " \n", 51 | " x_test = x_test[:N_TEST]\n", 52 | " y_test = y_test[:N_TEST]\n", 53 | " \n", 54 | " return x_train, y_train, x_val, y_val, x_test, y_test\n", 55 | "\n", 56 | "x_train, y_train, x_val, y_val, x_test, y_test = load_cifar10()\n", 57 | "\n", 58 | "def three_dim_transform(x_data, y_data):\n", 59 | " \"\"\"\n", 60 | " Transforms image data:\n", 61 | " - Scales pixel values between [0,1]\n", 62 | " Transforms target data (ydata):\n", 63 | " - Formats targets as one hot encoded columns\n", 64 | " \"\"\"\n", 65 | " \n", 66 | " x = {}\n", 67 | " for name, partition in zip([\"x_train\", \"x_val\", \"x_test\"],x_data):\n", 68 | " scaled = partition.astype('float32') / 255\n", 69 | " x[name] = scaled\n", 70 | " \n", 71 | " y = {}\n", 72 | " for name, partition in zip([\"y_train\", \"y_val\", \"y_test\"],y_data):\n", 73 | " y[name] = to_categorical(partition)\n", 74 | " \n", 75 | " return x['x_train'], y['y_train'], x['x_val'], y['y_val'], x['x_test'], y['y_test']\n", 76 | "\n", 77 | "x_train_trans, y_train_trans, x_val_trans, y_val_trans, x_test_trans, y_test_trans = three_dim_transform([x_train, x_val, x_test],\n", 78 | " [y_train, y_val, y_test])" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "id": "4baa6f63", 84 | "metadata": {}, 85 | "source": [ 86 | "## Challenge 1: Translate Classes\n", 87 | "\n", 88 | "Create a function `translate_class()` that uses 
the correct class name for the target classes (truck, horse, etc..).\n", 89 | "- Use the [keras CIFAR10 documentation](https://keras.io/api/datasets/cifar10/) as a guide to know how the classes are labeled.\n", 90 | "- The function should take a class index [0-9]\n", 91 | "- The function should return the correct corresponding CIFAR10 class category." 92 | ] 93 | }, 94 | { 95 | "cell_type": "code", 96 | "execution_count": null, 97 | "id": "4749221a", 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "def translate_class(y):\n", 102 | " \"\"\"\n", 103 | " Takes a class index [0-9] and returns the CIFAR10 class category.\n", 104 | " \"\"\"\n", 105 | " # Create a list of categories\n", 106 | " categories = [\"airplane\", \n", 107 | " \"automobile\",\n", 108 | " \"bird\",\n", 109 | " \"cat\",\n", 110 | " \"deer\",\n", 111 | " \"dog\",\n", 112 | " \"frog\",\n", 113 | " \"horse\",\n", 114 | " \"ship\",\n", 115 | " \"truck\"]\n", 116 | " \n", 117 | " return categories[y]\n", 118 | " \n", 119 | " \n", 120 | "def translate_classes_fancy(y):\n", 121 | " \"\"\"\n", 122 | " Use a key-value paired dictionary to translate target class\n", 123 | " \"\"\"\n", 124 | " # Create a list of categories\n", 125 | " categories = [\"airplane\", \n", 126 | " \"automobile\",\n", 127 | " \"bird\",\n", 128 | " \"cat\",\n", 129 | " \"deer\",\n", 130 | " \"dog\",\n", 131 | " \"frog\",\n", 132 | " \"horse\",\n", 133 | " \"ship\",\n", 134 | " \"truck\"]\n", 135 | " \n", 136 | " # Use a dictionary comprehesion to attach class number to category\n", 137 | " category_dict = {key : value for key, value in zip(list(range(10)), categories)}\n", 138 | " \n", 139 | " return category_dict[y]" 140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "id": "29c4d69c", 145 | "metadata": {}, 146 | "source": [ 147 | "## Challenge 2: Plotting Image Classes\n", 148 | "\n", 149 | "Create a new function `my_imageplotter()` that reuses code from our `plot_images()` function and incorporates the `translate_class()` function to give us the correct class titles in our images." 150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 | "id": "54b9ba8b", 156 | "metadata": {}, 157 | "outputs": [], 158 | "source": [ 159 | "def my_imageplotter(x, y, random=False):\n", 160 | " \"\"\"\n", 161 | " Plots 25 images from x data with titles set as y.\n", 162 | " Set random=True if you want random images rather than the first 25.\n", 163 | " \"\"\"\n", 164 | " \n", 165 | " if random:\n", 166 | " indices = np.random.choice(range(y.shape[0]), 25, replace=False)\n", 167 | " \n", 168 | " else:\n", 169 | " indices = np.array(range(25))\n", 170 | " \n", 171 | " fig, axes = plt.subplots(5,5, figsize = (15,15))\n", 172 | " axes = axes.ravel()\n", 173 | " \n", 174 | " for ax, index in zip(axes, indices):\n", 175 | " # NEW LINE HERE\n", 176 | " title = translate_class(y[index][0])\n", 177 | " ax.imshow(x[index])\n", 178 | " ax.set_title(f\"Class: {title}\", size=15)\n", 179 | " \n", 180 | " plt.tight_layout()\n", 181 | " \n", 182 | " return plt.show()" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "id": "be2f551f", 188 | "metadata": {}, 189 | "source": [ 190 | "## Challenge 3: Create a Convolutional Neural Network\n", 191 | "\n", 192 | "1. 
Initialize a Convolutional Neural Network called `my_convnet` with the following architecture:\n", 193 | " * Conv2D layer with 32 3x3 filters and relu activation function\n", 194 | " * Maxpooling layer 2x2\n", 195 | " * Conv2D layer with 64 3x3 filters and relu activation function\n", 196 | " * Maxpooling layer 2x2\n", 197 | " * Conv2D layer with 128 3x3 filters and relu activation function\n", 198 | " * Maxpooling layer wtih a pool size of 2x2\n", 199 | " * A flattening layer\n", 200 | " * A dense layer with 512 neurons and relu activation function\n", 201 | " * An output layer with the number of classes and softmax activation function\n", 202 | "\n", 203 | "2. Compile with the model with:\n", 204 | " * rmsprop optimizer\n", 205 | " * categorical crossentropy loss\n", 206 | " * accuracy metric\n", 207 | "\n", 208 | "3. Train the network for 20 epochs.\n", 209 | "\n", 210 | "4. Plot the training and validation accuracy through each epoch\n" 211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": null, 216 | "id": "0c88e660", 217 | "metadata": {}, 218 | "outputs": [], 219 | "source": [ 220 | "# 3.1 Initialize a Convolutional Neural Network called `my_convnet`\n", 221 | "my_convnet = models.Sequential()\n", 222 | "my_convnet.add(layers.Conv2D(32, (3,3), activation= \"relu\", input_shape = (32, 32, 3)))\n", 223 | "my_convnet.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", 224 | "my_convnet.add(layers.Conv2D(64, (3,3), activation= \"relu\"))\n", 225 | "my_convnet.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", 226 | "my_convnet.add(layers.Conv2D(128, (3,3), activation= \"relu\"))\n", 227 | "my_convnet.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", 228 | "\n", 229 | "# Flatten for dense layers\n", 230 | "my_convnet.add(layers.Flatten())\n", 231 | "my_convnet.add(layers.Dense(512, activation= \"relu\"))\n", 232 | "my_convnet.add(layers.Dense(10, activation= \"softmax\"))\n" 233 | ] 234 | }, 235 | { 236 | "cell_type": "code", 237 | "execution_count": null, 238 | "id": "40911e67", 239 | "metadata": {}, 240 | "outputs": [], 241 | "source": [ 242 | "# 3.2 Compile model\n", 243 | "my_convnet.compile(optimizer= \"rmsprop\",\n", 244 | " loss= \"categorical_crossentropy\",\n", 245 | " metrics= [\"accuracy\"])" 246 | ] 247 | }, 248 | { 249 | "cell_type": "code", 250 | "execution_count": null, 251 | "id": "37491775", 252 | "metadata": {}, 253 | "outputs": [], 254 | "source": [ 255 | "# 3.3 Train the network for 20 epochs\n", 256 | "my_convnet_history = my_convnet.fit(x_train_trans,\n", 257 | " y_train_trans,\n", 258 | " epochs=20,\n", 259 | " batch_size = 64,\n", 260 | " validation_data=(x_val_trans, y_val_trans))" 261 | ] 262 | }, 263 | { 264 | "cell_type": "code", 265 | "execution_count": null, 266 | "id": "fd1497fa", 267 | "metadata": {}, 268 | "outputs": [], 269 | "source": [ 270 | "def plot_epoch_accuracy(history_dict):\n", 271 | " \"\"\"\n", 272 | " Plots the training and validation accuracy of a neural network.\n", 273 | " \"\"\"\n", 274 | " \n", 275 | " acc = history_dict['accuracy']\n", 276 | " val_acc = history_dict['val_accuracy']\n", 277 | " epochs = range(1, len(acc) + 1)\n", 278 | " plt.plot(epochs, acc, color = 'navy', alpha = 0.8, label='Training Accuracy')\n", 279 | " plt.plot(epochs, val_acc, color = 'green', label='Validation Accuracy')\n", 280 | " plt.title('Training and validation Accuracy')\n", 281 | " plt.xlabel('Epochs')\n", 282 | " plt.ylabel('Accuracy')\n", 283 | " plt.legend()\n", 284 | " return plt.show()" 285 | ] 286 | }, 287 | { 288 | "cell_type": "code", 
289 | "execution_count": null, 290 | "id": "340f6d0b", 291 | "metadata": {}, 292 | "outputs": [], 293 | "source": [ 294 | "# 3.4 Plot accuracy through epochs\n", 295 | "plot_epoch_accuracy(my_convnet_history.history)" 296 | ] 297 | } 298 | ], 299 | "metadata": { 300 | "kernelspec": { 301 | "display_name": "Python 3.8.9 64-bit", 302 | "language": "python", 303 | "name": "python3" 304 | }, 305 | "language_info": { 306 | "codemirror_mode": { 307 | "name": "ipython", 308 | "version": 3 309 | }, 310 | "file_extension": ".py", 311 | "mimetype": "text/x-python", 312 | "name": "python", 313 | "nbconvert_exporter": "python", 314 | "pygments_lexer": "ipython3", 315 | "version": "3.8.9" 316 | }, 317 | "vscode": { 318 | "interpreter": { 319 | "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" 320 | } 321 | } 322 | }, 323 | "nbformat": 4, 324 | "nbformat_minor": 5 325 | } 326 | --------------------------------------------------------------------------------
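The closing notes in lesson 03 point to transfer learning as the practical alternative when local hardware cannot train a large network from scratch. A minimal sketch of that idea in Keras, assuming the pretrained VGG16 weights from `tensorflow.keras.applications` can be downloaded, and reusing the CIFAR10 input shape and compile settings from the notebooks above (the `transfer_model` name and the commented-out `fit` call are illustrative only):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pretrained convolutional base: ImageNet weights, with the dense "top" removed.
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False  # freeze the pretrained weights; only the new head is trained

# New classification head for the 10 CIFAR10 classes.
transfer_model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

transfer_model.compile(optimizer="rmsprop",
                       loss="categorical_crossentropy",
                       metrics=["accuracy"])

# With the CIFAR10 partitions from the 03 notebooks already transformed:
# transfer_model.fit(x_train_trans, y_train_trans, epochs=5, batch_size=64,
#                    validation_data=(x_val_trans, y_val_trans))
```

Because only the small dense head is trained, this tends to be feasible on a CPU even when training the full VGG16 stack from scratch would not be.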