├── Colab1_for_deeplearn.ipynb ├── Convolutions_Sidebar.ipynb ├── Course_1_Part_4_Lesson_2_Notebook.ipynb ├── Course_1_Part_6_Lesson_2_Notebook_ConvNet_Intro.ipynb ├── Exercise2_MNIST_Question.ipynb ├── Exercise4_Question_Happy_Sad.ipynb ├── Exercise_1_House_Prices_Question.ipynb ├── Exercise_3_Question_MNIST_ConvNet.ipynb ├── Horse_Human_150x150_Course_1_Part_8_Lesson_4_Notebook.ipynb ├── Horse_or_Human_NoValidation.ipynb ├── Horse_or_Human_Validation_Course_2_Part_2_Lesson_3_Notebook.ipynb └── README.md /Course_1_Part_6_Lesson_2_Notebook_ConvNet_Intro.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Course 1 - Part 6 - Lesson 2 - Notebook.ipynb", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "collapsed_sections": [], 10 | "include_colab_link": true 11 | }, 12 | "kernelspec": { 13 | "name": "python3", 14 | "display_name": "Python 3" 15 | }, 16 | "accelerator": "GPU" 17 | }, 18 | "cells": [ 19 | { 20 | "cell_type": "markdown", 21 | "metadata": { 22 | "id": "view-in-github", 23 | "colab_type": "text" 24 | }, 25 | "source": [ 26 | "\"Open" 27 | ] 28 | }, 29 | { 30 | "metadata": { 31 | "id": "R6gHiH-I7uFa", 32 | "colab_type": "text" 33 | }, 34 | "cell_type": "markdown", 35 | "source": [ 36 | "# Improving Computer Vision Accuracy using Convolutions\n", 37 | "\n", 38 | "In the previous lessons you saw how to do fashion recognition using a Deep Neural Network (DNN) containing three layers -- the input layer (in the shape of the data), the output layer (in the shape of the desired output) and a hidden layer. You experimented with the impact of different hidden-layer sizes, numbers of training epochs, etc. on the final accuracy.\n", 39 | "\n", 40 | "For convenience, here's the entire code again. Run it and take note of the test accuracy that is printed out at the end. 
" 41 | ] 42 | }, 43 | { 44 | "metadata": { 45 | "id": "xcsRtq9OLorS", 46 | "colab_type": "code", 47 | "outputId": "6de1cced-508d-454e-ec72-7c319a336e74", 48 | "colab": { 49 | "base_uri": "https://localhost:8080/", 50 | "height": 411 51 | } 52 | }, 53 | "cell_type": "code", 54 | "source": [ 55 | "import tensorflow as tf\n", 56 | "mnist = tf.keras.datasets.fashion_mnist\n", 57 | "(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n", 58 | "training_images=training_images / 255.0\n", 59 | "test_images=test_images / 255.0\n", 60 | "model = tf.keras.models.Sequential([\n", 61 | " tf.keras.layers.Flatten(),\n", 62 | " tf.keras.layers.Dense(128, activation=tf.nn.relu),\n", 63 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n", 64 | "])\n", 65 | "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n", 66 | "model.fit(training_images, training_labels, epochs=5)\n", 67 | "\n", 68 | "test_loss = model.evaluate(test_images, test_labels)" 69 | ], 70 | "execution_count": 0, 71 | "outputs": [ 72 | { 73 | "output_type": "stream", 74 | "text": [ 75 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n", 76 | "32768/29515 [=================================] - 0s 0us/step\n", 77 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n", 78 | "26427392/26421880 [==============================] - 1s 0us/step\n", 79 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n", 80 | "8192/5148 [===============================================] - 0s 0us/step\n", 81 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n", 82 | "4423680/4422102 [==============================] - 0s 0us/step\n", 83 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n", 84 | "Instructions for updating:\n", 85 | "Colocations handled automatically by placer.\n", 86 | "Epoch 1/5\n", 87 | "60000/60000 [==============================] - 8s 132us/sample - loss: 0.4917 - acc: 0.8275\n", 88 | "Epoch 2/5\n", 89 | "60000/60000 [==============================] - 7s 118us/sample - loss: 0.3711 - acc: 0.8664\n", 90 | "Epoch 3/5\n", 91 | "60000/60000 [==============================] - 7s 118us/sample - loss: 0.3345 - acc: 0.8769\n", 92 | "Epoch 4/5\n", 93 | "60000/60000 [==============================] - 7s 117us/sample - loss: 0.3082 - acc: 0.8869\n", 94 | "Epoch 5/5\n", 95 | "60000/60000 [==============================] - 7s 118us/sample - loss: 0.2927 - acc: 0.8926\n", 96 | "10000/10000 [==============================] - 1s 64us/sample - loss: 0.3460 - acc: 0.8743\n" 97 | ], 98 | "name": "stdout" 99 | } 100 | ] 101 | }, 102 | { 103 | "metadata": { 104 | "id": "zldEXSsF8Noz", 105 | "colab_type": "text" 106 | }, 107 | "cell_type": "markdown", 108 | "source": [ 109 | "Your accuracy is probably about 89% on training and 87% on validation...not bad...But how do you make that even better? One way is to use something called Convolutions. I'm not going to details on Convolutions here, but the ultimate concept is that they narrow down the content of the image to focus on specific, distinct, details. 
\n", 110 | "\n", 111 | "If you've ever done image processing using a filter (like this: https://en.wikipedia.org/wiki/Kernel_(image_processing)) then convolutions will look very familiar.\n", 112 | "\n", 113 | "In short, you take an array (usually 3x3 or 5x5) and pass it over the image. By changing the underlying pixels based on the formula within that matrix, you can do things like edge detection. So, for example, if you look at the above link, you'll see a 3x3 that is defined for edge detection where the middle cell is 8, and all of its neighbors are -1. In this case, for each pixel, you would multiply its value by 8, then subtract the value of each neighbor. Do this for every pixel, and you'll end up with a new image that has the edges enhanced.\n", 114 | "\n", 115 | "This is perfect for computer vision, because often it's features that can get highlighted like this that distinguish one item for another, and the amount of information needed is then much less...because you'll just train on the highlighted features.\n", 116 | "\n", 117 | "That's the concept of Convolutional Neural Networks. Add some layers to do convolution before you have the dense layers, and then the information going to the dense layers is more focussed, and possibly more accurate.\n", 118 | "\n", 119 | "Run the below code -- this is the same neural network as earlier, but this time with Convolutional layers added first. It will take longer, but look at the impact on the accuracy:" 120 | ] 121 | }, 122 | { 123 | "metadata": { 124 | "id": "C0tFgT1MMKi6", 125 | "colab_type": "code", 126 | "outputId": "181aadbb-8ab7-4750-9702-fcbf00e70db3", 127 | "colab": { 128 | "base_uri": "https://localhost:8080/", 129 | "height": 578 130 | } 131 | }, 132 | "cell_type": "code", 133 | "source": [ 134 | "import tensorflow as tf\n", 135 | "print(tf.__version__)\n", 136 | "mnist = tf.keras.datasets.fashion_mnist\n", 137 | "(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n", 138 | "training_images=training_images.reshape(60000, 28, 28, 1)\n", 139 | "training_images=training_images / 255.0\n", 140 | "test_images = test_images.reshape(10000, 28, 28, 1)\n", 141 | "test_images=test_images/255.0\n", 142 | "model = tf.keras.models.Sequential([\n", 143 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),\n", 144 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 145 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n", 146 | " tf.keras.layers.MaxPooling2D(2,2),\n", 147 | " tf.keras.layers.Flatten(),\n", 148 | " tf.keras.layers.Dense(128, activation='relu'),\n", 149 | " tf.keras.layers.Dense(10, activation='softmax')\n", 150 | "])\n", 151 | "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n", 152 | "model.summary()\n", 153 | "model.fit(training_images, training_labels, epochs=5)\n", 154 | "test_loss = model.evaluate(test_images, test_labels)\n" 155 | ], 156 | "execution_count": 0, 157 | "outputs": [ 158 | { 159 | "output_type": "stream", 160 | "text": [ 161 | "1.13.1\n", 162 | "_________________________________________________________________\n", 163 | "Layer (type) Output Shape Param # \n", 164 | "=================================================================\n", 165 | "conv2d_3 (Conv2D) (None, 26, 26, 64) 640 \n", 166 | "_________________________________________________________________\n", 167 | "max_pooling2d_3 (MaxPooling2 (None, 13, 13, 64) 0 \n", 168 | "_________________________________________________________________\n", 169 | 
"conv2d_4 (Conv2D) (None, 11, 11, 64) 36928 \n", 170 | "_________________________________________________________________\n", 171 | "max_pooling2d_4 (MaxPooling2 (None, 5, 5, 64) 0 \n", 172 | "_________________________________________________________________\n", 173 | "flatten_3 (Flatten) (None, 1600) 0 \n", 174 | "_________________________________________________________________\n", 175 | "dense_6 (Dense) (None, 128) 204928 \n", 176 | "_________________________________________________________________\n", 177 | "dense_7 (Dense) (None, 10) 1290 \n", 178 | "=================================================================\n", 179 | "Total params: 243,786\n", 180 | "Trainable params: 243,786\n", 181 | "Non-trainable params: 0\n", 182 | "_________________________________________________________________\n", 183 | "Epoch 1/5\n", 184 | "60000/60000 [==============================] - 15s 247us/sample - loss: 0.4462 - acc: 0.8374\n", 185 | "Epoch 2/5\n", 186 | "60000/60000 [==============================] - 15s 243us/sample - loss: 0.2936 - acc: 0.8937\n", 187 | "Epoch 3/5\n", 188 | "60000/60000 [==============================] - 14s 241us/sample - loss: 0.2472 - acc: 0.9090\n", 189 | "Epoch 4/5\n", 190 | "60000/60000 [==============================] - 15s 244us/sample - loss: 0.2155 - acc: 0.9198\n", 191 | "Epoch 5/5\n", 192 | "60000/60000 [==============================] - 15s 244us/sample - loss: 0.1892 - acc: 0.9298\n", 193 | "10000/10000 [==============================] - 1s 110us/sample - loss: 0.2604 - acc: 0.9028\n" 194 | ], 195 | "name": "stdout" 196 | } 197 | ] 198 | }, 199 | { 200 | "metadata": { 201 | "id": "uRLfZ0jt-fQI", 202 | "colab_type": "text" 203 | }, 204 | "cell_type": "markdown", 205 | "source": [ 206 | "It's likely gone up to about 93% on the training data and 91% on the validation data. \n", 207 | "\n", 208 | "That's significant, and a step in the right direction!\n", 209 | "\n", 210 | "Try running it for more epochs -- say about 20, and explore the results! But while the results might seem really good, the validation results may actually go down, due to something called 'overfitting' which will be discussed later. \n", 211 | "\n", 212 | "(In a nutshell, 'overfitting' occurs when the network learns the data from the training set really well, but it's too specialised to only that data, and as a result is less effective at seeing *other* data. For example, if all your life you only saw red shoes, then when you see a red shoe you would be very good at identifying it, but blue suade shoes might confuse you...and you know you should never mess with my blue suede shoes.)\n", 213 | "\n", 214 | "Then, look at the code again, and see, step by step how the Convolutions were built:" 215 | ] 216 | }, 217 | { 218 | "metadata": { 219 | "id": "RaLX5cgI_JDb", 220 | "colab_type": "text" 221 | }, 222 | "cell_type": "markdown", 223 | "source": [ 224 | "Step 1 is to gather the data. You'll notice that there's a bit of a change here in that the training data needed to be reshaped. That's because the first convolution expects a single tensor containing everything, so instead of 60,000 28x28x1 items in a list, we have a single 4D list that is 60,000x28x28x1, and the same for the test images. If you don't do this, you'll get an error when training as the Convolutions do not recognize the shape. 
\n", 225 | "\n", 226 | "\n", 227 | "\n", 228 | "```\n", 229 | "import tensorflow as tf\n", 230 | "mnist = tf.keras.datasets.fashion_mnist\n", 231 | "(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n", 232 | "training_images=training_images.reshape(60000, 28, 28, 1)\n", 233 | "training_images=training_images / 255.0\n", 234 | "test_images = test_images.reshape(10000, 28, 28, 1)\n", 235 | "test_images=test_images/255.0\n", 236 | "```\n", 237 | "\n" 238 | ] 239 | }, 240 | { 241 | "metadata": { 242 | "id": "SS_W_INc_kJQ", 243 | "colab_type": "text" 244 | }, 245 | "cell_type": "markdown", 246 | "source": [ 247 | "Next is to define your model. Now instead of the input layer at the top, you're going to add a Convolution. The parameters are:\n", 248 | "\n", 249 | "1. The number of convolutions you want to generate. Purely arbitrary, but good to start with something in the order of 32\n", 250 | "2. The size of the Convolution, in this case a 3x3 grid\n", 251 | "3. The activation function to use -- in this case we'll use relu, which you might recall is the equivalent of returning x when x>0, else returning 0\n", 252 | "4. In the first layer, the shape of the input data.\n", 253 | "\n", 254 | "You'll follow the Convolution with a MaxPooling layer which is then designed to compress the image, while maintaining the content of the features that were highlighted by the convlution. By specifying (2,2) for the MaxPooling, the effect is to quarter the size of the image. Without going into too much detail here, the idea is that it creates a 2x2 array of pixels, and picks the biggest one, thus turning 4 pixels into 1. It repeats this across the image, and in so doing halves the number of horizontal, and halves the number of vertical pixels, effectively reducing the image by 25%.\n", 255 | "\n", 256 | "You can call model.summary() to see the size and shape of the network, and you'll notice that after every MaxPooling layer, the image size is reduced in this way. \n", 257 | "\n", 258 | "\n", 259 | "```\n", 260 | "model = tf.keras.models.Sequential([\n", 261 | " tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),\n", 262 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 263 | "```\n", 264 | "\n" 265 | ] 266 | }, 267 | { 268 | "metadata": { 269 | "id": "RMorM6daADjA", 270 | "colab_type": "text" 271 | }, 272 | "cell_type": "markdown", 273 | "source": [ 274 | "Add another convolution\n", 275 | "\n", 276 | "\n", 277 | "\n", 278 | "```\n", 279 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n", 280 | " tf.keras.layers.MaxPooling2D(2,2)\n", 281 | "```\n", 282 | "\n" 283 | ] 284 | }, 285 | { 286 | "metadata": { 287 | "colab_type": "text", 288 | "id": "b1-x-kZF4_tC" 289 | }, 290 | "cell_type": "markdown", 291 | "source": [ 292 | "Now flatten the output. 
After this, you'll just have the same DNN structure as the non-convolutional version\n", 293 | "\n", 294 | "```\n", 295 | " tf.keras.layers.Flatten(),\n", 296 | "```\n", 297 | "\n" 298 | ] 299 | }, 300 | { 301 | "metadata": { 302 | "id": "qPtqR23uASjX", 303 | "colab_type": "text" 304 | }, 305 | "cell_type": "markdown", 306 | "source": [ 307 | "The same Dense layer with 128 neurons, and output layer with 10 neurons, as in the pre-convolution example:\n", 308 | "\n", 309 | "\n", 310 | "\n", 311 | "```\n", 312 | " tf.keras.layers.Dense(128, activation='relu'),\n", 313 | " tf.keras.layers.Dense(10, activation='softmax')\n", 314 | "])\n", 315 | "```\n", 316 | "\n" 317 | ] 318 | }, 319 | { 320 | "metadata": { 321 | "id": "C0GSsjUhAaSj", 322 | "colab_type": "text" 323 | }, 324 | "cell_type": "markdown", 325 | "source": [ 326 | "Now compile the model, call the fit method to do the training, and evaluate the loss and accuracy from the test set.\n", 327 | "\n", 328 | "\n", 329 | "\n", 330 | "```\n", 331 | "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n", 332 | "model.fit(training_images, training_labels, epochs=5)\n", 333 | "test_loss, test_acc = model.evaluate(test_images, test_labels)\n", 334 | "print(test_acc)\n", 335 | "```\n", 336 | "\n", 337 | "\n" 338 | ] 339 | }, 340 | { 341 | "metadata": { 342 | "id": "IXx_LX3SAlFs", 343 | "colab_type": "text" 344 | }, 345 | "cell_type": "markdown", 346 | "source": [ 347 | "# Visualizing the Convolutions and Pooling\n", 348 | "\n", 349 | "This code will show us the convolutions graphically. The print(test_labels[:100]) call shows us the first 100 labels in the test set, and you can see that the ones at index 0, index 23 and index 28 are all the same value (9). They're all shoes. Let's take a look at the result of running the convolution on each, and you'll begin to see common features between them emerge. Now, when the DNN is training on that data, it's working with a lot less, and it's perhaps finding a commonality between shoes based on this convolution/pooling combination."
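If you want to pick out your own trio of same-class images, a small helper like this works (a hypothetical addition, not in the original notebook; it assumes test_labels has already been loaded by the cells above):

```python
import numpy as np

# Indices of the first three test images labelled 9 (ankle boots).
shoe_indices = np.where(test_labels == 9)[0][:3]
print(shoe_indices)  # [ 0 23 28] -- the indices mentioned above
```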
350 | ] 351 | }, 352 | { 353 | "metadata": { 354 | "id": "f-6nX4QsOku6", 355 | "colab_type": "code", 356 | "outputId": "6ff9d960-aeb2-46f5-c683-c994e9a11f42", 357 | "colab": { 358 | "base_uri": "https://localhost:8080/", 359 | "height": 68 360 | } 361 | }, 362 | "cell_type": "code", 363 | "source": [ 364 | "print(test_labels[:100])" 365 | ], 366 | "execution_count": 0, 367 | "outputs": [ 368 | { 369 | "output_type": "stream", 370 | "text": [ 371 | "[9 2 1 1 6 1 4 6 5 7 4 5 7 3 4 1 2 4 8 0 2 5 7 9 1 4 6 0 9 3 8 8 3 3 8 0 7\n", 372 | " 5 7 9 6 1 3 7 6 7 2 1 2 2 4 4 5 8 2 2 8 4 8 0 7 7 8 5 1 1 2 3 9 8 7 0 2 6\n", 373 | " 2 3 1 2 8 4 1 8 5 9 5 0 3 2 0 6 5 3 6 7 1 8 0 1 4 2]\n" 374 | ], 375 | "name": "stdout" 376 | } 377 | ] 378 | }, 379 | { 380 | "metadata": { 381 | "id": "9FGsHhv6JvDx", 382 | "colab_type": "code", 383 | "outputId": "312fdb5d-b3df-49a0-c8a8-6cc52d0bd324", 384 | "colab": { 385 | "base_uri": "https://localhost:8080/", 386 | "height": 349 387 | } 388 | }, 389 | "cell_type": "code", 390 | "source": [ 391 | "import matplotlib.pyplot as plt\n", 392 | "f, axarr = plt.subplots(3,4)\n", 393 | "\n", 394 | "FIRST_IMAGE=0\n", 395 | "SECOND_IMAGE=7\n", 396 | "THIRD_IMAGE=26\n", 397 | "CONVOLUTION_NUMBER = 2\n", 398 | "\n", 399 | "from tensorflow.keras import models\n", 400 | "\n", 401 | "layer_outputs = [layer.output for layer in model.layers]\n", 402 | "\n", 403 | "activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)\n", 404 | "for x in range(0,4):\n", 405 | " f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]\n", 406 | " axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')\n", 407 | " axarr[0,x].grid(False)\n", 408 | " \n", 409 | " f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]\n", 410 | " axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')\n", 411 | " axarr[1,x].grid(False)\n", 412 | " \n", 413 | " f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]\n", 414 | " axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')\n", 415 | " axarr[2,x].grid(False)" 416 | ], 417 | "execution_count": 0, 418 | "outputs": [ 419 | { 420 | "output_type": "display_data", 421 | "data": { 422 | "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAdUAAAFMCAYAAACd0CZ8AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAIABJREFUeJzt3X90VGWe5/FPUWWIkYQfMQmmRWQY\n1JYf47iLmuQQ5Fd247oHcE6bkAGbM7GJG1E4vSwTWRAdGiQhcjScacOkO+KkZad6ym7GOYee0Dgy\nzWqImJnBBXsnCdtjB4SYaBoTqAgJd//wUJ3EVFV+3Kp7b+X9+qvqPsm9X/It6nuf597nPi7DMAwB\nAIBRG2d1AAAAxAqKKgAAJqGoAgBgEooqAAAmoagCAGASiioAACbxjPQXd+3apVOnTsnlcmnLli2a\nN2+emXEBAOA4IyqqH3zwgT755BN5vV6dPXtWW7ZskdfrNTs2AAAcZURFta6uTkuXLpUkzZw5U5cu\nXVJXV5cmTJgw6M+7XCPuEGMIDKPHlP0MZ/SBnEaWWTkdKvIZWdHOp2SvnI4bl2jKfq5f7xz1Pr59\ny0oTIpE+7vrbQbeP6Jpqe3u7Jk+eHHg/ZcoUtbW1jSwy2ELf0YedO3dq586dVoeEUdq1a5fy8vKU\nn5+vjz76yOpwgDHBlBuVeNKh8wUbfYAzcZIEWGNERTU1NVXt7e2B95999plSUlJMCwrRx+hDbOEk\nCbDGiIpqVlaWamtrJUlnzpxRampq0OupcCZGH5yNkyTAGiO6kn3//fdr9uzZys/Pl8vl0vbt282O\nC1HG6ENs4yQpNjCV0f5GfHvYpk2bzIwDFsvKytK+ffuUn5/P6EMM4CQp9jCV0Rl4ohIk9R99+MEP\nfsDog8NxiSb2cJ3cGewzkQmWY/QhdsTaJZpw8xxHO38x3NzFNv02ZHv75YZRHX8o2tvbNXv27MD7\nG9fJOVmyF4oqEKM4SYptXCe3J4Z/AcABuE7uDBRVAHAArpM7A8O/AOAAsXadPFZRVAHAIbhObn8U\nVcAit92SZXUIAEzGNVUAAExCTxWA7ZmxjmYov77884juH2MHRdUhBg4VXrj8nkWRAACCoagCACIq\n0iMNwxHpUQmuqQIAYBJ6qqPgGvDnM9QTsWO1dK7r994zjuHfSBiY077Mzu/AnAJwPnqqAACYhKIK\nAIBJKKoAAJiEa6qjEMlrqAP9Uwbz6ADA7uipAgBgEooqAAAmcRlRWD7e5WKUOZIMI3rD0DeQ08iK\ndk7JZ2TxfzT2BMspPVUAAExCUQUAwCQUVQAATEJRBQDAJEO6kt3Y2Kji4mKtXbtWq1ev1oULF7R5\n82b19vYqJSVFe/bsUVxcXKRjBYARGee6JWT75JvvCtn++ZV/MTMcxLCwPdUrV65ox44dysjICGyr\nqKhQQUGBDh48qOnTp8vn80U0SAAAnCBsUY2Li1NVVZVSU1MD2+rr67VkyRJJ0qJFi1RXVxe5CBEV\n9fX1euihh7RmzRqtWbNGO3bssDokAHCcsMO/Ho9HHk//H/P7/YHh3uTkZLW1tUUmOkTVAw88oIqK\nCqvDAIBBhRvGH4pwQ/2jNeoblaLw7AgAABxhREU1ISFB3d3dkqTW1tZ+Q8NwrubmZj311FNatWqV\n3nuPRdABYLhG9ByrzMxM1dbWavny5Tpy5IgWLFhgdlyIsjvvvFPr169Xbm6uWlpa9MQTT+jIkSPc\n1Q0AwxC2qJ4+fVqlpaU6f/68PB6PamtrVV5erpKSEnm9XqWnp2vFihXRiBURlJaWpkceeUSSdMcd\nd+jWW29Va2urpk2bZnFkAOAcPFA/BpjxsO63335bbW1tKiwsVFtbmx5//HHV1tYG7amS08gabU7r\n6+u1YcMGzZo1S5J01113adu2bUF/nnxGllkP1C8rK1NDQ4N6enpUVFSknJycoD8bizm1041K7ZdP\nDro99v7qGJHFixdr06ZNeuedd3Tt2jW98MILDP06HHdzx5YTJ06oqalJXq9XHR0dWrlyZciiCmtQ\nVCFJmjBhgiorK60OA0AQ8+fP17x58yRJSUlJ8vv96u3tldvttjgy9MWzf4EYxd3cscXtdishIUGS\n5PP5lJ2dTUG1IXqqQAzibu7YdfToUfl8PlVXV1sdCgZBTxWIQTfu5na5XP3u5oazHT9+XJWVlaqq\nqlJiYqLV4WAQFFUgBr399tv68Y9/LElqa2vT559/rrS0NIujwmh0dnaqrKxM+/fv16RJk6wOB0Ew\n/AvEIO7mjj2HDx9WR0eHNm7cGNhWWlqq9PR0C6PCQMxTjQFmzYEbDnIaWdHOKfmMLP6PmsMJ81QZ\n/gUAwCQUVQAATEJRBQDAJBRVAABMEntXsgEAtrIu5WlT9vPD1gdGvY9/yvi5CZEER08VAACTUFQB\nADAJw78ALOcK81X0vZSikO0V3389ZLvnz18L2R5uSHDtmS9Dtp+7/H7Idowd9FQBADAJRRUAAJNQ\nVAEAMAlFFQAAk1BUAQAwCUUVAACTUFQBADAJ81QB6NKXW0K2j2s/E7I9vunDkO3XkyaH3v9vmkO2\nt9f+Y8j2//uPD4Zsn33v90O2Z1TPCdne/Iu6kO2/+3hGyHaMHfRUAQAwyZB6qmVlZWpoaFBPT4+K\nioo0d+5cbd68Wb29vUpJSdGePXsUFxcX6VgBALC1sEX1xIkTampqktfrVUdHh1auXKmMjAwVFBQo\nNzdXe/fulc/nU0FBQTTiBQDAtsIW1fnz52vevHmSpKSkJPn9ftXX1+vFF1+UJC1atEjV1dUU1Sgr\nvHXkSyk1NjaquLhYa9eu1erVq3XhwoUxNfIwfcLSoG2fdB2NWhybv1UctWMBiI6w11TdbrcSEhIk\nST6fT9nZ2fL7/YEv3eTkZLW1tUU2SpjmypUr2rFjhzIyMgLbKioqVFBQoIMHD2r69Ony+XwWRggA\nzjXku3+PHj0qn8+n6upq5eTkBLYbhhGRwBAZcXFxqqqqUlVVVWAbIw9A7Al3R/dQxZ95e9T7GPeb\nH5sQiWT8/d+Meh/h7vQerSEV1ePHj6uyslI/+tGPlJiYqISEBHV3dys+Pl6tra1KTU2NaJBWGjhU\nGM3hwVCSx4/sZMbj8cjj6Z92Rh4AwBxhi2pnZ6fKysp04MABTZo0SZKUmZmp2tpaLV++XEeOHNGC\nBQsiHiiig5GHsanojt+FbD/7VVLI9gvuWSHbL/pPhWzv6e0O2S79enTtvwzdPG2CO2T7/a7CkO1/\n1xl6vVbDnI4aHCBsUT18+LA6Ojq0cePGwLbdu3dr69at8nq9Sk9P14oVKyIaJCJrLI08AEAkhS2q\neXl5ysvL+8b2119/PSIB2Y1dhnsHKjv/w8DrUlWMal9jbeTBLjntm8OBRptTANbgMYVjzOnTp1Va\nWqrz58/L4/GotrZW5eXlKikpYeQBAEaJojrGzJkzRzU1
Nd/YPlZGHgAn6+7u1qOPPqri4mI99thj\nVoeDQfDsXwBwiNdee00TJ060OgyEQFEFAAc4e/asmpub9fDDD1sdCkKgqAKAA5SWlqqkpMTqMBAG\n11SBGDDa5zn/ze+C34k8FrR0hV6vtUWh2yPt0KFDuu+++zRt2jRL40B4FFXA4UI9z5mVpGLDsWPH\n1NLSomPHjunixYuKi4vT1KlTlZmZaXVoGIDhX8DhbjzPue9DO+rr67VkyRJJXz/Pua6uzqrwYIJX\nXnlFb731ln7605/qO9/5joqLiymoNkVPFXA4nucM2AdFFYhxPM85tjzzzDNWh4AQGP4FYtCN5zlL\n4nnOQBRRVIEYdON5zpLGxPOcAbtg+BdwOJ7nDNiHy+CCCwAApmD4FwAAk1BUAQAwCUUVAACTUFQB\nADAJRRUAAJNQVAEAMEnU5qnu2rVLp06dksvl0pYtWzRv3rxoHdo2ysrK1NDQoJ6eHhUVFWnu3LnD\nWp7Ljsjr6JddsxtyOnx2/wyEyun777+vvXv3yu12Kzs7W08//bRlcd4w8LsyJycn0LZ48WJNnTpV\nbrdbklReXq60tDSrQv0mIwrq6+uNdevWGYZhGM3Nzcbjjz8ejcPaSl1dnfHkk08ahmEYX3zxhbFw\n4UKjpKTEOHz4sGEYhvHyyy8bb775ppUhDht5NYzLly8bq1evNrZu3WrU1NQYhmE4Oq/kdPjs/hkI\nl9Pc3Fzj008/NXp7e41Vq1YZTU1NVoQZMNh3ZV+LFi0yurq6LIhsaKIy/FtXV6elS5dKkmbOnKlL\nly6pq6srGoe2jfnz5+vVV1+VJCUlJcnv9zt+eS7yGnvLrpHT4bP7ZyBUTltaWjRx4kTddtttGjdu\nnBYuXGj553Ww78re3l5LYxqOqBTV9vZ2TZ48OfB+ypQpY24pKrfbrYSEBEmSz+dTdna245fnIq9f\nL7sWHx/fb5uT80pOh8/un4FQOW1ra9OUKVMGbbPKYN+VN4Z6b9i+fbtWrVql8vJy263CZMmNSnb7\nI0TT0aNH5fP59Pzzz/fbHgt/k1j4N5jN6X8Tp8dvB3b7G9otnmCCfVc+++yzeu6551RTU6OmpqbA\nwhF2EZWimpqaqvb29sD7zz77TCkpKdE4tK0cP35clZWVqqqqUmJiouOX5yKvg3NyXsmpOez0GQiV\n04FtVsd6w8Dvyr5WrFih5ORkeTweZWdnq7Gx0aIoBxeVopqVlRU4mzhz5oxSU1M1YcKEaBzaNjo7\nO1VWVqb9+/dr0qRJkpy/PBd5HZyT80pOzWGnz0ConN5+++3q6urSuXPn1NPTo3fffVdZWVmWxSoN\n/l3Zt62wsFBXr16VJJ08eVKzZs2yIsygorZKTXl5uT788EO5XC5t375d99xzTzQOaxter1f79u3T\njBkzAtt2796trVu36quvvlJ6erpeeukl3XTTTRZGOXxjPa8Dl11LS0sLLLvm1LyO9ZwOlxM+AwNz\n+vHHHysxMVHLli3TyZMnVV5eLknKyclRYWGhZXFKg39XPvjgg7r77ru1bNkyvfHGGzp06JDGjx+v\ne++9V9u2bZPL5bIw4v5Y+g0AAJOM+OEPTBAHAKC/ERXVDz74QJ988om8Xq/Onj2rLVu2yOv1Bv15\nlytqD27STZ7f31RxrWdsTAUwjB5T9jOcEyWzcxrnmRq0zVDwOWpm53hZwrqgbb+88lemHisUs3I6\nVNH8PzqYbdP+W8j2HS2vRSmSyIh2PiWp5/X48D80BDf92ehjD5ffobLT5yBYTkd0oxITxGNP3xOl\nnTt3aufOnVaHhFHatWuX8vLylJ+fr48++sjqcIAxYURFlQnisYcTpdjCSRJgDVPGfKJ9r9PAocKr\nPRcDr60c8u07dBjNoUIztLe3a/bs2YH3N06UmE7hTMFOksgnEFkj6qkyQTz2cVO4szGaBFhjREWV\nCeKxhxOl2MZJEhAdIxr+vf/++zV79mzl5+cHJhPD2bKysrRv3z7l5+dzohQDOEmKTUxltL8RX1Pd\ntGmTmXGENfHmewOvL/k/juqxh8pp11H7itaJ0mNJg99a/7Mv7XGrvJNz2BcnSbFnuFMZYQ1rJ6fB\nVqJ9ooTIGe5J0rXq0F8FZsxVDMVO8w/tipvPnIGiCsQoTpJiC3foO4NjiuqSmxYGXv/MpsO/ABAt\n3HxmT5YsUg4AGB5uPnMGiioAOABTGZ3BMcO/ADCWMZXRGRxTVO0y5cJs2Tf/fkHgX/l/bGEk0WF2\nHm+f8HDQtnNdx0w9Vih989jXWMgpooebz+yP4V8AAEzimJ4qgMhpOLA0ZPu4ce+FbL9+vdPMcKLu\nT5KKQ7a/9eUPoxQJnG7MFtWBw4bRHCrs6/2rhyw5LgDAfGO2qAJALAs3+jBUPddXjXofnnHfNSES\nc/zsvvyI7p9rqgAAmMTWPdWv3vt24PX4rF+bum+rhnsH6un93OoQoqpvTvsaaX4fGDf4/iTpnI6N\naJ8jwV2+ACR6qgAAmIaiCgCASSiqAACYxNbXVFsrxkds3wMXy47VJzYBQ/HQr/5hVL/fc/2NkO2v\nzgo9z/W/n7V2cfg//YMvQra/9a9RCgSOR08VAACTUFQBADCJrYd/c/9+Zp935o6/JHo4n7DCo8sW\nBGkZ2ZSakeQxzjM1aNvVnosjigMAJHqqAACYhqIKAIBJbD38m67kwOszJu/7n7s/G9Hv9R06ZKgQ\nANAXPVUAAExi654qAGew0yokI5HxR6dC/wDzVDFEQ+qpNjY2aunSpfrJT34iSbpw4YLWrFmjgoIC\nbdiwQVevXo1okAAAOEHYnuqVK1e0Y8cOZWRkBLZVVFSooKBAubm52rt3r3w+nwoKCkwP7tAX9wRe\n3xJv7r6fnpbY7/1T/za037stfk7g9SddsXNNtb6+Xhs2bNCsWbMkSXfddZe2bdtm+nH65rSvkeY3\nI8UftO2NIA/J6ZvDgWIppwCiL2xRjYuLU1VVlaqqqgLb6uvr9eKLL0qSFi1apOrq6ogUVUTXAw88\noIqKCqvDAGCCzP8d+tGQQ3V93OgeYWk3YYf6RylsUfV4PPJ4+v+Y3+9XXFycJCk5OVltbW2RiQ4A\nAAcZ9Y1KhmGYEcegzn3HF7F9/9Vvu0f0e590HTU5Evtobm7WU089pUuXLmn9+vXKysqyOiQAcJQR\nFdWEhAR1d3crPj5era2tSk1NNTsuRNmdd96p9evXKzc3Vy0tLXriiSd05MiRwIgEACC8Ec1TzczM\nVG1trSTpyJEjWrAg2PNc4RRpaWl65JFH5HK5dMcdd+jWW29Va2ur1WEBgKOE7amePn1apaWlOn/+\nvDwej2pra1VeXq6SkhJ5vV6lp6drxYoV0YgVEfT222+rra1NhYWFamtr0+eff660tDSrw0KUjBuX\nGLL9+vXOKEVijdveGNmCDsBALiOSF0VvHMRlv2dMLEn4Xr/371ypCvKT9mcYPaPeR1dXlzZt2qQv\nv/xS165d0/r
167Vw4cKgP2+XnA7MY19m5/TmuDuCtvmv/tbUY402p8OdIuV2Tw65v1gvqpFmxv9R\nSSorK1NDQ4N6enpUVFSknJycoD8bLqdDFWu5v/Ddb5uyn6kH/s+g2+3xzQjLTZgwQZWVlVaHARMx\nRSq2nDhxQk1NTfJ6vero6NDKlStDFlVYg6IKAA4wf/58zZs3T5KUlJQkv9+v3t5eud1uiyNDX2O2\nqH6urqgda+CwodlDhcBgmCIVW9xutxISEiRJPp9P2dnZFFQbGrNFFYhlTJGKXUePHpXP51N1dbXV\noWAQLP0GxCCmSMWm48ePq7KyUlVVVUpMDH3HNqwxZnuq/3rlp1E7FsO94e35g3VB2/7H//uroG3/\nJT34Pt9pHk1E3+SkPDJFKvZ0dnaqrKxMBw4c0KRJk6wOB0GM2aIKxLLFixdr06ZNeuedd3Tt2jW9\n8MILIYd+Iz1t4v6bQy+48c/+gxE9fiw4fPiwOjo6tHHjxsC20tJSpaeHOLNE1FFUgRjEFKnYk5eX\np7y8PKvDQBhcUwUAwCSO7KkOvP4W6ppbMHv/8M/6vf9+s3OfqAQAsAd6qgAAmMSRPVUAQGi/zZ9h\nyn5azn1r1PvI+NUvTIjEHEnTL0Z0/44sqiMZ7h3ohXO/MiESDMb/Lw8Gbbv5j+sH3T7SnP7wwqcj\n+j0AiASGfwEAMIkje6oAzHWuYF7I9tsPfjSq/f/l/I6Q7RkWDxxdfj70Mmm3/EXo+IEb6KkCAGAS\nW/VUB16LC3b9zQyp7j/s9/5L/VvEjgUAGBvoqQIAYBKKKgAAJrHV8G/pij8esCVyw7/J15P7vTd5\nQZMx7Zt57MvcnA7MY1/kFEC00VMFAMAkFFUAAExiq+HfFz6J3lJV9f6/jtqxALsb7TzUcC75EyK6\n/9G6ePoPw/zEyajEAeejpwoAgEmG1FMtKytTQ0ODenp6VFRUpLlz52rz5s3q7e1VSkqK9uzZo7i4\nuEjHCgCArYUtqidOnFBTU5O8Xq86Ojq0cuVKZWRkqKCgQLm5udq7d698Pp8KCgqiES8AALYVtqjO\nnz9f8+Z9/VzQpKQk+f1+1dfX68UXX5QkLVq0SNXV1aYU1V/8xz/p9z73w7dGvc9g0m55qN/71ssn\nInYsu2lsbFRxcbHWrl2r1atX68KFC6aOPDyY0ha0Lf7C7YNu7756bkTH+vdxjSP6PQCIhLDXVN1u\ntxISvr7JwOfzKTs7W36/P/Clm5ycrLa24F+isJcrV65ox44dysjICGyrqKhQQUGBDh48qOnTp8vn\n81kYIQA415Dv/j169Kh8Pp+qq6uVk5MT2G4YRkQCQ2TExcWpqqpKVVVVgW2RGnkAYB2z7uj+h/mz\nTNmPXYS/03to/iDI9iEV1ePHj6uyslI/+tGPlJiYqISEBHV3dys+Pl6tra1KTU01JcgfnEkK2hYf\n9/thw5EOFfY1loZ7+/J4PPJ4+qedkQcAMEfYotrZ2amysjIdOHBAkyZNkiRlZmaqtrZWy5cv15Ej\nR7RgwYKIB4roYOQBkfCfT0bu/ggzzPwZ81BhjrBF9fDhw+ro6NDGjRsD23bv3q2tW7fK6/UqPT1d\nK1asiGiQiKxIjTwAwFgTtqjm5eUpLy/vG9tff/1104N5zx98n13dOwOvPeO+a/qxh2p2wu/vUD5z\nJfjZd9+7i+0+1Gz2yEOou7Z7rr8x6PaR5nQkf9slCd8L2vbOlaqgbQPvGB9tHABij60eU4jIO336\ntEpLS3X+/Hl5PB7V1taqvLxcJSUljDwAwChRVMeYOXPmqKam5hvbIzHyAMBc3d3devTRR1VcXKzH\nHnvM6nAwCJ79CwAO8dprr2nixIlWh4EQHNNTNQ6uszoESdJUTQm8PhPi57jGBsBMZ8+eVXNzsx5+\n+GGrQ0EI9FQBwAFKS0tVUlJidRgIwzE9VQCR0/lcSsj2xJdG90CQ9qKZIdtv3X92VPsfrctbp4Rs\nv+UHX0QpksEdOnRI9913n6ZNm2ZpHAjPMUV1y58X9nn3Q8viCDXlAuH5/+emYf/OlIQ/Ctr2xZVT\nw97fp672Yf+OJN1mzAja1iprh/sjvUgCrHXs2DG1tLTo2LFjunjxouLi4jR16lRlZmZaHRoGcExR\nBTC4UIsksDxjbHjllVcCr/ft26dvfetbFFSb4poq4HA3Fkno+ySs+vp6LVmyRNLXiyTU1dVZFR4w\nptBTBRyORRLGlmeeecbqEBCCY4rq80/8beD1npeC/1zf628jud4WzrdvWRl4/evLPw/6c/clrAq8\n/tcr/8v0OIChYpEEIHoY/gVi0I1FEiSxSAIQRRRVIAbdWCRBEsszAlHkMhgbAhxt4CIJaWlpgUUS\nvvrqK6Wnp+ull17STTfdZHWoQMyjqAIAYBKGfwEAMAlFFQAAk1BUAQAwCUUVAACTUFQBADAJRRUA\nAJNE7TGFu3bt0qlTp+RyubRlyxbNmzcvWoe2jbKyMjU0NKinp0dFRUWaO3eu45fnIq+xt+waOR0+\nu38GQuX0/fff1969e+V2u5Wdna2nn37asjhvGPhdmZOTE2hbvHixpk6dKrfbLUkqLy9XWlqaVaF+\nkxEF9fX1xrp16wzDMIzm5mbj8ccfj8ZhbaWurs548sknDcMwjC+++MJYuHChUVJSYhw+fNgwDMN4\n+eWXjTfffNPKEIeNvBrG5cuXjdWrVxtbt241ampqDMMwHJ1Xcjp8dv8MhMtpbm6u8emnnxq9vb3G\nqlWrjKamJivCDBjsu7KvRYsWGV1dXRZENjRRGf6tq6vT0qVLJUkzZ87UpUuX1NXVFY1D28b8+fP1\n6quvSpKSkpLk9/sdvzwXeY29ZdfI6fDZ/TMQKqctLS2aOHGibrvtNo0bN04LFy60/PM62Hdlb2+v\npTENR1SKant7uyZPnhx4P2XKlDG3FJXb7VZCQoIkyefzKTs72/HLc5HXr5ddi4+P77fNyXklp8Nn\n989AqJy2tbVpypQpg7ZZZbDvyhtDvTds375dq1atUnl5ue1WYbLkRiW7/RGi6ejRo/L5fHr++ef7\nbY+Fv0ks/BvM5vS/idPjtwO7/Q3tFk8wwb4rn332WT333HOqqalRU1NTYOEIu4hKUU1NTVV7e3vg\n/WeffaaUlJRoHNpWjh8/rsrKSlVVVSkxMdHxy3OR18E5Oa/k1Bx2+gyEyunANqtjvWHgd2VfK1as\nUHJysjwej7Kzs9XY2GhRlIOLSlHNysoKnE2cOXNGqampmjBhQjQObRudnZ0qKyvT/v37NWnSJEnO\nX56LvA7OyXklp+aw02cgVE5vv/12dXV16dy5c+rp6dG7776rrKwsy2KVBv+u7NtWWFioq1evSpJO\nnjypWbNmWRFmUFFbpaa8vFwffvihXC6Xtm/frnvuuScah7UNr9erffv2acaMGYFtu3fv1tatWx29\nPNdYz2ssLrs21nM6XE74DAzM6ccff6zExEQtW7ZMJ0+eVHl5uSQpJydH
hYWFlsUpDf5d+eCDD+ru\nu+/WsmXL9MYbb+jQoUMaP3687r33Xm3btk0ul8vCiPsbcVFlLhsAAP2N6OEPH3zwgT755BN5vV6d\nPXtWW7ZskdfrNTs2AAAcZURFNdi8p2DXXlyuqD24SQ/e/ETgdb3/r6N2XCsZRo8p+xnO6IPZOd09\noyho288v+oO2xWqOzcrpUEXz/+hYFO18StIvH3zclP3kfPAzU/ZjhvE3pY96H19d+9SESILndEQ3\nKjGXLfb0HX3YuXOndu7caXVIGKVdu3YpLy9P+fn5+uijj6wOBxgTTLn71ynznhAcT9KJLZwkAdYY\n0ZiP1XPZBg4Vlvxmf+B1rA4HRlp7e7tmz54deH9j9IHpFM403Es0AMwxop4qc9liH6MPzsYlGsAa\nI+qp3n///Zo9e7by8/MD857gbFaPPiCyOEmKDUxltL8R3/K3adMmM+OAxbKysrRv3z7l5+cz+hAD\nOEmKPUxldAZH3kff9xoqzGH16AM5NRcnSbGH6+TO4Miiishg9CF2DPck6dZb/kPI9nuv3xey/Vf+\nHw87xmhyueJDthtGd5QiGTluJnQGiioQozhJim1cJ7cnS9ZTBQAMD9fJnYGiCgAOwFRGZ2D4FwAc\nwOqbCTE0FFUAcAiuk9sfw78AAJiEogoAgEkY/gWg9ssNIdt/pdDtdueEeaiIDfRUAQAwicuIwgxi\nl4sOcSQFW4E+kshpZEU7p+QMDOi/AAALXElEQVQzspz8f/T+mwtGvY9/9h80IRJJcpuwj+sm7EMy\njGuDbqenCgCASSiqAACYhKIKAIBJKKoAAJiEogoAgEm45Q/AqCWMvzNku/+rcyHbDUX67lhXmHaW\nUYM56KkCAGASiioAACahqAIAYBKKKgAAJqGoAgBgEooqAAAmoagCAGCSIc1TbWxsVHFxsdauXavV\nq1frwoUL2rx5s3p7e5WSkqI9e/YoLi4u0rECiJBv37IyZPuvL/88ZHu4eahbpn0vZPvOltdCto8e\n81ARHWF7qleuXNGOHTuUkZER2FZRUaGCggIdPHhQ06dPl8/ni2iQAAA4QdiiGhcXp6qqKqWmpga2\n1dfXa8mSJZKkRYsWqa6uLnIRIirq6+v10EMPac2aNVqzZo127NhhdUgA4Dhhh389Ho88nv4/5vf7\nA8O9ycnJamtri0x0iKoHHnhAFRUVVocBwAS7ZxSZsp9NZzNHvQ/POLMWKe81aT+RM+oblQyDaxUA\nAEgjLKoJCQnq7u6WJLW2tvYbGoZzNTc366mnntKqVav03nvvWR0OADjOiFapyczMVG1trZYvX64j\nR45owYIFZseFKLvzzju1fv165ebmqqWlRU888YSOHDnCXd0AMAxhi+rp06dVWlqq8+fPy+PxqLa2\nVuXl5SopKZHX61V6erpWrFgRjVgRQWlpaXrkkUckSXfccYduvfVWtba2atq0aRZHBgDOEbaozpkz\nRzU1Nd/Y/vrrr0ckIFjj7bffVltbmwoLC9XW1qbPP/9caWlpVoeFKPn0+r+N6vfDrYeaMj7S66UC\n9sAi5ZAkLV68WJs2bdI777yja9eu6YUXXmDo18Hq6+u1YcMGzZo1S5J01113adu2bRZHhdEqKytT\nQ0ODenp6VFRUpJycHKtDwgAUVUiSJkyYoMrKSqvDgImYIhVbTpw4oaamJnm9XnV0dGjlypUUVRui\nqAKAA8yfP1/z5s2TJCUlJcnv96u3t1dut9viyNAXD9QHYhRTpGKL2+1WQkKCJMnn8yk7O5uCakP0\nVIEYxBSp2HX06FH5fD5VV1dbHQoGQU8ViEE3pki5XK5+U6TgbMePH1dlZaWqqqqUmJhodTgYBD3V\nCKmZ/UTg9Zozf21hJDDLtV/cFrTtptwLUYwkPKZIxZ7Ozk6VlZXpwIEDmjRpktXhIAiKKhCDhjtF\n6pL/44jG885Frv2N1uHDh9XR0aGNGzcGtpWWlio9Pd3CqDAQRRWIQUyRij15eXnKy8uzOgyEwTVV\nAABMQk81QvLLfxl4vSbXwkAAAFFDTxUAAJPQUwWAGFTym/2m7Gf5fz1tyn7GCopqhLj+0+4+775r\nWRwwT/+cDkSOATD8CwCAaeipAoi4v+9ieg/GBnqqAACYhJ5qhHjGcY0NAMYaeqoAAJiEogoAgEkY\n/gWGiCF9AOHQUwUAwCQUVQAATMLwLwBNunlOyPbf+XlUHTAU9FQBADDJkHqqZWVlamhoUE9Pj4qK\nijR37lxt3rxZvb29SklJ0Z49exQXFxfpWAEAsLWwRfXEiRNqamqS1+tVR0eHVq5cqYyMDBUUFCg3\nN1d79+6Vz+dTQUFBNOIFAMC2wg7/zp8/X6+++qokKSkpSX6/X/X19VqyZIkkadGiRaqrq4tslDBV\nY2Ojli5dqp/85CeSpAsXLmjNmjUqKCjQhg0bdPXqVYsjBABnCltU3W63EhISJEk+n0/Z2dny+/2B\n4d7k5GS1tbVFNkqY5sqVK9qxY4cyMjIC2yoqKlRQUKCDBw9q+vTp8vl8FkYIAM415Lt/jx49Kp/P\np+rqauXk5AS2G4YRkcAQGXFxcaqqqlJVVVVgW319vV588UVJX488VFdXM5wPONzVX95hyn7ilr1n\nyn7GiiHd/Xv8+HFVVlaqqqpKiYmJSkhIUHd3tySptbVVqampEQ0S5vF4PIqPj++3jZEHADBH2J5q\nZ2enysrKdODAAU2aNEmSlJmZqdraWi1fvlxHjhzRggULIh4oooORh7Fp3ZTskO1l55mnCgxF2KJ6\n+PBhdXR0aOPGjYFtu3fv1tatW+X1epWenq4VK1ZENEhE1o2Rh/j4eEYeAGAUwhbVvLw85eXlfWP7\n66+/HpGAEH2MPACAOXhM4Rhz+vRplZaW6vz58/J4PKqtrVV5eblKSkoYeQCAUaKojjFz5sxRTU3N\nN7Yz8gDYX3d3tx599FEVFxfrscceszocDIJn/wKAQ7z22muaOHGi1WEgBIoqADjA2bNn1dzcrIcf\nftjqUBACRRUAHKC0tFQlJSVWh4EwuKYKxIDGxkYVFxdr7dq1Wr16tS5cuDCslaS+n/OPIff/5zff\nGbI96Y9bQra7CqtDtl/9i2dDtvd+FXoVrO720EOiE+f8e+jjP7I0ZPuf3j8jZPvfdb4Wsn20Dh06\npPvuu0/Tpk2L6HEwehRVwOFCPc+ZlaRiw7Fjx9TS0qJjx47p4sWLiouL09SpU5WZmWl1aBiA4V/A\n4W48z7nvQztYSSq2vPLKK3rrrbf005/+VN/5zndUXFxMQbUpeqqAw3k8Hnk8/f8r8zxnwBoUVSDG\n8Tzn2PLMM89YHQJCYPgXiEGsJAVYg6IKxKAbz3OWxPOcgShi+BdwOJ7nDNiHy+CCCwAApmD4FwAA\nk1BUAQAwCUUVAACTUFQBADAJRRUAAJNQVAEAMEnU5qnu2rVLp06dksvl0pYtWzRv3rxoHdo2ysrK\n1NDQoJ6eHhUVFWnu3LnDWp7
Ljsjr6JddsxtyOnx2/wyEyun777+vvXv3yu12Kzs7W08//bRlcd4w\n8LsyJycn0LZ48WJNnTpVbrdbklReXq60tDSrQv0mIwrq6+uNdevWGYZhGM3Nzcbjjz8ejcPaSl1d\nnfHkk08ahmEYX3zxhbFw4UKjpKTEOHz4sGEYhvHyyy8bb775ppUhDht5NYzLly8bq1evNrZu3WrU\n1NQYhmE4Oq/kdPjs/hkIl9Pc3Fzj008/NXp7e41Vq1YZTU1NVoQZMNh3ZV+LFi0yurq6LIhsaKIy\n/FtXV6elS79eBHjmzJm6dOmSurq6onFo25g/f75effVVSVJSUpL8fr/jl+cir7G37Bo5HT67fwZC\n5bSlpUUTJ07UbbfdpnHjxmnhwoWWf14H+67s7e21NKbhiEpRbW9v1+TJkwPvp0yZMuaWonK73UpI\nSJAk+Xw+ZWdnO355LvL69bJr8fHx/bY5Oa/kdPjs/hkIldO2tjZNmTJl0DarDPZdeWOo94bt27dr\n1apVKi8vt90qTJbcqGS3P0I0HT16VD6fT88//3y/7bHwN4mFf4PZnP43cXr8dmC3v6Hd4gkm2Hfl\ns88+q+eee041NTVqamoKLBxhF1EpqqmpqWpvbw+8/+yzz5SSkhKNQ9vK8ePHVVlZqaqqKiUmJjp+\neS7yOjgn55WcmsNOn4FQOR3YZnWsNwz8ruxrxYoVSk5OlsfjUXZ2thobGy2KcnBRKapZWVmBs4kz\nZ84oNTVVEyZMiMahbaOzs1NlZWXav3+/Jk2aJMn5y3OR18E5Oa/k1Bx2+gyEyuntt9+urq4unTt3\nTj09PXr33XeVlZVlWazS4N+VfdsKCwt19epVSdLJkyc1a9YsK8IMKmqr1JSXl+vDDz+Uy+XS9u3b\ndc8990TjsLbh9Xq1b98+zZgxI7Bt9+7d2rp1q7766iulp6frpZde0k033WRhlMM31vM6cNm1tLS0\nwLJrTs3rWM/pcDnhMzAwpx9//LESExO1bNkynTx5UuXl5ZKknJwcFRYWWhanNPh35YMPPqi7775b\ny5Yt0xtvvKFDhw5p/Pjxuvfee7Vt2za5XC4LI+6Ppd8AADAJT1QCAMAkFFUAAExCUQUAwCQUVQAA\nTEJRBQDAJBRVAABMQlEFAMAkFFUAAEzy/wHU/a5KKMhpPAAAAABJRU5ErkJggg==\n", 423 | "text/plain": [ 424 | "
" 425 | ] 426 | }, 427 | "metadata": { 428 | "tags": [] 429 | } 430 | } 431 | ] 432 | }, 433 | { 434 | "metadata": { 435 | "id": "8KVPZqgHo5Ux", 436 | "colab_type": "text" 437 | }, 438 | "cell_type": "markdown", 439 | "source": [ 440 | "EXERCISES\n", 441 | "\n", 442 | "1. Try editing the convolutions. Change the 32s to either 16 or 64. What impact will this have on accuracy and/or training time.\n", 443 | "\n", 444 | "2. Remove the final Convolution. What impact will this have on accuracy or training time?\n", 445 | "\n", 446 | "3. How about adding more Convolutions? What impact do you think this will have? Experiment with it.\n", 447 | "\n", 448 | "4. Remove all Convolutions but the first. What impact do you think this will have? Experiment with it. \n", 449 | "\n", 450 | "5. In the previous lesson you implemented a callback to check on the loss function and to cancel training once it hit a certain amount. See if you can implement that here!" 451 | ] 452 | }, 453 | { 454 | "metadata": { 455 | "id": "ZpYRidBXpBPM", 456 | "colab_type": "code", 457 | "outputId": "a18beedc-2768-47b6-db51-a918c978db4c", 458 | "colab": { 459 | "base_uri": "https://localhost:8080/", 460 | "height": 442 461 | } 462 | }, 463 | "cell_type": "code", 464 | "source": [ 465 | "import tensorflow as tf\n", 466 | "print(tf.__version__)\n", 467 | "mnist = tf.keras.datasets.mnist\n", 468 | "\n", 469 | "(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n", 470 | "\n", 471 | "training_images=training_images.reshape(60000, 28, 28, 1)\n", 472 | "training_images=training_images / 255.0\n", 473 | "test_images = test_images.reshape(10000, 28, 28, 1)\n", 474 | "test_images=test_images/255.0\n", 475 | "\n", 476 | "model = tf.keras.models.Sequential([\n", 477 | " tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),\n", 478 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 479 | " tf.keras.layers.Flatten(),\n", 480 | " tf.keras.layers.Dense(128, activation='relu'),\n", 481 | " tf.keras.layers.Dense(10, activation='softmax')\n", 482 | "])\n", 483 | "\n", 484 | "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n", 485 | "\n", 486 | "model.fit(training_images, training_labels, epochs=10)\n", 487 | "\n", 488 | "test_loss, test_acc = model.evaluate(test_images, test_labels)\n", 489 | "print(test_acc)" 490 | ], 491 | "execution_count": 0, 492 | "outputs": [ 493 | { 494 | "output_type": "stream", 495 | "text": [ 496 | "1.13.1\n", 497 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n", 498 | "11493376/11490434 [==============================] - 0s 0us/step\n", 499 | "Epoch 1/10\n", 500 | "60000/60000 [==============================] - 11s 186us/sample - loss: 0.1668 - acc: 0.9504\n", 501 | "Epoch 2/10\n", 502 | "60000/60000 [==============================] - 11s 185us/sample - loss: 0.0544 - acc: 0.9835\n", 503 | "Epoch 3/10\n", 504 | "60000/60000 [==============================] - 11s 185us/sample - loss: 0.0348 - acc: 0.9894\n", 505 | "Epoch 4/10\n", 506 | "60000/60000 [==============================] - 11s 187us/sample - loss: 0.0238 - acc: 0.9925\n", 507 | "Epoch 5/10\n", 508 | "60000/60000 [==============================] - 11s 185us/sample - loss: 0.0163 - acc: 0.9951\n", 509 | "Epoch 6/10\n", 510 | "60000/60000 [==============================] - 11s 185us/sample - loss: 0.0111 - acc: 0.9962\n", 511 | "Epoch 7/10\n", 512 | "60000/60000 [==============================] - 11s 185us/sample - loss: 
0.0080 - acc: 0.9973\n", 513 | "Epoch 8/10\n", 514 | "60000/60000 [==============================] - 11s 185us/sample - loss: 0.0051 - acc: 0.9984\n", 515 | "Epoch 9/10\n", 516 | "60000/60000 [==============================] - 11s 186us/sample - loss: 0.0064 - acc: 0.9976\n", 517 | "Epoch 10/10\n", 518 | "60000/60000 [==============================] - 11s 183us/sample - loss: 0.0039 - acc: 0.9987\n", 519 | "10000/10000 [==============================] - 1s 83us/sample - loss: 0.0564 - acc: 0.9862\n", 520 | "0.9862\n" 521 | ], 522 | "name": "stdout" 523 | } 524 | ] 525 | } 526 | ] 527 | } -------------------------------------------------------------------------------- /Exercise2_MNIST_Question.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Exercise2-Question.ipynb", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "collapsed_sections": [], 10 | "include_colab_link": true 11 | }, 12 | "kernelspec": { 13 | "name": "python3", 14 | "display_name": "Python 3" 15 | }, 16 | "accelerator": "GPU" 17 | }, 18 | "cells": [ 19 | { 20 | "cell_type": "markdown", 21 | "metadata": { 22 | "id": "view-in-github", 23 | "colab_type": "text" 24 | }, 25 | "source": [ 26 | "\"Open" 27 | ] 28 | }, 29 | { 30 | "metadata": { 31 | "id": "tOoyQ70H00_s", 32 | "colab_type": "text" 33 | }, 34 | "cell_type": "markdown", 35 | "source": [ 36 | "## Exercise 2\n", 37 | "In the course you learned how to do classification using Fashion MNIST, a data set containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.\n", 38 | "\n", 39 | "Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy.\n", 40 | "\n", 41 | "Some notes:\n", 42 | "1. It should succeed in less than 10 epochs, so it is okay to change epochs to 10, but nothing larger\n", 43 | "2. When it reaches 99% or greater it should print out the string \"Reached 99% accuracy so cancelling training!\"\n", 44 | "3. If you add any additional variables, make sure you use the same names as the ones used in the class\n", 45 | "\n", 46 | "I've started the code for you below -- how would you finish it? 
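One caveat before the solution below: this notebook runs TF 1.x, where the accuracy appears in the callback's logs under the key 'acc'; newer TF 2.x versions report it as 'accuracy'. Here is a version-agnostic sketch of such a callback (an alternative, not the original solution) that also prints the exact string the exercise asks for:

```python
import tensorflow as tf

class AccuracyThresholdCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # TF 1.x reports 'acc'; TF 2.x reports 'accuracy'.
        acc = logs.get('acc', logs.get('accuracy'))
        if acc is not None and acc >= 0.99:
            print("\nReached 99% accuracy so cancelling training!")
            self.model.stop_training = True
```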
" 47 | ] 48 | }, 49 | { 50 | "metadata": { 51 | "id": "9rvXQGAA0ssC", 52 | "colab_type": "code", 53 | "outputId": "0bbf99fd-b542-4d6c-865b-da39ff5e8678", 54 | "colab": { 55 | "base_uri": "https://localhost:8080/", 56 | "height": 326 57 | } 58 | }, 59 | "cell_type": "code", 60 | "source": [ 61 | "import tensorflow as tf\n", 62 | "mnist = tf.keras.datasets.mnist\n", 63 | "\n", 64 | "# YOUR CODE SHOULD START HERE\n", 65 | "class StopTrainingCallback(tf.keras.callbacks.Callback):\n", 66 | " def on_epoch_end(self, epoch, logs={}):\n", 67 | " if (logs.get('acc') >= 0.99):\n", 68 | " print('\\nReached desired accuracy (0.99), No more training.')\n", 69 | " self.model.stop_training = True\n", 70 | "\n", 71 | "mCallback = StopTrainingCallback()\n", 72 | "# YOUR CODE SHOULD END HERE\n", 73 | "\n", 74 | "(x_train, y_train),(x_test, y_test) = mnist.load_data()\n", 75 | "# YOUR CODE SHOULD START HERE\n", 76 | "x_train = x_train / 255.0\n", 77 | "x_test = x_test / 255.0\n", 78 | "# YOUR CODE SHOULD END HERE\n", 79 | "model = tf.keras.models.Sequential([\n", 80 | "# YOUR CODE SHOULD START HERE\n", 81 | " tf.keras.layers.Flatten(),\n", 82 | " tf.keras.layers.Dense(512, activation=tf.nn.relu),\n", 83 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n", 84 | "# YOUR CODE SHOULD END HERE\n", 85 | "])\n", 86 | "\n", 87 | "model.compile(optimizer='adam',\n", 88 | " loss='sparse_categorical_crossentropy',\n", 89 | " metrics=['accuracy'])\n", 90 | "\n", 91 | "# YOUR CODE SHOULD START HERE\n", 92 | "model.fit(x_train, y_train, epochs=10, callbacks=[mCallback])\n", 93 | "# YOUR CODE SHOULD END HERE" 94 | ], 95 | "execution_count": 0, 96 | "outputs": [ 97 | { 98 | "output_type": "stream", 99 | "text": [ 100 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n", 101 | "11493376/11490434 [==============================] - 0s 0us/step\n", 102 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n", 103 | "Instructions for updating:\n", 104 | "Colocations handled automatically by placer.\n", 105 | "Epoch 1/10\n", 106 | "60000/60000 [==============================] - 9s 145us/sample - loss: 0.2018 - acc: 0.9408\n", 107 | "Epoch 2/10\n", 108 | "60000/60000 [==============================] - 8s 135us/sample - loss: 0.0803 - acc: 0.9750\n", 109 | "Epoch 3/10\n", 110 | "60000/60000 [==============================] - 8s 136us/sample - loss: 0.0515 - acc: 0.9838\n", 111 | "Epoch 4/10\n", 112 | "60000/60000 [==============================] - 8s 135us/sample - loss: 0.0363 - acc: 0.9881\n", 113 | "Epoch 5/10\n", 114 | "59968/60000 [============================>.] 
- ETA: 0s - loss: 0.0281 - acc: 0.9910Reached desired accuracy (0.99), No more training.\n", 115 | "60000/60000 [==============================] - 8s 136us/sample - loss: 0.0281 - acc: 0.9911\n" 116 | ], 117 | "name": "stdout" 118 | }, 119 | { 120 | "output_type": "execute_result", 121 | "data": { 122 | "text/plain": [ 123 | "" 124 | ] 125 | }, 126 | "metadata": { 127 | "tags": [] 128 | }, 129 | "execution_count": 2 130 | } 131 | ] 132 | } 133 | ] 134 | } -------------------------------------------------------------------------------- /Exercise4_Question_Happy_Sad.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Exercise4-Question.ipynb", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "include_colab_link": true 10 | }, 11 | "kernelspec": { 12 | "name": "python3", 13 | "display_name": "Python 3" 14 | }, 15 | "accelerator": "GPU" 16 | }, 17 | "cells": [ 18 | { 19 | "cell_type": "markdown", 20 | "metadata": { 21 | "id": "view-in-github", 22 | "colab_type": "text" 23 | }, 24 | "source": [ 25 | "\"Open" 26 | ] 27 | }, 28 | { 29 | "metadata": { 30 | "id": "UncprnB0ymAE", 31 | "colab_type": "text" 32 | }, 33 | "cell_type": "markdown", 34 | "source": [ 35 | "Below is code with a link to a happy or sad dataset which contains 80 images, 40 happy and 40 sad. \n", 36 | "Create a convolutional neural network that trains to 100% accuracy on these images, and cancels training upon hitting a training accuracy of >0.999.\n", 37 | "\n", 38 | "Hint -- it will work best with 3 convolutional layers." 39 | ] 40 | }, 41 | { 42 | "metadata": { 43 | "id": "7Vti6p3PxmpS", 44 | "colab_type": "code", 45 | "outputId": "ba4ae59e-cc99-4849-da7b-4523fb2e0eec", 46 | "colab": { 47 | "base_uri": "https://localhost:8080/", 48 | "height": 204 49 | } 50 | }, 51 | "cell_type": "code", 52 | "source": [ 53 | "import tensorflow as tf\n", 54 | "import os\n", 55 | "import zipfile\n", 56 | "\n", 57 | "\n", 58 | "DESIRED_ACCURACY = 0.999\n", 59 | "\n", 60 | "!wget --no-check-certificate \\\n", 61 | " \"https://storage.googleapis.com/laurencemoroney-blog.appspot.com/happy-or-sad.zip\" \\\n", 62 | " -O \"/tmp/happy-or-sad.zip\"\n", 63 | "\n", 64 | "zip_ref = zipfile.ZipFile(\"/tmp/happy-or-sad.zip\", 'r')\n", 65 | "zip_ref.extractall(\"/tmp/h-or-s\")\n", 66 | "zip_ref.close()\n", 67 | "\n", 68 | "class myCallback(tf.keras.callbacks.Callback):\n", 69 | " def on_epoch_end(self, epoch, logs={}):\n", 70 | " if (logs.get('acc') > DESIRED_ACCURACY):\n", 71 | " print('\\nDesired Accuracy is met, Stopping training...')\n", 72 | " self.model.stop_training = True\n", 73 | "\n", 74 | "callbacks = myCallback()" 75 | ], 76 | "execution_count": 0, 77 | "outputs": [ 78 | { 79 | "output_type": "stream", 80 | "text": [ 81 | "--2019-04-09 14:13:44-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/happy-or-sad.zip\n", 82 | "Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.141.128, 2607:f8b0:400c:c06::80\n", 83 | "Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.141.128|:443... connected.\n", 84 | "HTTP request sent, awaiting response...
200 OK\n", 85 | "Length: 2670333 (2.5M) [application/zip]\n", 86 | "Saving to: ‘/tmp/happy-or-sad.zip’\n", 87 | "\n", 88 | "\r/tmp/happy-or-sad.z 0%[ ] 0 --.-KB/s \r/tmp/happy-or-sad.z 100%[===================>] 2.55M --.-KB/s in 0.02s \n", 89 | "\n", 90 | "2019-04-09 14:13:44 (129 MB/s) - ‘/tmp/happy-or-sad.zip’ saved [2670333/2670333]\n", 91 | "\n" 92 | ], 93 | "name": "stdout" 94 | } 95 | ] 96 | }, 97 | { 98 | "metadata": { 99 | "id": "6DLGbXXI1j_V", 100 | "colab_type": "code", 101 | "outputId": "73ee7956-18ab-40cc-df54-6eee44408d07", 102 | "colab": { 103 | "base_uri": "https://localhost:8080/", 104 | "height": 88 105 | } 106 | }, 107 | "cell_type": "code", 108 | "source": [ 109 | "# This Code Block should Define and Compile the Model\n", 110 | "model = tf.keras.models.Sequential([\n", 111 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(150, 150, 3)),\n", 112 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 113 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n", 114 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 115 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n", 116 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 117 | " tf.keras.layers.Flatten(),\n", 118 | " tf.keras.layers.Dense(128, activation='relu'),\n", 119 | " tf.keras.layers.Dense(1, activation='sigmoid')\n", 120 | "])\n", 121 | "\n", 122 | "from tensorflow.keras.optimizers import RMSprop\n", 123 | "\n", 124 | "model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])" 125 | ], 126 | "execution_count": 0, 127 | "outputs": [ 128 | { 129 | "output_type": "stream", 130 | "text": [ 131 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n", 132 | "Instructions for updating:\n", 133 | "Colocations handled automatically by placer.\n" 134 | ], 135 | "name": "stdout" 136 | } 137 | ] 138 | }, 139 | { 140 | "metadata": { 141 | "id": "4Ap9fUJE1vVu", 142 | "colab_type": "code", 143 | "outputId": "bb608da5-7c50-4dff-9c82-6bc2d94786ce", 144 | "colab": { 145 | "base_uri": "https://localhost:8080/", 146 | "height": 34 147 | } 148 | }, 149 | "cell_type": "code", 150 | "source": [ 151 | "# This code block should create an instance of an ImageDataGenerator called train_datagen \n", 152 | "# And a train_generator by calling train_datagen.flow_from_directory\n", 153 | "\n", 154 | "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n", 155 | "\n", 156 | "train_datagen = ImageDataGenerator(rescale=1/255.0)\n", 157 | "\n", 158 | "train_generator = train_datagen.flow_from_directory(\n", 159 | " '/tmp/h-or-s/', \n", 160 | " target_size=(150,150), \n", 161 | " batch_size=80, \n", 162 | " class_mode='binary')\n", 163 | "\n", 164 | "# Expected output: 'Found 80 images belonging to 2 classes'" 165 | ], 166 | "execution_count": 0, 167 | "outputs": [ 168 | { 169 | "output_type": "stream", 170 | "text": [ 171 | "Found 80 images belonging to 2 classes.\n" 172 | ], 173 | "name": "stdout" 174 | } 175 | ] 176 | }, 177 | { 178 | "metadata": { 179 | "id": "48dLm13U1-Le", 180 | "colab_type": "code", 181 | "outputId": "938a568d-46ba-4073-f599-5ca3ced42acd", 182 | "colab": { 183 | "base_uri": "https://localhost:8080/", 184 | "height": 802 185 | } 186 | }, 187 | "cell_type": "code", 188 | "source": [ 189 | "# This code block should call model.fit_generator and train for\n", 190 | "# a number of epochs. 
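# Note (an added sketch, not part of the original cell): flow_from_directory
# above yields batches of batch_size=80 and the dataset has exactly 80 images,
# so a single batch covers the whole dataset. That is why steps_per_epoch=1
# in the call below; in general:
#
#   steps_per_epoch = ceil(num_images / batch_size)   # here: ceil(80 / 80) = 1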
\n", 191 | "history = model.fit_generator(\n", 192 | " train_generator,\n", 193 | " steps_per_epoch=1, \n", 194 | " epochs=30,\n", 195 | " verbose=1, \n", 196 | " callbacks=[callbacks])\n", 197 | " \n", 198 | "# Expected output: \"Reached 99.9% accuracy so cancelling training!\"\"" 199 | ], 200 | "execution_count": 0, 201 | "outputs": [ 202 | { 203 | "output_type": "stream", 204 | "text": [ 205 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n", 206 | "Instructions for updating:\n", 207 | "Use tf.cast instead.\n", 208 | "Epoch 1/30\n", 209 | "1/1 [==============================] - 4s 4s/step - loss: 0.7001 - acc: 0.5000\n", 210 | "Epoch 2/30\n", 211 | "1/1 [==============================] - 0s 155ms/step - loss: 1.1031 - acc: 0.5000\n", 212 | "Epoch 3/30\n", 213 | "1/1 [==============================] - 0s 236ms/step - loss: 0.6741 - acc: 0.5000\n", 214 | "Epoch 4/30\n", 215 | "1/1 [==============================] - 0s 259ms/step - loss: 0.6804 - acc: 0.5000\n", 216 | "Epoch 5/30\n", 217 | "1/1 [==============================] - 0s 206ms/step - loss: 0.6587 - acc: 0.6125\n", 218 | "Epoch 6/30\n", 219 | "1/1 [==============================] - 0s 274ms/step - loss: 0.6200 - acc: 0.9125\n", 220 | "Epoch 7/30\n", 221 | "1/1 [==============================] - 0s 206ms/step - loss: 0.5605 - acc: 0.9375\n", 222 | "Epoch 8/30\n", 223 | "1/1 [==============================] - 0s 280ms/step - loss: 0.5262 - acc: 0.7500\n", 224 | "Epoch 9/30\n", 225 | "1/1 [==============================] - 0s 221ms/step - loss: 0.4577 - acc: 0.8875\n", 226 | "Epoch 10/30\n", 227 | "1/1 [==============================] - 0s 263ms/step - loss: 0.3603 - acc: 0.9625\n", 228 | "Epoch 11/30\n", 229 | "1/1 [==============================] - 0s 225ms/step - loss: 0.2902 - acc: 0.9625\n", 230 | "Epoch 12/30\n", 231 | "1/1 [==============================] - 0s 269ms/step - loss: 0.2693 - acc: 0.9125\n", 232 | "Epoch 13/30\n", 233 | "1/1 [==============================] - 0s 221ms/step - loss: 0.1999 - acc: 0.9500\n", 234 | "Epoch 14/30\n", 235 | "1/1 [==============================] - 0s 263ms/step - loss: 0.1757 - acc: 0.9500\n", 236 | "Epoch 15/30\n", 237 | "1/1 [==============================] - 0s 218ms/step - loss: 0.1490 - acc: 0.9375\n", 238 | "Epoch 16/30\n", 239 | "1/1 [==============================] - 0s 273ms/step - loss: 0.1341 - acc: 0.9375\n", 240 | "Epoch 17/30\n", 241 | "1/1 [==============================] - 0s 221ms/step - loss: 0.1338 - acc: 0.9625\n", 242 | "Epoch 18/30\n", 243 | "1/1 [==============================] - 0s 261ms/step - loss: 0.1195 - acc: 0.9500\n", 244 | "Epoch 19/30\n", 245 | "1/1 [==============================] - 0s 214ms/step - loss: 0.1075 - acc: 0.9500\n", 246 | "Epoch 20/30\n", 247 | "\n", 248 | "Desired Accuracy is met, Stopping training...\n", 249 | "1/1 [==============================] - 0s 268ms/step - loss: 0.1119 - acc: 1.0000\n" 250 | ], 251 | "name": "stdout" 252 | } 253 | ] 254 | } 255 | ] 256 | } -------------------------------------------------------------------------------- /Exercise_1_House_Prices_Question.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Exercise 1 - House Prices - Question.ipynb", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "include_colab_link": true 10 | 
}, 11 | "kernelspec": { 12 | "name": "python3", 13 | "display_name": "Python 3" 14 | } 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "markdown", 19 | "metadata": { 20 | "id": "view-in-github", 21 | "colab_type": "text" 22 | }, 23 | "source": [ 24 | "\"Open" 25 | ] 26 | }, 27 | { 28 | "metadata": { 29 | "id": "mw2VBrBcgvGa", 30 | "colab_type": "text" 31 | }, 32 | "cell_type": "markdown", 33 | "source": [ 34 | "In this exercise you'll try to build a neural network that predicts the price of a house according to a simple formula.\n", 35 | "\n", 36 | "So, imagine if house pricing was as easy as a house costs 50k + 50k per bedroom, so that a 1 bedroom house costs 100k, a 2 bedroom house costs 150k etc.\n", 37 | "\n", 38 | "How would you create a neural network that learns this relationship so that it would predict a 7 bedroom house as costing close to 400k etc.\n", 39 | "\n", 40 | "Hint: Your network might work better if you scale the house price down. You don't have to give the answer 400...it might be better to create something that predicts the number 4, and then your answer is in the 'hundreds of thousands' etc." 41 | ] 42 | }, 43 | { 44 | "metadata": { 45 | "id": "PUNO2E6SeURH", 46 | "colab_type": "code", 47 | "outputId": "bb7052c4-12fd-45f5-a6b6-42d838165497", 48 | "colab": { 49 | "base_uri": "https://localhost:8080/", 50 | "height": 17034 51 | } 52 | }, 53 | "cell_type": "code", 54 | "source": [ 55 | "import tensorflow as tf\n", 56 | "import numpy as np\n", 57 | "from tensorflow import keras\n", 58 | "\n", 59 | "model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])\n", 60 | "\n", 61 | "model.compile(optimizer='sgd', loss='mean_squared_error')\n", 62 | "\n", 63 | "xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 10.0], dtype=float)\n", 64 | "ys = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0, 550.0], dtype=float)\n", 65 | "\n", 66 | "model.fit(xs, ys, epochs=500)\n", 67 | " \n", 68 | "print(model.predict([7.0]))" 69 | ], 70 | "execution_count": 0, 71 | "outputs": [ 72 | { 73 | "output_type": "stream", 74 | "text": [ 75 | "Epoch 1/500\n", 76 | "7/7 [==============================] - 0s 14ms/sample - loss: 89793.7344\n", 77 | "Epoch 2/500\n", 78 | "7/7 [==============================] - 0s 703us/sample - loss: 17739.2148\n", 79 | "Epoch 3/500\n", 80 | "7/7 [==============================] - 0s 244us/sample - loss: 3800.3049\n", 81 | "Epoch 4/500\n", 82 | "7/7 [==============================] - 0s 306us/sample - loss: 1100.5829\n", 83 | "Epoch 5/500\n", 84 | "7/7 [==============================] - 0s 331us/sample - loss: 574.4808\n", 85 | "Epoch 6/500\n", 86 | "7/7 [==============================] - 0s 235us/sample - loss: 468.7845\n", 87 | "Epoch 7/500\n", 88 | "7/7 [==============================] - 0s 236us/sample - loss: 444.4350\n", 89 | "Epoch 8/500\n", 90 | "7/7 [==============================] - 0s 318us/sample - loss: 435.8596\n", 91 | "Epoch 9/500\n", 92 | "7/7 [==============================] - 0s 476us/sample - loss: 430.3779\n", 93 | "Epoch 10/500\n", 94 | "7/7 [==============================] - 0s 266us/sample - loss: 425.5361\n", 95 | "Epoch 11/500\n", 96 | "7/7 [==============================] - 0s 312us/sample - loss: 420.8592\n", 97 | "Epoch 12/500\n", 98 | "7/7 [==============================] - 0s 432us/sample - loss: 416.2551\n", 99 | "Epoch 13/500\n", 100 | "7/7 [==============================] - 0s 240us/sample - loss: 411.7054\n", 101 | "Epoch 14/500\n", 102 | "7/7 [==============================] - 0s 249us/sample - loss: 407.2059\n", 103 | 
"Epoch 15/500\n", 104 | "7/7 [==============================] - 0s 185us/sample - loss: 402.7568\n", 105 | "Epoch 16/500\n", 106 | "7/7 [==============================] - 0s 266us/sample - loss: 398.3553\n", 107 | "Epoch 17/500\n", 108 | "7/7 [==============================] - 0s 240us/sample - loss: 394.0020\n", 109 | "Epoch 18/500\n", 110 | "7/7 [==============================] - 0s 257us/sample - loss: 389.6970\n", 111 | "Epoch 19/500\n", 112 | "7/7 [==============================] - 0s 179us/sample - loss: 385.4385\n", 113 | "Epoch 20/500\n", 114 | "7/7 [==============================] - 0s 219us/sample - loss: 381.2266\n", 115 | "Epoch 21/500\n", 116 | "7/7 [==============================] - 0s 208us/sample - loss: 377.0605\n", 117 | "Epoch 22/500\n", 118 | "7/7 [==============================] - 0s 192us/sample - loss: 372.9407\n", 119 | "Epoch 23/500\n", 120 | "7/7 [==============================] - 0s 202us/sample - loss: 368.8652\n", 121 | "Epoch 24/500\n", 122 | "7/7 [==============================] - 0s 227us/sample - loss: 364.8346\n", 123 | "Epoch 25/500\n", 124 | "7/7 [==============================] - 0s 177us/sample - loss: 360.8477\n", 125 | "Epoch 26/500\n", 126 | "7/7 [==============================] - 0s 212us/sample - loss: 356.9043\n", 127 | "Epoch 27/500\n", 128 | "7/7 [==============================] - 0s 157us/sample - loss: 353.0046\n", 129 | "Epoch 28/500\n", 130 | "7/7 [==============================] - 0s 237us/sample - loss: 349.1467\n", 131 | "Epoch 29/500\n", 132 | "7/7 [==============================] - 0s 210us/sample - loss: 345.3318\n", 133 | "Epoch 30/500\n", 134 | "7/7 [==============================] - 0s 200us/sample - loss: 341.5580\n", 135 | "Epoch 31/500\n", 136 | "7/7 [==============================] - 0s 158us/sample - loss: 337.8259\n", 137 | "Epoch 32/500\n", 138 | "7/7 [==============================] - 0s 149us/sample - loss: 334.1344\n", 139 | "Epoch 33/500\n", 140 | "7/7 [==============================] - 0s 241us/sample - loss: 330.4828\n", 141 | "Epoch 34/500\n", 142 | "7/7 [==============================] - 0s 231us/sample - loss: 326.8716\n", 143 | "Epoch 35/500\n", 144 | "7/7 [==============================] - 0s 198us/sample - loss: 323.2998\n", 145 | "Epoch 36/500\n", 146 | "7/7 [==============================] - 0s 228us/sample - loss: 319.7666\n", 147 | "Epoch 37/500\n", 148 | "7/7 [==============================] - 0s 158us/sample - loss: 316.2722\n", 149 | "Epoch 38/500\n", 150 | "7/7 [==============================] - 0s 192us/sample - loss: 312.8164\n", 151 | "Epoch 39/500\n", 152 | "7/7 [==============================] - 0s 157us/sample - loss: 309.3984\n", 153 | "Epoch 40/500\n", 154 | "7/7 [==============================] - 0s 196us/sample - loss: 306.0173\n", 155 | "Epoch 41/500\n", 156 | "7/7 [==============================] - 0s 197us/sample - loss: 302.6731\n", 157 | "Epoch 42/500\n", 158 | "7/7 [==============================] - 0s 151us/sample - loss: 299.3661\n", 159 | "Epoch 43/500\n", 160 | "7/7 [==============================] - 0s 192us/sample - loss: 296.0945\n", 161 | "Epoch 44/500\n", 162 | "7/7 [==============================] - 0s 219us/sample - loss: 292.8589\n", 163 | "Epoch 45/500\n", 164 | "7/7 [==============================] - 0s 189us/sample - loss: 289.6588\n", 165 | "Epoch 46/500\n", 166 | "7/7 [==============================] - 0s 181us/sample - loss: 286.4936\n", 167 | "Epoch 47/500\n", 168 | "7/7 [==============================] - 0s 195us/sample - loss: 283.3631\n", 169 | "Epoch 48/500\n", 170 | 
"7/7 [==============================] - 0s 192us/sample - loss: 280.2663\n", 171 | "Epoch 49/500\n", 172 | "7/7 [==============================] - 0s 207us/sample - loss: 277.2039\n", 173 | "Epoch 50/500\n", 174 | "7/7 [==============================] - 0s 201us/sample - loss: 274.1748\n", 175 | "Epoch 51/500\n", 176 | "7/7 [==============================] - 0s 215us/sample - loss: 271.1784\n", 177 | "Epoch 52/500\n", 178 | "7/7 [==============================] - 0s 202us/sample - loss: 268.2153\n", 179 | "Epoch 53/500\n", 180 | "7/7 [==============================] - 0s 166us/sample - loss: 265.2842\n", 181 | "Epoch 54/500\n", 182 | "7/7 [==============================] - 0s 201us/sample - loss: 262.3856\n", 183 | "Epoch 55/500\n", 184 | "7/7 [==============================] - 0s 201us/sample - loss: 259.5181\n", 185 | "Epoch 56/500\n", 186 | "7/7 [==============================] - 0s 184us/sample - loss: 256.6824\n", 187 | "Epoch 57/500\n", 188 | "7/7 [==============================] - 0s 194us/sample - loss: 253.8773\n", 189 | "Epoch 58/500\n", 190 | "7/7 [==============================] - 0s 145us/sample - loss: 251.1034\n", 191 | "Epoch 59/500\n", 192 | "7/7 [==============================] - 0s 196us/sample - loss: 248.3593\n", 193 | "Epoch 60/500\n", 194 | "7/7 [==============================] - 0s 179us/sample - loss: 245.6454\n", 195 | "Epoch 61/500\n", 196 | "7/7 [==============================] - 0s 168us/sample - loss: 242.9613\n", 197 | "Epoch 62/500\n", 198 | "7/7 [==============================] - 0s 160us/sample - loss: 240.3062\n", 199 | "Epoch 63/500\n", 200 | "7/7 [==============================] - 0s 132us/sample - loss: 237.6800\n", 201 | "Epoch 64/500\n", 202 | "7/7 [==============================] - 0s 173us/sample - loss: 235.0829\n", 203 | "Epoch 65/500\n", 204 | "7/7 [==============================] - 0s 149us/sample - loss: 232.5138\n", 205 | "Epoch 66/500\n", 206 | "7/7 [==============================] - 0s 170us/sample - loss: 229.9734\n", 207 | "Epoch 67/500\n", 208 | "7/7 [==============================] - 0s 147us/sample - loss: 227.4602\n", 209 | "Epoch 68/500\n", 210 | "7/7 [==============================] - 0s 135us/sample - loss: 224.9747\n", 211 | "Epoch 69/500\n", 212 | "7/7 [==============================] - 0s 147us/sample - loss: 222.5164\n", 213 | "Epoch 70/500\n", 214 | "7/7 [==============================] - 0s 139us/sample - loss: 220.0845\n", 215 | "Epoch 71/500\n", 216 | "7/7 [==============================] - 0s 168us/sample - loss: 217.6797\n", 217 | "Epoch 72/500\n", 218 | "7/7 [==============================] - 0s 146us/sample - loss: 215.3009\n", 219 | "Epoch 73/500\n", 220 | "7/7 [==============================] - 0s 231us/sample - loss: 212.9486\n", 221 | "Epoch 74/500\n", 222 | "7/7 [==============================] - 0s 179us/sample - loss: 210.6214\n", 223 | "Epoch 75/500\n", 224 | "7/7 [==============================] - 0s 150us/sample - loss: 208.3197\n", 225 | "Epoch 76/500\n", 226 | "7/7 [==============================] - 0s 190us/sample - loss: 206.0435\n", 227 | "Epoch 77/500\n", 228 | "7/7 [==============================] - 0s 228us/sample - loss: 203.7920\n", 229 | "Epoch 78/500\n", 230 | "7/7 [==============================] - 0s 155us/sample - loss: 201.5650\n", 231 | "Epoch 79/500\n", 232 | "7/7 [==============================] - 0s 168us/sample - loss: 199.3626\n", 233 | "Epoch 80/500\n", 234 | "7/7 [==============================] - 0s 166us/sample - loss: 197.1839\n", 235 | "Epoch 81/500\n", 236 | "7/7 
[==============================] - 0s 213us/sample - loss: 195.0293\n", 237 | "Epoch 82/500\n", 238 | "7/7 [==============================] - 0s 149us/sample - loss: 192.8979\n", 239 | "Epoch 83/500\n", 240 | "7/7 [==============================] - 0s 241us/sample - loss: 190.7901\n", 241 | "Epoch 84/500\n", 242 | "7/7 [==============================] - 0s 253us/sample - loss: 188.7053\n", 243 | "Epoch 85/500\n", 244 | "7/7 [==============================] - 0s 180us/sample - loss: 186.6432\n", 245 | "Epoch 86/500\n", 246 | "7/7 [==============================] - 0s 182us/sample - loss: 184.6038\n", 247 | "Epoch 87/500\n", 248 | "7/7 [==============================] - 0s 169us/sample - loss: 182.5864\n", 249 | "Epoch 88/500\n", 250 | "7/7 [==============================] - 0s 222us/sample - loss: 180.5911\n", 251 | "Epoch 89/500\n", 252 | "7/7 [==============================] - 0s 192us/sample - loss: 178.6180\n", 253 | "Epoch 90/500\n", 254 | "7/7 [==============================] - 0s 246us/sample - loss: 176.6658\n", 255 | "Epoch 91/500\n", 256 | "7/7 [==============================] - 0s 246us/sample - loss: 174.7355\n", 257 | "Epoch 92/500\n", 258 | "7/7 [==============================] - 0s 163us/sample - loss: 172.8260\n", 259 | "Epoch 93/500\n", 260 | "7/7 [==============================] - 0s 193us/sample - loss: 170.9374\n", 261 | "Epoch 94/500\n", 262 | "7/7 [==============================] - 0s 179us/sample - loss: 169.0697\n", 263 | "Epoch 95/500\n", 264 | "7/7 [==============================] - 0s 181us/sample - loss: 167.2221\n", 265 | "Epoch 96/500\n", 266 | "7/7 [==============================] - 0s 228us/sample - loss: 165.3949\n", 267 | "Epoch 97/500\n", 268 | "7/7 [==============================] - 0s 247us/sample - loss: 163.5876\n", 269 | "Epoch 98/500\n", 270 | "7/7 [==============================] - 0s 165us/sample - loss: 161.7997\n", 271 | "Epoch 99/500\n", 272 | "7/7 [==============================] - 0s 182us/sample - loss: 160.0318\n", 273 | "Epoch 100/500\n", 274 | "7/7 [==============================] - 0s 151us/sample - loss: 158.2829\n", 275 | "Epoch 101/500\n", 276 | "7/7 [==============================] - 0s 224us/sample - loss: 156.5532\n", 277 | "Epoch 102/500\n", 278 | "7/7 [==============================] - 0s 231us/sample - loss: 154.8428\n", 279 | "Epoch 103/500\n", 280 | "7/7 [==============================] - 0s 145us/sample - loss: 153.1505\n", 281 | "Epoch 104/500\n", 282 | "7/7 [==============================] - 0s 223us/sample - loss: 151.4772\n", 283 | "Epoch 105/500\n", 284 | "7/7 [==============================] - 0s 176us/sample - loss: 149.8220\n", 285 | "Epoch 106/500\n", 286 | "7/7 [==============================] - 0s 183us/sample - loss: 148.1847\n", 287 | "Epoch 107/500\n", 288 | "7/7 [==============================] - 0s 142us/sample - loss: 146.5651\n", 289 | "Epoch 108/500\n", 290 | "7/7 [==============================] - 0s 222us/sample - loss: 144.9636\n", 291 | "Epoch 109/500\n", 292 | "7/7 [==============================] - 0s 192us/sample - loss: 143.3797\n", 293 | "Epoch 110/500\n", 294 | "7/7 [==============================] - 0s 232us/sample - loss: 141.8130\n", 295 | "Epoch 111/500\n", 296 | "7/7 [==============================] - 0s 190us/sample - loss: 140.2633\n", 297 | "Epoch 112/500\n", 298 | "7/7 [==============================] - 0s 140us/sample - loss: 138.7304\n", 299 | "Epoch 113/500\n", 300 | "7/7 [==============================] - 0s 136us/sample - loss: 137.2146\n", 301 | "Epoch 114/500\n", 302 | "7/7 
[==============================] - 0s 139us/sample - loss: 135.7150\n", 303 | "Epoch 115/500\n", 304 | "7/7 [==============================] - 0s 139us/sample - loss: 134.2323\n", 305 | "Epoch 116/500\n", 306 | "7/7 [==============================] - 0s 230us/sample - loss: 132.7652\n", 307 | "Epoch 117/500\n", 308 | "7/7 [==============================] - 0s 142us/sample - loss: 131.3146\n", 309 | "Epoch 118/500\n", 310 | "7/7 [==============================] - 0s 158us/sample - loss: 129.8795\n", 311 | "Epoch 119/500\n", 312 | "7/7 [==============================] - 0s 227us/sample - loss: 128.4603\n", 313 | "Epoch 120/500\n", 314 | "7/7 [==============================] - 0s 224us/sample - loss: 127.0565\n", 315 | "Epoch 121/500\n", 316 | "7/7 [==============================] - 0s 142us/sample - loss: 125.6681\n", 317 | "Epoch 122/500\n", 318 | "7/7 [==============================] - 0s 168us/sample - loss: 124.2949\n", 319 | "Epoch 123/500\n", 320 | "7/7 [==============================] - 0s 231us/sample - loss: 122.9367\n", 321 | "Epoch 124/500\n", 322 | "7/7 [==============================] - 0s 154us/sample - loss: 121.5934\n", 323 | "Epoch 125/500\n", 324 | "7/7 [==============================] - 0s 146us/sample - loss: 120.2647\n", 325 | "Epoch 126/500\n", 326 | "7/7 [==============================] - 0s 227us/sample - loss: 118.9506\n", 327 | "Epoch 127/500\n", 328 | "7/7 [==============================] - 0s 250us/sample - loss: 117.6505\n", 329 | "Epoch 128/500\n", 330 | "7/7 [==============================] - 0s 147us/sample - loss: 116.3650\n", 331 | "Epoch 129/500\n", 332 | "7/7 [==============================] - 0s 245us/sample - loss: 115.0932\n", 333 | "Epoch 130/500\n", 334 | "7/7 [==============================] - 0s 232us/sample - loss: 113.8357\n", 335 | "Epoch 131/500\n", 336 | "7/7 [==============================] - 0s 156us/sample - loss: 112.5918\n", 337 | "Epoch 132/500\n", 338 | "7/7 [==============================] - 0s 153us/sample - loss: 111.3615\n", 339 | "Epoch 133/500\n", 340 | "7/7 [==============================] - 0s 149us/sample - loss: 110.1447\n", 341 | "Epoch 134/500\n", 342 | "7/7 [==============================] - 0s 164us/sample - loss: 108.9409\n", 343 | "Epoch 135/500\n", 344 | "7/7 [==============================] - 0s 194us/sample - loss: 107.7503\n", 345 | "Epoch 136/500\n", 346 | "7/7 [==============================] - 0s 168us/sample - loss: 106.5730\n", 347 | "Epoch 137/500\n", 348 | "7/7 [==============================] - 0s 162us/sample - loss: 105.4085\n", 349 | "Epoch 138/500\n", 350 | "7/7 [==============================] - 0s 224us/sample - loss: 104.2566\n", 351 | "Epoch 139/500\n", 352 | "7/7 [==============================] - 0s 197us/sample - loss: 103.1174\n", 353 | "Epoch 140/500\n", 354 | "7/7 [==============================] - 0s 155us/sample - loss: 101.9905\n", 355 | "Epoch 141/500\n", 356 | "7/7 [==============================] - 0s 253us/sample - loss: 100.8761\n", 357 | "Epoch 142/500\n", 358 | "7/7 [==============================] - 0s 235us/sample - loss: 99.7738\n", 359 | "Epoch 143/500\n", 360 | "7/7 [==============================] - 0s 158us/sample - loss: 98.6834\n", 361 | "Epoch 144/500\n", 362 | "7/7 [==============================] - 0s 245us/sample - loss: 97.6053\n", 363 | "Epoch 145/500\n", 364 | "7/7 [==============================] - 0s 225us/sample - loss: 96.5387\n", 365 | "Epoch 146/500\n", 366 | "7/7 [==============================] - 0s 301us/sample - loss: 95.4837\n", 367 | "Epoch 147/500\n", 368 | 
"7/7 [==============================] - 0s 186us/sample - loss: 94.4401\n", 369 | "Epoch 148/500\n", 370 | "7/7 [==============================] - 0s 156us/sample - loss: 93.4082\n", 371 | "Epoch 149/500\n", 372 | "7/7 [==============================] - 0s 204us/sample - loss: 92.3875\n", 373 | "Epoch 150/500\n", 374 | "7/7 [==============================] - 0s 147us/sample - loss: 91.3780\n", 375 | "Epoch 151/500\n", 376 | "7/7 [==============================] - 0s 182us/sample - loss: 90.3794\n", 377 | "Epoch 152/500\n", 378 | "7/7 [==============================] - 0s 239us/sample - loss: 89.3917\n", 379 | "Epoch 153/500\n", 380 | "7/7 [==============================] - 0s 242us/sample - loss: 88.4151\n", 381 | "Epoch 154/500\n", 382 | "7/7 [==============================] - 0s 203us/sample - loss: 87.4487\n", 383 | "Epoch 155/500\n", 384 | "7/7 [==============================] - 0s 231us/sample - loss: 86.4932\n", 385 | "Epoch 156/500\n", 386 | "7/7 [==============================] - 0s 252us/sample - loss: 85.5481\n", 387 | "Epoch 157/500\n", 388 | "7/7 [==============================] - 0s 305us/sample - loss: 84.6131\n", 389 | "Epoch 158/500\n", 390 | "7/7 [==============================] - 0s 279us/sample - loss: 83.6888\n", 391 | "Epoch 159/500\n", 392 | "7/7 [==============================] - 0s 228us/sample - loss: 82.7742\n", 393 | "Epoch 160/500\n", 394 | "7/7 [==============================] - 0s 184us/sample - loss: 81.8696\n", 395 | "Epoch 161/500\n", 396 | "7/7 [==============================] - 0s 184us/sample - loss: 80.9750\n", 397 | "Epoch 162/500\n", 398 | "7/7 [==============================] - 0s 273us/sample - loss: 80.0901\n", 399 | "Epoch 163/500\n", 400 | "7/7 [==============================] - 0s 280us/sample - loss: 79.2150\n", 401 | "Epoch 164/500\n", 402 | "7/7 [==============================] - 0s 233us/sample - loss: 78.3493\n", 403 | "Epoch 165/500\n", 404 | "7/7 [==============================] - 0s 204us/sample - loss: 77.4932\n", 405 | "Epoch 166/500\n", 406 | "7/7 [==============================] - 0s 281us/sample - loss: 76.6464\n", 407 | "Epoch 167/500\n", 408 | "7/7 [==============================] - 0s 214us/sample - loss: 75.8089\n", 409 | "Epoch 168/500\n", 410 | "7/7 [==============================] - 0s 267us/sample - loss: 74.9803\n", 411 | "Epoch 169/500\n", 412 | "7/7 [==============================] - 0s 255us/sample - loss: 74.1611\n", 413 | "Epoch 170/500\n", 414 | "7/7 [==============================] - 0s 188us/sample - loss: 73.3507\n", 415 | "Epoch 171/500\n", 416 | "7/7 [==============================] - 0s 179us/sample - loss: 72.5491\n", 417 | "Epoch 172/500\n", 418 | "7/7 [==============================] - 0s 160us/sample - loss: 71.7563\n", 419 | "Epoch 173/500\n", 420 | "7/7 [==============================] - 0s 257us/sample - loss: 70.9723\n", 421 | "Epoch 174/500\n", 422 | "7/7 [==============================] - 0s 254us/sample - loss: 70.1966\n", 423 | "Epoch 175/500\n", 424 | "7/7 [==============================] - 0s 257us/sample - loss: 69.4296\n", 425 | "Epoch 176/500\n", 426 | "7/7 [==============================] - 0s 212us/sample - loss: 68.6708\n", 427 | "Epoch 177/500\n", 428 | "7/7 [==============================] - 0s 249us/sample - loss: 67.9206\n", 429 | "Epoch 178/500\n", 430 | "7/7 [==============================] - 0s 253us/sample - loss: 67.1784\n", 431 | "Epoch 179/500\n", 432 | "7/7 [==============================] - 0s 264us/sample - loss: 66.4441\n", 433 | "Epoch 180/500\n", 434 | "7/7 
[==============================] - 0s 247us/sample - loss: 65.7181\n", 435 | "Epoch 181/500\n", 436 | "7/7 [==============================] - 0s 166us/sample - loss: 64.9999\n", 437 | "Epoch 182/500\n", 438 | "7/7 [==============================] - 0s 168us/sample - loss: 64.2897\n", 439 | "Epoch 183/500\n", 440 | "7/7 [==============================] - 0s 243us/sample - loss: 63.5872\n", 441 | "Epoch 184/500\n", 442 | "7/7 [==============================] - 0s 242us/sample - loss: 62.8923\n", 443 | "Epoch 185/500\n", 444 | "7/7 [==============================] - 0s 271us/sample - loss: 62.2050\n", 445 | "Epoch 186/500\n", 446 | "7/7 [==============================] - 0s 209us/sample - loss: 61.5253\n", 447 | "Epoch 187/500\n", 448 | "7/7 [==============================] - 0s 148us/sample - loss: 60.8531\n", 449 | "Epoch 188/500\n", 450 | "7/7 [==============================] - 0s 236us/sample - loss: 60.1881\n", 451 | "Epoch 189/500\n", 452 | "7/7 [==============================] - 0s 246us/sample - loss: 59.5304\n", 453 | "Epoch 190/500\n", 454 | "7/7 [==============================] - 0s 236us/sample - loss: 58.8800\n", 455 | "Epoch 191/500\n", 456 | "7/7 [==============================] - 0s 268us/sample - loss: 58.2365\n", 457 | "Epoch 192/500\n", 458 | "7/7 [==============================] - 0s 256us/sample - loss: 57.6001\n", 459 | "Epoch 193/500\n", 460 | "7/7 [==============================] - 0s 237us/sample - loss: 56.9707\n", 461 | "Epoch 194/500\n", 462 | "7/7 [==============================] - 0s 235us/sample - loss: 56.3481\n", 463 | "Epoch 195/500\n", 464 | "7/7 [==============================] - 0s 144us/sample - loss: 55.7325\n", 465 | "Epoch 196/500\n", 466 | "7/7 [==============================] - 0s 246us/sample - loss: 55.1234\n", 467 | "Epoch 197/500\n", 468 | "7/7 [==============================] - 0s 237us/sample - loss: 54.5211\n", 469 | "Epoch 198/500\n", 470 | "7/7 [==============================] - 0s 227us/sample - loss: 53.9251\n", 471 | "Epoch 199/500\n", 472 | "7/7 [==============================] - 0s 170us/sample - loss: 53.3360\n", 473 | "Epoch 200/500\n", 474 | "7/7 [==============================] - 0s 180us/sample - loss: 52.7532\n", 475 | "Epoch 201/500\n", 476 | "7/7 [==============================] - 0s 172us/sample - loss: 52.1767\n", 477 | "Epoch 202/500\n", 478 | "7/7 [==============================] - 0s 171us/sample - loss: 51.6065\n", 479 | "Epoch 203/500\n", 480 | "7/7 [==============================] - 0s 164us/sample - loss: 51.0426\n", 481 | "Epoch 204/500\n", 482 | "7/7 [==============================] - 0s 226us/sample - loss: 50.4849\n", 483 | "Epoch 205/500\n", 484 | "7/7 [==============================] - 0s 168us/sample - loss: 49.9332\n", 485 | "Epoch 206/500\n", 486 | "7/7 [==============================] - 0s 145us/sample - loss: 49.3875\n", 487 | "Epoch 207/500\n", 488 | "7/7 [==============================] - 0s 230us/sample - loss: 48.8479\n", 489 | "Epoch 208/500\n", 490 | "7/7 [==============================] - 0s 183us/sample - loss: 48.3141\n", 491 | "Epoch 209/500\n", 492 | "7/7 [==============================] - 0s 201us/sample - loss: 47.7862\n", 493 | "Epoch 210/500\n", 494 | "7/7 [==============================] - 0s 180us/sample - loss: 47.2640\n", 495 | "Epoch 211/500\n", 496 | "7/7 [==============================] - 0s 154us/sample - loss: 46.7475\n", 497 | "Epoch 212/500\n", 498 | "7/7 [==============================] - 0s 200us/sample - loss: 46.2367\n", 499 | "Epoch 213/500\n", 500 | "7/7 
[==============================] - 0s 243us/sample - loss: 45.7313\n", 501 | "Epoch 214/500\n", 502 | "7/7 [==============================] - 0s 184us/sample - loss: 45.2317\n", 503 | "Epoch 215/500\n", 504 | "7/7 [==============================] - 0s 153us/sample - loss: 44.7375\n", 505 | "Epoch 216/500\n", 506 | "7/7 [==============================] - 0s 241us/sample - loss: 44.2486\n", 507 | "Epoch 217/500\n", 508 | "7/7 [==============================] - 0s 154us/sample - loss: 43.7650\n", 509 | "Epoch 218/500\n", 510 | "7/7 [==============================] - 0s 151us/sample - loss: 43.2868\n", 511 | "Epoch 219/500\n", 512 | "7/7 [==============================] - 0s 230us/sample - loss: 42.8138\n", 513 | "Epoch 220/500\n", 514 | "7/7 [==============================] - 0s 265us/sample - loss: 42.3458\n", 515 | "Epoch 221/500\n", 516 | "7/7 [==============================] - 0s 198us/sample - loss: 41.8833\n", 517 | "Epoch 222/500\n", 518 | "7/7 [==============================] - 0s 185us/sample - loss: 41.4255\n", 519 | "Epoch 223/500\n", 520 | "7/7 [==============================] - 0s 149us/sample - loss: 40.9728\n", 521 | "Epoch 224/500\n", 522 | "7/7 [==============================] - 0s 243us/sample - loss: 40.5251\n", 523 | "Epoch 225/500\n", 524 | "7/7 [==============================] - 0s 181us/sample - loss: 40.0823\n", 525 | "Epoch 226/500\n", 526 | "7/7 [==============================] - 0s 245us/sample - loss: 39.6443\n", 527 | "Epoch 227/500\n", 528 | "7/7 [==============================] - 0s 185us/sample - loss: 39.2111\n", 529 | "Epoch 228/500\n", 530 | "7/7 [==============================] - 0s 157us/sample - loss: 38.7826\n", 531 | "Epoch 229/500\n", 532 | "7/7 [==============================] - 0s 181us/sample - loss: 38.3588\n", 533 | "Epoch 230/500\n", 534 | "7/7 [==============================] - 0s 178us/sample - loss: 37.9396\n", 535 | "Epoch 231/500\n", 536 | "7/7 [==============================] - 0s 234us/sample - loss: 37.5251\n", 537 | "Epoch 232/500\n", 538 | "7/7 [==============================] - 0s 278us/sample - loss: 37.1150\n", 539 | "Epoch 233/500\n", 540 | "7/7 [==============================] - 0s 235us/sample - loss: 36.7094\n", 541 | "Epoch 234/500\n", 542 | "7/7 [==============================] - 0s 236us/sample - loss: 36.3083\n", 543 | "Epoch 235/500\n", 544 | "7/7 [==============================] - 0s 170us/sample - loss: 35.9115\n", 545 | "Epoch 236/500\n", 546 | "7/7 [==============================] - 0s 868us/sample - loss: 35.5191\n", 547 | "Epoch 237/500\n", 548 | "7/7 [==============================] - 0s 284us/sample - loss: 35.1310\n", 549 | "Epoch 238/500\n", 550 | "7/7 [==============================] - 0s 252us/sample - loss: 34.7471\n", 551 | "Epoch 239/500\n", 552 | "7/7 [==============================] - 0s 254us/sample - loss: 34.3674\n", 553 | "Epoch 240/500\n", 554 | "7/7 [==============================] - 0s 256us/sample - loss: 33.9919\n", 555 | "Epoch 241/500\n", 556 | "7/7 [==============================] - 0s 248us/sample - loss: 33.6203\n", 557 | "Epoch 242/500\n", 558 | "7/7 [==============================] - 0s 185us/sample - loss: 33.2530\n", 559 | "Epoch 243/500\n", 560 | "7/7 [==============================] - 0s 198us/sample - loss: 32.8897\n", 561 | "Epoch 244/500\n", 562 | "7/7 [==============================] - 0s 254us/sample - loss: 32.5302\n", 563 | "Epoch 245/500\n", 564 | "7/7 [==============================] - 0s 270us/sample - loss: 32.1749\n", 565 | "Epoch 246/500\n", 566 | "7/7 
[==============================] - 0s 181us/sample - loss: 31.8231\n", 567 | "Epoch 247/500\n", 568 | "7/7 [==============================] - 0s 226us/sample - loss: 31.4754\n", 569 | "Epoch 248/500\n", 570 | "7/7 [==============================] - 0s 252us/sample - loss: 31.1315\n", 571 | "Epoch 249/500\n", 572 | "7/7 [==============================] - 0s 239us/sample - loss: 30.7913\n", 573 | "Epoch 250/500\n", 574 | "7/7 [==============================] - 0s 252us/sample - loss: 30.4548\n", 575 | "Epoch 251/500\n", 576 | "7/7 [==============================] - 0s 257us/sample - loss: 30.1221\n", 577 | "Epoch 252/500\n", 578 | "7/7 [==============================] - 0s 257us/sample - loss: 29.7929\n", 579 | "Epoch 253/500\n", 580 | "7/7 [==============================] - 0s 247us/sample - loss: 29.4674\n", 581 | "Epoch 254/500\n", 582 | "7/7 [==============================] - 0s 247us/sample - loss: 29.1453\n", 583 | "Epoch 255/500\n", 584 | "7/7 [==============================] - 0s 250us/sample - loss: 28.8267\n", 585 | "Epoch 256/500\n", 586 | "7/7 [==============================] - 0s 327us/sample - loss: 28.5118\n", 587 | "Epoch 257/500\n", 588 | "7/7 [==============================] - 0s 200us/sample - loss: 28.2003\n", 589 | "Epoch 258/500\n", 590 | "7/7 [==============================] - 0s 164us/sample - loss: 27.8920\n", 591 | "Epoch 259/500\n", 592 | "7/7 [==============================] - 0s 192us/sample - loss: 27.5872\n", 593 | "Epoch 260/500\n", 594 | "7/7 [==============================] - 0s 204us/sample - loss: 27.2859\n", 595 | "Epoch 261/500\n", 596 | "7/7 [==============================] - 0s 160us/sample - loss: 26.9877\n", 597 | "Epoch 262/500\n", 598 | "7/7 [==============================] - 0s 213us/sample - loss: 26.6928\n", 599 | "Epoch 263/500\n", 600 | "7/7 [==============================] - 0s 174us/sample - loss: 26.4011\n", 601 | "Epoch 264/500\n", 602 | "7/7 [==============================] - 0s 158us/sample - loss: 26.1126\n", 603 | "Epoch 265/500\n", 604 | "7/7 [==============================] - 0s 364us/sample - loss: 25.8272\n", 605 | "Epoch 266/500\n", 606 | "7/7 [==============================] - 0s 363us/sample - loss: 25.5451\n", 607 | "Epoch 267/500\n", 608 | "7/7 [==============================] - 0s 397us/sample - loss: 25.2658\n", 609 | "Epoch 268/500\n", 610 | "7/7 [==============================] - 0s 380us/sample - loss: 24.9898\n", 611 | "Epoch 269/500\n", 612 | "7/7 [==============================] - 0s 294us/sample - loss: 24.7167\n", 613 | "Epoch 270/500\n", 614 | "7/7 [==============================] - 0s 383us/sample - loss: 24.4466\n", 615 | "Epoch 271/500\n", 616 | "7/7 [==============================] - 0s 353us/sample - loss: 24.1795\n", 617 | "Epoch 272/500\n", 618 | "7/7 [==============================] - 0s 504us/sample - loss: 23.9153\n", 619 | "Epoch 273/500\n", 620 | "7/7 [==============================] - 0s 420us/sample - loss: 23.6539\n", 621 | "Epoch 274/500\n", 622 | "7/7 [==============================] - 0s 415us/sample - loss: 23.3955\n", 623 | "Epoch 275/500\n", 624 | "7/7 [==============================] - 0s 252us/sample - loss: 23.1398\n", 625 | "Epoch 276/500\n", 626 | "7/7 [==============================] - 0s 400us/sample - loss: 22.8870\n", 627 | "Epoch 277/500\n", 628 | "7/7 [==============================] - 0s 359us/sample - loss: 22.6368\n", 629 | "Epoch 278/500\n", 630 | "7/7 [==============================] - 0s 402us/sample - loss: 22.3895\n", 631 | "Epoch 279/500\n", 632 | "7/7 
[==============================] - 0s 383us/sample - loss: 22.1447\n", 633 | "Epoch 280/500\n", 634 | "7/7 [==============================] - 0s 376us/sample - loss: 21.9029\n", 635 | "Epoch 281/500\n", 636 | "7/7 [==============================] - 0s 390us/sample - loss: 21.6635\n", 637 | "Epoch 282/500\n", 638 | "7/7 [==============================] - 0s 423us/sample - loss: 21.4268\n", 639 | "Epoch 283/500\n", 640 | "7/7 [==============================] - 0s 426us/sample - loss: 21.1926\n", 641 | "Epoch 284/500\n", 642 | "7/7 [==============================] - 0s 383us/sample - loss: 20.9610\n", 643 | "Epoch 285/500\n", 644 | "7/7 [==============================] - 0s 338us/sample - loss: 20.7320\n", 645 | "Epoch 286/500\n", 646 | "7/7 [==============================] - 0s 468us/sample - loss: 20.5054\n", 647 | "Epoch 287/500\n", 648 | "7/7 [==============================] - 0s 499us/sample - loss: 20.2813\n", 649 | "Epoch 288/500\n", 650 | "7/7 [==============================] - 0s 432us/sample - loss: 20.0597\n", 651 | "Epoch 289/500\n", 652 | "7/7 [==============================] - 0s 528us/sample - loss: 19.8405\n", 653 | "Epoch 290/500\n", 654 | "7/7 [==============================] - 0s 303us/sample - loss: 19.6237\n", 655 | "Epoch 291/500\n", 656 | "7/7 [==============================] - 0s 288us/sample - loss: 19.4093\n", 657 | "Epoch 292/500\n", 658 | "7/7 [==============================] - 0s 262us/sample - loss: 19.1973\n", 659 | "Epoch 293/500\n", 660 | "7/7 [==============================] - 0s 441us/sample - loss: 18.9875\n", 661 | "Epoch 294/500\n", 662 | "7/7 [==============================] - 0s 310us/sample - loss: 18.7799\n", 663 | "Epoch 295/500\n", 664 | "7/7 [==============================] - 0s 344us/sample - loss: 18.5747\n", 665 | "Epoch 296/500\n", 666 | "7/7 [==============================] - 0s 301us/sample - loss: 18.3717\n", 667 | "Epoch 297/500\n", 668 | "7/7 [==============================] - 0s 308us/sample - loss: 18.1710\n", 669 | "Epoch 298/500\n", 670 | "7/7 [==============================] - 0s 299us/sample - loss: 17.9724\n", 671 | "Epoch 299/500\n", 672 | "7/7 [==============================] - 0s 646us/sample - loss: 17.7760\n", 673 | "Epoch 300/500\n", 674 | "7/7 [==============================] - 0s 202us/sample - loss: 17.5817\n", 675 | "Epoch 301/500\n", 676 | "7/7 [==============================] - 0s 329us/sample - loss: 17.3896\n", 677 | "Epoch 302/500\n", 678 | "7/7 [==============================] - 0s 274us/sample - loss: 17.1996\n", 679 | "Epoch 303/500\n", 680 | "7/7 [==============================] - 0s 327us/sample - loss: 17.0117\n", 681 | "Epoch 304/500\n", 682 | "7/7 [==============================] - 0s 162us/sample - loss: 16.8258\n", 683 | "Epoch 305/500\n", 684 | "7/7 [==============================] - 0s 204us/sample - loss: 16.6419\n", 685 | "Epoch 306/500\n", 686 | "7/7 [==============================] - 0s 247us/sample - loss: 16.4600\n", 687 | "Epoch 307/500\n", 688 | "7/7 [==============================] - 0s 202us/sample - loss: 16.2802\n", 689 | "Epoch 308/500\n", 690 | "7/7 [==============================] - 0s 207us/sample - loss: 16.1023\n", 691 | "Epoch 309/500\n", 692 | "7/7 [==============================] - 0s 253us/sample - loss: 15.9263\n", 693 | "Epoch 310/500\n", 694 | "7/7 [==============================] - 0s 236us/sample - loss: 15.7522\n", 695 | "Epoch 311/500\n", 696 | "7/7 [==============================] - 0s 147us/sample - loss: 15.5801\n", 697 | "Epoch 312/500\n", 698 | "7/7 
[==============================] - 0s 246us/sample - loss: 15.4098\n", 699 | "Epoch 313/500\n", 700 | "7/7 [==============================] - 0s 149us/sample - loss: 15.2415\n", 701 | "Epoch 314/500\n", 702 | "7/7 [==============================] - 0s 199us/sample - loss: 15.0749\n", 703 | "Epoch 315/500\n", 704 | "7/7 [==============================] - 0s 369us/sample - loss: 14.9102\n", 705 | "Epoch 316/500\n", 706 | "7/7 [==============================] - 0s 176us/sample - loss: 14.7473\n", 707 | "Epoch 317/500\n", 708 | "7/7 [==============================] - 0s 177us/sample - loss: 14.5861\n", 709 | "Epoch 318/500\n", 710 | "7/7 [==============================] - 0s 239us/sample - loss: 14.4267\n", 711 | "Epoch 319/500\n", 712 | "7/7 [==============================] - 0s 305us/sample - loss: 14.2691\n", 713 | "Epoch 320/500\n", 714 | "7/7 [==============================] - 0s 233us/sample - loss: 14.1132\n", 715 | "Epoch 321/500\n", 716 | "7/7 [==============================] - 0s 203us/sample - loss: 13.9589\n", 717 | "Epoch 322/500\n", 718 | "7/7 [==============================] - 0s 247us/sample - loss: 13.8065\n", 719 | "Epoch 323/500\n", 720 | "7/7 [==============================] - 0s 202us/sample - loss: 13.6555\n", 721 | "Epoch 324/500\n", 722 | "7/7 [==============================] - 0s 280us/sample - loss: 13.5063\n", 723 | "Epoch 325/500\n", 724 | "7/7 [==============================] - 0s 205us/sample - loss: 13.3588\n", 725 | "Epoch 326/500\n", 726 | "7/7 [==============================] - 0s 279us/sample - loss: 13.2127\n", 727 | "Epoch 327/500\n", 728 | "7/7 [==============================] - 0s 150us/sample - loss: 13.0684\n", 729 | "Epoch 328/500\n", 730 | "7/7 [==============================] - 0s 172us/sample - loss: 12.9255\n", 731 | "Epoch 329/500\n", 732 | "7/7 [==============================] - 0s 205us/sample - loss: 12.7844\n", 733 | "Epoch 330/500\n", 734 | "7/7 [==============================] - 0s 170us/sample - loss: 12.6446\n", 735 | "Epoch 331/500\n", 736 | "7/7 [==============================] - 0s 171us/sample - loss: 12.5065\n", 737 | "Epoch 332/500\n", 738 | "7/7 [==============================] - 0s 199us/sample - loss: 12.3698\n", 739 | "Epoch 333/500\n", 740 | "7/7 [==============================] - 0s 215us/sample - loss: 12.2346\n", 741 | "Epoch 334/500\n", 742 | "7/7 [==============================] - 0s 195us/sample - loss: 12.1009\n", 743 | "Epoch 335/500\n", 744 | "7/7 [==============================] - 0s 199us/sample - loss: 11.9687\n", 745 | "Epoch 336/500\n", 746 | "7/7 [==============================] - 0s 217us/sample - loss: 11.8379\n", 747 | "Epoch 337/500\n", 748 | "7/7 [==============================] - 0s 195us/sample - loss: 11.7086\n", 749 | "Epoch 338/500\n", 750 | "7/7 [==============================] - 0s 257us/sample - loss: 11.5806\n", 751 | "Epoch 339/500\n", 752 | "7/7 [==============================] - 0s 206us/sample - loss: 11.4541\n", 753 | "Epoch 340/500\n", 754 | "7/7 [==============================] - 0s 324us/sample - loss: 11.3289\n", 755 | "Epoch 341/500\n", 756 | "7/7 [==============================] - 0s 217us/sample - loss: 11.2051\n", 757 | "Epoch 342/500\n", 758 | "7/7 [==============================] - 0s 208us/sample - loss: 11.0827\n", 759 | "Epoch 343/500\n", 760 | "7/7 [==============================] - 0s 287us/sample - loss: 10.9616\n", 761 | "Epoch 344/500\n", 762 | "7/7 [==============================] - 0s 219us/sample - loss: 10.8418\n", 763 | "Epoch 345/500\n", 764 | "7/7 
[==============================] - 0s 175us/sample - loss: 10.7233\n", 765 | "Epoch 346/500\n", 766 | "7/7 [==============================] - 0s 246us/sample - loss: 10.6062\n", 767 | "Epoch 347/500\n", 768 | "7/7 [==============================] - 0s 245us/sample - loss: 10.4902\n", 769 | "Epoch 348/500\n", 770 | "7/7 [==============================] - 0s 291us/sample - loss: 10.3756\n", 771 | "Epoch 349/500\n", 772 | "7/7 [==============================] - 0s 201us/sample - loss: 10.2622\n", 773 | "Epoch 350/500\n", 774 | "7/7 [==============================] - 0s 231us/sample - loss: 10.1501\n", 775 | "Epoch 351/500\n", 776 | "7/7 [==============================] - 0s 203us/sample - loss: 10.0391\n", 777 | "Epoch 352/500\n", 778 | "7/7 [==============================] - 0s 240us/sample - loss: 9.9294\n", 779 | "Epoch 353/500\n", 780 | "7/7 [==============================] - 0s 273us/sample - loss: 9.8209\n", 781 | "Epoch 354/500\n", 782 | "7/7 [==============================] - 0s 221us/sample - loss: 9.7136\n", 783 | "Epoch 355/500\n", 784 | "7/7 [==============================] - 0s 294us/sample - loss: 9.6075\n", 785 | "Epoch 356/500\n", 786 | "7/7 [==============================] - 0s 205us/sample - loss: 9.5025\n", 787 | "Epoch 357/500\n", 788 | "7/7 [==============================] - 0s 246us/sample - loss: 9.3987\n", 789 | "Epoch 358/500\n", 790 | "7/7 [==============================] - 0s 217us/sample - loss: 9.2959\n", 791 | "Epoch 359/500\n", 792 | "7/7 [==============================] - 0s 212us/sample - loss: 9.1944\n", 793 | "Epoch 360/500\n", 794 | "7/7 [==============================] - 0s 206us/sample - loss: 9.0939\n", 795 | "Epoch 361/500\n", 796 | "7/7 [==============================] - 0s 264us/sample - loss: 8.9946\n", 797 | "Epoch 362/500\n", 798 | "7/7 [==============================] - 0s 204us/sample - loss: 8.8963\n", 799 | "Epoch 363/500\n", 800 | "7/7 [==============================] - 0s 264us/sample - loss: 8.7990\n", 801 | "Epoch 364/500\n", 802 | "7/7 [==============================] - 0s 173us/sample - loss: 8.7029\n", 803 | "Epoch 365/500\n", 804 | "7/7 [==============================] - 0s 234us/sample - loss: 8.6078\n", 805 | "Epoch 366/500\n", 806 | "7/7 [==============================] - 0s 223us/sample - loss: 8.5137\n", 807 | "Epoch 367/500\n", 808 | "7/7 [==============================] - 0s 182us/sample - loss: 8.4207\n", 809 | "Epoch 368/500\n", 810 | "7/7 [==============================] - 0s 144us/sample - loss: 8.3287\n", 811 | "Epoch 369/500\n", 812 | "7/7 [==============================] - 0s 324us/sample - loss: 8.2377\n", 813 | "Epoch 370/500\n", 814 | "7/7 [==============================] - 0s 400us/sample - loss: 8.1476\n", 815 | "Epoch 371/500\n", 816 | "7/7 [==============================] - 0s 430us/sample - loss: 8.0586\n", 817 | "Epoch 372/500\n", 818 | "7/7 [==============================] - 0s 213us/sample - loss: 7.9705\n", 819 | "Epoch 373/500\n", 820 | "7/7 [==============================] - 0s 185us/sample - loss: 7.8834\n", 821 | "Epoch 374/500\n", 822 | "7/7 [==============================] - 0s 200us/sample - loss: 7.7973\n", 823 | "Epoch 375/500\n", 824 | "7/7 [==============================] - 0s 247us/sample - loss: 7.7121\n", 825 | "Epoch 376/500\n", 826 | "7/7 [==============================] - 0s 193us/sample - loss: 7.6278\n", 827 | "Epoch 377/500\n", 828 | "7/7 [==============================] - 0s 217us/sample - loss: 7.5445\n", 829 | "Epoch 378/500\n", 830 | "7/7 [==============================] - 0s 
262us/sample - loss: 7.4620\n", 831 | "Epoch 379/500\n", 832 | "7/7 [==============================] - 0s 190us/sample - loss: 7.3805\n", 833 | "Epoch 380/500\n", 834 | "7/7 [==============================] - 0s 278us/sample - loss: 7.2998\n", 835 | "Epoch 381/500\n", 836 | "7/7 [==============================] - 0s 192us/sample - loss: 7.2200\n", 837 | "Epoch 382/500\n", 838 | "7/7 [==============================] - 0s 265us/sample - loss: 7.1411\n", 839 | "Epoch 383/500\n", 840 | "7/7 [==============================] - 0s 190us/sample - loss: 7.0631\n", 841 | "Epoch 384/500\n", 842 | "7/7 [==============================] - 0s 253us/sample - loss: 6.9860\n", 843 | "Epoch 385/500\n", 844 | "7/7 [==============================] - 0s 302us/sample - loss: 6.9096\n", 845 | "Epoch 386/500\n", 846 | "7/7 [==============================] - 0s 256us/sample - loss: 6.8341\n", 847 | "Epoch 387/500\n", 848 | "7/7 [==============================] - 0s 168us/sample - loss: 6.7594\n", 849 | "Epoch 388/500\n", 850 | "7/7 [==============================] - 0s 188us/sample - loss: 6.6856\n", 851 | "Epoch 389/500\n", 852 | "7/7 [==============================] - 0s 241us/sample - loss: 6.6125\n", 853 | "Epoch 390/500\n", 854 | "7/7 [==============================] - 0s 183us/sample - loss: 6.5402\n", 855 | "Epoch 391/500\n", 856 | "7/7 [==============================] - 0s 200us/sample - loss: 6.4688\n", 857 | "Epoch 392/500\n", 858 | "7/7 [==============================] - 0s 264us/sample - loss: 6.3981\n", 859 | "Epoch 393/500\n", 860 | "7/7 [==============================] - 0s 201us/sample - loss: 6.3282\n", 861 | "Epoch 394/500\n", 862 | "7/7 [==============================] - 0s 247us/sample - loss: 6.2590\n", 863 | "Epoch 395/500\n", 864 | "7/7 [==============================] - 0s 190us/sample - loss: 6.1906\n", 865 | "Epoch 396/500\n", 866 | "7/7 [==============================] - 0s 236us/sample - loss: 6.1230\n", 867 | "Epoch 397/500\n", 868 | "7/7 [==============================] - 0s 187us/sample - loss: 6.0561\n", 869 | "Epoch 398/500\n", 870 | "7/7 [==============================] - 0s 157us/sample - loss: 5.9899\n", 871 | "Epoch 399/500\n", 872 | "7/7 [==============================] - 0s 264us/sample - loss: 5.9244\n", 873 | "Epoch 400/500\n", 874 | "7/7 [==============================] - 0s 288us/sample - loss: 5.8597\n", 875 | "Epoch 401/500\n", 876 | "7/7 [==============================] - 0s 271us/sample - loss: 5.7957\n", 877 | "Epoch 402/500\n", 878 | "7/7 [==============================] - 0s 189us/sample - loss: 5.7323\n", 879 | "Epoch 403/500\n", 880 | "7/7 [==============================] - 0s 170us/sample - loss: 5.6697\n", 881 | "Epoch 404/500\n", 882 | "7/7 [==============================] - 0s 182us/sample - loss: 5.6078\n", 883 | "Epoch 405/500\n", 884 | "7/7 [==============================] - 0s 162us/sample - loss: 5.5464\n", 885 | "Epoch 406/500\n", 886 | "7/7 [==============================] - 0s 180us/sample - loss: 5.4858\n", 887 | "Epoch 407/500\n", 888 | "7/7 [==============================] - 0s 254us/sample - loss: 5.4259\n", 889 | "Epoch 408/500\n", 890 | "7/7 [==============================] - 0s 268us/sample - loss: 5.3666\n", 891 | "Epoch 409/500\n", 892 | "7/7 [==============================] - 0s 233us/sample - loss: 5.3080\n", 893 | "Epoch 410/500\n", 894 | "7/7 [==============================] - 0s 235us/sample - loss: 5.2500\n", 895 | "Epoch 411/500\n", 896 | "7/7 [==============================] - 0s 162us/sample - loss: 5.1926\n", 897 | "Epoch 412/500\n", 
898 | "7/7 [==============================] - 0s 148us/sample - loss: 5.1358\n", 899 | "Epoch 413/500\n", 900 | "7/7 [==============================] - 0s 147us/sample - loss: 5.0797\n", 901 | "Epoch 414/500\n", 902 | "7/7 [==============================] - 0s 251us/sample - loss: 5.0242\n", 903 | "Epoch 415/500\n", 904 | "7/7 [==============================] - 0s 245us/sample - loss: 4.9693\n", 905 | "Epoch 416/500\n", 906 | "7/7 [==============================] - 0s 143us/sample - loss: 4.9150\n", 907 | "Epoch 417/500\n", 908 | "7/7 [==============================] - 0s 143us/sample - loss: 4.8613\n", 909 | "Epoch 418/500\n", 910 | "7/7 [==============================] - 0s 240us/sample - loss: 4.8082\n", 911 | "Epoch 419/500\n", 912 | "7/7 [==============================] - 0s 269us/sample - loss: 4.7557\n", 913 | "Epoch 420/500\n", 914 | "7/7 [==============================] - 0s 148us/sample - loss: 4.7037\n", 915 | "Epoch 421/500\n", 916 | "7/7 [==============================] - 0s 233us/sample - loss: 4.6523\n", 917 | "Epoch 422/500\n", 918 | "7/7 [==============================] - 0s 193us/sample - loss: 4.6014\n", 919 | "Epoch 423/500\n", 920 | "7/7 [==============================] - 0s 166us/sample - loss: 4.5512\n", 921 | "Epoch 424/500\n", 922 | "7/7 [==============================] - 0s 182us/sample - loss: 4.5015\n", 923 | "Epoch 425/500\n", 924 | "7/7 [==============================] - 0s 143us/sample - loss: 4.4522\n", 925 | "Epoch 426/500\n", 926 | "7/7 [==============================] - 0s 168us/sample - loss: 4.4036\n", 927 | "Epoch 427/500\n", 928 | "7/7 [==============================] - 0s 181us/sample - loss: 4.3555\n", 929 | "Epoch 428/500\n", 930 | "7/7 [==============================] - 0s 160us/sample - loss: 4.3079\n", 931 | "Epoch 429/500\n", 932 | "7/7 [==============================] - 0s 236us/sample - loss: 4.2608\n", 933 | "Epoch 430/500\n", 934 | "7/7 [==============================] - 0s 170us/sample - loss: 4.2142\n", 935 | "Epoch 431/500\n", 936 | "7/7 [==============================] - 0s 167us/sample - loss: 4.1682\n", 937 | "Epoch 432/500\n", 938 | "7/7 [==============================] - 0s 178us/sample - loss: 4.1227\n", 939 | "Epoch 433/500\n", 940 | "7/7 [==============================] - 0s 166us/sample - loss: 4.0776\n", 941 | "Epoch 434/500\n", 942 | "7/7 [==============================] - 0s 144us/sample - loss: 4.0330\n", 943 | "Epoch 435/500\n", 944 | "7/7 [==============================] - 0s 148us/sample - loss: 3.9890\n", 945 | "Epoch 436/500\n", 946 | "7/7 [==============================] - 0s 158us/sample - loss: 3.9454\n", 947 | "Epoch 437/500\n", 948 | "7/7 [==============================] - 0s 173us/sample - loss: 3.9023\n", 949 | "Epoch 438/500\n", 950 | "7/7 [==============================] - 0s 139us/sample - loss: 3.8596\n", 951 | "Epoch 439/500\n", 952 | "7/7 [==============================] - 0s 166us/sample - loss: 3.8175\n", 953 | "Epoch 440/500\n", 954 | "7/7 [==============================] - 0s 156us/sample - loss: 3.7757\n", 955 | "Epoch 441/500\n", 956 | "7/7 [==============================] - 0s 170us/sample - loss: 3.7345\n", 957 | "Epoch 442/500\n", 958 | "7/7 [==============================] - 0s 135us/sample - loss: 3.6936\n", 959 | "Epoch 443/500\n", 960 | "7/7 [==============================] - 0s 332us/sample - loss: 3.6533\n", 961 | "Epoch 444/500\n", 962 | "7/7 [==============================] - 0s 197us/sample - loss: 3.6134\n", 963 | "Epoch 445/500\n", 964 | "7/7 [==============================] - 0s 
458us/sample - loss: 3.5739\n", 965 | "Epoch 446/500\n", 966 | "7/7 [==============================] - 0s 231us/sample - loss: 3.5348\n", 967 | "Epoch 447/500\n", 968 | "7/7 [==============================] - 0s 178us/sample - loss: 3.4962\n", 969 | "Epoch 448/500\n", 970 | "7/7 [==============================] - 0s 144us/sample - loss: 3.4580\n", 971 | "Epoch 449/500\n", 972 | "7/7 [==============================] - 0s 147us/sample - loss: 3.4202\n", 973 | "Epoch 450/500\n", 974 | "7/7 [==============================] - 0s 279us/sample - loss: 3.3829\n", 975 | "Epoch 451/500\n", 976 | "7/7 [==============================] - 0s 159us/sample - loss: 3.3459\n", 977 | "Epoch 452/500\n", 978 | "7/7 [==============================] - 0s 166us/sample - loss: 3.3093\n", 979 | "Epoch 453/500\n", 980 | "7/7 [==============================] - 0s 231us/sample - loss: 3.2732\n", 981 | "Epoch 454/500\n", 982 | "7/7 [==============================] - 0s 225us/sample - loss: 3.2374\n", 983 | "Epoch 455/500\n", 984 | "7/7 [==============================] - 0s 259us/sample - loss: 3.2020\n", 985 | "Epoch 456/500\n", 986 | "7/7 [==============================] - 0s 157us/sample - loss: 3.1670\n", 987 | "Epoch 457/500\n", 988 | "7/7 [==============================] - 0s 163us/sample - loss: 3.1324\n", 989 | "Epoch 458/500\n", 990 | "7/7 [==============================] - 0s 251us/sample - loss: 3.0982\n", 991 | "Epoch 459/500\n", 992 | "7/7 [==============================] - 0s 170us/sample - loss: 3.0643\n", 993 | "Epoch 460/500\n", 994 | "7/7 [==============================] - 0s 254us/sample - loss: 3.0308\n", 995 | "Epoch 461/500\n", 996 | "7/7 [==============================] - 0s 269us/sample - loss: 2.9977\n", 997 | "Epoch 462/500\n", 998 | "7/7 [==============================] - 0s 175us/sample - loss: 2.9650\n", 999 | "Epoch 463/500\n", 1000 | "7/7 [==============================] - 0s 265us/sample - loss: 2.9326\n", 1001 | "Epoch 464/500\n", 1002 | "7/7 [==============================] - 0s 250us/sample - loss: 2.9005\n", 1003 | "Epoch 465/500\n", 1004 | "7/7 [==============================] - 0s 182us/sample - loss: 2.8688\n", 1005 | "Epoch 466/500\n", 1006 | "7/7 [==============================] - 0s 238us/sample - loss: 2.8375\n", 1007 | "Epoch 467/500\n", 1008 | "7/7 [==============================] - 0s 240us/sample - loss: 2.8065\n", 1009 | "Epoch 468/500\n", 1010 | "7/7 [==============================] - 0s 179us/sample - loss: 2.7758\n", 1011 | "Epoch 469/500\n", 1012 | "7/7 [==============================] - 0s 164us/sample - loss: 2.7455\n", 1013 | "Epoch 470/500\n", 1014 | "7/7 [==============================] - 0s 162us/sample - loss: 2.7155\n", 1015 | "Epoch 471/500\n", 1016 | "7/7 [==============================] - 0s 173us/sample - loss: 2.6858\n", 1017 | "Epoch 472/500\n", 1018 | "7/7 [==============================] - 0s 205us/sample - loss: 2.6565\n", 1019 | "Epoch 473/500\n", 1020 | "7/7 [==============================] - 0s 245us/sample - loss: 2.6274\n", 1021 | "Epoch 474/500\n", 1022 | "7/7 [==============================] - 0s 165us/sample - loss: 2.5987\n", 1023 | "Epoch 475/500\n", 1024 | "7/7 [==============================] - 0s 160us/sample - loss: 2.5703\n", 1025 | "Epoch 476/500\n", 1026 | "7/7 [==============================] - 0s 244us/sample - loss: 2.5422\n", 1027 | "Epoch 477/500\n", 1028 | "7/7 [==============================] - 0s 235us/sample - loss: 2.5145\n", 1029 | "Epoch 478/500\n", 1030 | "7/7 [==============================] - 0s 193us/sample - loss: 
2.4870\n", 1031 | "Epoch 479/500\n", 1032 | "7/7 [==============================] - 0s 188us/sample - loss: 2.4598\n", 1033 | "Epoch 480/500\n", 1034 | "7/7 [==============================] - 0s 170us/sample - loss: 2.4329\n", 1035 | "Epoch 481/500\n", 1036 | "7/7 [==============================] - 0s 268us/sample - loss: 2.4063\n", 1037 | "Epoch 482/500\n", 1038 | "7/7 [==============================] - 0s 252us/sample - loss: 2.3801\n", 1039 | "Epoch 483/500\n", 1040 | "7/7 [==============================] - 0s 188us/sample - loss: 2.3540\n", 1041 | "Epoch 484/500\n", 1042 | "7/7 [==============================] - 0s 254us/sample - loss: 2.3283\n", 1043 | "Epoch 485/500\n", 1044 | "7/7 [==============================] - 0s 166us/sample - loss: 2.3029\n", 1045 | "Epoch 486/500\n", 1046 | "7/7 [==============================] - 0s 308us/sample - loss: 2.2777\n", 1047 | "Epoch 487/500\n", 1048 | "7/7 [==============================] - 0s 146us/sample - loss: 2.2528\n", 1049 | "Epoch 488/500\n", 1050 | "7/7 [==============================] - 0s 192us/sample - loss: 2.2282\n", 1051 | "Epoch 489/500\n", 1052 | "7/7 [==============================] - 0s 189us/sample - loss: 2.2038\n", 1053 | "Epoch 490/500\n", 1054 | "7/7 [==============================] - 0s 197us/sample - loss: 2.1798\n", 1055 | "Epoch 491/500\n", 1056 | "7/7 [==============================] - 0s 242us/sample - loss: 2.1560\n", 1057 | "Epoch 492/500\n", 1058 | "7/7 [==============================] - 0s 241us/sample - loss: 2.1324\n", 1059 | "Epoch 493/500\n", 1060 | "7/7 [==============================] - 0s 240us/sample - loss: 2.1091\n", 1061 | "Epoch 494/500\n", 1062 | "7/7 [==============================] - 0s 342us/sample - loss: 2.0860\n", 1063 | "Epoch 495/500\n", 1064 | "7/7 [==============================] - 0s 360us/sample - loss: 2.0632\n", 1065 | "Epoch 496/500\n", 1066 | "7/7 [==============================] - 0s 244us/sample - loss: 2.0407\n", 1067 | "Epoch 497/500\n", 1068 | "7/7 [==============================] - 0s 266us/sample - loss: 2.0184\n", 1069 | "Epoch 498/500\n", 1070 | "7/7 [==============================] - 0s 356us/sample - loss: 1.9963\n", 1071 | "Epoch 499/500\n", 1072 | "7/7 [==============================] - 0s 263us/sample - loss: 1.9745\n", 1073 | "Epoch 500/500\n", 1074 | "7/7 [==============================] - 0s 232us/sample - loss: 1.9530\n", 1075 | "[[400.3869]]\n" 1076 | ], 1077 | "name": "stdout" 1078 | } 1079 | ] 1080 | }, 1081 | { 1082 | "metadata": { 1083 | "id": "SYSfPA5LSQ0_", 1084 | "colab_type": "code", 1085 | "outputId": "a578f970-f04d-444e-cee8-e1d62d356e30", 1086 | "colab": { 1087 | "base_uri": "https://localhost:8080/", 1088 | "height": 34 1089 | } 1090 | }, 1091 | "cell_type": "code", 1092 | "source": [ 1093 | "print('A house with %d bedrooms will worth about %fk' % (7, model.predict([7.0])))" 1094 | ], 1095 | "execution_count": 0, 1096 | "outputs": [ 1097 | { 1098 | "output_type": "stream", 1099 | "text": [ 1100 | "A house with 7 bedrooms will worth about 400.386902k\n" 1101 | ], 1102 | "name": "stdout" 1103 | } 1104 | ] 1105 | } 1106 | ] 1107 | } -------------------------------------------------------------------------------- /Exercise_3_Question_MNIST_ConvNet.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Exercise 3 - Question.ipynb", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "collapsed_sections": [], 10 | "include_colab_link": 
true 11 | }, 12 | "kernelspec": { 13 | "name": "python3", 14 | "display_name": "Python 3" 15 | }, 16 | "accelerator": "GPU" 17 | }, 18 | "cells": [ 19 | { 20 | "cell_type": "markdown", 21 | "metadata": { 22 | "id": "view-in-github", 23 | "colab_type": "text" 24 | }, 25 | "source": [ 26 | "\"Open" 27 | ] 28 | }, 29 | { 30 | "metadata": { 31 | "id": "iQjHqsmTAVLU", 32 | "colab_type": "text" 33 | }, 34 | "cell_type": "markdown", 35 | "source": [ 36 | "## Exercise 3\n", 37 | "In the videos you looked at how you would improve Fashion MNIST using Convolutions. For your exercise see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling 2D. You should stop training once the accuracy goes above this amount. It should happen in less than 20 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your layers.\n", 38 | "\n", 39 | "I've started the code for you -- you need to finish it!\n", 40 | "\n", 41 | "When 99.8% accuracy has been hit, you should print out the string \"Reached 99.8% accuracy so cancelling training!\"\n" 42 | ] 43 | }, 44 | { 45 | "metadata": { 46 | "id": "sfQRyaJWAIdg", 47 | "colab_type": "code", 48 | "outputId": "e1017cd5-a02d-47e5-e720-426c24af9d02", 49 | "colab": { 50 | "base_uri": "https://localhost:8080/", 51 | "height": 836 52 | } 53 | }, 54 | "cell_type": "code", 55 | "source": [ 56 | "import tensorflow as tf\n", 57 | "\n", 58 | "# YOUR CODE STARTS HERE\n", 59 | "class StopTrainingCallback(tf.keras.callbacks.Callback):\n", 60 | " def on_epoch_end(self, epoch, logs={}):\n", 61 | " if (logs.get('acc') >= 0.998):\n", 62 | " print('\\nReached 99.8% accuracy so cancelling training!')\n", 63 | " self.model.stop_training = True\n", 64 | "\n", 65 | "mCallback = StopTrainingCallback()\n", 66 | "# YOUR CODE ENDS HERE\n", 67 | "\n", 68 | "mnist = tf.keras.datasets.mnist\n", 69 | "(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n", 70 | "\n", 71 | "# YOUR CODE STARTS HERE\n", 72 | "training_images = training_images.reshape([60000, 28, 28, 1])\n", 73 | "training_images = training_images / 255.0;\n", 74 | "test_images = test_images.reshape([10000, 28, 28, 1])\n", 75 | "test_images = test_images / 255.0;\n", 76 | "# YOUR CODE ENDS HERE\n", 77 | "\n", 78 | "model = tf.keras.models.Sequential([\n", 79 | " # YOUR CODE STARTS HERE\n", 80 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),\n", 81 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 82 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n", 83 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 84 | " tf.keras.layers.Flatten(),\n", 85 | " tf.keras.layers.Dense(128, activation='relu'),\n", 86 | " tf.keras.layers.Dense(10, activation='softmax')\n", 87 | " # YOUR CODE ENDS HERE\n", 88 | "])\n", 89 | "\n", 90 | "# YOUR CODE STARTS HERE\n", 91 | "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n", 92 | "model.summary()\n", 93 | "\n", 94 | "model.fit(training_images, training_labels, epochs=20, callbacks=[mCallback])\n", 95 | "test_loss = model.evaluate(test_images, test_labels)\n", 96 | "# YOUR CODE ENDS HERE\n", 97 | "\n" 98 | ], 99 | "execution_count": 0, 100 | "outputs": [ 101 | { 102 | "output_type": "stream", 103 | "text": [ 104 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with 
(from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n", 105 | "Instructions for updating:\n", 106 | "Colocations handled automatically by placer.\n", 107 | "_________________________________________________________________\n", 108 | "Layer (type) Output Shape Param # \n", 109 | "=================================================================\n", 110 | "conv2d (Conv2D) (None, 26, 26, 64) 640 \n", 111 | "_________________________________________________________________\n", 112 | "max_pooling2d (MaxPooling2D) (None, 13, 13, 64) 0 \n", 113 | "_________________________________________________________________\n", 114 | "conv2d_1 (Conv2D) (None, 11, 11, 64) 36928 \n", 115 | "_________________________________________________________________\n", 116 | "max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 \n", 117 | "_________________________________________________________________\n", 118 | "flatten (Flatten) (None, 1600) 0 \n", 119 | "_________________________________________________________________\n", 120 | "dense (Dense) (None, 128) 204928 \n", 121 | "_________________________________________________________________\n", 122 | "dense_1 (Dense) (None, 10) 1290 \n", 123 | "=================================================================\n", 124 | "Total params: 243,786\n", 125 | "Trainable params: 243,786\n", 126 | "Non-trainable params: 0\n", 127 | "_________________________________________________________________\n", 128 | "Epoch 1/10\n", 129 | "60000/60000 [==============================] - 17s 288us/sample - loss: 0.1260 - acc: 0.9602\n", 130 | "Epoch 2/10\n", 131 | "60000/60000 [==============================] - 14s 239us/sample - loss: 0.0428 - acc: 0.9870\n", 132 | "Epoch 3/10\n", 133 | "60000/60000 [==============================] - 14s 238us/sample - loss: 0.0279 - acc: 0.9912\n", 134 | "Epoch 4/10\n", 135 | "60000/60000 [==============================] - 14s 238us/sample - loss: 0.0198 - acc: 0.9936\n", 136 | "Epoch 5/10\n", 137 | "60000/60000 [==============================] - 14s 238us/sample - loss: 0.0158 - acc: 0.9948\n", 138 | "Epoch 6/10\n", 139 | "60000/60000 [==============================] - 14s 237us/sample - loss: 0.0120 - acc: 0.9963\n", 140 | "Epoch 7/10\n", 141 | "60000/60000 [==============================] - 14s 238us/sample - loss: 0.0098 - acc: 0.9969\n", 142 | "Epoch 8/10\n", 143 | "60000/60000 [==============================] - 14s 237us/sample - loss: 0.0084 - acc: 0.9974\n", 144 | "Epoch 9/10\n", 145 | "60000/60000 [==============================] - 14s 237us/sample - loss: 0.0071 - acc: 0.9975\n", 146 | "Epoch 10/10\n", 147 | "59872/60000 [============================>.] 
- ETA: 0s - loss: 0.0059 - acc: 0.9983\n", 148 | "Reached 99.8% accuracy so cancelling training!\n", 149 | "60000/60000 [==============================] - 14s 237us/sample - loss: 0.0059 - acc: 0.9983\n", 150 | "10000/10000 [==============================] - 1s 105us/sample - loss: 0.0374 - acc: 0.9919\n" 151 | ], 152 | "name": "stdout" 153 | } 154 | ] 155 | } 156 | ] 157 | } -------------------------------------------------------------------------------- /Horse_Human_150x150_Course_1_Part_8_Lesson_4_Notebook.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Course 1 - Part 8 - Lesson 4 - Notebook.ipynb", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "collapsed_sections": [], 10 | "include_colab_link": true 11 | }, 12 | "kernelspec": { 13 | "name": "python3", 14 | "display_name": "Python 3" 15 | }, 16 | "accelerator": "GPU" 17 | }, 18 | "cells": [ 19 | { 20 | "cell_type": "markdown", 21 | "metadata": { 22 | "id": "view-in-github", 23 | "colab_type": "text" 24 | }, 25 | "source": [ 26 | "\"Open" 27 | ] 28 | }, 29 | { 30 | "metadata": { 31 | "colab_type": "code", 32 | "id": "RXZT2UsyIVe_", 33 | "colab": { 34 | "base_uri": "https://localhost:8080/", 35 | "height": 204 36 | }, 37 | "outputId": "ec8550d2-6830-4213-868a-1286eb289185" 38 | }, 39 | "cell_type": "code", 40 | "source": [ 41 | "!wget --no-check-certificate \\\n", 42 | " https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \\\n", 43 | " -O /tmp/horse-or-human.zip" 44 | ], 45 | "execution_count": 1, 46 | "outputs": [ 47 | { 48 | "output_type": "stream", 49 | "text": [ 50 | "--2019-04-09 13:43:49-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip\n", 51 | "Resolving storage.googleapis.com (storage.googleapis.com)... 108.177.111.128, 2607:f8b0:4001:c12::80\n", 52 | "Connecting to storage.googleapis.com (storage.googleapis.com)|108.177.111.128|:443... connected.\n", 53 | "HTTP request sent, awaiting response... 200 OK\n", 54 | "Length: 149574867 (143M) [application/zip]\n", 55 | "Saving to: ‘/tmp/horse-or-human.zip’\n", 56 | "\n", 57 | "/tmp/horse-or-human 100%[===================>] 142.65M 132MB/s in 1.1s \n", 58 | "\n", 59 | "2019-04-09 13:43:50 (132 MB/s) - ‘/tmp/horse-or-human.zip’ saved [149574867/149574867]\n", 60 | "\n" 61 | ], 62 | "name": "stdout" 63 | } 64 | ] 65 | }, 66 | { 67 | "metadata": { 68 | "id": "0mLij6qde6Ox", 69 | "colab_type": "code", 70 | "colab": { 71 | "base_uri": "https://localhost:8080/", 72 | "height": 204 73 | }, 74 | "outputId": "4812b623-cfb4-47b9-a292-b87b1d05ae48" 75 | }, 76 | "cell_type": "code", 77 | "source": [ 78 | "!wget --no-check-certificate \\\n", 79 | " https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \\\n", 80 | " -O /tmp/validation-horse-or-human.zip" 81 | ], 82 | "execution_count": 2, 83 | "outputs": [ 84 | { 85 | "output_type": "stream", 86 | "text": [ 87 | "--2019-04-09 13:43:52-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip\n", 88 | "Resolving storage.googleapis.com (storage.googleapis.com)... 108.177.111.128, 2607:f8b0:4001:c07::80\n", 89 | "Connecting to storage.googleapis.com (storage.googleapis.com)|108.177.111.128|:443... connected.\n", 90 | "HTTP request sent, awaiting response... 
200 OK\n", 91 | "Length: 11480187 (11M) [application/zip]\n", 92 | "Saving to: ‘/tmp/validation-horse-or-human.zip’\n", 93 | "\n", 94 | "\r /tmp/vali 0%[ ] 0 --.-KB/s \r /tmp/valid 73%[=============> ] 8.01M 29.2MB/s \r/tmp/validation-hor 100%[===================>] 10.95M 38.0MB/s in 0.3s \n", 95 | "\n", 96 | "2019-04-09 13:43:53 (38.0 MB/s) - ‘/tmp/validation-horse-or-human.zip’ saved [11480187/11480187]\n", 97 | "\n" 98 | ], 99 | "name": "stdout" 100 | } 101 | ] 102 | }, 103 | { 104 | "metadata": { 105 | "id": "9brUxyTpYZHy", 106 | "colab_type": "text" 107 | }, 108 | "cell_type": "markdown", 109 | "source": [ 110 | "The following Python code uses the `os` library to access the file system, and the `zipfile` library to unzip the data. " 111 | ] 112 | }, 113 | { 114 | "metadata": { 115 | "colab_type": "code", 116 | "id": "PLy3pthUS0D2", 117 | "colab": {} 118 | }, 119 | "cell_type": "code", 120 | "source": [ 121 | "import os\n", 122 | "import zipfile\n", 123 | "\n", 124 | "local_zip = '/tmp/horse-or-human.zip'\n", 125 | "with zipfile.ZipFile(local_zip, 'r') as zip_ref:\n", 126 | " zip_ref.extractall('/tmp/horse-or-human')\n", 127 | "local_zip = '/tmp/validation-horse-or-human.zip'\n", 128 | "with zipfile.ZipFile(local_zip, 'r') as zip_ref:\n", 129 | " zip_ref.extractall('/tmp/validation-horse-or-human')\n", 130 | "# the 'with' blocks close each archive automatically" 131 | ], 132 | "execution_count": 0, 133 | "outputs": [] 134 | }, 135 | { 136 | "metadata": { 137 | "colab_type": "text", 138 | "id": "o-qUPyfO7Qr8" 139 | }, 140 | "cell_type": "markdown", 141 | "source": [ 142 | "The contents of the .zip are extracted to the base directory `/tmp/horse-or-human`, which in turn contains `horses` and `humans` subdirectories.\n", 143 | "\n", 144 | "In short: the training set is the data that is used to tell the neural network model 'this is what a horse looks like', 'this is what a human looks like', and so on. \n", 145 | "\n", 146 | "One thing to pay attention to in this sample: we do not explicitly label the images as horses or humans. In the handwriting example earlier, we labelled each image explicitly: 'this is a 1', 'this is a 7', etc. Later you'll see something called an ImageGenerator being used -- it is coded to read images from subdirectories and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. ImageGenerator will label the images appropriately for you, reducing a coding step.
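 For example, with `flow_from_directory`'s default settings (used later in this notebook), class subdirectories are sorted alphabetically and mapped to indices, so here `horses` should become class 0 and `humans` class 1. 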
\n", 147 | "\n", 148 | "Let's define each of these directories:" 149 | ] 150 | }, 151 | { 152 | "metadata": { 153 | "colab_type": "code", 154 | "id": "NR_M9nWN-K8B", 155 | "colab": {} 156 | }, 157 | "cell_type": "code", 158 | "source": [ 159 | "# Directory with our training horse pictures\n", 160 | "train_horse_dir = os.path.join('/tmp/horse-or-human/horses')\n", 161 | "\n", 162 | "# Directory with our training human pictures\n", 163 | "train_human_dir = os.path.join('/tmp/horse-or-human/humans')\n", 164 | "\n", 165 | "# Directory with our training horse pictures\n", 166 | "validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/validation-horses')\n", 167 | "\n", 168 | "# Directory with our training human pictures\n", 169 | "validation_human_dir = os.path.join('/tmp/validation-horse-or-human/validation-humans')" 170 | ], 171 | "execution_count": 0, 172 | "outputs": [] 173 | }, 174 | { 175 | "metadata": { 176 | "colab_type": "text", 177 | "id": "5oqBkNBJmtUv" 178 | }, 179 | "cell_type": "markdown", 180 | "source": [ 181 | "## Building a Small Model from Scratch\n", 182 | "\n", 183 | "But before we continue, let's start defining the model:\n", 184 | "\n", 185 | "Step 1 will be to import tensorflow." 186 | ] 187 | }, 188 | { 189 | "metadata": { 190 | "id": "qvfZg3LQbD-5", 191 | "colab_type": "code", 192 | "colab": {} 193 | }, 194 | "cell_type": "code", 195 | "source": [ 196 | "import tensorflow as tf" 197 | ], 198 | "execution_count": 0, 199 | "outputs": [] 200 | }, 201 | { 202 | "metadata": { 203 | "colab_type": "text", 204 | "id": "BnhYCP4tdqjC" 205 | }, 206 | "cell_type": "markdown", 207 | "source": [ 208 | "We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers." 209 | ] 210 | }, 211 | { 212 | "metadata": { 213 | "id": "gokG5HKpdtzm", 214 | "colab_type": "text" 215 | }, 216 | "cell_type": "markdown", 217 | "source": [ 218 | "Finally we add the densely connected layers. \n", 219 | "\n", 220 | "Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0)." 
221 | ] 222 | }, 223 | { 224 | "metadata": { 225 | "id": "PixZ2s5QbYQ3", 226 | "colab_type": "code", 227 | "colab": { 228 | "base_uri": "https://localhost:8080/", 229 | "height": 88 230 | }, 231 | "outputId": "c6419ae5-dd31-494c-e09d-9a1a7c663141" 232 | }, 233 | "cell_type": "code", 234 | "source": [ 235 | "model = tf.keras.models.Sequential([\n", 236 | " # Note the input shape is the desired size of the image: 150x150 with 3 color channels\n", 237 | " # This is the first convolution\n", 238 | " tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n", 239 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 240 | " # The second convolution\n", 241 | " tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n", 242 | " tf.keras.layers.MaxPooling2D(2,2),\n", 243 | " # The third convolution\n", 244 | " tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n", 245 | " tf.keras.layers.MaxPooling2D(2,2),\n", 246 | " # The fourth convolution (commented out -- optional at this 150x150 input size)\n", 247 | " #tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n", 248 | " #tf.keras.layers.MaxPooling2D(2,2),\n", 249 | " # The fifth convolution\n", 250 | " #tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n", 251 | " #tf.keras.layers.MaxPooling2D(2,2),\n", 252 | " # Flatten the results to feed into a DNN\n", 253 | " tf.keras.layers.Flatten(),\n", 254 | " # 512 neuron hidden layer\n", 255 | " tf.keras.layers.Dense(512, activation='relu'),\n", 256 | " # Only 1 output neuron. It will contain a value from 0-1, where values near 0 indicate one class ('horses') and values near 1 the other ('humans')\n", 257 | " tf.keras.layers.Dense(1, activation='sigmoid')\n", 258 | "])" 259 | ], 260 | "execution_count": 6, 261 | "outputs": [ 262 | { 263 | "output_type": "stream", 264 | "text": [ 265 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n", 266 | "Instructions for updating:\n", 267 | "Colocations handled automatically by placer.\n" 268 | ], 269 | "name": "stdout" 270 | } 271 | ] 272 | }, 273 | { 274 | "metadata": { 275 | "colab_type": "text", 276 | "id": "s9EaFDP5srBa" 277 | }, 278 | "cell_type": "markdown", 279 | "source": [ 280 | "The `model.summary()` method call prints a summary of the network. " 281 | ] 282 | }, 283 | { 284 | "metadata": { 285 | "colab_type": "code", 286 | "id": "7ZKj8392nbgP", 287 | "colab": { 288 | "base_uri": "https://localhost:8080/", 289 | "height": 442 290 | }, 291 | "outputId": "cc4de8dc-6625-410e-b180-41fe0733b8bc" 292 | }, 293 | "cell_type": "code", 294 | "source": [ 295 | "model.summary()" 296 | ], 297 | "execution_count": 7, 298 | "outputs": [ 299 | { 300 | "output_type": "stream", 301 | "text": [ 302 | "_________________________________________________________________\n", 303 | "Layer (type) Output Shape Param # \n", 304 | "=================================================================\n", 305 | "conv2d (Conv2D) (None, 148, 148, 16) 448 \n", 306 | "_________________________________________________________________\n", 307 | "max_pooling2d (MaxPooling2D) (None, 74, 74, 16) 0 \n", 308 | "_________________________________________________________________\n", 309 | "conv2d_1 (Conv2D) (None, 72, 72, 32) 4640 \n", 310 | "_________________________________________________________________\n", 311 | "max_pooling2d_1 (MaxPooling2 (None, 36, 36, 32) 0 \n", 312 | "_________________________________________________________________\n", 313 | "conv2d_2 (Conv2D) (None, 34, 34, 64) 18496 
\n", 314 | "_________________________________________________________________\n", 315 | "max_pooling2d_2 (MaxPooling2 (None, 17, 17, 64) 0 \n", 316 | "_________________________________________________________________\n", 317 | "flatten (Flatten) (None, 18496) 0 \n", 318 | "_________________________________________________________________\n", 319 | "dense (Dense) (None, 512) 9470464 \n", 320 | "_________________________________________________________________\n", 321 | "dense_1 (Dense) (None, 1) 513 \n", 322 | "=================================================================\n", 323 | "Total params: 9,494,561\n", 324 | "Trainable params: 9,494,561\n", 325 | "Non-trainable params: 0\n", 326 | "_________________________________________________________________\n" 327 | ], 328 | "name": "stdout" 329 | } 330 | ] 331 | }, 332 | { 333 | "metadata": { 334 | "colab_type": "text", 335 | "id": "DmtkTn06pKxF" 336 | }, 337 | "cell_type": "markdown", 338 | "source": [ 339 | "The \"output shape\" column shows how the size of your feature map evolves in each successive layer. The convolution layers reduce the size of the feature maps by a bit due to padding, and each pooling layer halves the dimensions." 340 | ] 341 | }, 342 | { 343 | "metadata": { 344 | "colab_type": "text", 345 | "id": "PEkKSpZlvJXA" 346 | }, 347 | "cell_type": "markdown", 348 | "source": [ 349 | "Next, we'll configure the specifications for model training. We will train our model with the `binary_crossentropy` loss, because it's a binary classification problem and our final activation is a sigmoid. (For a refresher on loss metrics, see the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) We will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, we will want to monitor classification accuracy.\n", 350 | "\n", 351 | "**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/#SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descent#Adam) and [Adagrad](https://developers.google.com/machine-learning/glossary/#AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.)" 352 | ] 353 | }, 354 | { 355 | "metadata": { 356 | "colab_type": "code", 357 | "id": "8DHWhFP_uhq3", 358 | "colab": {} 359 | }, 360 | "cell_type": "code", 361 | "source": [ 362 | "from tensorflow.keras.optimizers import RMSprop\n", 363 | "\n", 364 | "model.compile(loss='binary_crossentropy',\n", 365 | " optimizer=RMSprop(lr=0.001),\n", 366 | " metrics=['acc'])" 367 | ], 368 | "execution_count": 0, 369 | "outputs": [] 370 | }, 371 | { 372 | "metadata": { 373 | "colab_type": "text", 374 | "id": "Sn9m9D3UimHM" 375 | }, 376 | "cell_type": "markdown", 377 | "source": [ 378 | "### Data Preprocessing\n", 379 | "\n", 380 | "Let's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network. We'll have one generator for the training images and one for the validation images. 
Our generators will yield batches of images of size 150x150 and their labels (binary).\n", 381 | "\n", 382 | "As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).\n", 383 | "\n", 384 | "In Keras this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. These generators can then be used with the Keras model methods that accept data generators as inputs: `fit_generator`, `evaluate_generator`, and `predict_generator`." 385 | ] 386 | }, 387 | { 388 | "metadata": { 389 | "colab_type": "code", 390 | "id": "ClebU9NJg99G", 391 | "colab": { 392 | "base_uri": "https://localhost:8080/", 393 | "height": 51 394 | }, 395 | "outputId": "a9e9d8f0-4c5e-42ca-9304-14d22c3e0052" 396 | }, 397 | "cell_type": "code", 398 | "source": [ 399 | "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n", 400 | "\n", 401 | "# All images will be rescaled by 1./255\n", 402 | "train_datagen = ImageDataGenerator(rescale=1/255)\n", 403 | "validation_datagen = ImageDataGenerator(rescale=1/255)\n", 404 | "\n", 405 | "# Flow training images in batches of 128 using train_datagen generator\n", 406 | "train_generator = train_datagen.flow_from_directory(\n", 407 | " '/tmp/horse-or-human/', # This is the source directory for training images\n", 408 | " target_size=(150, 150), # All images will be resized to 150x150\n", 409 | " batch_size=128,\n", 410 | " # Since we use binary_crossentropy loss, we need binary labels\n", 411 | " class_mode='binary')\n", 412 | "\n", 413 | "# Flow validation images in batches of 32 using validation_datagen generator\n", 414 | "validation_generator = validation_datagen.flow_from_directory(\n", 415 | " '/tmp/validation-horse-or-human/', # This is the source directory for validation images\n", 416 | " target_size=(150, 150), # All images will be resized to 150x150\n", 417 | " batch_size=32,\n", 418 | " # Since we use binary_crossentropy loss, we need binary labels\n", 419 | " class_mode='binary')" 420 | ], 421 | "execution_count": 9, 422 | "outputs": [ 423 | { 424 | "output_type": "stream", 425 | "text": [ 426 | "Found 1027 images belonging to 2 classes.\n", 427 | "Found 256 images belonging to 2 classes.\n" 428 | ], 429 | "name": "stdout" 430 | } 431 | ] 432 | }, 433 | { 434 | "metadata": { 435 | "colab_type": "text", 436 | "id": "mu3Jdwkjwax4" 437 | }, 438 | "cell_type": "markdown", 439 | "source": [ 440 | "### Training\n", 441 | "Let's train for 15 epochs -- this may take a few minutes to run.\n", 442 | "\n", 443 | "Do note the values per epoch.\n", 444 | "\n", 445 | "The loss and accuracy are a great indication of training progress: the model makes a guess at the class of each training image, then measures that guess against the known label. Accuracy is the portion of correct guesses.
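 With `batch_size=128` and `steps_per_epoch=8`, each epoch draws 8 × 128 = 1024 samples -- essentially all 1,027 training images -- and `validation_steps=8` at `batch_size=32` covers the full 8 × 32 = 256 validation images. 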
" 446 | ] 447 | }, 448 | { 449 | "metadata": { 450 | "colab_type": "code", 451 | "id": "Fb1_lgobv81m", 452 | "colab": { 453 | "base_uri": "https://localhost:8080/", 454 | "height": 853 455 | }, 456 | "outputId": "a41ecf71-0541-4e41-94b6-fd10ecbd99a5" 457 | }, 458 | "cell_type": "code", 459 | "source": [ 460 | "history = model.fit_generator(\n", 461 | " train_generator,\n", 462 | " steps_per_epoch=8, \n", 463 | " epochs=15,\n", 464 | " verbose=1,\n", 465 | " validation_data = validation_generator,\n", 466 | " validation_steps=8)" 467 | ], 468 | "execution_count": 10, 469 | "outputs": [ 470 | { 471 | "output_type": "stream", 472 | "text": [ 473 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n", 474 | "Instructions for updating:\n", 475 | "Use tf.cast instead.\n", 476 | "Epoch 1/15\n", 477 | "8/8 [==============================] - 1s 171ms/step - loss: 0.4917 - acc: 0.8242\n", 478 | "9/9 [==============================] - 8s 886ms/step - loss: 1.3348 - acc: 0.5404 - val_loss: 0.4917 - val_acc: 0.8242\n", 479 | "Epoch 2/15\n", 480 | "8/8 [==============================] - 1s 172ms/step - loss: 0.3586 - acc: 0.8477\n", 481 | "9/9 [==============================] - 7s 810ms/step - loss: 0.5800 - acc: 0.7683 - val_loss: 0.3586 - val_acc: 0.8477\n", 482 | "Epoch 3/15\n", 483 | "8/8 [==============================] - 1s 166ms/step - loss: 0.7681 - acc: 0.8008\n", 484 | "9/9 [==============================] - 7s 768ms/step - loss: 0.4404 - acc: 0.8053 - val_loss: 0.7681 - val_acc: 0.8008\n", 485 | "Epoch 4/15\n", 486 | "8/8 [==============================] - 1s 170ms/step - loss: 0.6159 - acc: 0.7070\n", 487 | "9/9 [==============================] - 7s 792ms/step - loss: 0.3083 - acc: 0.9143 - val_loss: 0.6159 - val_acc: 0.7070\n", 488 | "Epoch 5/15\n", 489 | "8/8 [==============================] - 1s 163ms/step - loss: 1.3329 - acc: 0.7305\n", 490 | "9/9 [==============================] - 7s 768ms/step - loss: 0.1992 - acc: 0.9318 - val_loss: 1.3329 - val_acc: 0.7305\n", 491 | "Epoch 6/15\n", 492 | "8/8 [==============================] - 1s 171ms/step - loss: 1.2009 - acc: 0.7812\n", 493 | "9/9 [==============================] - 7s 776ms/step - loss: 0.0626 - acc: 0.9766 - val_loss: 1.2009 - val_acc: 0.7812\n", 494 | "Epoch 7/15\n", 495 | "8/8 [==============================] - 1s 172ms/step - loss: 1.3188 - acc: 0.7383\n", 496 | "9/9 [==============================] - 7s 790ms/step - loss: 0.0376 - acc: 0.9873 - val_loss: 1.3188 - val_acc: 0.7383\n", 497 | "Epoch 8/15\n", 498 | "8/8 [==============================] - 1s 163ms/step - loss: 3.7869 - acc: 0.5430\n", 499 | "9/9 [==============================] - 7s 761ms/step - loss: 0.0386 - acc: 0.9864 - val_loss: 3.7869 - val_acc: 0.5430\n", 500 | "Epoch 9/15\n", 501 | "8/8 [==============================] - 1s 166ms/step - loss: 1.9365 - acc: 0.7461\n", 502 | "9/9 [==============================] - 7s 773ms/step - loss: 0.6713 - acc: 0.8832 - val_loss: 1.9365 - val_acc: 0.7461\n", 503 | "Epoch 10/15\n", 504 | "8/8 [==============================] - 1s 167ms/step - loss: 1.4046 - acc: 0.8281\n", 505 | "9/9 [==============================] - 7s 803ms/step - loss: 0.0375 - acc: 0.9844 - val_loss: 1.4046 - val_acc: 0.8281\n", 506 | "Epoch 11/15\n", 507 | "8/8 [==============================] - 1s 168ms/step - loss: 1.4844 - acc: 0.8242\n", 508 | "9/9 [==============================] - 7s 
764ms/step - loss: 0.0201 - acc: 0.9932 - val_loss: 1.4844 - val_acc: 0.8242\n", 509 | "Epoch 12/15\n", 510 | "8/8 [==============================] - 1s 166ms/step - loss: 1.7159 - acc: 0.8086\n", 511 | "9/9 [==============================] - 7s 780ms/step - loss: 0.0081 - acc: 0.9971 - val_loss: 1.7159 - val_acc: 0.8086\n", 512 | "Epoch 13/15\n", 513 | "8/8 [==============================] - 1s 167ms/step - loss: 1.1833 - acc: 0.8398\n", 514 | "9/9 [==============================] - 7s 779ms/step - loss: 0.2222 - acc: 0.9172 - val_loss: 1.1833 - val_acc: 0.8398\n", 515 | "Epoch 14/15\n", 516 | "8/8 [==============================] - 1s 168ms/step - loss: 1.3563 - acc: 0.8398\n", 517 | "9/9 [==============================] - 7s 770ms/step - loss: 0.0192 - acc: 0.9951 - val_loss: 1.3563 - val_acc: 0.8398\n", 518 | "Epoch 15/15\n", 519 | "8/8 [==============================] - 1s 168ms/step - loss: 1.2044 - acc: 0.8789\n", 520 | "9/9 [==============================] - 7s 762ms/step - loss: 0.0071 - acc: 0.9981 - val_loss: 1.2044 - val_acc: 0.8789\n" 521 | ], 522 | "name": "stdout" 523 | } 524 | ] 525 | }, 526 | { 527 | "metadata": { 528 | "id": "o6vSHzPR2ghH", 529 | "colab_type": "text" 530 | }, 531 | "cell_type": "markdown", 532 | "source": [ 533 | "### Running the Model\n", 534 | "\n", 535 | "Let's now take a look at actually running a prediction with the model. This code lets you choose one or more files from your file system; it then uploads them and runs them through the model, indicating whether each image is a horse or a human." 536 | ] 537 | }, 538 | { 539 | "metadata": { 540 | "id": "DoWp43WxJDNT", 541 | "colab_type": "code", 542 | "colab": { 543 | "resources": { 544 | "http://localhost:8080/nbextensions/google.colab/files.js": { 545 | "data": 
"Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlIExMQwovLwovLyBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKLy8geW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3aXRoIHRoZSBMaWNlbnNlLgovLyBZb3UgbWF5IG9idGFpbiBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKLy8KLy8gICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKLy8KLy8gVW5sZXNzIHJlcXVpcmVkIGJ5IGFwcGxpY2FibGUgbGF3IG9yIGFncmVlZCB0byBpbiB3cml0aW5nLCBzb2Z0d2FyZQovLyBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAovLyBXSVRIT1VUIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4KLy8gU2VlIHRoZSBMaWNlbnNlIGZvciB0aGUgc3BlY2lmaWMgbGFuZ3VhZ2UgZ292ZXJuaW5nIHBlcm1pc3Npb25zIGFuZAovLyBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCi8qKgogKiBAZmlsZW92ZXJ2aWV3IEhlbHBlcnMgZm9yIGdvb2dsZS5jb2xhYiBQeXRob24gbW9kdWxlLgogKi8KKGZ1bmN0aW9uKHNjb3BlKSB7CmZ1bmN0aW9uIHNwYW4odGV4dCwgc3R5bGVBdHRyaWJ1dGVzID0ge30pIHsKICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnc3BhbicpOwogIGVsZW1lbnQudGV4dENvbnRlbnQgPSB0ZXh0OwogIGZvciAoY29uc3Qga2V5IG9mIE9iamVjdC5rZXlzKHN0eWxlQXR0cmlidXRlcykpIHsKICAgIGVsZW1lbnQuc3R5bGVba2V5XSA9IHN0eWxlQXR0cmlidXRlc1trZXldOwogIH0KICByZXR1cm4gZWxlbWVudDsKfQoKLy8gTWF4IG51bWJlciBvZiBieXRlcyB3aGljaCB3aWxsIGJlIHVwbG9hZGVkIGF0IGEgdGltZS4KY29uc3QgTUFYX1BBWUxPQURfU0laRSA9IDEwMCAqIDEwMjQ7Ci8vIE1heCBhbW91bnQgb2YgdGltZSB0byBibG9jayB3YWl0aW5nIGZvciB0aGUgdXNlci4KY29uc3QgRklMRV9DSEFOR0VfVElNRU9VVF9NUyA9IDMwICogMTAwMDsKCmZ1bmN0aW9uIF91cGxvYWRGaWxlcyhpbnB1dElkLCBvdXRwdXRJZCkgewogIGNvbnN0IHN0ZXBzID0gdXBsb2FkRmlsZXNTdGVwKGlucHV0SWQsIG91dHB1dElkKTsKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIC8vIENhY2hlIHN0ZXBzIG9uIHRoZSBvdXRwdXRFbGVtZW50IHRvIG1ha2UgaXQgYXZhaWxhYmxlIGZvciB0aGUgbmV4dCBjYWxsCiAgLy8gdG8gdXBsb2FkRmlsZXNDb250aW51ZSBmcm9tIFB5dGhvbi4KICBvdXRwdXRFbGVtZW50LnN0ZXBzID0gc3RlcHM7CgogIHJldHVybiBfdXBsb2FkRmlsZXNDb250aW51ZShvdXRwdXRJZCk7Cn0KCi8vIFRoaXMgaXMgcm91Z2hseSBhbiBhc3luYyBnZW5lcmF0b3IgKG5vdCBzdXBwb3J0ZWQgaW4gdGhlIGJyb3dzZXIgeWV0KSwKLy8gd2hlcmUgdGhlcmUgYXJlIG11bHRpcGxlIGFzeW5jaHJvbm91cyBzdGVwcyBhbmQgdGhlIFB5dGhvbiBzaWRlIGlzIGdvaW5nCi8vIHRvIHBvbGwgZm9yIGNvbXBsZXRpb24gb2YgZWFjaCBzdGVwLgovLyBUaGlzIHVzZXMgYSBQcm9taXNlIHRvIGJsb2NrIHRoZSBweXRob24gc2lkZSBvbiBjb21wbGV0aW9uIG9mIGVhY2ggc3RlcCwKLy8gdGhlbiBwYXNzZXMgdGhlIHJlc3VsdCBvZiB0aGUgcHJldmlvdXMgc3RlcCBhcyB0aGUgaW5wdXQgdG8gdGhlIG5leHQgc3RlcC4KZnVuY3Rpb24gX3VwbG9hZEZpbGVzQ29udGludWUob3V0cHV0SWQpIHsKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIGNvbnN0IHN0ZXBzID0gb3V0cHV0RWxlbWVudC5zdGVwczsKCiAgY29uc3QgbmV4dCA9IHN0ZXBzLm5leHQob3V0cHV0RWxlbWVudC5sYXN0UHJvbWlzZVZhbHVlKTsKICByZXR1cm4gUHJvbWlzZS5yZXNvbHZlKG5leHQudmFsdWUucHJvbWlzZSkudGhlbigodmFsdWUpID0+IHsKICAgIC8vIENhY2hlIHRoZSBsYXN0IHByb21pc2UgdmFsdWUgdG8gbWFrZSBpdCBhdmFpbGFibGUgdG8gdGhlIG5leHQKICAgIC8vIHN0ZXAgb2YgdGhlIGdlbmVyYXRvci4KICAgIG91dHB1dEVsZW1lbnQubGFzdFByb21pc2VWYWx1ZSA9IHZhbHVlOwogICAgcmV0dXJuIG5leHQudmFsdWUucmVzcG9uc2U7CiAgfSk7Cn0KCi8qKgogKiBHZW5lcmF0b3IgZnVuY3Rpb24gd2hpY2ggaXMgY2FsbGVkIGJldHdlZW4gZWFjaCBhc3luYyBzdGVwIG9mIHRoZSB1cGxvYWQKICogcHJvY2Vzcy4KICogQHBhcmFtIHtzdHJpbmd9IGlucHV0SWQgRWxlbWVudCBJRCBvZiB0aGUgaW5wdXQgZmlsZSBwaWNrZXIgZWxlbWVudC4KICogQHBhcmFtIHtzdHJpbmd9IG91dHB1dElkIEVsZW1lbnQgSUQgb2YgdGhlIG91dHB1dCBkaXNwbGF5LgogKiBAcmV0dXJuIHshSXRlcmFibGU8IU9iamVjdD59IEl0ZXJhYmxlIG9mIG5leHQgc3RlcHMuCiAqLwpmdW5jdGlvbiogdXBsb2FkRmlsZXNTdGVwKGlucHV0SWQsIG91dHB1dElkKSB7CiAgY29uc3QgaW5wdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQoaW5wdXRJZCk7CiAgaW5wdXRFbGVtZW50LmRpc2FibGVkID0gZm
Fsc2U7CgogIGNvbnN0IG91dHB1dEVsZW1lbnQgPSBkb2N1bWVudC5nZXRFbGVtZW50QnlJZChvdXRwdXRJZCk7CiAgb3V0cHV0RWxlbWVudC5pbm5lckhUTUwgPSAnJzsKCiAgY29uc3QgcGlja2VkUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICBpbnB1dEVsZW1lbnQuYWRkRXZlbnRMaXN0ZW5lcignY2hhbmdlJywgKGUpID0+IHsKICAgICAgcmVzb2x2ZShlLnRhcmdldC5maWxlcyk7CiAgICB9KTsKICB9KTsKCiAgY29uc3QgY2FuY2VsID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnYnV0dG9uJyk7CiAgaW5wdXRFbGVtZW50LnBhcmVudEVsZW1lbnQuYXBwZW5kQ2hpbGQoY2FuY2VsKTsKICBjYW5jZWwudGV4dENvbnRlbnQgPSAnQ2FuY2VsIHVwbG9hZCc7CiAgY29uc3QgY2FuY2VsUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICBjYW5jZWwub25jbGljayA9ICgpID0+IHsKICAgICAgcmVzb2x2ZShudWxsKTsKICAgIH07CiAgfSk7CgogIC8vIENhbmNlbCB1cGxvYWQgaWYgdXNlciBoYXNuJ3QgcGlja2VkIGFueXRoaW5nIGluIHRpbWVvdXQuCiAgY29uc3QgdGltZW91dFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgc2V0VGltZW91dCgoKSA9PiB7CiAgICAgIHJlc29sdmUobnVsbCk7CiAgICB9LCBGSUxFX0NIQU5HRV9USU1FT1VUX01TKTsKICB9KTsKCiAgLy8gV2FpdCBmb3IgdGhlIHVzZXIgdG8gcGljayB0aGUgZmlsZXMuCiAgY29uc3QgZmlsZXMgPSB5aWVsZCB7CiAgICBwcm9taXNlOiBQcm9taXNlLnJhY2UoW3BpY2tlZFByb21pc2UsIHRpbWVvdXRQcm9taXNlLCBjYW5jZWxQcm9taXNlXSksCiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdzdGFydGluZycsCiAgICB9CiAgfTsKCiAgaWYgKCFmaWxlcykgewogICAgcmV0dXJuIHsKICAgICAgcmVzcG9uc2U6IHsKICAgICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICAgIH0KICAgIH07CiAgfQoKICBjYW5jZWwucmVtb3ZlKCk7CgogIC8vIERpc2FibGUgdGhlIGlucHV0IGVsZW1lbnQgc2luY2UgZnVydGhlciBwaWNrcyBhcmUgbm90IGFsbG93ZWQuCiAgaW5wdXRFbGVtZW50LmRpc2FibGVkID0gdHJ1ZTsKCiAgZm9yIChjb25zdCBmaWxlIG9mIGZpbGVzKSB7CiAgICBjb25zdCBsaSA9IGRvY3VtZW50LmNyZWF0ZUVsZW1lbnQoJ2xpJyk7CiAgICBsaS5hcHBlbmQoc3BhbihmaWxlLm5hbWUsIHtmb250V2VpZ2h0OiAnYm9sZCd9KSk7CiAgICBsaS5hcHBlbmQoc3BhbigKICAgICAgICBgKCR7ZmlsZS50eXBlIHx8ICduL2EnfSkgLSAke2ZpbGUuc2l6ZX0gYnl0ZXMsIGAgKwogICAgICAgIGBsYXN0IG1vZGlmaWVkOiAkewogICAgICAgICAgICBmaWxlLmxhc3RNb2RpZmllZERhdGUgPyBmaWxlLmxhc3RNb2RpZmllZERhdGUudG9Mb2NhbGVEYXRlU3RyaW5nKCkgOgogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAnbi9hJ30gLSBgKSk7CiAgICBjb25zdCBwZXJjZW50ID0gc3BhbignMCUgZG9uZScpOwogICAgbGkuYXBwZW5kQ2hpbGQocGVyY2VudCk7CgogICAgb3V0cHV0RWxlbWVudC5hcHBlbmRDaGlsZChsaSk7CgogICAgY29uc3QgZmlsZURhdGFQcm9taXNlID0gbmV3IFByb21pc2UoKHJlc29sdmUpID0+IHsKICAgICAgY29uc3QgcmVhZGVyID0gbmV3IEZpbGVSZWFkZXIoKTsKICAgICAgcmVhZGVyLm9ubG9hZCA9IChlKSA9PiB7CiAgICAgICAgcmVzb2x2ZShlLnRhcmdldC5yZXN1bHQpOwogICAgICB9OwogICAgICByZWFkZXIucmVhZEFzQXJyYXlCdWZmZXIoZmlsZSk7CiAgICB9KTsKICAgIC8vIFdhaXQgZm9yIHRoZSBkYXRhIHRvIGJlIHJlYWR5LgogICAgbGV0IGZpbGVEYXRhID0geWllbGQgewogICAgICBwcm9taXNlOiBmaWxlRGF0YVByb21pc2UsCiAgICAgIHJlc3BvbnNlOiB7CiAgICAgICAgYWN0aW9uOiAnY29udGludWUnLAogICAgICB9CiAgICB9OwoKICAgIC8vIFVzZSBhIGNodW5rZWQgc2VuZGluZyB0byBhdm9pZCBtZXNzYWdlIHNpemUgbGltaXRzLiBTZWUgYi82MjExNTY2MC4KICAgIGxldCBwb3NpdGlvbiA9IDA7CiAgICB3aGlsZSAocG9zaXRpb24gPCBmaWxlRGF0YS5ieXRlTGVuZ3RoKSB7CiAgICAgIGNvbnN0IGxlbmd0aCA9IE1hdGgubWluKGZpbGVEYXRhLmJ5dGVMZW5ndGggLSBwb3NpdGlvbiwgTUFYX1BBWUxPQURfU0laRSk7CiAgICAgIGNvbnN0IGNodW5rID0gbmV3IFVpbnQ4QXJyYXkoZmlsZURhdGEsIHBvc2l0aW9uLCBsZW5ndGgpOwogICAgICBwb3NpdGlvbiArPSBsZW5ndGg7CgogICAgICBjb25zdCBiYXNlNjQgPSBidG9hKFN0cmluZy5mcm9tQ2hhckNvZGUuYXBwbHkobnVsbCwgY2h1bmspKTsKICAgICAgeWllbGQgewogICAgICAgIHJlc3BvbnNlOiB7CiAgICAgICAgICBhY3Rpb246ICdhcHBlbmQnLAogICAgICAgICAgZmlsZTogZmlsZS5uYW1lLAogICAgICAgICAgZGF0YTogYmFzZTY0LAogICAgICAgIH0sCiAgICAgIH07CiAgICAgIHBlcmNlbnQudGV4dENvbnRlbnQgPQogICAgICAgICAgYCR7TWF0aC5yb3VuZCgocG9zaXRpb24gLyBmaWxlRGF0YS5ieXRlTGVuZ3RoKSAqIDEwMCl9JSBkb25lYDsKICAgIH0KICB9CgogIC8vIEFsbCBkb25lLgogIHlpZWxkIHsKICAgIHJlc3BvbnNlOiB7CiAgICAgIGFjdGlvbjogJ2NvbXBsZXRlJywKICAgIH0KICB9Owp9CgpzY29wZ
", 546 | "ok": true, 547 | "headers": [ 548 | [ 549 | "content-type", 550 | "application/javascript" 551 | ] 552 | ], 553 | "status": 200, 554 | "status_text": "" 555 | } 556 | }, 557 | "base_uri": "https://localhost:8080/", 558 | "height": 125 559 | }, 560 | "outputId": "55a8bc7e-7f88-4805-a7eb-c9cdf8f8293a" 561 | }, 562 | "cell_type": "code", 563 | "source": [ 564 | "import numpy as np\n", 565 | "from google.colab import files\n", 566 | "from tensorflow.keras.preprocessing import image\n", 567 | "\n", 568 | "uploaded = files.upload()\n", 569 | "\n", 570 | "for fn in uploaded.keys():\n", 571 | " \n", 572 | " # predicting images\n", 573 | " path = '/content/' + fn\n", 574 | " img = image.load_img(path, target_size=(150, 150))\n", 575 | " x = image.img_to_array(img) / 255.0 # rescale to match the training preprocessing\n", 576 | " x = np.expand_dims(x, axis=0)\n", 577 | "\n", 578 | " images = np.vstack([x])\n", 579 | " classes = model.predict(images, batch_size=10)\n", 580 | " print(classes[0])\n", 581 | " if classes[0] > 0.5:\n", 582 | " print(fn + \" is a human\")\n", 583 | " else:\n", 584 | " print(fn + \" is a horse\")\n", 585 | " " 586 | ], 587 | "execution_count": 11, 588 | "outputs": [ 589 | { 590 | "output_type": "stream", 591 | "text": [ 592 | "Using TensorFlow backend.\n" 593 | ], 594 | "name": "stderr" 595 | }, 596 | { 597 | "output_type": "display_data", 598 | "data": { 599 | "text/html": [ 600 | "\n", 601 | " \n", 602 | " \n", 603 | " Upload widget is only available when the cell has been executed in the\n", 604 | " current browser session. Please rerun this cell to enable.\n", 605 | " \n", 606 | " " 607 | ], 608 | "text/plain": [ 609 | "" 610 | ] 611 | }, 612 | "metadata": { 613 | "tags": [] 614 | } 615 | }, 616 | { 617 | "output_type": "stream", 618 | "text": [ 619 | "Saving human_horses_06_womenridesman.jpg to human_horses_06_womenridesman.jpg\n", 620 | "[0.]\n", 621 | "human_horses_06_womenridesman.jpg is a horse\n" 622 | ], 623 | "name": "stdout" 624 | } 625 | ] 626 | },
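{ "metadata": { "id": "AddedLocalPredictText", "colab_type": "text" }, "cell_type": "markdown", "source": [ "If you are running outside Colab (or just want to skip the upload widget), the minimal sketch below runs the same prediction on an image that is already on disk. The file name is an assumption -- point `img_path` at any image extracted above." ] }, { "metadata": { "id": "AddedLocalPredictCode", "colab_type": "code", "colab": {} }, "cell_type": "code", "source": [ "# Minimal sketch (not part of the original lesson): classify one on-disk image.\n", "# 'img_path' is a hypothetical example file -- substitute any real image path.\n", "import numpy as np\n", "from tensorflow.keras.preprocessing import image\n", "\n", "img_path = '/tmp/validation-horse-or-human/validation-horses/horse1-000.png'\n", "img = image.load_img(img_path, target_size=(150, 150)) # match the model input\n", "x = image.img_to_array(img) / 255.0 # rescale exactly as in training\n", "x = np.expand_dims(x, axis=0)\n", "\n", "score = model.predict(x)[0][0] # sigmoid output in (0, 1)\n", "print('human' if score > 0.5 else 'horse', score)" ], "execution_count": 0, "outputs": [] },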
639 | ] 640 | }, 641 | { 642 | "metadata": { 643 | "colab_type": "code", 644 | "id": "-5tES8rXFjux", 645 | "colab": {} 646 | }, 647 | "cell_type": "code", 648 | "source": [ 649 | "import numpy as np\n", 650 | "import random\n", 651 | "from tensorflow.keras.preprocessing.image import img_to_array, load_img\n", 652 | "import matplotlib.pyplot as plt # needed for plt.figure/plt.imshow below\n", 653 | "# Let's define a new Model that will take an image as input, and will output\n", 654 | "# intermediate representations for all layers in the previous model after\n", 655 | "# the first.\n", 656 | "successive_outputs = [layer.output for layer in model.layers[1:]]\n", 657 | "\n", 658 | "visualization_model = tf.keras.models.Model(inputs=model.input, outputs=successive_outputs)\n", 659 | "# Let's prepare a random input image from the training set.\n", 660 | "horse_img_files = [os.path.join(train_horse_dir, f) for f in os.listdir(train_horse_dir)]\n", 661 | "human_img_files = [os.path.join(train_human_dir, f) for f in os.listdir(train_human_dir)]\n", 662 | "img_path = random.choice(horse_img_files + human_img_files)\n", 663 | "\n", 664 | "img = load_img(img_path, target_size=(150, 150)) # this is a PIL image, sized to match the model's input\n", 665 | "x = img_to_array(img) # Numpy array with shape (150, 150, 3)\n", 666 | "x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 150, 150, 3)\n", 667 | "\n", 668 | "# Rescale by 1/255\n", 669 | "x /= 255\n", 670 | "\n", 671 | "# Let's run our image through our network, thus obtaining all\n", 672 | "# intermediate representations for this image.\n", 673 | "successive_feature_maps = visualization_model.predict(x)\n", 674 | "\n", 675 | "# These are the names of the layers, so can have them as part of our plot\n", 676 | "layer_names = [layer.name for layer in model.layers[1:]] # skip the first layer so names stay aligned with successive_outputs\n", 677 | "\n", 678 | "# Now let's display our representations\n", 679 | "for layer_name, feature_map in zip(layer_names, successive_feature_maps):\n", 680 | " if len(feature_map.shape) == 4:\n", 681 | " # Just do this for the conv / maxpool layers, not the fully-connected layers\n", 682 | " n_features = feature_map.shape[-1] # number of features in feature map\n", 683 | " # The feature map has shape (1, size, size, n_features)\n", 684 | " size = feature_map.shape[1]\n", 685 | " # We will tile our images in this matrix\n", 686 | " display_grid = np.zeros((size, size * n_features))\n", 687 | " for i in range(n_features):\n", 688 | " # Postprocess the feature to make it visually palatable\n", 689 | " x = feature_map[0, :, :, i]\n", 690 | " x -= x.mean()\n", 691 | " x /= (x.std() + 1e-5) # small epsilon avoids dividing by zero on all-zero maps\n", 692 | " x *= 64\n", 693 | " x += 128\n", 694 | " x = np.clip(x, 0, 255).astype('uint8')\n", 695 | " # We'll tile each filter into this big horizontal grid\n", 696 | " display_grid[:, i * size : (i + 1) * size] = x\n", 697 | " # Display the grid\n", 698 | " scale = 20. / n_features\n", 699 | " plt.figure(figsize=(scale * n_features, scale))\n", 700 | " plt.title(layer_name)\n", 701 | " plt.grid(False)\n", 702 | " plt.imshow(display_grid, aspect='auto', cmap='viridis')" 703 | ], 704 | "execution_count": 0, 705 | "outputs": [] 706 | }, 707 | { 708 | "metadata": { 709 | "colab_type": "text", 710 | "id": "tuqK2arJL0wo" 711 | }, 712 | "cell_type": "markdown", 713 | "source": [ 714 | "As you can see, we go from the raw pixels of the image to increasingly abstract and compact representations. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being \"activated\"; most are set to zero. 
This is called \"sparsity\": with ReLU activations, any unit whose input is negative outputs exactly zero, so for a given image many filters simply switch off. Representation sparsity is a key feature of deep learning.\n", 715 | "\n", 716 | "\n", 717 | "These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline." 718 | ] 719 | }, 720 | { 721 | "metadata": { 722 | "colab_type": "text", 723 | "id": "j4IBgYCYooGD" 724 | }, 725 | "cell_type": "markdown", 726 | "source": [ 727 | "## Clean Up\n", 728 | "\n", 729 | "Before running the next exercise, run the following cell to terminate the kernel and free memory resources:" 730 | ] 731 | }, 732 | { 733 | "metadata": { 734 | "colab_type": "code", 735 | "id": "651IgjLyo-Jx", 736 | "colab": {} 737 | }, 738 | "cell_type": "code", 739 | "source": [ 740 | "import os, signal\n", 741 | "os.kill(os.getpid(), signal.SIGKILL)" 742 | ], 743 | "execution_count": 0, 744 | "outputs": [] 745 | } 746 | ] 747 | } -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # intro-to-tensorflow-for-ai-coursera 2 | Course: Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning 3 | 4 | Specialization: TensorFlow: From Basics to Mastery
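5 | 
6 | These notebooks target the TensorFlow 1.x-era Colab runtime (the saved outputs show TF 1.x deprecation warnings). If you run them elsewhere, a minimal sanity check -- the version comment below is an inference from those outputs, not a hard requirement:
7 | 
8 | ```python
9 | # Quick environment check (sketch): the notebooks assume a TF 1.x runtime
10 | import tensorflow as tf
11 | print(tf.__version__)  # the saved outputs look like a 1.13-era release
12 | ```
--------------------------------------------------------------------------------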