├── README.md ├── Assignment_001.ipynb └── Assignment_002.ipynb /README.md: -------------------------------------------------------------------------------- 1 | # DeepLearningWithTensorflow 2 | A 4-month-long course on Codanics -------------------------------------------------------------------------------- /Assignment_001.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "id": "mw2VBrBcgvGa" 7 | }, 8 | "source": [ 9 | "# Week 1 Assignment: Housing Prices\n", 10 | "\n", 11 | "In this exercise you'll try to build a neural network that predicts the price of a house according to a simple formula.\n", 12 | "\n", 13 | "Imagine that house pricing is as easy as:\n", 14 | "\n", 15 | "A house has a base cost of 50k, and every additional bedroom adds a cost of 50k. This will make a 1 bedroom house cost 100k, a 2 bedroom house cost 150k, etc.\n", 16 | "\n", 17 | "How would you create a neural network that learns this relationship so that it would predict a 7 bedroom house as costing close to 400k?\n", 18 | "\n", 19 | "Hint: Your network might work better if you scale the house price down. You don't have to give the answer 400...it might be better to create something that predicts the number 4, and then your answer is in the 'hundreds of thousands', etc." 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": { 26 | "id": "PUNO2E6SeURH", 27 | "tags": [ 28 | "graded" 29 | ] 30 | }, 31 | "outputs": [], 32 | "source": [ 33 | "import tensorflow as tf\n", 34 | "import numpy as np" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": null, 40 | "metadata": { 41 | "id": "B-74xrKrBqGJ", 42 | "tags": [ 43 | "graded" 44 | ] 45 | }, 46 | "outputs": [], 47 | "source": [ 48 | "# GRADED FUNCTION: house_model\n", 49 | "def house_model():\n", 50 | "    ### START CODE HERE\n", 51 | "    \n", 52 | "    # Define input and output tensors with the values for houses with 1 up to 6 bedrooms\n", 53 | "    # Hint: Remember to explicitly set the dtype as float\n", 54 | "    xs = None\n", 55 | "    ys = None\n", 56 | "    \n", 57 | "    # Define your model (should be a model with 1 dense layer and 1 unit)\n", 58 | "    model = None\n", 59 | "    \n", 60 | "    # Compile your model\n", 61 | "    # Set the optimizer to Stochastic Gradient Descent\n", 62 | "    # and use Mean Squared Error as the loss function\n", 63 | "    model.compile(optimizer=None, loss=None)\n", 64 | "    \n", 65 | "    # Train your model for 1000 epochs by feeding the i/o tensors\n", 66 | "    model.fit(None, None, epochs=None)\n", 67 | "    \n", 68 | "    ### END CODE HERE\n", 69 | "    return model" 70 | ] 71 | }, 72 | { 73 | "cell_type": "markdown", 74 | "metadata": {}, 75 | "source": [ 76 | "Now that you have a function that returns a compiled and trained model when invoked, use it to get the model to predict the price of houses: " 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": null, 82 | "metadata": { 83 | "tags": [ 84 | "graded" 85 | ] 86 | }, 87 | "outputs": [], 88 | "source": [ 89 | "# Get your trained model\n", 90 | "model = house_model()" 91 | ] 92 | }, 93 | { 94 | "cell_type": "markdown", 95 | "metadata": {}, 96 | "source": [ 97 | "Now that your model has finished training, it is time to test it out!"
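] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Optional:* if you get stuck, the next cell is a minimal sketch of one possible way to complete `house_model` above, with prices scaled into 'hundreds of thousands' as the hint suggests. Treat it as an illustration, not the official solution -- the graded cell should still be filled in on its own." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal illustrative sketch of a completed house_model (one possible answer)\n", "def house_model_sketch():\n", "    # 1 to 6 bedrooms; prices in hundreds of thousands: 0.5 base + 0.5 per bedroom\n", "    xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)\n", "    ys = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)\n", "    # A single dense unit is enough to learn this linear relationship\n", "    model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])\n", "    model.compile(optimizer='sgd', loss='mean_squared_error')\n", "    model.fit(xs, ys, epochs=1000, verbose=0)\n", "    return model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the next cell to get a prediction from your trained model."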
98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": null, 103 | "metadata": { 104 | "id": "kMlInDdSBqGK", 105 | "tags": [ 106 | "graded" 107 | ] 108 | }, 109 | "outputs": [], 110 | "source": [ 111 | "new_y = 7.0\n", 112 | "prediction = model.predict([new_y])[0]\n", 113 | "print(prediction)" 114 | ] 115 | }, 116 | { 117 | "cell_type": "markdown", 118 | "metadata": {}, 119 | "source": [ 120 | "If everything went as expected, you should see a prediction value very close to 4. **If not, try adjusting your code before submitting the assignment.** Notice that you can play around with the value of `new_y` to get different predictions. In general you should see that the network was able to learn the linear relationship between `x` and `y`, so if you use a value of 8.0 you should get a prediction close to 4.5, and so on." 121 | ] 122 | }, 123 | { 124 | "cell_type": "markdown", 125 | "metadata": {}, 126 | "source": [ 127 | "**Congratulations on finishing this week's assignment!**\n", 128 | "\n", 129 | "You have successfully coded a neural network that learned the linear relationship between two variables. Nice job!\n", 130 | "\n", 131 | "**Keep it up!**" 132 | ] 133 | } 134 | ], 135 | "metadata": { 136 | "kernelspec": { 137 | "display_name": "Python 3", 138 | "language": "python", 139 | "name": "python3" 140 | }, 141 | "language_info": { 142 | "codemirror_mode": { 143 | "name": "ipython", 144 | "version": 3 145 | }, 146 | "file_extension": ".py", 147 | "mimetype": "text/x-python", 148 | "name": "python", 149 | "nbconvert_exporter": "python", 150 | "pygments_lexer": "ipython3", 151 | "version": "3.8.8" 152 | } 153 | }, 154 | "nbformat": 4, 155 | "nbformat_minor": 4 156 | } 157 | -------------------------------------------------------------------------------- /Assignment_002.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "id": "DP0XA05fATch" 7 | }, 8 | "source": [ 9 | "" 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "metadata": { 15 | "id": "qnyTxjK_GbOD" 16 | }, 17 | "source": [ 18 | "# Ungraded Lab: Beyond Hello World, A Computer Vision Example\n", 19 | "In the previous exercise, you saw how to create a neural network that figured out the problem you were trying to solve. This gave an explicit example of learned behavior. Of course, in that instance, it was a bit of overkill because it would have been easier to write the function `y=2x-1` directly instead of bothering with using machine learning to learn the relationship between `x` and `y`.\n", 20 | "\n", 21 | "But what about a scenario where writing rules like that is much more difficult -- for example a computer vision problem? Let's take a look at a scenario where you will build a neural network to recognize different items of clothing, trained from a dataset containing 10 different types." 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "metadata": { 27 | "id": "H41FYgtlHPjW" 28 | }, 29 | "source": [ 30 | "## Start Coding\n", 31 | "\n", 32 | "Let's start with our import of TensorFlow."
33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": null, 38 | "metadata": { 39 | "id": "q3KzJyjv3rnA" 40 | }, 41 | "outputs": [], 42 | "source": [ 43 | "import tensorflow as tf\n", 44 | "\n", 45 | "print(tf.__version__)" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": { 51 | "id": "n_n1U5do3u_F" 52 | }, 53 | "source": [ 54 | "The [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist) is a collection of grayscale 28x28 pixel clothing images. Each image is associated with a label as shown in this table:\n", 55 | "\n", 56 | "| Label | Description |\n", 57 | "| --- | --- |\n", 58 | "| 0 | T-shirt/top |\n", 59 | "| 1 | Trouser |\n", 60 | "| 2 | Pullover |\n", 61 | "| 3 | Dress |\n", 62 | "| 4 | Coat |\n", 63 | "| 5 | Sandal |\n", 64 | "| 6 | Shirt |\n", 65 | "| 7 | Sneaker |\n", 66 | "| 8 | Bag |\n", 67 | "| 9 | Ankle boot |\n", 68 | "\n", 69 | "This dataset is available directly in the [tf.keras.datasets](https://www.tensorflow.org/api_docs/python/tf/keras/datasets) API and you load it like this:" 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "metadata": { 76 | "id": "PmxkHFpt31bM" 77 | }, 78 | "outputs": [], 79 | "source": [ 80 | "# Load the Fashion MNIST dataset\n", 81 | "fmnist = tf.keras.datasets.fashion_mnist" 82 | ] 83 | }, 84 | { 85 | "cell_type": "markdown", 86 | "metadata": { 87 | "id": "GuoLQQBT4E-_" 88 | }, 89 | "source": [ 90 | "Calling `load_data()` on this object will give you two tuples with two lists each. These will be the training and testing values for the graphics that contain the clothing items and their labels.\n" 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": null, 96 | "metadata": { 97 | "id": "BTdRgExe4TRB" 98 | }, 99 | "outputs": [], 100 | "source": [ 101 | "# Load the training and test split of the Fashion MNIST dataset\n", 102 | "(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "metadata": { 108 | "id": "rw395ROx4f5Q" 109 | }, 110 | "source": [ 111 | "What do these values look like? Let's print a training image (both as an image and a numpy array), and a training label to see. Experiment with different indices in the array. For example, also take a look at index `42`. That's a different boot than the one at index `0`.\n" 112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": null, 117 | "metadata": { 118 | "id": "FPc9d3gJ3jWF" 119 | }, 120 | "outputs": [], 121 | "source": [ 122 | "import numpy as np\n", 123 | "import matplotlib.pyplot as plt\n", 124 | "\n", 125 | "# You can put a number between 0 and 59999 here\n", 126 | "index = 0\n", 127 | "\n", 128 | "# Set number of characters per row when printing\n", 129 | "np.set_printoptions(linewidth=320)\n", 130 | "\n", 131 | "# Print the label and image\n", 132 | "print(f'LABEL: {training_labels[index]}')\n", 133 | "print(f'\\nIMAGE PIXEL ARRAY:\\n {training_images[index]}')\n", 134 | "\n", 135 | "# Visualize the image\n", 136 | "plt.imshow(training_images[index])" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": { 142 | "id": "3cbrdH225_nH" 143 | }, 144 | "source": [ 145 | "You'll notice that all of the values in the array are between 0 and 255. If you are training a neural network, especially for image processing, it will usually (for various reasons) learn better if you scale all values to between 0 and 1. This is a process called _normalization_, and fortunately, in Python it's easy to normalize an array without looping. You do it like this:" 146 | ] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": null, 151 | "metadata": { 152 | "id": "kRH19pWs6ZDn" 153 | }, 154 | "outputs": [], 155 | "source": [ 156 | "# Normalize the pixel values of the train and test images\n", 157 | "training_images = training_images / 255.0\n", 158 | "test_images = test_images / 255.0" 159 | ] 160 | },
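{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Quick illustrative check (an editor-added sketch): after dividing by 255,\n", "# the pixel values should now span 0.0 to 1.0\n", "print(training_images.min(), training_images.max())" ] },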
161 | { 162 | "cell_type": "markdown", 163 | "metadata": { 164 | "id": "3DkO0As46lRn" 165 | }, 166 | "source": [ 167 | "Now you might be wondering why the dataset is split in two: training and testing. Remember we spoke about this in the intro? The idea is to have one set of data for training, and another set of data that the model hasn't yet seen. This will be used to evaluate how good the model is at classifying values." 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": { 173 | "id": "dIn7S9gf62ie" 174 | }, 175 | "source": [ 176 | "Let's now design the model. There are quite a few new concepts here, but don't worry, you'll get the hang of them. " 177 | ] 178 | }, 179 | { 180 | "cell_type": "code", 181 | "execution_count": null, 182 | "metadata": { 183 | "id": "7mAyndG3kVlK" 184 | }, 185 | "outputs": [], 186 | "source": [ 187 | "# Build the classification model\n", 188 | "model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n", 189 | "                                    tf.keras.layers.Dense(128, activation=tf.nn.relu),\n", 190 | "                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])" 191 | ] 192 | }, 193 | { 194 | "cell_type": "markdown", 195 | "metadata": { 196 | "id": "-lUcWaiX7MFj" 197 | }, 198 | "source": [ 199 | "[Sequential](https://keras.io/api/models/sequential/): That defines a sequence of layers in the neural network.\n", 200 | "\n", 201 | "[Flatten](https://keras.io/api/layers/reshaping_layers/flatten/): Remember earlier where our images were a 28x28 pixel matrix when you printed them out? Flatten just takes that square and turns it into a 1-dimensional array.\n", 202 | "\n", 203 | "[Dense](https://keras.io/api/layers/core_layers/dense/): Adds a layer of neurons.\n", 204 | "\n", 205 | "Each layer of neurons needs an [activation function](https://keras.io/api/layers/activations/) to tell them what to do. There are a lot of options, but just use these for now: \n", 206 | "\n", 207 | "[ReLU](https://keras.io/api/layers/activations/#relu-function) effectively means:\n", 208 | "\n", 209 | "```\n", 210 | "if x > 0: \n", 211 | "    return x\n", 212 | "\n", 213 | "else: \n", 214 | "    return 0\n", 215 | "```\n", 216 | "\n", 217 | "In other words, it only passes values 0 or greater to the next layer in the network.\n", 218 | "\n", 219 | "[Softmax](https://keras.io/api/layers/activations/#softmax-function) takes a list of values and scales these so the sum of all elements will be equal to 1. When applied to model outputs, you can think of the scaled values as the probability for that class. For example, in your classification model which has 10 units in the output dense layer, having the highest value at `index = 4` means that the model is most confident that the input clothing image is a coat. If it is at index = 5, then it is a sandal, and so forth. See the short code blocks below, which demonstrate these concepts -- first a quick look at ReLU, then a softmax demo. You can also watch this [lecture](https://www.youtube.com/watch?v=LLux1SW--oM&ab_channel=DeepLearningAI) if you want to know more about the Softmax function and how the values are computed.\n" 220 | ] 221 | },
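{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Quick illustrative check of ReLU (an editor-added sketch): negative values\n", "# are clipped to 0, non-negative values pass through unchanged\n", "print(tf.keras.activations.relu(np.array([-2.0, 0.0, 3.0])).numpy())" ] },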
222 | { 223 | "cell_type": "code", 224 | "execution_count": null, 225 | "metadata": { 226 | "id": "Dk1hzzpDoGPI" 227 | }, 228 | "outputs": [], 229 | "source": [ 230 | "# Declare sample inputs and convert to a tensor\n", 231 | "inputs = np.array([[1.0, 3.0, 4.0, 2.0]])\n", 232 | "inputs = tf.convert_to_tensor(inputs)\n", 233 | "print(f'input to softmax function: {inputs.numpy()}')\n", 234 | "\n", 235 | "# Feed the inputs to a softmax activation function\n", 236 | "outputs = tf.keras.activations.softmax(inputs)\n", 237 | "print(f'output of softmax function: {outputs.numpy()}')\n", 238 | "\n", 239 | "# Get the sum of all values after the softmax\n", 240 | "sum_outputs = tf.reduce_sum(outputs)\n", 241 | "print(f'sum of outputs: {sum_outputs}')\n", 242 | "\n", 243 | "# Get the index with highest value\n", 244 | "prediction = np.argmax(outputs)\n", 245 | "print(f'class with highest probability: {prediction}')" 246 | ] 247 | }, 248 | { 249 | "cell_type": "markdown", 250 | "metadata": { 251 | "id": "c8vbMCqb9Mh6" 252 | }, 253 | "source": [ 254 | "The next thing to do, now that the model is defined, is to actually build it. You do this by compiling it with an optimizer and loss function as before -- and then you train it by calling `model.fit()`, asking it to fit your training data to your training labels. It will figure out the relationship between the training data and its actual labels, so in the future if you have inputs that look like the training data, it can predict what the label for that input is." 255 | ] 256 | }, 257 | { 258 | "cell_type": "code", 259 | "execution_count": null, 260 | "metadata": { 261 | "id": "BLMdl9aP8nQ0" 262 | }, 263 | "outputs": [], 264 | "source": [ 265 | "model.compile(optimizer = tf.optimizers.Adam(),\n", 266 | "              loss = 'sparse_categorical_crossentropy',\n", 267 | "              metrics=['accuracy'])\n", 268 | "\n", 269 | "model.fit(training_images, training_labels, epochs=5)" 270 | ] 271 | }, 272 | { 273 | "cell_type": "markdown", 274 | "metadata": { 275 | "id": "-JJMsvSB-1UY" 276 | }, 277 | "source": [ 278 | "Once it's done training -- you should see an accuracy value at the end of the final epoch. It might look something like `0.9098`. This tells you that your neural network is about 91% accurate in classifying the training data. That is, it figured out a pattern match between the images and the labels that worked 91% of the time. Not great, but not bad considering it was only trained for 5 epochs and done quite quickly.\n", 279 | "\n", 280 | "But how would it work with unseen data? That's why we have the test images and labels. We can call [`model.evaluate()`](https://keras.io/api/models/model_training_apis/#evaluate-method) with this test dataset as inputs and it will report back the loss and accuracy of the model. Let's give it a try:" 281 | ] 282 | }, 283 | { 284 | "cell_type": "code", 285 | "execution_count": null, 286 | "metadata": { 287 | "id": "WzlqsEzX9s5P" 288 | }, 289 | "outputs": [], 290 | "source": [ 291 | "# Evaluate the model on unseen data\n", 292 | "model.evaluate(test_images, test_labels)" 293 | ] 294 | }, 295 | { 296 | "cell_type": "markdown", 297 | "metadata": { 298 | "id": "6tki-Aro_Uax" 299 | }, 300 | "source": [ 301 | "You can expect the accuracy here to be about `0.88`, which means it was 88% accurate on the entire test set. As expected, it probably would not do as well with *unseen* data as it did with data it was trained on! As you go through this course, you'll look at ways to improve this. " 302 | ] 303 | },
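{ "cell_type": "markdown", "metadata": {}, "source": [ "A small illustrative aside: because the model was compiled with `metrics=['accuracy']`, `model.evaluate()` returns both the loss and the accuracy, so you can capture them directly:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch: unpack the evaluation results into variables\n", "test_loss, test_accuracy = model.evaluate(test_images, test_labels)\n", "print(f'test loss: {test_loss}, test accuracy: {test_accuracy}')" ] },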
304 | { 305 | "cell_type": "markdown", 306 | "metadata": { 307 | "id": "htldZNWcIPSN" 308 | }, 309 | "source": [ 310 | "# Exploration Exercises\n", 311 | "\n", 312 | "To explore further and deepen your understanding, try the exercises below:" 313 | ] 314 | }, 315 | { 316 | "cell_type": "markdown", 317 | "metadata": { 318 | "id": "rquQqIx4AaGR" 319 | }, 320 | "source": [ 321 | "### Exercise 1:\n", 322 | "For this first exercise, run the code below: it creates a set of classifications for each of the test images, and then prints the first entry in the classifications. The output, after you run it, is a list of numbers. Why do you think this is, and what do those numbers represent? " 323 | ] 324 | }, 325 | { 326 | "cell_type": "code", 327 | "execution_count": null, 328 | "metadata": { 329 | "id": "RyEIki0z_hAD" 330 | }, 331 | "outputs": [], 332 | "source": [ 333 | "classifications = model.predict(test_images)\n", 334 | "\n", 335 | "print(classifications[0])" 336 | ] 337 | }, 338 | { 339 | "cell_type": "markdown", 340 | "metadata": { 341 | "id": "MdzqbQhRArzm" 342 | }, 343 | "source": [ 344 | "**Hint:** try running `print(test_labels[0])` -- and you'll get a `9`. Does that help you understand why this list looks the way it does? " 345 | ] 346 | }, 347 | { 348 | "cell_type": "code", 349 | "execution_count": null, 350 | "metadata": { 351 | "id": "WnBGOrMiA1n5" 352 | }, 353 | "outputs": [], 354 | "source": [ 355 | "print(test_labels[0])" 356 | ] 357 | }, 358 | { 359 | "cell_type": "markdown", 360 | "metadata": { 361 | "id": "uUs7eqr7uSvs" 362 | }, 363 | "source": [ 364 | "### E1Q1: What does this list represent?\n", 365 | "\n", 366 | "\n", 367 | "1. It's 10 random meaningless values\n", 368 | "2. It's the first 10 classifications that the computer made\n", 369 | "3. It's the probability that this item is each of the 10 classes\n" 370 | ] 371 | }, 372 | { 373 | "cell_type": "markdown", 374 | "metadata": { 375 | "id": "wAbr92RTA67u" 376 | }, 377 | "source": [ 378 | "
<details><summary>Click for Answer</summary>\n", 379 | "<br>\n", 380 | "\n", 381 | "#### Answer: \n", 382 | "The correct answer is (3)\n", 383 | "\n", 384 | "The output of the model is a list of 10 numbers. These numbers are a probability that the value being classified is the corresponding value (https://github.com/zalandoresearch/fashion-mnist#labels), i.e. the first value in the list is the probability that the image is of a '0' (T-shirt/top), the next is a '1' (Trouser) etc. Notice that they are all VERY LOW probabilities.\n", 385 | "\n", 386 | "For index 9 (Ankle boot), the probability was in the 90's, i.e. the neural network is telling us that the image is most likely an ankle boot.\n", 387 | "\n", 388 | "<br>\n", 389 | "</details>
" 390 | ] 391 | }, 392 | { 393 | "cell_type": "markdown", 394 | "metadata": { 395 | "id": "CD4kC6TBu-69" 396 | }, 397 | "source": [ 398 | "### E1Q2: How do you know that this list tells you that the item is an ankle boot?\n", 399 | "\n", 400 | "\n", 401 | "1. There's not enough information to answer that question\n", 402 | "2. The 10th element on the list is the biggest, and the ankle boot is labelled 9\n", 403 | "2. The ankle boot is label 9, and there are 0->9 elements in the list\n" 404 | ] 405 | }, 406 | { 407 | "cell_type": "markdown", 408 | "metadata": { 409 | "id": "I-haLncrva5L" 410 | }, 411 | "source": [ 412 | "
<details><summary>Click for Answer</summary>\n", 413 | "<br>\n", 414 | "\n", 415 | "#### Answer\n", 416 | "The correct answer is (2). Both the list and the labels are 0-based, so the ankle boot having label 9 means that it is the 10th of the 10 classes. The list having the 10th element as the highest value means that the neural network has predicted that the item it is classifying is most likely an ankle boot.\n", 417 | "\n", 418 | "<br>\n", 419 | "</details>" 420 | ] 421 | },
" 420 | ] 421 | }, 422 | { 423 | "cell_type": "markdown", 424 | "metadata": { 425 | "id": "OgQSIfDSOWv6" 426 | }, 427 | "source": [ 428 | "### Exercise 2: \n", 429 | "Let's now look at the layers in your model. Experiment with different values for the dense layer with 512 neurons. What different results do you get for loss, training time etc? Why do you think that's the case? \n" 430 | ] 431 | }, 432 | { 433 | "cell_type": "code", 434 | "execution_count": null, 435 | "metadata": { 436 | "id": "GSZSwV5UObQP" 437 | }, 438 | "outputs": [], 439 | "source": [ 440 | "mnist = tf.keras.datasets.mnist\n", 441 | "\n", 442 | "(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n", 443 | "\n", 444 | "training_images = training_images/255.0\n", 445 | "test_images = test_images/255.0\n", 446 | "\n", 447 | "model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n", 448 | " tf.keras.layers.Dense(1024, activation=tf.nn.relu), # Try experimenting with this layer\n", 449 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n", 450 | "\n", 451 | "model.compile(optimizer = 'adam',\n", 452 | " loss = 'sparse_categorical_crossentropy')\n", 453 | "\n", 454 | "model.fit(training_images, training_labels, epochs=5)\n", 455 | "\n", 456 | "model.evaluate(test_images, test_labels)\n", 457 | "\n", 458 | "classifications = model.predict(test_images)\n", 459 | "\n", 460 | "print(classifications[0])\n", 461 | "print(test_labels[0])" 462 | ] 463 | }, 464 | { 465 | "cell_type": "markdown", 466 | "metadata": { 467 | "id": "bOOEnHZFv5cS" 468 | }, 469 | "source": [ 470 | "### E2Q1: Increase to 1024 Neurons -- What's the impact?\n", 471 | "\n", 472 | "1. Training takes longer, but is more accurate\n", 473 | "2. Training takes longer, but no impact on accuracy\n", 474 | "3. Training takes the same time, but is more accurate\n" 475 | ] 476 | }, 477 | { 478 | "cell_type": "markdown", 479 | "metadata": { 480 | "id": "U73MUP2lwrI2" 481 | }, 482 | "source": [ 483 | "
<details><summary>Click for Answer</summary>\n", 484 | "<br>\n", 485 | "\n", 486 | "#### Answer\n", 487 | "The correct answer is (1): by adding more neurons we have to do more calculations, which slows down training -- but in this case the extra computation pays off, and the model becomes more accurate. That doesn't mean 'more is always better'; you can hit the law of diminishing returns very quickly!\n", 488 | "\n", 489 | "<br>\n", 490 | "</details>" 491 | ] 492 | },
" 491 | ] 492 | }, 493 | { 494 | "cell_type": "markdown", 495 | "metadata": { 496 | "id": "WtWxK16hQxLN" 497 | }, 498 | "source": [ 499 | "### Exercise 3: \n", 500 | "\n", 501 | "### E3Q1: What would happen if you remove the Flatten() layer. Why do you think that's the case? \n", 502 | "\n", 503 | "
<details><summary>Click for Answer</summary>\n", 504 | "<br>\n", 505 | "\n", 506 | "#### Answer\n", 507 | "You get an error about the shape of the data. It may seem vague right now, but it reinforces the rule of thumb that the first layer in your network should be the same shape as your data. Right now our data is 28x28 images, and 28 layers of 28 neurons would be infeasible, so it makes more sense to 'flatten' that 28x28 into a 784x1 array. Instead of writing all the code to handle that ourselves, we add the Flatten() layer at the beginning, and when the arrays are loaded into the model later, they'll automatically be flattened for us.\n", 508 | "\n", 509 | "<br>\n", 510 | "</details>" 511 | ] 512 | },
" 511 | ] 512 | }, 513 | { 514 | "cell_type": "code", 515 | "execution_count": null, 516 | "metadata": { 517 | "id": "ExNxCwhcQ18S" 518 | }, 519 | "outputs": [], 520 | "source": [ 521 | "mnist = tf.keras.datasets.mnist\n", 522 | "\n", 523 | "(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n", 524 | "\n", 525 | "training_images = training_images/255.0\n", 526 | "test_images = test_images/255.0\n", 527 | "\n", 528 | "model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), #Try removing this layer\n", 529 | " tf.keras.layers.Dense(64, activation=tf.nn.relu),\n", 530 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n", 531 | "\n", 532 | "model.compile(optimizer = 'adam',\n", 533 | " loss = 'sparse_categorical_crossentropy')\n", 534 | "\n", 535 | "model.fit(training_images, training_labels, epochs=5)\n", 536 | "\n", 537 | "model.evaluate(test_images, test_labels)\n", 538 | "\n", 539 | "classifications = model.predict(test_images)\n", 540 | "\n", 541 | "print(classifications[0])\n", 542 | "print(test_labels[0])" 543 | ] 544 | }, 545 | { 546 | "cell_type": "markdown", 547 | "metadata": { 548 | "id": "VqoCR-ieSGDg" 549 | }, 550 | "source": [ 551 | "### Exercise 4: \n", 552 | "\n", 553 | "Consider the final (output) layers. Why are there 10 of them? What would happen if you had a different amount than 10? For example, try training the network with 5.\n", 554 | "\n", 555 | "
<details><summary>Click for Answer</summary>\n", 556 | "<br>\n", 557 | "\n", 558 | "#### Answer\n", 559 | "You get an error as soon as it finds an unexpected value. Another rule of thumb -- the number of neurons in the last layer should match the number of classes you are classifying for. In this case it's the digits 0-9, so there are 10 of them, hence you should have 10 neurons in your final layer.\n", 560 | "\n", 561 | "<br>\n", 562 | "</details>" 563 | ] 564 | },
" 563 | ] 564 | }, 565 | { 566 | "cell_type": "code", 567 | "execution_count": null, 568 | "metadata": { 569 | "id": "MMckVntcSPvo" 570 | }, 571 | "outputs": [], 572 | "source": [ 573 | "mnist = tf.keras.datasets.mnist\n", 574 | "\n", 575 | "(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n", 576 | "\n", 577 | "training_images = training_images/255.0\n", 578 | "test_images = test_images/255.0\n", 579 | "\n", 580 | "model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n", 581 | " tf.keras.layers.Dense(64, activation=tf.nn.relu),\n", 582 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax) # Try experimenting with this layer\n", 583 | " ])\n", 584 | "\n", 585 | "model.compile(optimizer = 'adam',\n", 586 | " loss = 'sparse_categorical_crossentropy')\n", 587 | "\n", 588 | "model.fit(training_images, training_labels, epochs=5)\n", 589 | "\n", 590 | "model.evaluate(test_images, test_labels)\n", 591 | "\n", 592 | "classifications = model.predict(test_images)\n", 593 | "\n", 594 | "print(classifications[0])\n", 595 | "print(test_labels[0])" 596 | ] 597 | }, 598 | { 599 | "cell_type": "markdown", 600 | "metadata": { 601 | "id": "-0lF5MuvSuZF" 602 | }, 603 | "source": [ 604 | "### Exercise 5: \n", 605 | "\n", 606 | "Consider the effects of additional layers in the network. What will happen if you add another layer between the one with 512 and the final layer with 10. \n", 607 | "\n", 608 | "
<details><summary>Click for Answer</summary>\n", 609 | "<br>\n", 610 | "\n", 611 | "#### Answer \n", 612 | "There isn't a significant impact -- because this is relatively simple data. For far more complex data (including color images to be classified as flowers that you'll see in the next lesson), extra layers are often necessary. \n", 613 | "\n", 614 | "<br>\n", 615 | "</details>" 616 | ] 617 | },
" 616 | ] 617 | }, 618 | { 619 | "cell_type": "code", 620 | "execution_count": null, 621 | "metadata": { 622 | "id": "b1YPa6UhS8Es" 623 | }, 624 | "outputs": [], 625 | "source": [ 626 | "mnist = tf.keras.datasets.mnist\n", 627 | "\n", 628 | "(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n", 629 | "\n", 630 | "training_images = training_images/255.0\n", 631 | "test_images = test_images/255.0\n", 632 | "\n", 633 | "model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n", 634 | " # Add a layer here,\n", 635 | " tf.keras.layers.Dense(256, activation=tf.nn.relu),\n", 636 | " # Add a layer here\n", 637 | " ])\n", 638 | "\n", 639 | "model.compile(optimizer = 'adam',\n", 640 | " loss = 'sparse_categorical_crossentropy')\n", 641 | "\n", 642 | "model.fit(training_images, training_labels, epochs=5)\n", 643 | "\n", 644 | "model.evaluate(test_images, test_labels)\n", 645 | "\n", 646 | "classifications = model.predict(test_images)\n", 647 | "\n", 648 | "print(classifications[0])\n", 649 | "print(test_labels[0])" 650 | ] 651 | }, 652 | { 653 | "cell_type": "markdown", 654 | "metadata": { 655 | "id": "Bql9fyaNUSFy" 656 | }, 657 | "source": [ 658 | "### Exercise 6: \n", 659 | "\n", 660 | "### E6Q1: Consider the impact of training for more or less epochs. Why do you think that would be the case? \n", 661 | "\n", 662 | "- Try 15 epochs -- you'll probably get a model with a much better loss than the one with 5\n", 663 | "- Try 30 epochs -- you might see the loss value stops decreasing, and sometimes increases.\n", 664 | "\n", 665 | "This is a side effect of something called 'overfitting' which you can learn about later and it's something you need to keep an eye out for when training neural networks. There's no point in wasting your time training if you aren't improving your loss, right! :)" 666 | ] 667 | }, 668 | { 669 | "cell_type": "code", 670 | "execution_count": null, 671 | "metadata": { 672 | "id": "uE3esj9BURQe" 673 | }, 674 | "outputs": [], 675 | "source": [ 676 | "mnist = tf.keras.datasets.mnist\n", 677 | "\n", 678 | "(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n", 679 | "\n", 680 | "training_images = training_images/255.0\n", 681 | "test_images = test_images/255.0\n", 682 | "\n", 683 | "model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n", 684 | " tf.keras.layers.Dense(128, activation=tf.nn.relu),\n", 685 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n", 686 | "\n", 687 | "model.compile(optimizer = 'adam',\n", 688 | " loss = 'sparse_categorical_crossentropy')\n", 689 | "\n", 690 | "model.fit(training_images, training_labels, epochs=5) # Experiment with the number of epochs\n", 691 | "\n", 692 | "model.evaluate(test_images, test_labels)\n", 693 | "\n", 694 | "classifications = model.predict(test_images)\n", 695 | "\n", 696 | "print(classifications[34])\n", 697 | "print(test_labels[34])" 698 | ] 699 | }, 700 | { 701 | "cell_type": "markdown", 702 | "metadata": { 703 | "id": "HS3vVkOgCDGZ" 704 | }, 705 | "source": [ 706 | "### Exercise 7: \n", 707 | "\n", 708 | "Before you trained, you normalized the data, going from values that were 0-255 to values that were 0-1. What would be the impact of removing that? Here's the complete code to give it a try. Why do you think you get different results? 
" 709 | ] 710 | }, 711 | { 712 | "cell_type": "code", 713 | "execution_count": null, 714 | "metadata": { 715 | "id": "JDqNAqrpCNg0" 716 | }, 717 | "outputs": [], 718 | "source": [ 719 | "mnist = tf.keras.datasets.mnist\n", 720 | "(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n", 721 | "training_images=training_images/255.0 # Experiment with removing this line\n", 722 | "test_images=test_images/255.0 # Experiment with removing this line\n", 723 | "model = tf.keras.models.Sequential([\n", 724 | " tf.keras.layers.Flatten(),\n", 725 | " tf.keras.layers.Dense(512, activation=tf.nn.relu),\n", 726 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n", 727 | "])\n", 728 | "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')\n", 729 | "model.fit(training_images, training_labels, epochs=5)\n", 730 | "model.evaluate(test_images, test_labels)\n", 731 | "classifications = model.predict(test_images)\n", 732 | "print(classifications[0])\n", 733 | "print(test_labels[0])" 734 | ] 735 | }, 736 | { 737 | "cell_type": "markdown", 738 | "metadata": { 739 | "id": "E7W2PT66ZBHQ" 740 | }, 741 | "source": [ 742 | "### Exercise 8: \n", 743 | "\n", 744 | "Earlier when you trained for extra epochs you had an issue where your loss might change. It might have taken a bit of time for you to wait for the training to do that, and you might have thought 'wouldn't it be nice if I could stop the training when I reach a desired value?' -- i.e. 95% accuracy might be enough for you, and if you reach that after 3 epochs, why sit around waiting for it to finish a lot more epochs....So how would you fix that? Like any other program...you have callbacks! Let's see them in action..." 745 | ] 746 | }, 747 | { 748 | "cell_type": "code", 749 | "execution_count": null, 750 | "metadata": { 751 | "id": "pkaEHHgqZbYv" 752 | }, 753 | "outputs": [], 754 | "source": [ 755 | "class myCallback(tf.keras.callbacks.Callback):\n", 756 | " def on_epoch_end(self, epoch, logs={}):\n", 757 | " if(logs.get('accuracy') >= 0.6): # Experiment with changing this value\n", 758 | " print(\"\\nReached 60% accuracy so cancelling training!\")\n", 759 | " self.model.stop_training = True\n", 760 | "\n", 761 | "callbacks = myCallback()\n", 762 | "mnist = tf.keras.datasets.fashion_mnist\n", 763 | "(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n", 764 | "training_images=training_images/255.0\n", 765 | "test_images=test_images/255.0\n", 766 | "model = tf.keras.models.Sequential([\n", 767 | " tf.keras.layers.Flatten(),\n", 768 | " tf.keras.layers.Dense(512, activation=tf.nn.relu),\n", 769 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n", 770 | "])\n", 771 | "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n", 772 | "model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])\n" 773 | ] 774 | } 775 | ], 776 | "metadata": { 777 | "colab": { 778 | "collapsed_sections": [], 779 | "name": "C1_W2_Lab_1_beyond_hello_world.ipynb", 780 | "private_outputs": true, 781 | "provenance": [] 782 | }, 783 | "kernelspec": { 784 | "display_name": "Python 3", 785 | "language": "python", 786 | "name": "python3" 787 | }, 788 | "language_info": { 789 | "codemirror_mode": { 790 | "name": "ipython", 791 | "version": 3 792 | }, 793 | "file_extension": ".py", 794 | "mimetype": "text/x-python", 795 | "name": "python", 796 | "nbconvert_exporter": "python", 797 | "pygments_lexer": "ipython3", 798 | "version": "3.7.4" 799 | } 800 
801 | "nbformat": 4, 802 | "nbformat_minor": 0 803 | } --------------------------------------------------------------------------------