├── Building-Deep-Neural-Networks.ipynb ├── Deep Neural Network - Application.ipynb ├── Gradient_Checking.ipynb ├── Initialization.ipynb ├── Logistic_Regression_with_a_Neural_Network.ipynb ├── Optimization_methods.ipynb ├── Planar_data_classification_with_one_hidden_layer.ipynb ├── README.md ├── Regularization.ipynb ├── Tensorflow-personal.ipynb └── Tensorflow_introduction.ipynb /Building-Deep-Neural-Networks.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Building your Deep Neural Network: Step by Step\n", 8 | "\n", 9 | "This week, you will build a deep neural network with as many layers as you want!\n", 10 | "\n", 11 | "- In this notebook, you'll implement all the functions required to build a deep neural network.\n", 12 | "- For the next assignment, you'll use these functions to build a deep neural network for image classification.\n", 13 | "\n", 14 | "**By the end of this assignment, you'll be able to:**\n", 15 | "\n", 16 | "- Use non-linear units like ReLU to improve your model\n", 17 | "- Build a deeper neural network (with more than 1 hidden layer)\n", 18 | "- Implement an easy-to-use neural network class\n", 19 | "\n", 20 | "**Notation**:\n", 21 | "- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. \n", 22 | " - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.\n", 23 | "- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. \n", 24 | " - Example: $x^{(i)}$ is the $i^{th}$ training example.\n", 25 | "- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.\n", 26 | " - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).\n", 27 | "\n", 28 | "Let's get started!" 29 | ] 30 | }, 31 | { 32 | "cell_type": "markdown", 33 | "metadata": {}, 34 | "source": [ 35 | "## Table of Contents\n", 36 | "- [1 - Packages](#1)\n", 37 | "- [2 - Outline](#2)\n", 38 | "- [3 - Initialization](#3)\n", 39 | " - [3.1 - 2-layer Neural Network](#3-1)\n", 40 | " - [Exercise 1 - initialize_parameters](#ex-1)\n", 41 | " - [3.2 - L-layer Neural Network](#3-2)\n", 42 | " - [Exercise 2 - initialize_parameters_deep](#ex-2)\n", 43 | "- [4 - Forward Propagation Module](#4)\n", 44 | " - [4.1 - Linear Forward](#4-1)\n", 45 | " - [Exercise 3 - linear_forward](#ex-3)\n", 46 | " - [4.2 - Linear-Activation Forward](#4-2)\n", 47 | " - [Exercise 4 - linear_activation_forward](#ex-4)\n", 48 | " - [4.3 - L-Layer Model](#4-3)\n", 49 | " - [Exercise 5 - L_model_forward](#ex-5)\n", 50 | "- [5 - Cost Function](#5)\n", 51 | " - [Exercise 6 - compute_cost](#ex-6)\n", 52 | "- [6 - Backward Propagation Module](#6)\n", 53 | " - [6.1 - Linear Backward](#6-1)\n", 54 | " - [Exercise 7 - linear_backward](#ex-7)\n", 55 | " - [6.2 - Linear-Activation Backward](#6-2)\n", 56 | " - [Exercise 8 - linear_activation_backward](#ex-8)\n", 57 | " - [6.3 - L-Model Backward](#6-3)\n", 58 | " - [Exercise 9 - L_model_backward](#ex-9)\n", 59 | " - [6.4 - Update Parameters](#6-4)\n", 60 | " - [Exercise 10 - update_parameters](#ex-10)" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "\n", 68 | "## 1 - Packages\n", 69 | "\n", 70 | "First, import all the packages you'll need during this assignment. 
\n", 71 | "\n", 72 | "- [numpy](www.numpy.org) is the main package for scientific computing with Python.\n", 73 | "- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.\n", 74 | "- dnn_utils provides some necessary functions for this notebook.\n", 75 | "- testCases provides some test cases to assess the correctness of your functions\n", 76 | "- np.random.seed(1) is used to keep all the random function calls consistent. It helps grade your work. Please don't change the seed! " 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": 3, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "import numpy as np\n", 86 | "import h5py\n", 87 | "import matplotlib.pyplot as plt\n", 88 | "from testCases import *\n", 89 | "from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward\n", 90 | "from public_tests import *\n", 91 | "\n", 92 | "%matplotlib inline\n", 93 | "plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\n", 94 | "plt.rcParams['image.interpolation'] = 'nearest'\n", 95 | "plt.rcParams['image.cmap'] = 'gray'\n", 96 | "\n", 97 | "%load_ext autoreload\n", 98 | "%autoreload 2\n", 99 | "\n", 100 | "np.random.seed(1)" 101 | ] 102 | }, 103 | { 104 | "cell_type": "markdown", 105 | "metadata": {}, 106 | "source": [ 107 | "\n", 108 | "## 2 - Outline\n", 109 | "\n", 110 | "To build your neural network, you'll be implementing several \"helper functions.\" These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. \n", 111 | "\n", 112 | "Each small helper function will have detailed instructions to walk you through the necessary steps. Here's an outline of the steps in this assignment:\n", 113 | "\n", 114 | "- Initialize the parameters for a two-layer network and for an $L$-layer neural network\n", 115 | "- Implement the forward propagation module (shown in purple in the figure below)\n", 116 | " - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).\n", 117 | " - The ACTIVATION function is provided for you (relu/sigmoid)\n", 118 | " - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.\n", 119 | " - Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.\n", 120 | "- Compute the loss\n", 121 | "- Implement the backward propagation module (denoted in red in the figure below)\n", 122 | " - Complete the LINEAR part of a layer's backward propagation step\n", 123 | " - The gradient of the ACTIVATE function is provided for you(relu_backward/sigmoid_backward) \n", 124 | " - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function\n", 125 | " - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function\n", 126 | "- Finally, update the parameters\n", 127 | "\n", 128 | "\n", 129 | "
Figure 1: Outline of the assignment (forward propagation steps in purple, backward propagation steps in red)
\n", 130 | "\n", 131 | "\n", 132 | "**Note**:\n", 133 | "\n", 134 | "For every forward function, there is a corresponding backward function. This is why at every step of your forward module you will be storing some values in a cache. These cached values are useful for computing gradients. \n", 135 | "\n", 136 | "In the backpropagation module, you can then use the cache to calculate the gradients. Don't worry, this assignment will show you exactly how to carry out each of these steps! " 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": {}, 142 | "source": [ 143 | "\n", 144 | "## 3 - Initialization\n", 145 | "\n", 146 | "You will write two helper functions to initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one generalizes this initialization process to $L$ layers.\n", 147 | "\n", 148 | "\n", 149 | "### 3.1 - 2-layer Neural Network\n", 150 | "\n", 151 | "\n", 152 | "### Exercise 1 - initialize_parameters\n", 153 | "\n", 154 | "Create and initialize the parameters of the 2-layer neural network.\n", 155 | "\n", 156 | "**Instructions**:\n", 157 | "\n", 158 | "- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. \n", 159 | "- Use this random initialization for the weight matrices: `np.random.randn(shape)*0.01` with the correct shape\n", 160 | "- Use zero initialization for the biases: `np.zeros(shape)`" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": 6, 166 | "metadata": { 167 | "deletable": false, 168 | "nbgrader": { 169 | "cell_type": "code", 170 | "checksum": "c468c89deb6d0cacf2ade5ab4151d26e", 171 | "grade": false, 172 | "grade_id": "cell-96d4e144d9419b32", 173 | "locked": false, 174 | "schema_version": 3, 175 | "solution": true, 176 | "task": false 177 | } 178 | }, 179 | "outputs": [], 180 | "source": [ 181 | "# GRADED FUNCTION: initialize_parameters\n", 182 | "\n", 183 | "def initialize_parameters(n_x, n_h, n_y):\n", 184 | " \"\"\"\n", 185 | " Argument:\n", 186 | " n_x -- size of the input layer\n", 187 | " n_h -- size of the hidden layer\n", 188 | " n_y -- size of the output layer\n", 189 | " \n", 190 | " Returns:\n", 191 | " parameters -- python dictionary containing your parameters:\n", 192 | " W1 -- weight matrix of shape (n_h, n_x)\n", 193 | " b1 -- bias vector of shape (n_h, 1)\n", 194 | " W2 -- weight matrix of shape (n_y, n_h)\n", 195 | " b2 -- bias vector of shape (n_y, 1)\n", 196 | " \"\"\"\n", 197 | " \n", 198 | " np.random.seed(1)\n", 199 | " \n", 200 | " #(≈ 4 lines of code)\n", 201 | " \n", 202 | " # YOUR CODE STARTS HERE\n", 203 | " W1 = np.random.randn(n_h, n_x) * 0.01\n", 204 | " b1 = np.zeros((n_h, 1))\n", 205 | " W2 = np.random.randn(n_y, n_h) * 0.01\n", 206 | " b2 = np.zeros((n_y, 1))\n", 207 | " # YOUR CODE ENDS HERE\n", 208 | " \n", 209 | " parameters = {\"W1\": W1,\n", 210 | " \"b1\": b1,\n", 211 | " \"W2\": W2,\n", 212 | " \"b2\": b2}\n", 213 | " \n", 214 | " return parameters " 215 | ] 216 | }, 217 | { 218 | "cell_type": "code", 219 | "execution_count": 7, 220 | "metadata": { 221 | "deletable": false, 222 | "editable": false, 223 | "nbgrader": { 224 | "cell_type": "code", 225 | "checksum": "cce3e70ca32b353d4eddef1e3fbabcda", 226 | "grade": true, 227 | "grade_id": "cell-4b2bdbdd0f520c8d", 228 | "locked": true, 229 | "points": 10, 230 | "schema_version": 3, 231 | "solution": false, 232 | "task": false 233 | } 234 | }, 235 | "outputs": [ 236 | { 237 | "name": "stdout", 238 | "output_type": "stream", 239 | "text": [ 240 
| "W1 = [[ 0.01624345 -0.00611756 -0.00528172]\n", 241 | " [-0.01072969 0.00865408 -0.02301539]]\n", 242 | "b1 = [[0.]\n", 243 | " [0.]]\n", 244 | "W2 = [[ 0.01744812 -0.00761207]]\n", 245 | "b2 = [[0.]]\n", 246 | "\u001b[92m All tests passed.\n" 247 | ] 248 | } 249 | ], 250 | "source": [ 251 | "parameters = initialize_parameters(3,2,1)\n", 252 | "\n", 253 | "print(\"W1 = \" + str(parameters[\"W1\"]))\n", 254 | "print(\"b1 = \" + str(parameters[\"b1\"]))\n", 255 | "print(\"W2 = \" + str(parameters[\"W2\"]))\n", 256 | "print(\"b2 = \" + str(parameters[\"b2\"]))\n", 257 | "\n", 258 | "initialize_parameters_test(initialize_parameters)" 259 | ] 260 | }, 261 | { 262 | "cell_type": "markdown", 263 | "metadata": {}, 264 | "source": [ 265 | "***Expected output***\n", 266 | "```\n", 267 | "W1 = [[ 0.01624345 -0.00611756 -0.00528172]\n", 268 | " [-0.01072969 0.00865408 -0.02301539]]\n", 269 | "b1 = [[0.]\n", 270 | " [0.]]\n", 271 | "W2 = [[ 0.01744812 -0.00761207]]\n", 272 | "b2 = [[0.]]\n", 273 | "```" 274 | ] 275 | }, 276 | { 277 | "cell_type": "markdown", 278 | "metadata": {}, 279 | "source": [ 280 | "\n", 281 | "### 3.2 - L-layer Neural Network\n", 282 | "\n", 283 | "The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep` function, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. For example, if the size of your input $X$ is $(12288, 209)$ (with $m=209$ examples) then:\n", 284 | "\n", 285 | "\n", 286 | " \n", 287 | " \n", 288 | " \n", 289 | " \n", 290 | " \n", 291 | " \n", 292 | " \n", 293 | " \n", 294 | " \n", 295 | " \n", 296 | " \n", 297 | " \n", 298 | " \n", 299 | " \n", 300 | " \n", 301 | " \n", 302 | " \n", 303 | " \n", 304 | " \n", 305 | " \n", 306 | " \n", 307 | " \n", 308 | " \n", 309 | " \n", 310 | " \n", 311 | " \n", 312 | " \n", 313 | " \n", 314 | " \n", 315 | " \n", 316 | " \n", 317 | " \n", 318 | " \n", 319 | " \n", 320 | " \n", 321 | " \n", 322 | " \n", 323 | " \n", 324 | " \n", 325 | " \n", 326 | " \n", 327 | " \n", 328 | "
| | Shape of W | Shape of b | Activation | Shape of Activation |
|---|---|---|---|---|
| Layer 1 | $(n^{[1]},12288)$ | $(n^{[1]},1)$ | $Z^{[1]} = W^{[1]} X + b^{[1]}$ | $(n^{[1]},209)$ |
| Layer 2 | $(n^{[2]}, n^{[1]})$ | $(n^{[2]},1)$ | $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ | $(n^{[2]}, 209)$ |
| $\\vdots$ | $\\vdots$ | $\\vdots$ | $\\vdots$ | $\\vdots$ |
| Layer L-1 | $(n^{[L-1]}, n^{[L-2]})$ | $(n^{[L-1]}, 1)$ | $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ | $(n^{[L-1]}, 209)$ |
| Layer L | $(n^{[L]}, n^{[L-1]})$ | $(n^{[L]}, 1)$ | $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ | $(n^{[L]}, 209)$ |
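To make the table above concrete, here is a minimal sketch (using a hypothetical `layer_dims` list such as `[12288, 20, 7, 5, 1]`, not the graded test case) that prints the shape every `W` and `b` must have:

```python
# Hypothetical architecture: 12288 input features, three hidden layers, 1 output unit
layer_dims = [12288, 20, 7, 5, 1]

for l in range(1, len(layer_dims)):
    W_shape = (layer_dims[l], layer_dims[l - 1])   # (n[l], n[l-1])
    b_shape = (layer_dims[l], 1)                   # (n[l], 1)
    print(f"Layer {l}: W{l} has shape {W_shape}, b{l} has shape {b_shape}")
```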
\n", 329 | "\n", 330 | "Remember that when you compute $W X + b$ in python, it carries out broadcasting. For example, if: \n", 331 | "\n", 332 | "$$ W = \\begin{bmatrix}\n", 333 | " w_{00} & w_{01} & w_{02} \\\\\n", 334 | " w_{10} & w_{11} & w_{12} \\\\\n", 335 | " w_{20} & w_{21} & w_{22} \n", 336 | "\\end{bmatrix}\\;\\;\\; X = \\begin{bmatrix}\n", 337 | " x_{00} & x_{01} & x_{02} \\\\\n", 338 | " x_{10} & x_{11} & x_{12} \\\\\n", 339 | " x_{20} & x_{21} & x_{22} \n", 340 | "\\end{bmatrix} \\;\\;\\; b =\\begin{bmatrix}\n", 341 | " b_0 \\\\\n", 342 | " b_1 \\\\\n", 343 | " b_2\n", 344 | "\\end{bmatrix}\\tag{2}$$\n", 345 | "\n", 346 | "Then $WX + b$ will be:\n", 347 | "\n", 348 | "$$ WX + b = \\begin{bmatrix}\n", 349 | " (w_{00}x_{00} + w_{01}x_{10} + w_{02}x_{20}) + b_0 & (w_{00}x_{01} + w_{01}x_{11} + w_{02}x_{21}) + b_0 & \\cdots \\\\\n", 350 | " (w_{10}x_{00} + w_{11}x_{10} + w_{12}x_{20}) + b_1 & (w_{10}x_{01} + w_{11}x_{11} + w_{12}x_{21}) + b_1 & \\cdots \\\\\n", 351 | " (w_{20}x_{00} + w_{21}x_{10} + w_{22}x_{20}) + b_2 & (w_{20}x_{01} + w_{21}x_{11} + w_{22}x_{21}) + b_2 & \\cdots\n", 352 | "\\end{bmatrix}\\tag{3} $$\n" 353 | ] 354 | }, 355 | { 356 | "cell_type": "markdown", 357 | "metadata": {}, 358 | "source": [ 359 | "\n", 360 | "### Exercise 2 - initialize_parameters_deep\n", 361 | "\n", 362 | "Implement initialization for an L-layer Neural Network. \n", 363 | "\n", 364 | "**Instructions**:\n", 365 | "- The model's structure is *[LINEAR -> RELU] $ \\times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.\n", 366 | "- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.\n", 367 | "- Use zeros initialization for the biases. Use `np.zeros(shape)`.\n", 368 | "- You'll store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for last week's Planar Data classification model would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! \n", 369 | "- Here is the implementation for $L=1$ (one layer neural network). 
It should inspire you to implement the general case (L-layer neural network).\n", 370 | "```python\n", 371 | " if L == 1:\n", 372 | " parameters[\"W\" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01\n", 373 | " parameters[\"b\" + str(L)] = np.zeros((layer_dims[1], 1))\n", 374 | "```" 375 | ] 376 | }, 377 | { 378 | "cell_type": "code", 379 | "execution_count": 10, 380 | "metadata": { 381 | "deletable": false, 382 | "nbgrader": { 383 | "cell_type": "code", 384 | "checksum": "1773f5c69d941998dc8da88f4151e8d3", 385 | "grade": false, 386 | "grade_id": "cell-37b22e0664a4949e", 387 | "locked": false, 388 | "schema_version": 3, 389 | "solution": true, 390 | "task": false 391 | } 392 | }, 393 | "outputs": [], 394 | "source": [ 395 | "# GRADED FUNCTION: initialize_parameters_deep\n", 396 | "\n", 397 | "def initialize_parameters_deep(layer_dims):\n", 398 | " \"\"\"\n", 399 | " Arguments:\n", 400 | " layer_dims -- python array (list) containing the dimensions of each layer in our network\n", 401 | " \n", 402 | " Returns:\n", 403 | " parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n", 404 | " Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])\n", 405 | " bl -- bias vector of shape (layer_dims[l], 1)\n", 406 | " \"\"\"\n", 407 | " \n", 408 | " np.random.seed(3)\n", 409 | " parameters = {}\n", 410 | " L = len(layer_dims) # number of layers in the network\n", 411 | "\n", 412 | " for l in range(1, L):\n", 413 | " #(≈ 2 lines of code)\n", 414 | " \n", 415 | " # YOUR CODE STARTS HERE\n", 416 | " parameters[\"W\" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01\n", 417 | " parameters[\"b\" + str(l)] = np.zeros((layer_dims[l], 1))\n", 418 | " # YOUR CODE ENDS HERE\n", 419 | " \n", 420 | " assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))\n", 421 | " assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))\n", 422 | "\n", 423 | " \n", 424 | " return parameters" 425 | ] 426 | }, 427 | { 428 | "cell_type": "code", 429 | "execution_count": 11, 430 | "metadata": { 431 | "deletable": false, 432 | "editable": false, 433 | "nbgrader": { 434 | "cell_type": "code", 435 | "checksum": "2d5dcc2ba4a2f12ba3ef9df4c96e06e4", 436 | "grade": true, 437 | "grade_id": "cell-2ce3df377bb42f76", 438 | "locked": true, 439 | "points": 10, 440 | "schema_version": 3, 441 | "solution": false, 442 | "task": false 443 | } 444 | }, 445 | "outputs": [ 446 | { 447 | "name": "stdout", 448 | "output_type": "stream", 449 | "text": [ 450 | "W1 = [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]\n", 451 | " [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n", 452 | " [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]\n", 453 | " [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]\n", 454 | "b1 = [[0.]\n", 455 | " [0.]\n", 456 | " [0.]\n", 457 | " [0.]]\n", 458 | "W2 = [[-0.01185047 -0.0020565 0.01486148 0.00236716]\n", 459 | " [-0.01023785 -0.00712993 0.00625245 -0.00160513]\n", 460 | " [-0.00768836 -0.00230031 0.00745056 0.01976111]]\n", 461 | "b2 = [[0.]\n", 462 | " [0.]\n", 463 | " [0.]]\n", 464 | "\u001b[92m All tests passed.\n" 465 | ] 466 | } 467 | ], 468 | "source": [ 469 | "parameters = initialize_parameters_deep([5,4,3])\n", 470 | "\n", 471 | "print(\"W1 = \" + str(parameters[\"W1\"]))\n", 472 | "print(\"b1 = \" + str(parameters[\"b1\"]))\n", 473 | "print(\"W2 = \" + str(parameters[\"W2\"]))\n", 474 | "print(\"b2 = \" + str(parameters[\"b2\"]))\n", 475 | "\n", 476 | 
"initialize_parameters_deep_test(initialize_parameters_deep)" 477 | ] 478 | }, 479 | { 480 | "cell_type": "markdown", 481 | "metadata": {}, 482 | "source": [ 483 | "***Expected output***\n", 484 | "```\n", 485 | "W1 = [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]\n", 486 | " [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n", 487 | " [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]\n", 488 | " [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]\n", 489 | "b1 = [[0.]\n", 490 | " [0.]\n", 491 | " [0.]\n", 492 | " [0.]]\n", 493 | "W2 = [[-0.01185047 -0.0020565 0.01486148 0.00236716]\n", 494 | " [-0.01023785 -0.00712993 0.00625245 -0.00160513]\n", 495 | " [-0.00768836 -0.00230031 0.00745056 0.01976111]]\n", 496 | "b2 = [[0.]\n", 497 | " [0.]\n", 498 | " [0.]]\n", 499 | "```" 500 | ] 501 | }, 502 | { 503 | "cell_type": "markdown", 504 | "metadata": {}, 505 | "source": [ 506 | "\n", 507 | "## 4 - Forward Propagation Module\n", 508 | "\n", 509 | "\n", 510 | "### 4.1 - Linear Forward \n", 511 | "\n", 512 | "Now that you have initialized your parameters, you can do the forward propagation module. Start by implementing some basic functions that you can use again later when implementing the model. Now, you'll complete three functions in this order:\n", 513 | "\n", 514 | "- LINEAR\n", 515 | "- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. \n", 516 | "- [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID (whole model)\n", 517 | "\n", 518 | "The linear forward module (vectorized over all the examples) computes the following equations:\n", 519 | "\n", 520 | "$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\\tag{4}$$\n", 521 | "\n", 522 | "where $A^{[0]} = X$. \n", 523 | "\n", 524 | "\n", 525 | "### Exercise 3 - linear_forward \n", 526 | "\n", 527 | "Build the linear part of forward propagation.\n", 528 | "\n", 529 | "**Reminder**:\n", 530 | "The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help." 
531 | ] 532 | }, 533 | { 534 | "cell_type": "code", 535 | "execution_count": 12, 536 | "metadata": { 537 | "deletable": false, 538 | "nbgrader": { 539 | "cell_type": "code", 540 | "checksum": "770763ab229ee87e8f5dfd520428caa3", 541 | "grade": false, 542 | "grade_id": "cell-4d6e09486a53f4c4", 543 | "locked": false, 544 | "schema_version": 3, 545 | "solution": true, 546 | "task": false 547 | } 548 | }, 549 | "outputs": [], 550 | "source": [ 551 | "# GRADED FUNCTION: linear_forward\n", 552 | "\n", 553 | "def linear_forward(A, W, b):\n", 554 | " \"\"\"\n", 555 | " Implement the linear part of a layer's forward propagation.\n", 556 | "\n", 557 | " Arguments:\n", 558 | " A -- activations from previous layer (or input data): (size of previous layer, number of examples)\n", 559 | " W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n", 560 | " b -- bias vector, numpy array of shape (size of the current layer, 1)\n", 561 | "\n", 562 | " Returns:\n", 563 | " Z -- the input of the activation function, also called pre-activation parameter \n", 564 | " cache -- a python tuple containing \"A\", \"W\" and \"b\" ; stored for computing the backward pass efficiently\n", 565 | " \"\"\"\n", 566 | " \n", 567 | " #(≈ 1 line of code)\n", 568 | " \n", 569 | " # YOUR CODE STARTS HERE\n", 570 | " Z = W.dot(A) + b\n", 571 | " # YOUR CODE ENDS HERE\n", 572 | " cache = (A, W, b)\n", 573 | " \n", 574 | " return Z, cache" 575 | ] 576 | }, 577 | { 578 | "cell_type": "code", 579 | "execution_count": 13, 580 | "metadata": { 581 | "deletable": false, 582 | "editable": false, 583 | "nbgrader": { 584 | "cell_type": "code", 585 | "checksum": "e3fd70fd81b04a2c70f37588ee21140c", 586 | "grade": true, 587 | "grade_id": "cell-df6ddb1e30f9c96d", 588 | "locked": true, 589 | "points": 10, 590 | "schema_version": 3, 591 | "solution": false, 592 | "task": false 593 | } 594 | }, 595 | "outputs": [ 596 | { 597 | "name": "stdout", 598 | "output_type": "stream", 599 | "text": [ 600 | "Z = [[ 3.26295337 -1.23429987]]\n", 601 | "\u001b[92m All tests passed.\n" 602 | ] 603 | } 604 | ], 605 | "source": [ 606 | "t_A, t_W, t_b = linear_forward_test_case()\n", 607 | "t_Z, t_linear_cache = linear_forward(t_A, t_W, t_b)\n", 608 | "print(\"Z = \" + str(t_Z))\n", 609 | "\n", 610 | "linear_forward_test(linear_forward)" 611 | ] 612 | }, 613 | { 614 | "cell_type": "markdown", 615 | "metadata": {}, 616 | "source": [ 617 | "***Expected output***\n", 618 | "```\n", 619 | "Z = [[ 3.26295337 -1.23429987]]\n", 620 | "```" 621 | ] 622 | }, 623 | { 624 | "cell_type": "markdown", 625 | "metadata": {}, 626 | "source": [ 627 | "\n", 628 | "### 4.2 - Linear-Activation Forward\n", 629 | "\n", 630 | "In this notebook, you will use two activation functions:\n", 631 | "\n", 632 | "- **Sigmoid**: $\\sigma(Z) = \\sigma(W A + b) = \\frac{1}{ 1 + e^{-(W A + b)}}$. You've been provided with the `sigmoid` function which returns **two** items: the activation value \"`a`\" and a \"`cache`\" that contains \"`Z`\" (it's what we will feed in to the corresponding backward function). To use it you could just call: \n", 633 | "``` python\n", 634 | "A, activation_cache = sigmoid(Z)\n", 635 | "```\n", 636 | "\n", 637 | "- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. You've been provided with the `relu` function. This function returns **two** items: the activation value \"`A`\" and a \"`cache`\" that contains \"`Z`\" (it's what you'll feed in to the corresponding backward function). 
To use it you could just call:\n", 638 | "``` python\n", 639 | "A, activation_cache = relu(Z)\n", 640 | "```" 641 | ] 642 | }, 643 | { 644 | "cell_type": "markdown", 645 | "metadata": {}, 646 | "source": [ 647 | "For added convenience, you're going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you'll implement a function that does the LINEAR forward step, followed by an ACTIVATION forward step.\n", 648 | "\n", 649 | "\n", 650 | "### Exercise 4 - linear_activation_forward\n", 651 | "\n", 652 | "Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation \"g\" can be sigmoid() or relu(). Use `linear_forward()` and the correct activation function." 653 | ] 654 | }, 655 | { 656 | "cell_type": "code", 657 | "execution_count": 14, 658 | "metadata": { 659 | "deletable": false, 660 | "nbgrader": { 661 | "cell_type": "code", 662 | "checksum": "f09e76f2a56c8ee77db3e89214a676b2", 663 | "grade": false, 664 | "grade_id": "cell-eb48903dd8e48a90", 665 | "locked": false, 666 | "schema_version": 3, 667 | "solution": true, 668 | "task": false 669 | } 670 | }, 671 | "outputs": [], 672 | "source": [ 673 | "# GRADED FUNCTION: linear_activation_forward\n", 674 | "\n", 675 | "def linear_activation_forward(A_prev, W, b, activation):\n", 676 | " \"\"\"\n", 677 | " Implement the forward propagation for the LINEAR->ACTIVATION layer\n", 678 | "\n", 679 | " Arguments:\n", 680 | " A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)\n", 681 | " W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n", 682 | " b -- bias vector, numpy array of shape (size of the current layer, 1)\n", 683 | " activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n", 684 | "\n", 685 | " Returns:\n", 686 | " A -- the output of the activation function, also called the post-activation value \n", 687 | " cache -- a python tuple containing \"linear_cache\" and \"activation_cache\";\n", 688 | " stored for computing the backward pass efficiently\n", 689 | " \"\"\"\n", 690 | " \n", 691 | " if activation == \"sigmoid\":\n", 692 | " #(≈ 2 lines of code)\n", 693 | " \n", 694 | " # YOUR CODE STARTS HERE\n", 695 | " Z, linear_cache = linear_forward(A_prev, W, b)\n", 696 | " A, activation_cache = sigmoid(Z)\n", 697 | " # YOUR CODE ENDS HERE\n", 698 | " \n", 699 | " elif activation == \"relu\":\n", 700 | " #(≈ 2 lines of code)\n", 701 | " \n", 702 | " # YOUR CODE STARTS HERE\n", 703 | " Z, linear_cache = linear_forward(A_prev, W, b)\n", 704 | " A, activation_cache = relu(Z)\n", 705 | " # YOUR CODE ENDS HERE\n", 706 | " cache = (linear_cache, activation_cache)\n", 707 | "\n", 708 | " return A, cache" 709 | ] 710 | }, 711 | { 712 | "cell_type": "code", 713 | "execution_count": 15, 714 | "metadata": { 715 | "deletable": false, 716 | "editable": false, 717 | "nbgrader": { 718 | "cell_type": "code", 719 | "checksum": "2b2a80630d2ecb9d03df4ecf6d76170d", 720 | "grade": true, 721 | "grade_id": "cell-ed5c76db14d687dd", 722 | "locked": true, 723 | "points": 10, 724 | "schema_version": 3, 725 | "solution": false, 726 | "task": false 727 | } 728 | }, 729 | "outputs": [ 730 | { 731 | "name": "stdout", 732 | "output_type": "stream", 733 | "text": [ 734 | "With sigmoid: A = [[0.96890023 0.11013289]]\n", 735 | "With ReLU: A = [[3.43896131 0. 
]]\n", 736 | "\u001b[92m All tests passed.\n" 737 | ] 738 | } 739 | ], 740 | "source": [ 741 | "t_A_prev, t_W, t_b = linear_activation_forward_test_case()\n", 742 | "\n", 743 | "t_A, t_linear_activation_cache = linear_activation_forward(t_A_prev, t_W, t_b, activation = \"sigmoid\")\n", 744 | "print(\"With sigmoid: A = \" + str(t_A))\n", 745 | "\n", 746 | "t_A, t_linear_activation_cache = linear_activation_forward(t_A_prev, t_W, t_b, activation = \"relu\")\n", 747 | "print(\"With ReLU: A = \" + str(t_A))\n", 748 | "\n", 749 | "linear_activation_forward_test(linear_activation_forward)" 750 | ] 751 | }, 752 | { 753 | "cell_type": "markdown", 754 | "metadata": {}, 755 | "source": [ 756 | "***Expected output***\n", 757 | "```\n", 758 | "With sigmoid: A = [[0.96890023 0.11013289]]\n", 759 | "With ReLU: A = [[3.43896131 0. ]]\n", 760 | "```" 761 | ] 762 | }, 763 | { 764 | "cell_type": "markdown", 765 | "metadata": {}, 766 | "source": [ 767 | "**Note**: In deep learning, the \"[LINEAR->ACTIVATION]\" computation is counted as a single layer in the neural network, not two layers. " 768 | ] 769 | }, 770 | { 771 | "cell_type": "markdown", 772 | "metadata": {}, 773 | "source": [ 774 | "\n", 775 | "### 4.3 - L-Layer Model \n", 776 | "\n", 777 | "For even *more* convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.\n", 778 | "\n", 779 | "\n", 780 | "
Figure 2: *[LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID* model
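To keep the goal in sight, here is how the finished function could be called, as a sketch with made-up shapes and data (it relies on `initialize_parameters_deep` from Exercise 2 and the `L_model_forward` you implement in the exercise below):

```python
# Hypothetical usage once L_model_forward is implemented:
# a 3-layer network evaluated on 10 made-up examples with 5 input features.
parameters = initialize_parameters_deep([5, 4, 3, 1])
X = np.random.randn(5, 10)

AL, caches = L_model_forward(X, parameters)
print(AL.shape)      # (1, 10): one prediction per example
print(len(caches))   # 3: one (linear_cache, activation_cache) tuple per layer
```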
\n", 781 | "\n", 782 | "\n", 783 | "### Exercise 5 - L_model_forward\n", 784 | "\n", 785 | "Implement the forward propagation of the above model.\n", 786 | "\n", 787 | "**Instructions**: In the code below, the variable `AL` will denote $A^{[L]} = \\sigma(Z^{[L]}) = \\sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\\hat{Y}$.) \n", 788 | "\n", 789 | "**Hints**:\n", 790 | "- Use the functions you've previously written \n", 791 | "- Use a for loop to replicate [LINEAR->RELU] (L-1) times\n", 792 | "- Don't forget to keep track of the caches in the \"caches\" list. To add a new value `c` to a `list`, you can use `list.append(c)`." 793 | ] 794 | }, 795 | { 796 | "cell_type": "code", 797 | "execution_count": 29, 798 | "metadata": { 799 | "deletable": false, 800 | "nbgrader": { 801 | "cell_type": "code", 802 | "checksum": "a0071c19f83d4b851dc8a67e66545262", 803 | "grade": false, 804 | "grade_id": "cell-9a8ec52ec8f6e04a", 805 | "locked": false, 806 | "schema_version": 3, 807 | "solution": true, 808 | "task": false 809 | } 810 | }, 811 | "outputs": [], 812 | "source": [ 813 | "# GRADED FUNCTION: L_model_forward\n", 814 | "\n", 815 | "def L_model_forward(X, parameters):\n", 816 | " \"\"\"\n", 817 | " Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation\n", 818 | " \n", 819 | " Arguments:\n", 820 | " X -- data, numpy array of shape (input size, number of examples)\n", 821 | " parameters -- output of initialize_parameters_deep()\n", 822 | " \n", 823 | " Returns:\n", 824 | " AL -- activation value from the output (last) layer\n", 825 | " caches -- list of caches containing:\n", 826 | " every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)\n", 827 | " \"\"\"\n", 828 | "\n", 829 | " caches = []\n", 830 | " A = X\n", 831 | " L = len(parameters) // 2 # number of layers in the neural network\n", 832 | " \n", 833 | " # Implement [LINEAR -> RELU]*(L-1). Add \"cache\" to the \"caches\" list.\n", 834 | " # The for loop starts at 1 because layer 0 is the input\n", 835 | " for l in range(1, L):\n", 836 | " A_prev = A \n", 837 | " #(≈ 2 lines of code)\n", 838 | " \n", 839 | " # YOUR CODE STARTS HERE\n", 840 | " A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)] ,parameters['b' + str(l)], activation = \"relu\")\n", 841 | " caches.append(cache)\n", 842 | " # YOUR CODE ENDS HERE\n", 843 | " \n", 844 | " # Implement LINEAR -> SIGMOID. 
Add \"cache\" to the \"caches\" list.\n", 845 | " #(≈ 2 lines of code)\n", 846 | " # YOUR CODE STARTS HERE\n", 847 | " AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation = \"sigmoid\")\n", 848 | " caches.append(cache)\n", 849 | " # YOUR CODE ENDS HERE\n", 850 | " \n", 851 | " return AL, caches" 852 | ] 853 | }, 854 | { 855 | "cell_type": "code", 856 | "execution_count": 30, 857 | "metadata": { 858 | "deletable": false, 859 | "editable": false, 860 | "nbgrader": { 861 | "cell_type": "code", 862 | "checksum": "18d99d8170d2fed802a3e97e362339c6", 863 | "grade": true, 864 | "grade_id": "cell-ddc3a524cd1a0782", 865 | "locked": true, 866 | "points": 10, 867 | "schema_version": 3, 868 | "solution": false, 869 | "task": false 870 | } 871 | }, 872 | "outputs": [ 873 | { 874 | "name": "stdout", 875 | "output_type": "stream", 876 | "text": [ 877 | "AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]\n", 878 | "\u001b[92m All tests passed.\n" 879 | ] 880 | } 881 | ], 882 | "source": [ 883 | "t_X, t_parameters = L_model_forward_test_case_2hidden()\n", 884 | "t_AL, t_caches = L_model_forward(t_X, t_parameters)\n", 885 | "\n", 886 | "print(\"AL = \" + str(t_AL))\n", 887 | "\n", 888 | "L_model_forward_test(L_model_forward)" 889 | ] 890 | }, 891 | { 892 | "cell_type": "markdown", 893 | "metadata": {}, 894 | "source": [ 895 | "***Expected output***\n", 896 | "```\n", 897 | "AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]\n", 898 | "```" 899 | ] 900 | }, 901 | { 902 | "cell_type": "markdown", 903 | "metadata": {}, 904 | "source": [ 905 | "**Awesome!** You've implemented a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in \"caches\". Using $A^{[L]}$, you can compute the cost of your predictions." 906 | ] 907 | }, 908 | { 909 | "cell_type": "markdown", 910 | "metadata": {}, 911 | "source": [ 912 | "\n", 913 | "## 5 - Cost Function\n", 914 | "\n", 915 | "Now you can implement forward and backward propagation! 
You need to compute the cost, in order to check whether your model is actually learning.\n", 916 | "\n", 917 | "\n", 918 | "### Exercise 6 - compute_cost\n", 919 | "Compute the cross-entropy cost $J$, using the following formula: $$-\\frac{1}{m} \\sum\\limits_{i = 1}^{m} (y^{(i)}\\log\\left(a^{[L] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right)) \\tag{7}$$\n" 920 | ] 921 | }, 922 | { 923 | "cell_type": "code", 924 | "execution_count": 31, 925 | "metadata": { 926 | "deletable": false, 927 | "nbgrader": { 928 | "cell_type": "code", 929 | "checksum": "17919bb7d82635554b52aed7e96e8d9b", 930 | "grade": false, 931 | "grade_id": "cell-abad606772066f14", 932 | "locked": false, 933 | "schema_version": 3, 934 | "solution": true, 935 | "task": false 936 | } 937 | }, 938 | "outputs": [], 939 | "source": [ 940 | "# GRADED FUNCTION: compute_cost\n", 941 | "\n", 942 | "def compute_cost(AL, Y):\n", 943 | " \"\"\"\n", 944 | " Implement the cost function defined by equation (7).\n", 945 | "\n", 946 | " Arguments:\n", 947 | " AL -- probability vector corresponding to your label predictions, shape (1, number of examples)\n", 948 | " Y -- true \"label\" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)\n", 949 | "\n", 950 | " Returns:\n", 951 | " cost -- cross-entropy cost\n", 952 | " \"\"\"\n", 953 | " \n", 954 | " m = Y.shape[1]\n", 955 | "\n", 956 | " # Compute loss from aL and y.\n", 957 | " # (≈ 1 lines of code).\n", 958 | " \n", 959 | " # YOUR CODE STARTS HERE\n", 960 | " cost = - np.mean(Y * np.log(AL)+ (1 - Y) * np.log(1 - AL))\n", 961 | " # YOUR CODE ENDS HERE\n", 962 | " \n", 963 | " cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).\n", 964 | "\n", 965 | " \n", 966 | " return cost" 967 | ] 968 | }, 969 | { 970 | "cell_type": "code", 971 | "execution_count": 32, 972 | "metadata": { 973 | "deletable": false, 974 | "editable": false, 975 | "nbgrader": { 976 | "cell_type": "code", 977 | "checksum": "913bc99f9f1380196c0f88d82d1af893", 978 | "grade": true, 979 | "grade_id": "cell-e82b9dd1fa6e970b", 980 | "locked": true, 981 | "points": 10, 982 | "schema_version": 3, 983 | "solution": false, 984 | "task": false 985 | } 986 | }, 987 | "outputs": [ 988 | { 989 | "name": "stdout", 990 | "output_type": "stream", 991 | "text": [ 992 | "Cost: 0.2797765635793423\n", 993 | "\u001b[92m All tests passed.\n" 994 | ] 995 | } 996 | ], 997 | "source": [ 998 | "t_Y, t_AL = compute_cost_test_case()\n", 999 | "t_cost = compute_cost(t_AL, t_Y)\n", 1000 | "\n", 1001 | "print(\"Cost: \" + str(t_cost))\n", 1002 | "\n", 1003 | "compute_cost_test(compute_cost)" 1004 | ] 1005 | }, 1006 | { 1007 | "cell_type": "markdown", 1008 | "metadata": {}, 1009 | "source": [ 1010 | "**Expected Output**:\n", 1011 | "\n", 1012 | "\n", 1013 | " \n", 1014 | " \n", 1015 | " \n", 1016 | " \n", 1017 | "
cost = 0.2797765635793422
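To see formula (7) in action on made-up numbers (three hand-picked examples, not the graded test case), here is a minimal sketch:

```python
import numpy as np

Y  = np.array([[1, 0, 1]])          # made-up labels
AL = np.array([[0.9, 0.1, 0.8]])    # made-up predicted probabilities

cost = -np.mean(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))
print(cost)   # ≈ 0.1446: small, because every prediction is close to its label
```

The closer each prediction is to its label, the smaller each log term and hence the smaller the cost.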
" 1018 | ] 1019 | }, 1020 | { 1021 | "cell_type": "markdown", 1022 | "metadata": {}, 1023 | "source": [ 1024 | "\n", 1025 | "## 6 - Backward Propagation Module\n", 1026 | "\n", 1027 | "Just as you did for the forward propagation, you'll implement helper functions for backpropagation. Remember that backpropagation is used to calculate the gradient of the loss function with respect to the parameters. \n", 1028 | "\n", 1029 | "**Reminder**: \n", 1030 | "\n", 1031 | "
Figure 3: Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID
The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.
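For reference, the gradient formulas used below all come from applying the chain rule to $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (stated here only as a reminder):

$$\\frac{\\partial \\mathcal{L}}{\\partial W^{[l]}} = \\frac{\\partial \\mathcal{L}}{\\partial Z^{[l]}} \\frac{\\partial Z^{[l]}}{\\partial W^{[l]}}, \\qquad \\frac{\\partial \\mathcal{L}}{\\partial b^{[l]}} = \\frac{\\partial \\mathcal{L}}{\\partial Z^{[l]}} \\frac{\\partial Z^{[l]}}{\\partial b^{[l]}}, \\qquad \\frac{\\partial \\mathcal{L}}{\\partial A^{[l-1]}} = \\frac{\\partial \\mathcal{L}}{\\partial Z^{[l]}} \\frac{\\partial Z^{[l]}}{\\partial A^{[l-1]}}$$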
\n", 1032 | "\n", 1033 | "\n", 1034 | "\n", 1045 | "\n", 1046 | "Now, similarly to forward propagation, you're going to build the backward propagation in three steps:\n", 1047 | "1. LINEAR backward\n", 1048 | "2. LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation\n", 1049 | "3. [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)" 1050 | ] 1051 | }, 1052 | { 1053 | "cell_type": "markdown", 1054 | "metadata": {}, 1055 | "source": [ 1056 | "For the next exercise, you will need to remember that:\n", 1057 | "\n", 1058 | "- `b` is a matrix(np.ndarray) with 1 column and n rows, i.e: b = [[1.0], [2.0]] (remember that `b` is a constant)\n", 1059 | "- np.sum performs a sum over the elements of a ndarray\n", 1060 | "- axis=1 or axis=0 specify if the sum is carried out by rows or by columns respectively\n", 1061 | "- keepdims specifies if the original dimensions of the matrix must be kept.\n", 1062 | "- Look at the following example to clarify:" 1063 | ] 1064 | }, 1065 | { 1066 | "cell_type": "code", 1067 | "execution_count": 33, 1068 | "metadata": {}, 1069 | "outputs": [ 1070 | { 1071 | "name": "stdout", 1072 | "output_type": "stream", 1073 | "text": [ 1074 | "axis=1 and keepdims=True\n", 1075 | "[[3]\n", 1076 | " [7]]\n", 1077 | "axis=1 and keepdims=False\n", 1078 | "[3 7]\n", 1079 | "axis=0 and keepdims=True\n", 1080 | "[[4 6]]\n", 1081 | "axis=0 and keepdims=False\n", 1082 | "[4 6]\n" 1083 | ] 1084 | } 1085 | ], 1086 | "source": [ 1087 | "A = np.array([[1, 2], [3, 4]])\n", 1088 | "\n", 1089 | "print('axis=1 and keepdims=True')\n", 1090 | "print(np.sum(A, axis=1, keepdims=True))\n", 1091 | "print('axis=1 and keepdims=False')\n", 1092 | "print(np.sum(A, axis=1, keepdims=False))\n", 1093 | "print('axis=0 and keepdims=True')\n", 1094 | "print(np.sum(A, axis=0, keepdims=True))\n", 1095 | "print('axis=0 and keepdims=False')\n", 1096 | "print(np.sum(A, axis=0, keepdims=False))" 1097 | ] 1098 | }, 1099 | { 1100 | "cell_type": "markdown", 1101 | "metadata": {}, 1102 | "source": [ 1103 | "\n", 1104 | "### 6.1 - Linear Backward\n", 1105 | "\n", 1106 | "For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).\n", 1107 | "\n", 1108 | "Suppose you have already calculated the derivative $dZ^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.\n", 1109 | "\n", 1110 | "\n", 1111 | "
Figure 4
\n", 1112 | "\n", 1113 | "The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$.\n", 1114 | "\n", 1115 | "Here are the formulas you need:\n", 1116 | "$$ dW^{[l]} = \\frac{\\partial \\mathcal{J} }{\\partial W^{[l]}} = \\frac{1}{m} dZ^{[l]} A^{[l-1] T} \\tag{8}$$\n", 1117 | "$$ db^{[l]} = \\frac{\\partial \\mathcal{J} }{\\partial b^{[l]}} = \\frac{1}{m} \\sum_{i = 1}^{m} dZ^{[l](i)}\\tag{9}$$\n", 1118 | "$$ dA^{[l-1]} = \\frac{\\partial \\mathcal{L} }{\\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \\tag{10}$$\n", 1119 | "\n", 1120 | "\n", 1121 | "$A^{[l-1] T}$ is the transpose of $A^{[l-1]}$. " 1122 | ] 1123 | }, 1124 | { 1125 | "cell_type": "markdown", 1126 | "metadata": {}, 1127 | "source": [ 1128 | "\n", 1129 | "### Exercise 7 - linear_backward \n", 1130 | "\n", 1131 | "Use the 3 formulas above to implement `linear_backward()`.\n", 1132 | "\n", 1133 | "**Hint**:\n", 1134 | "\n", 1135 | "- In numpy you can get the transpose of an ndarray `A` using `A.T` or `A.transpose()`" 1136 | ] 1137 | }, 1138 | { 1139 | "cell_type": "code", 1140 | "execution_count": 36, 1141 | "metadata": { 1142 | "deletable": false, 1143 | "nbgrader": { 1144 | "cell_type": "code", 1145 | "checksum": "137d11e28068848079eb6c315a59f2be", 1146 | "grade": false, 1147 | "grade_id": "cell-418e156a9203fe72", 1148 | "locked": false, 1149 | "schema_version": 3, 1150 | "solution": true, 1151 | "task": false 1152 | } 1153 | }, 1154 | "outputs": [], 1155 | "source": [ 1156 | "# GRADED FUNCTION: linear_backward\n", 1157 | "\n", 1158 | "def linear_backward(dZ, cache):\n", 1159 | " \"\"\"\n", 1160 | " Implement the linear portion of backward propagation for a single layer (layer l)\n", 1161 | "\n", 1162 | " Arguments:\n", 1163 | " dZ -- Gradient of the cost with respect to the linear output (of current layer l)\n", 1164 | " cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer\n", 1165 | "\n", 1166 | " Returns:\n", 1167 | " dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n", 1168 | " dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n", 1169 | " db -- Gradient of the cost with respect to b (current layer l), same shape as b\n", 1170 | " \"\"\"\n", 1171 | " A_prev, W, b = cache\n", 1172 | " m = A_prev.shape[1]\n", 1173 | "\n", 1174 | " ### START CODE HERE ### (≈ 3 lines of code)\n", 1175 | " \n", 1176 | " # YOUR CODE STARTS HERE\n", 1177 | " dW = 1/m * dZ.dot(A_prev.T)\n", 1178 | " db = np.mean(dZ, axis=1, keepdims=True)\n", 1179 | " dA_prev = W.T.dot(dZ)\n", 1180 | " # YOUR CODE ENDS HERE\n", 1181 | " \n", 1182 | " return dA_prev, dW, db" 1183 | ] 1184 | }, 1185 | { 1186 | "cell_type": "code", 1187 | "execution_count": 37, 1188 | "metadata": { 1189 | "deletable": false, 1190 | "editable": false, 1191 | "nbgrader": { 1192 | "cell_type": "code", 1193 | "checksum": "35a1c64c59ad26318ab2f807acb9093c", 1194 | "grade": true, 1195 | "grade_id": "cell-b826650c7bd2a7ec", 1196 | "locked": true, 1197 | "points": 10, 1198 | "schema_version": 3, 1199 | "solution": false, 1200 | "task": false 1201 | } 1202 | }, 1203 | "outputs": [ 1204 | { 1205 | "name": "stdout", 1206 | "output_type": "stream", 1207 | "text": [ 1208 | "dA_prev: [[-1.15171336 0.06718465 -0.3204696 2.09812712]\n", 1209 | " [ 0.60345879 -3.72508701 5.81700741 -3.84326836]\n", 1210 | " [-0.4319552 -1.30987417 1.72354705 0.05070578]\n", 1211 | " [-0.38981415 0.60811244 -1.25938424 1.47191593]\n", 1212 | " 
[-2.52214926 2.67882552 -0.67947465 1.48119548]]\n", 1213 | "dW: [[ 0.07313866 -0.0976715 -0.87585828 0.73763362 0.00785716]\n", 1214 | " [ 0.85508818 0.37530413 -0.59912655 0.71278189 -0.58931808]\n", 1215 | " [ 0.97913304 -0.24376494 -0.08839671 0.55151192 -0.10290907]]\n", 1216 | "db: [[-0.14713786]\n", 1217 | " [-0.11313155]\n", 1218 | " [-0.13209101]]\n", 1219 | "\u001b[92m All tests passed.\n" 1220 | ] 1221 | } 1222 | ], 1223 | "source": [ 1224 | "t_dZ, t_linear_cache = linear_backward_test_case()\n", 1225 | "t_dA_prev, t_dW, t_db = linear_backward(t_dZ, t_linear_cache)\n", 1226 | "\n", 1227 | "print(\"dA_prev: \" + str(t_dA_prev))\n", 1228 | "print(\"dW: \" + str(t_dW))\n", 1229 | "print(\"db: \" + str(t_db))\n", 1230 | "\n", 1231 | "linear_backward_test(linear_backward)" 1232 | ] 1233 | }, 1234 | { 1235 | "cell_type": "markdown", 1236 | "metadata": {}, 1237 | "source": [ 1238 | "**Expected Output**:\n", 1239 | "```\n", 1240 | "dA_prev: [[-1.15171336 0.06718465 -0.3204696 2.09812712]\n", 1241 | " [ 0.60345879 -3.72508701 5.81700741 -3.84326836]\n", 1242 | " [-0.4319552 -1.30987417 1.72354705 0.05070578]\n", 1243 | " [-0.38981415 0.60811244 -1.25938424 1.47191593]\n", 1244 | " [-2.52214926 2.67882552 -0.67947465 1.48119548]]\n", 1245 | "dW: [[ 0.07313866 -0.0976715 -0.87585828 0.73763362 0.00785716]\n", 1246 | " [ 0.85508818 0.37530413 -0.59912655 0.71278189 -0.58931808]\n", 1247 | " [ 0.97913304 -0.24376494 -0.08839671 0.55151192 -0.10290907]]\n", 1248 | "db: [[-0.14713786]\n", 1249 | " [-0.11313155]\n", 1250 | " [-0.13209101]]\n", 1251 | " ```" 1252 | ] 1253 | }, 1254 | { 1255 | "cell_type": "markdown", 1256 | "metadata": {}, 1257 | "source": [ 1258 | "\n", 1259 | "### 6.2 - Linear-Activation Backward\n", 1260 | "\n", 1261 | "Next, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**. \n", 1262 | "\n", 1263 | "To help you implement `linear_activation_backward`, two backward functions have been provided:\n", 1264 | "- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:\n", 1265 | "\n", 1266 | "```python\n", 1267 | "dZ = sigmoid_backward(dA, activation_cache)\n", 1268 | "```\n", 1269 | "\n", 1270 | "- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:\n", 1271 | "\n", 1272 | "```python\n", 1273 | "dZ = relu_backward(dA, activation_cache)\n", 1274 | "```\n", 1275 | "\n", 1276 | "If $g(.)$ is the activation function, \n", 1277 | "`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}). \\tag{11}$$ \n", 1278 | "\n", 1279 | "\n", 1280 | "### Exercise 8 - linear_activation_backward\n", 1281 | "\n", 1282 | "Implement the backpropagation for the *LINEAR->ACTIVATION* layer." 
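Before writing it, here is a toy illustration (made-up numbers, assuming the standard ReLU derivative that `relu_backward` implements) of the first of the two steps this exercise chains together; the resulting `dZ` would then be passed to `linear_backward` from Exercise 7:

```python
import numpy as np

Z  = np.array([[ 1.5, -2.0]])   # pre-activations stored in the activation cache
dA = np.array([[ 0.7,  0.3]])   # upstream gradient for this layer

dZ = dA * (Z > 0)               # ReLU passes gradient only where Z > 0
print(dZ)                       # [[0.7 0. ]]
```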
1283 | ] 1284 | }, 1285 | { 1286 | "cell_type": "code", 1287 | "execution_count": 38, 1288 | "metadata": { 1289 | "deletable": false, 1290 | "nbgrader": { 1291 | "cell_type": "code", 1292 | "checksum": "3497ac4aa36a57278edbfb84a44e1d72", 1293 | "grade": false, 1294 | "grade_id": "cell-6c59263d69168c17", 1295 | "locked": false, 1296 | "schema_version": 3, 1297 | "solution": true, 1298 | "task": false 1299 | } 1300 | }, 1301 | "outputs": [], 1302 | "source": [ 1303 | "# GRADED FUNCTION: linear_activation_backward\n", 1304 | "\n", 1305 | "def linear_activation_backward(dA, cache, activation):\n", 1306 | " \"\"\"\n", 1307 | " Implement the backward propagation for the LINEAR->ACTIVATION layer.\n", 1308 | " \n", 1309 | " Arguments:\n", 1310 | " dA -- post-activation gradient for current layer l \n", 1311 | " cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently\n", 1312 | " activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n", 1313 | " \n", 1314 | " Returns:\n", 1315 | " dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n", 1316 | " dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n", 1317 | " db -- Gradient of the cost with respect to b (current layer l), same shape as b\n", 1318 | " \"\"\"\n", 1319 | " linear_cache, activation_cache = cache\n", 1320 | " \n", 1321 | " if activation == \"relu\":\n", 1322 | " #(≈ 2 lines of code)\n", 1323 | " \n", 1324 | " # YOUR CODE STARTS HERE\n", 1325 | " dz = relu_backward(dA, activation_cache)\n", 1326 | " dA_prev, dW, db = linear_backward(dz, linear_cache)\n", 1327 | " # YOUR CODE ENDS HERE\n", 1328 | " \n", 1329 | " elif activation == \"sigmoid\":\n", 1330 | " #(≈ 2 lines of code)\n", 1331 | " \n", 1332 | " # YOUR CODE STARTS HERE\n", 1333 | " dz = sigmoid_backward(dA, activation_cache)\n", 1334 | " dA_prev, dW, db = linear_backward(dz, linear_cache)\n", 1335 | " # YOUR CODE ENDS HERE\n", 1336 | " \n", 1337 | " return dA_prev, dW, db" 1338 | ] 1339 | }, 1340 | { 1341 | "cell_type": "code", 1342 | "execution_count": 39, 1343 | "metadata": { 1344 | "deletable": false, 1345 | "editable": false, 1346 | "nbgrader": { 1347 | "cell_type": "code", 1348 | "checksum": "2aa3ea709e212c8d39caf189f86b5866", 1349 | "grade": true, 1350 | "grade_id": "cell-d88535fde29cd1d6", 1351 | "locked": true, 1352 | "points": 10, 1353 | "schema_version": 3, 1354 | "solution": false, 1355 | "task": false 1356 | } 1357 | }, 1358 | "outputs": [ 1359 | { 1360 | "name": "stdout", 1361 | "output_type": "stream", 1362 | "text": [ 1363 | "With sigmoid: dA_prev = [[ 0.11017994 0.01105339]\n", 1364 | " [ 0.09466817 0.00949723]\n", 1365 | " [-0.05743092 -0.00576154]]\n", 1366 | "With sigmoid: dW = [[ 0.10266786 0.09778551 -0.01968084]]\n", 1367 | "With sigmoid: db = [[-0.05729622]]\n", 1368 | "With relu: dA_prev = [[ 0.44090989 0. ]\n", 1369 | " [ 0.37883606 0. ]\n", 1370 | " [-0.2298228 0. 
]]\n", 1371 | "With relu: dW = [[ 0.44513824 0.37371418 -0.10478989]]\n", 1372 | "With relu: db = [[-0.20837892]]\n", 1373 | "\u001b[92m All tests passed.\n" 1374 | ] 1375 | } 1376 | ], 1377 | "source": [ 1378 | "t_dAL, t_linear_activation_cache = linear_activation_backward_test_case()\n", 1379 | "\n", 1380 | "t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = \"sigmoid\")\n", 1381 | "print(\"With sigmoid: dA_prev = \" + str(t_dA_prev))\n", 1382 | "print(\"With sigmoid: dW = \" + str(t_dW))\n", 1383 | "print(\"With sigmoid: db = \" + str(t_db))\n", 1384 | "\n", 1385 | "t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = \"relu\")\n", 1386 | "print(\"With relu: dA_prev = \" + str(t_dA_prev))\n", 1387 | "print(\"With relu: dW = \" + str(t_dW))\n", 1388 | "print(\"With relu: db = \" + str(t_db))\n", 1389 | "\n", 1390 | "linear_activation_backward_test(linear_activation_backward)" 1391 | ] 1392 | }, 1393 | { 1394 | "cell_type": "markdown", 1395 | "metadata": {}, 1396 | "source": [ 1397 | "**Expected output:**\n", 1398 | "\n", 1399 | "```\n", 1400 | "With sigmoid: dA_prev = [[ 0.11017994 0.01105339]\n", 1401 | " [ 0.09466817 0.00949723]\n", 1402 | " [-0.05743092 -0.00576154]]\n", 1403 | "With sigmoid: dW = [[ 0.10266786 0.09778551 -0.01968084]]\n", 1404 | "With sigmoid: db = [[-0.05729622]]\n", 1405 | "With relu: dA_prev = [[ 0.44090989 0. ]\n", 1406 | " [ 0.37883606 0. ]\n", 1407 | " [-0.2298228 0. ]]\n", 1408 | "With relu: dW = [[ 0.44513824 0.37371418 -0.10478989]]\n", 1409 | "With relu: db = [[-0.20837892]]\n", 1410 | "```" 1411 | ] 1412 | }, 1413 | { 1414 | "cell_type": "markdown", 1415 | "metadata": {}, 1416 | "source": [ 1417 | "\n", 1418 | "### 6.3 - L-Model Backward \n", 1419 | "\n", 1420 | "Now you will implement the backward function for the whole network! \n", 1421 | "\n", 1422 | "Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you'll use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you'll iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. \n", 1423 | "\n", 1424 | "\n", 1425 | "\n", 1426 | "
Figure 5: Backward pass
\n", 1427 | "\n", 1428 | "**Initializing backpropagation**:\n", 1429 | "\n", 1430 | "To backpropagate through this network, you know that the output is: \n", 1431 | "$A^{[L]} = \\sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \\frac{\\partial \\mathcal{L}}{\\partial A^{[L]}}$.\n", 1432 | "To do so, use this formula (derived using calculus which, again, you don't need in-depth knowledge of!):\n", 1433 | "```python\n", 1434 | "dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL\n", 1435 | "```\n", 1436 | "\n", 1437 | "You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). \n", 1438 | "\n", 1439 | "After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : \n", 1440 | "\n", 1441 | "$$grads[\"dW\" + str(l)] = dW^{[l]}\\tag{15} $$\n", 1442 | "\n", 1443 | "For example, for $l=3$ this would store $dW^{[l]}$ in `grads[\"dW3\"]`.\n", 1444 | "\n", 1445 | "\n", 1446 | "### Exercise 9 - L_model_backward\n", 1447 | "\n", 1448 | "Implement backpropagation for the *[LINEAR->RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID* model." 1449 | ] 1450 | }, 1451 | { 1452 | "cell_type": "code", 1453 | "execution_count": 103, 1454 | "metadata": { 1455 | "deletable": false, 1456 | "nbgrader": { 1457 | "cell_type": "code", 1458 | "checksum": "d3e23a2b5f3b33e264a122b3c4b0d760", 1459 | "grade": false, 1460 | "grade_id": "cell-9eec96b6d83ff809", 1461 | "locked": false, 1462 | "schema_version": 3, 1463 | "solution": true, 1464 | "task": false 1465 | } 1466 | }, 1467 | "outputs": [], 1468 | "source": [ 1469 | "# GRADED FUNCTION: L_model_backward\n", 1470 | "\n", 1471 | "def L_model_backward(AL, Y, caches):\n", 1472 | " \"\"\"\n", 1473 | " Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group\n", 1474 | " \n", 1475 | " Arguments:\n", 1476 | " AL -- probability vector, output of the forward propagation (L_model_forward())\n", 1477 | " Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat)\n", 1478 | " caches -- list of caches containing:\n", 1479 | " every cache of linear_activation_forward() with \"relu\" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)\n", 1480 | " the cache of linear_activation_forward() with \"sigmoid\" (it's caches[L-1])\n", 1481 | " \n", 1482 | " Returns:\n", 1483 | " grads -- A dictionary with the gradients\n", 1484 | " grads[\"dA\" + str(l)] = ... \n", 1485 | " grads[\"dW\" + str(l)] = ...\n", 1486 | " grads[\"db\" + str(l)] = ... \n", 1487 | " \"\"\"\n", 1488 | " grads = {}\n", 1489 | " L = len(caches) # the number of layers\n", 1490 | " m = AL.shape[1]\n", 1491 | " Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL\n", 1492 | " \n", 1493 | " # Initializing the backpropagation\n", 1494 | " #(1 line of code)\n", 1495 | " \n", 1496 | " # YOUR CODE STARTS HERE\n", 1497 | " dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))\n", 1498 | " # YOUR CODE ENDS HERE\n", 1499 | " \n", 1500 | " # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: \"dAL, current_cache\". Outputs: \"grads[\"dAL-1\"], grads[\"dWL\"], grads[\"dbL\"]\n", 1501 | " #(approx. 
5 lines)\n", 1502 | " \n", 1503 | " # YOUR CODE STARTS HERE\n", 1504 | " current_cache = caches[L-1]\n", 1505 | " grads[\"dA\" + str(L-1)], grads[\"dW\" + str(L)], grads[\"db\" + str(L)] = linear_activation_backward(dAL, current_cache, \"sigmoid\")\n", 1506 | " \n", 1507 | " # YOUR CODE ENDS HERE\n", 1508 | " \n", 1509 | " # Loop from l=L-2 to l=0\n", 1510 | " for l in reversed(range(L-1)):\n", 1511 | " # lth layer: (RELU -> LINEAR) gradients.\n", 1512 | " # Inputs: \"grads[\"dA\" + str(l + 1)], current_cache\". Outputs: \"grads[\"dA\" + str(l)] , grads[\"dW\" + str(l + 1)] , grads[\"db\" + str(l + 1)] \n", 1513 | " #(approx. 5 lines)\n", 1514 | " \n", 1515 | " # YOUR CODE STARTS HERE\n", 1516 | " current_cache = caches[l]\n", 1517 | " grads[\"dA\" + str(l)], grads[\"dW\" + str(l + 1)], grads[\"db\" + str(l + 1)] = linear_activation_backward(grads[\"dA\" + str(l + 1)], current_cache, \"relu\")\n", 1518 | " # YOUR CODE ENDS HERE\n", 1519 | "\n", 1520 | " return grads" 1521 | ] 1522 | }, 1523 | { 1524 | "cell_type": "code", 1525 | "execution_count": 104, 1526 | "metadata": { 1527 | "deletable": false, 1528 | "editable": false, 1529 | "nbgrader": { 1530 | "cell_type": "code", 1531 | "checksum": "c38a79bcdf2de4284faabd1b758812b5", 1532 | "grade": true, 1533 | "grade_id": "cell-7e61e6a1bfaa382d", 1534 | "locked": true, 1535 | "points": 10, 1536 | "schema_version": 3, 1537 | "solution": false, 1538 | "task": false 1539 | } 1540 | }, 1541 | "outputs": [ 1542 | { 1543 | "name": "stdout", 1544 | "output_type": "stream", 1545 | "text": [ 1546 | "dA0 = [[ 0. 0.52257901]\n", 1547 | " [ 0. -0.3269206 ]\n", 1548 | " [ 0. -0.32070404]\n", 1549 | " [ 0. -0.74079187]]\n", 1550 | "dA1 = [[ 0.12913162 -0.44014127]\n", 1551 | " [-0.14175655 0.48317296]\n", 1552 | " [ 0.01663708 -0.05670698]]\n", 1553 | "dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167]\n", 1554 | " [0. 0. 0. 0. ]\n", 1555 | " [0.05283652 0.01005865 0.01777766 0.0135308 ]]\n", 1556 | "dW2 = [[-0.39202432 -0.13325855 -0.04601089]]\n", 1557 | "db1 = [[-0.22007063]\n", 1558 | " [ 0. ]\n", 1559 | " [-0.02835349]]\n", 1560 | "db2 = [[0.15187861]]\n", 1561 | "\u001b[92m All tests passed.\n" 1562 | ] 1563 | } 1564 | ], 1565 | "source": [ 1566 | "t_AL, t_Y_assess, t_caches = L_model_backward_test_case()\n", 1567 | "grads = L_model_backward(t_AL, t_Y_assess, t_caches)\n", 1568 | "\n", 1569 | "print(\"dA0 = \" + str(grads['dA0']))\n", 1570 | "print(\"dA1 = \" + str(grads['dA1']))\n", 1571 | "print(\"dW1 = \" + str(grads['dW1']))\n", 1572 | "print(\"dW2 = \" + str(grads['dW2']))\n", 1573 | "print(\"db1 = \" + str(grads['db1']))\n", 1574 | "print(\"db2 = \" + str(grads['db2']))\n", 1575 | "\n", 1576 | "L_model_backward_test(L_model_backward)" 1577 | ] 1578 | }, 1579 | { 1580 | "cell_type": "markdown", 1581 | "metadata": {}, 1582 | "source": [ 1583 | "**Expected output:**\n", 1584 | "\n", 1585 | "```\n", 1586 | "dA0 = [[ 0. 0.52257901]\n", 1587 | " [ 0. -0.3269206 ]\n", 1588 | " [ 0. -0.32070404]\n", 1589 | " [ 0. -0.74079187]]\n", 1590 | "dA1 = [[ 0.12913162 -0.44014127]\n", 1591 | " [-0.14175655 0.48317296]\n", 1592 | " [ 0.01663708 -0.05670698]]\n", 1593 | "dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167]\n", 1594 | " [0. 0. 0. 0. ]\n", 1595 | " [0.05283652 0.01005865 0.01777766 0.0135308 ]]\n", 1596 | "dW2 = [[-0.39202432 -0.13325855 -0.04601089]]\n", 1597 | "db1 = [[-0.22007063]\n", 1598 | " [ 0. 
]\n", 1599 | " [-0.02835349]]\n", 1600 | "db2 = [[0.15187861]]\n", 1601 | "```" 1602 | ] 1603 | }, 1604 | { 1605 | "cell_type": "markdown", 1606 | "metadata": {}, 1607 | "source": [ 1608 | "\n", 1609 | "### 6.4 - Update Parameters\n", 1610 | "\n", 1611 | "In this section, you'll update the parameters of the model, using gradient descent: \n", 1612 | "\n", 1613 | "$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{16}$$\n", 1614 | "$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{17}$$\n", 1615 | "\n", 1616 | "where $\\alpha$ is the learning rate. \n", 1617 | "\n", 1618 | "After computing the updated parameters, store them in the parameters dictionary. " 1619 | ] 1620 | }, 1621 | { 1622 | "cell_type": "markdown", 1623 | "metadata": {}, 1624 | "source": [ 1625 | "\n", 1626 | "### Exercise 10 - update_parameters\n", 1627 | "\n", 1628 | "Implement `update_parameters()` to update your parameters using gradient descent.\n", 1629 | "\n", 1630 | "**Instructions**:\n", 1631 | "Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$. " 1632 | ] 1633 | }, 1634 | { 1635 | "cell_type": "code", 1636 | "execution_count": 107, 1637 | "metadata": { 1638 | "deletable": false, 1639 | "nbgrader": { 1640 | "cell_type": "code", 1641 | "checksum": "a497191ef80b70967006d707cac91c20", 1642 | "grade": false, 1643 | "grade_id": "cell-3cb535f16aba3339", 1644 | "locked": false, 1645 | "schema_version": 3, 1646 | "solution": true, 1647 | "task": false 1648 | } 1649 | }, 1650 | "outputs": [], 1651 | "source": [ 1652 | "# GRADED FUNCTION: update_parameters\n", 1653 | "\n", 1654 | "def update_parameters(params, grads, learning_rate):\n", 1655 | " \"\"\"\n", 1656 | " Update parameters using gradient descent\n", 1657 | " \n", 1658 | " Arguments:\n", 1659 | " params -- python dictionary containing your parameters \n", 1660 | " grads -- python dictionary containing your gradients, output of L_model_backward\n", 1661 | " \n", 1662 | " Returns:\n", 1663 | " parameters -- python dictionary containing your updated parameters \n", 1664 | " parameters[\"W\" + str(l)] = ... \n", 1665 | " parameters[\"b\" + str(l)] = ...\n", 1666 | " \"\"\"\n", 1667 | " parameters = params.copy()\n", 1668 | " L = len(parameters) // 2 # number of layers in the neural network\n", 1669 | "\n", 1670 | " # Update rule for each parameter. 
Use a for loop.\n", 1671 | " #(≈ 2 lines of code)\n", 1672 | " for l in range(L):\n", 1673 | " # YOUR CODE STARTS HERE\n", 1674 | " parameters[\"W\" + str(l + 1)] = parameters[\"W\" + str(l + 1)] - learning_rate * grads[\"dW\" + str(l + 1)]\n", 1675 | " parameters[\"b\" + str(l + 1)] = parameters[\"b\" + str(l + 1)] - learning_rate * grads[\"db\" + str(l + 1)]\n", 1676 | " # YOUR CODE ENDS HERE\n", 1677 | " return parameters" 1678 | ] 1679 | }, 1680 | { 1681 | "cell_type": "code", 1682 | "execution_count": 108, 1683 | "metadata": { 1684 | "deletable": false, 1685 | "editable": false, 1686 | "nbgrader": { 1687 | "cell_type": "code", 1688 | "checksum": "e0606cae114ec47754dc5383bc3dcdea", 1689 | "grade": true, 1690 | "grade_id": "cell-139de12ee845c39c", 1691 | "locked": true, 1692 | "points": 10, 1693 | "schema_version": 3, 1694 | "solution": false, 1695 | "task": false 1696 | } 1697 | }, 1698 | "outputs": [ 1699 | { 1700 | "name": "stdout", 1701 | "output_type": "stream", 1702 | "text": [ 1703 | "W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n", 1704 | " [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n", 1705 | " [-1.0535704 -0.86128581 0.68284052 2.20374577]]\n", 1706 | "b1 = [[-0.04659241]\n", 1707 | " [-1.28888275]\n", 1708 | " [ 0.53405496]]\n", 1709 | "W2 = [[-0.55569196 0.0354055 1.32964895]]\n", 1710 | "b2 = [[-0.84610769]]\n", 1711 | "\u001b[92m All tests passed.\n" 1712 | ] 1713 | } 1714 | ], 1715 | "source": [ 1716 | "t_parameters, grads = update_parameters_test_case()\n", 1717 | "t_parameters = update_parameters(t_parameters, grads, 0.1)\n", 1718 | "\n", 1719 | "print (\"W1 = \"+ str(t_parameters[\"W1\"]))\n", 1720 | "print (\"b1 = \"+ str(t_parameters[\"b1\"]))\n", 1721 | "print (\"W2 = \"+ str(t_parameters[\"W2\"]))\n", 1722 | "print (\"b2 = \"+ str(t_parameters[\"b2\"]))\n", 1723 | "\n", 1724 | "update_parameters_test(update_parameters)" 1725 | ] 1726 | }, 1727 | { 1728 | "cell_type": "markdown", 1729 | "metadata": {}, 1730 | "source": [ 1731 | "**Expected output:**\n", 1732 | "\n", 1733 | "```\n", 1734 | "W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n", 1735 | " [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n", 1736 | " [-1.0535704 -0.86128581 0.68284052 2.20374577]]\n", 1737 | "b1 = [[-0.04659241]\n", 1738 | " [-1.28888275]\n", 1739 | " [ 0.53405496]]\n", 1740 | "W2 = [[-0.55569196 0.0354055 1.32964895]]\n", 1741 | "b2 = [[-0.84610769]]\n", 1742 | "```" 1743 | ] 1744 | }, 1745 | { 1746 | "cell_type": "markdown", 1747 | "metadata": {}, 1748 | "source": [ 1749 | "### Congratulations! \n", 1750 | "\n", 1751 | "You've just implemented all the functions required for building a deep neural network, including: \n", 1752 | "\n", 1753 | "- Using non-linear units improve your model\n", 1754 | "- Building a deeper neural network (with more than 1 hidden layer)\n", 1755 | "- Implementing an easy-to-use neural network class\n", 1756 | "\n", 1757 | "This was indeed a long assignment, but the next part of the assignment is easier. ;) \n", 1758 | "\n", 1759 | "In the next assignment, you'll be putting all these together to build two models:\n", 1760 | "\n", 1761 | "- A two-layer neural network\n", 1762 | "- An L-layer neural network\n", 1763 | "\n", 1764 | "You will in fact use these models to classify cat vs non-cat images! (Meow!) Great work and see you next time. 
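In the next assignment these helpers get wired together into full models; purely as a preview, here is a minimal training-loop sketch. It assumes the functions built in this notebook (`initialize_parameters_deep`, `L_model_forward`, `compute_cost`, `L_model_backward`, `update_parameters`) with the signatures used above, and the hyperparameter defaults shown are placeholders, not the graded settings.

```python
# Sketch only (not the graded implementation from the next assignment).
def train_sketch(X, Y, layers_dims, learning_rate=0.0075, num_iterations=2500):
    parameters = initialize_parameters_deep(layers_dims)      # assumed helper defined earlier
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)           # [LINEAR->RELU]*(L-1) -> LINEAR->SIGMOID
        cost = compute_cost(AL, Y)                            # cross-entropy cost
        grads = L_model_backward(AL, Y, caches)               # gradients for every layer
        parameters = update_parameters(parameters, grads, learning_rate)  # gradient descent step
    return parameters
```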
" 1765 | ] 1766 | } 1767 | ], 1768 | "metadata": { 1769 | "coursera": { 1770 | "course_slug": "neural-networks-deep-learning", 1771 | "graded_item_id": "c4HO0", 1772 | "launcher_item_id": "lSYZM" 1773 | }, 1774 | "kernelspec": { 1775 | "display_name": "Python 3", 1776 | "language": "python", 1777 | "name": "python3" 1778 | }, 1779 | "language_info": { 1780 | "codemirror_mode": { 1781 | "name": "ipython", 1782 | "version": 3 1783 | }, 1784 | "file_extension": ".py", 1785 | "mimetype": "text/x-python", 1786 | "name": "python", 1787 | "nbconvert_exporter": "python", 1788 | "pygments_lexer": "ipython3", 1789 | "version": "3.9.7" 1790 | } 1791 | }, 1792 | "nbformat": 4, 1793 | "nbformat_minor": 2 1794 | } 1795 | -------------------------------------------------------------------------------- /Gradient_Checking.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Gradient Checking\n", 8 | "\n", 9 | "Welcome to the final assignment for this week! In this assignment you'll be implementing gradient checking.\n", 10 | "\n", 11 | "By the end of this notebook, you'll be able to:\n", 12 | "\n", 13 | "Implement gradient checking to verify the accuracy of your backprop implementation.\n", 14 | "\n", 15 | "## Important Note on Submission to the AutoGrader\n", 16 | "\n", 17 | "Before submitting your assignment to the AutoGrader, please make sure you are not doing the following:\n", 18 | "\n", 19 | "1. You have not added any _extra_ `print` statement(s) in the assignment.\n", 20 | "2. You have not added any _extra_ code cell(s) in the assignment.\n", 21 | "3. You have not changed any of the function parameters.\n", 22 | "4. You are not using any global variables inside your graded exercises. Unless specifically instructed to do so, please refrain from it and use the local variables instead.\n", 23 | "5. You are not changing the assignment code where it is not required, like creating _extra_ variables.\n", 24 | "\n", 25 | "If you do any of the following, you will get something like, `Grader not found` (or similarly unexpected) error upon submitting your assignment. Before asking for help/debugging the errors in your assignment, check for these first. If this is the case, and you don't remember the changes you have made, you can get a fresh copy of the assignment by following these [instructions](https://www.coursera.org/learn/deep-neural-network/supplement/QWEnZ/h-ow-to-refresh-your-workspace)." 
26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "## Table of Contents\n", 33 | "- [1 - Packages](#1)\n", 34 | "- [2 - Problem Statement](#2)\n", 35 | "- [3 - How does Gradient Checking work?](#3)\n", 36 | "- [4 - 1-Dimensional Gradient Checking](#4)\n", 37 | " - [Exercise 1 - forward_propagation](#ex-1)\n", 38 | " - [Exercise 2 - backward_propagation](#ex-2)\n", 39 | " - [Exercise 3 - gradient_check](#ex-3)\n", 40 | "- [5 - N-Dimensional Gradient Checking](#5)\n", 41 | " - [Exercise 4 - gradient_check_n](#ex-4)" 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "metadata": {}, 47 | "source": [ 48 | "\n", 49 | "## 1 - Packages" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": 1, 55 | "metadata": {}, 56 | "outputs": [], 57 | "source": [ 58 | "import numpy as np\n", 59 | "from testCases import *\n", 60 | "from public_tests import *\n", 61 | "from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector\n", 62 | "\n", 63 | "%load_ext autoreload\n", 64 | "%autoreload 2" 65 | ] 66 | }, 67 | { 68 | "cell_type": "markdown", 69 | "metadata": {}, 70 | "source": [ 71 | "\n", 72 | "## 2 - Problem Statement\n", 73 | "\n", 74 | "You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.\n", 75 | "\n", 76 | "You already know that backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, \"Give me proof that your backpropagation is actually working!\" To give this reassurance, you are going to use \"gradient checking.\"\n", 77 | "\n", 78 | "Let's do it!" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": {}, 84 | "source": [ 85 | "\n", 86 | "## 3 - How does Gradient Checking work?\n", 87 | "Backpropagation computes the gradients $\\frac{\\partial J}{\\partial \\theta}$, where $\\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.\n", 88 | "\n", 89 | "Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. 
Thus, you can use your code for computing $J$ to verify the code for computing $\\frac{\\partial J}{\\partial \\theta}$.\n", 90 | "\n", 91 | "Let's look back at the definition of a derivative (or gradient):$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\n", 92 | "\n", 93 | "If you're not familiar with the \"$\\displaystyle \\lim_{\\varepsilon \\to 0}$\" notation, it's just a way of saying \"when $\\varepsilon$ is really, really small.\"\n", 94 | "\n", 95 | "You know the following:\n", 96 | "\n", 97 | "$\\frac{\\partial J}{\\partial \\theta}$ is what you want to make sure you're computing correctly.\n", 98 | "You can compute $J(\\theta + \\varepsilon)$ and $J(\\theta - \\varepsilon)$ (in the case that $\\theta$ is a real number), since you're confident your implementation for $J$ is correct.\n", 99 | "Let's use equation (1) and a small value for $\\varepsilon$ to convince your CEO that your code for computing $\\frac{\\partial J}{\\partial \\theta}$ is correct!" 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "metadata": {}, 105 | "source": [ 106 | "\n", 107 | "## 4 - 1-Dimensional Gradient Checking\n", 108 | "\n", 109 | "Consider a 1D linear function $J(\\theta) = \\theta x$. The model contains only a single real-valued parameter $\\theta$, and takes $x$ as input.\n", 110 | "\n", 111 | "You will implement code to compute $J(.)$ and its derivative $\\frac{\\partial J}{\\partial \\theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. \n", 112 | "\n", 113 | "\n", 114 | "
Figure 1: 1D linear model
\n", 115 | "\n", 116 | "The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ (\"forward propagation\"). Then compute the derivative $\\frac{\\partial J}{\\partial \\theta}$ (\"backward propagation\"). \n", 117 | "\n", 118 | "\n", 119 | "### Exercise 1 - forward_propagation\n", 120 | "\n", 121 | "Implement `forward propagation`. For this simple function compute $J(.)$" 122 | ] 123 | }, 124 | { 125 | "cell_type": "code", 126 | "execution_count": 3, 127 | "metadata": { 128 | "deletable": false, 129 | "nbgrader": { 130 | "cell_type": "code", 131 | "checksum": "0f934d7a5ec9e6a41fc9ece5ec6a07fa", 132 | "grade": false, 133 | "grade_id": "cell-a4be88c5c0419ab7", 134 | "locked": false, 135 | "schema_version": 3, 136 | "solution": true, 137 | "task": false 138 | } 139 | }, 140 | "outputs": [], 141 | "source": [ 142 | "# GRADED FUNCTION: forward_propagation\n", 143 | "\n", 144 | "def forward_propagation(x, theta):\n", 145 | " \"\"\"\n", 146 | " Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)\n", 147 | " \n", 148 | " Arguments:\n", 149 | " x -- a real-valued input\n", 150 | " theta -- our parameter, a real number as well\n", 151 | " \n", 152 | " Returns:\n", 153 | " J -- the value of function J, computed using the formula J(theta) = theta * x\n", 154 | " \"\"\"\n", 155 | " \n", 156 | " # (approx. 1 line)\n", 157 | " # J = \n", 158 | " # YOUR CODE STARTS HERE\n", 159 | " J = np.dot(theta,x)\n", 160 | " \n", 161 | " # YOUR CODE ENDS HERE\n", 162 | " \n", 163 | " return J" 164 | ] 165 | }, 166 | { 167 | "cell_type": "code", 168 | "execution_count": 4, 169 | "metadata": { 170 | "deletable": false, 171 | "editable": false, 172 | "nbgrader": { 173 | "cell_type": "code", 174 | "checksum": "c775107f8d8491592913f1991d0fc3da", 175 | "grade": true, 176 | "grade_id": "cell-805a7fd19d554221", 177 | "locked": true, 178 | "points": 10, 179 | "schema_version": 3, 180 | "solution": false, 181 | "task": false 182 | } 183 | }, 184 | "outputs": [ 185 | { 186 | "name": "stdout", 187 | "output_type": "stream", 188 | "text": [ 189 | "J = 8\n", 190 | "\u001b[92m All tests passed.\n" 191 | ] 192 | } 193 | ], 194 | "source": [ 195 | "x, theta = 2, 4\n", 196 | "J = forward_propagation(x, theta)\n", 197 | "print (\"J = \" + str(J))\n", 198 | "forward_propagation_test(forward_propagation)" 199 | ] 200 | }, 201 | { 202 | "cell_type": "markdown", 203 | "metadata": {}, 204 | "source": [ 205 | "\n", 206 | "### Exercise 2 - backward_propagation\n", 207 | "\n", 208 | "Now, implement the `backward propagation` step (derivative computation) of Figure 1. That is, compute the derivative of $J(\\theta) = \\theta x$ with respect to $\\theta$. To save you from doing the calculus, you should get $dtheta = \\frac { \\partial J }{ \\partial \\theta} = x$." 
209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": 5, 214 | "metadata": { 215 | "deletable": false, 216 | "nbgrader": { 217 | "cell_type": "code", 218 | "checksum": "7315e45824efc41770654b46c64c1c14", 219 | "grade": false, 220 | "grade_id": "cell-c06a1275399b210f", 221 | "locked": false, 222 | "schema_version": 3, 223 | "solution": true, 224 | "task": false 225 | } 226 | }, 227 | "outputs": [], 228 | "source": [ 229 | "# GRADED FUNCTION: backward_propagation\n", 230 | "\n", 231 | "def backward_propagation(x, theta):\n", 232 | " \"\"\"\n", 233 | " Computes the derivative of J with respect to theta (see Figure 1).\n", 234 | " \n", 235 | " Arguments:\n", 236 | " x -- a real-valued input\n", 237 | " theta -- our parameter, a real number as well\n", 238 | " \n", 239 | " Returns:\n", 240 | " dtheta -- the gradient of the cost with respect to theta\n", 241 | " \"\"\"\n", 242 | " \n", 243 | " # (approx. 1 line)\n", 244 | " # dtheta = \n", 245 | " # YOUR CODE STARTS HERE\n", 246 | " dtheta = x\n", 247 | " \n", 248 | " # YOUR CODE ENDS HERE\n", 249 | " \n", 250 | " return dtheta" 251 | ] 252 | }, 253 | { 254 | "cell_type": "code", 255 | "execution_count": 6, 256 | "metadata": { 257 | "deletable": false, 258 | "editable": false, 259 | "nbgrader": { 260 | "cell_type": "code", 261 | "checksum": "79ac4ec84141d381d3f9fffccc19b723", 262 | "grade": true, 263 | "grade_id": "cell-7b67ed84ac8bfd91", 264 | "locked": true, 265 | "points": 10, 266 | "schema_version": 3, 267 | "solution": false, 268 | "task": false 269 | } 270 | }, 271 | "outputs": [ 272 | { 273 | "name": "stdout", 274 | "output_type": "stream", 275 | "text": [ 276 | "dtheta = 2\n", 277 | "\u001b[92m All tests passed.\n" 278 | ] 279 | } 280 | ], 281 | "source": [ 282 | "x, theta = 2, 4\n", 283 | "dtheta = backward_propagation(x, theta)\n", 284 | "print (\"dtheta = \" + str(dtheta))\n", 285 | "backward_propagation_test(backward_propagation)" 286 | ] 287 | }, 288 | { 289 | "cell_type": "markdown", 290 | "metadata": {}, 291 | "source": [ 292 | "\n", 293 | "### Exercise 3 - gradient_check\n", 294 | "\n", 295 | "To show that the `backward_propagation()` function is correctly computing the gradient $\\frac{\\partial J}{\\partial \\theta}$, let's implement gradient checking.\n", 296 | "\n", 297 | "**Instructions**:\n", 298 | "- First compute \"gradapprox\" using the formula above (1) and a small value of $\\varepsilon$. Here are the Steps to follow:\n", 299 | " 1. $\\theta^{+} = \\theta + \\varepsilon$\n", 300 | " 2. $\\theta^{-} = \\theta - \\varepsilon$\n", 301 | " 3. $J^{+} = J(\\theta^{+})$\n", 302 | " 4. $J^{-} = J(\\theta^{-})$\n", 303 | " 5. $gradapprox = \\frac{J^{+} - J^{-}}{2 \\varepsilon}$\n", 304 | "- Then compute the gradient using backward propagation, and store the result in a variable \"grad\"\n", 305 | "- Finally, compute the relative difference between \"gradapprox\" and the \"grad\" using the following formula:\n", 306 | "$$ difference = \\frac {\\mid\\mid grad - gradapprox \\mid\\mid_2}{\\mid\\mid grad \\mid\\mid_2 + \\mid\\mid gradapprox \\mid\\mid_2} \\tag{2}$$\n", 307 | "You will need 3 Steps to compute this formula:\n", 308 | " - 1'. compute the numerator using np.linalg.norm(...)\n", 309 | " - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.\n", 310 | " - 3'. divide them.\n", 311 | "- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation. 
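As a quick sanity check with the values used in the test cell below ($x = 2$, $\theta = 4$, $\varepsilon = 10^{-7}$): $J^{+} = 2 \times 4.0000001 = 8.0000002$ and $J^{-} = 2 \times 3.9999999 = 7.9999998$, so $gradapprox = \frac{8.0000002 - 7.9999998}{2 \times 10^{-7}} = 2$, which matches the analytical gradient $x = 2$; up to floating-point rounding, the resulting difference comes out on the order of $10^{-10}$.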
\n" 312 | ] 313 | }, 314 | { 315 | "cell_type": "code", 316 | "execution_count": 7, 317 | "metadata": { 318 | "deletable": false, 319 | "nbgrader": { 320 | "cell_type": "code", 321 | "checksum": "2bf63f28a6871b8e01d942be9f6ba8fe", 322 | "grade": false, 323 | "grade_id": "cell-ed57ede577f9d607", 324 | "locked": false, 325 | "schema_version": 3, 326 | "solution": true, 327 | "task": false 328 | } 329 | }, 330 | "outputs": [], 331 | "source": [ 332 | "# GRADED FUNCTION: gradient_check\n", 333 | "\n", 334 | "def gradient_check(x, theta, epsilon=1e-7, print_msg=False):\n", 335 | " \"\"\"\n", 336 | " Implement the backward propagation presented in Figure 1.\n", 337 | " \n", 338 | " Arguments:\n", 339 | " x -- a float input\n", 340 | " theta -- our parameter, a float as well\n", 341 | " epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n", 342 | " \n", 343 | " Returns:\n", 344 | " difference -- difference (2) between the approximated gradient and the backward propagation gradient. Float output\n", 345 | " \"\"\"\n", 346 | " \n", 347 | " # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.\n", 348 | " # (approx. 5 lines)\n", 349 | " thetaplus = theta + epsilon # Step 1\n", 350 | " thetaminus = theta - epsilon # Step 2\n", 351 | " J_plus = np.dot(thetaplus,x) # Step 3\n", 352 | " J_minus = np.dot(thetaminus,x) # Step 4\n", 353 | " gradapprox = (J_plus - J_minus)/(2*epsilon) # Step 5\n", 354 | " # YOUR CODE STARTS HERE\n", 355 | " \n", 356 | " grad = x\n", 357 | " # YOUR CODE ENDS HERE\n", 358 | " \n", 359 | " # Check if gradapprox is close enough to the output of backward_propagation()\n", 360 | " #(approx. 1 line) DO NOT USE \"grad = gradapprox\"\n", 361 | " # grad =\n", 362 | " # YOUR CODE STARTS HERE\n", 363 | " \n", 364 | " \n", 365 | " # YOUR CODE ENDS HERE\n", 366 | " \n", 367 | " #(approx. 3 lines)\n", 368 | " numerator = np.linalg.norm(gradapprox-grad) # Step 1'\n", 369 | " denominator = np.linalg.norm(gradapprox) + np.linalg.norm(grad) # Step 2'\n", 370 | " difference = numerator/denominator # Step 3'\n", 371 | " # YOUR CODE STARTS HERE\n", 372 | " \n", 373 | " \n", 374 | " # YOUR CODE ENDS HERE\n", 375 | " if print_msg:\n", 376 | " if difference > 2e-7:\n", 377 | " print (\"\\033[93m\" + \"There is a mistake in the backward propagation! difference = \" + str(difference) + \"\\033[0m\")\n", 378 | " else:\n", 379 | " print (\"\\033[92m\" + \"Your backward propagation works perfectly fine! difference = \" + str(difference) + \"\\033[0m\")\n", 380 | " \n", 381 | " return difference" 382 | ] 383 | }, 384 | { 385 | "cell_type": "code", 386 | "execution_count": 8, 387 | "metadata": { 388 | "deletable": false, 389 | "editable": false, 390 | "nbgrader": { 391 | "cell_type": "code", 392 | "checksum": "17d329eb2edb7732a350c344de1ea9a2", 393 | "grade": true, 394 | "grade_id": "cell-be0338be7d50dd11", 395 | "locked": true, 396 | "points": 10, 397 | "schema_version": 3, 398 | "solution": false, 399 | "task": false 400 | } 401 | }, 402 | "outputs": [ 403 | { 404 | "name": "stdout", 405 | "output_type": "stream", 406 | "text": [ 407 | "\u001b[92mYour backward propagation works perfectly fine! 
difference = 2.919335883291695e-10\u001b[0m\n" 408 | ] 409 | } 410 | ], 411 | "source": [ 412 | "x, theta = 2, 4\n", 413 | "difference = gradient_check(x, theta, print_msg=True)\n" 414 | ] 415 | }, 416 | { 417 | "cell_type": "markdown", 418 | "metadata": {}, 419 | "source": [ 420 | "Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`. \n", 421 | "\n", 422 | "Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!" 423 | ] 424 | }, 425 | { 426 | "cell_type": "markdown", 427 | "metadata": {}, 428 | "source": [ 429 | "\n", 430 | "## 5 - N-Dimensional Gradient Checking" 431 | ] 432 | }, 433 | { 434 | "cell_type": "markdown", 435 | "metadata": { 436 | "collapsed": true 437 | }, 438 | "source": [ 439 | "The following figure describes the forward and backward propagation of your fraud detection model.\n", 440 | "\n", 441 | "\n", 442 | "
Figure 2: Deep neural network. LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
\n", 443 | "\n", 444 | "Let's look at your implementations for forward propagation and backward propagation. " 445 | ] 446 | }, 447 | { 448 | "cell_type": "code", 449 | "execution_count": 9, 450 | "metadata": {}, 451 | "outputs": [], 452 | "source": [ 453 | "def forward_propagation_n(X, Y, parameters):\n", 454 | " \"\"\"\n", 455 | " Implements the forward propagation (and computes the cost) presented in Figure 3.\n", 456 | " \n", 457 | " Arguments:\n", 458 | " X -- training set for m examples\n", 459 | " Y -- labels for m examples \n", 460 | " parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n", 461 | " W1 -- weight matrix of shape (5, 4)\n", 462 | " b1 -- bias vector of shape (5, 1)\n", 463 | " W2 -- weight matrix of shape (3, 5)\n", 464 | " b2 -- bias vector of shape (3, 1)\n", 465 | " W3 -- weight matrix of shape (1, 3)\n", 466 | " b3 -- bias vector of shape (1, 1)\n", 467 | " \n", 468 | " Returns:\n", 469 | " cost -- the cost function (logistic cost for one example)\n", 470 | " cache -- a tuple with the intermediate values (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)\n", 471 | "\n", 472 | " \"\"\"\n", 473 | " \n", 474 | " # retrieve parameters\n", 475 | " m = X.shape[1]\n", 476 | " W1 = parameters[\"W1\"]\n", 477 | " b1 = parameters[\"b1\"]\n", 478 | " W2 = parameters[\"W2\"]\n", 479 | " b2 = parameters[\"b2\"]\n", 480 | " W3 = parameters[\"W3\"]\n", 481 | " b3 = parameters[\"b3\"]\n", 482 | "\n", 483 | " # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID\n", 484 | " Z1 = np.dot(W1, X) + b1\n", 485 | " A1 = relu(Z1)\n", 486 | " Z2 = np.dot(W2, A1) + b2\n", 487 | " A2 = relu(Z2)\n", 488 | " Z3 = np.dot(W3, A2) + b3\n", 489 | " A3 = sigmoid(Z3)\n", 490 | "\n", 491 | " # Cost\n", 492 | " log_probs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)\n", 493 | " cost = 1. / m * np.sum(log_probs)\n", 494 | " \n", 495 | " cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)\n", 496 | " \n", 497 | " return cost, cache" 498 | ] 499 | }, 500 | { 501 | "cell_type": "markdown", 502 | "metadata": {}, 503 | "source": [ 504 | "Now, run backward propagation." 505 | ] 506 | }, 507 | { 508 | "cell_type": "code", 509 | "execution_count": 10, 510 | "metadata": {}, 511 | "outputs": [], 512 | "source": [ 513 | "def backward_propagation_n(X, Y, cache):\n", 514 | " \"\"\"\n", 515 | " Implement the backward propagation presented in figure 2.\n", 516 | " \n", 517 | " Arguments:\n", 518 | " X -- input datapoint, of shape (input size, 1)\n", 519 | " Y -- true \"label\"\n", 520 | " cache -- cache output from forward_propagation_n()\n", 521 | " \n", 522 | " Returns:\n", 523 | " gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.\n", 524 | " \"\"\"\n", 525 | " \n", 526 | " m = X.shape[1]\n", 527 | " (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n", 528 | " \n", 529 | " dZ3 = A3 - Y\n", 530 | " dW3 = 1. / m * np.dot(dZ3, A2.T)\n", 531 | " db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)\n", 532 | " \n", 533 | " dA2 = np.dot(W3.T, dZ3)\n", 534 | " dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n", 535 | " dW2 = 1. / m * np.dot(dZ2, A1.T) * 2\n", 536 | " db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)\n", 537 | " \n", 538 | " dA1 = np.dot(W2.T, dZ2)\n", 539 | " dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n", 540 | " dW1 = 1. / m * np.dot(dZ1, X.T)\n", 541 | " db1 = 4. 
/ m * np.sum(dZ1, axis=1, keepdims=True)\n", 542 | " \n", 543 | " gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\n", 544 | " \"dA2\": dA2, \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2,\n", 545 | " \"dA1\": dA1, \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n", 546 | " \n", 547 | " return gradients" 548 | ] 549 | }, 550 | { 551 | "cell_type": "markdown", 552 | "metadata": { 553 | "collapsed": true 554 | }, 555 | "source": [ 556 | "You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct." 557 | ] 558 | }, 559 | { 560 | "cell_type": "markdown", 561 | "metadata": {}, 562 | "source": [ 563 | "**How does gradient checking work?**.\n", 564 | "\n", 565 | "As in Section 3 and 4, you want to compare \"gradapprox\" to the gradient computed by backpropagation. The formula is still:\n", 566 | "\n", 567 | "$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\n", 568 | "\n", 569 | "However, $\\theta$ is not a scalar anymore. It is a dictionary called \"parameters\". The function \"`dictionary_to_vector()`\" has been implemented for you. It converts the \"parameters\" dictionary into a vector called \"values\", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.\n", 570 | "\n", 571 | "The inverse function is \"`vector_to_dictionary`\" which outputs back the \"parameters\" dictionary.\n", 572 | "\n", 573 | "\n", 574 | "
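Conceptually (see the figure below), the conversion simply flattens each parameter into a column vector and stacks the columns in a fixed order. Here is a rough sketch of that idea (not the provided `gc_utils` implementation, which also returns a second bookkeeping output that the assignment code discards with `_`):

```python
import numpy as np

def params_to_vector_sketch(parameters):
    # Reshape W1, b1, W2, b2, W3, b3 into columns and concatenate them into one long vector.
    keys = ["W1", "b1", "W2", "b2", "W3", "b3"]
    return np.concatenate([parameters[k].reshape(-1, 1) for k in keys], axis=0)
```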
Figure 3: dictionary_to_vector() and vector_to_dictionary(). You will need these functions in gradient_check_n()
\n", 575 | "\n", 576 | "The \"gradients\" dictionary has also been converted into a vector \"grad\" using gradients_to_vector(), so you don't need to worry about that.\n", 577 | "\n", 578 | "Now, for every single parameter in your vector, you will apply the same procedure as for the gradient_check exercise. You will store each gradient approximation in a vector `gradapprox`. If the check goes as expected, each value in this approximation must match the real gradient values stored in the `grad` vector. \n", 579 | "\n", 580 | "Note that `grad` is calculated using the function `gradients_to_vector`, which uses the gradients outputs of the `backward_propagation_n` function.\n", 581 | "\n", 582 | "\n", 583 | "### Exercise 4 - gradient_check_n\n", 584 | "\n", 585 | "Implement the function below.\n", 586 | "\n", 587 | "**Instructions**: Here is pseudo-code that will help you implement the gradient check.\n", 588 | "\n", 589 | "For each i in num_parameters:\n", 590 | "- To compute `J_plus[i]`:\n", 591 | " 1. Set $\\theta^{+}$ to `np.copy(parameters_values)`\n", 592 | " 2. Set $\\theta^{+}_i$ to $\\theta^{+}_i + \\varepsilon$\n", 593 | " 3. Calculate $J^{+}_i$ using to `forward_propagation_n(x, y, vector_to_dictionary(`$\\theta^{+}$ `))`. \n", 594 | "- To compute `J_minus[i]`: do the same thing with $\\theta^{-}$\n", 595 | "- Compute $gradapprox[i] = \\frac{J^{+}_i - J^{-}_i}{2 \\varepsilon}$\n", 596 | "\n", 597 | "Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: \n", 598 | "$$ difference = \\frac {\\| grad - gradapprox \\|_2}{\\| grad \\|_2 + \\| gradapprox \\|_2 } \\tag{3}$$\n", 599 | "\n", 600 | "**Note**: Use `np.linalg.norm` to get the norms" 601 | ] 602 | }, 603 | { 604 | "cell_type": "code", 605 | "execution_count": 12, 606 | "metadata": { 607 | "deletable": false, 608 | "nbgrader": { 609 | "cell_type": "code", 610 | "checksum": "c5494ee946549d6f9a5fac6e05f12845", 611 | "grade": false, 612 | "grade_id": "cell-1e5a768bc4e28e66", 613 | "locked": false, 614 | "schema_version": 3, 615 | "solution": true, 616 | "task": false 617 | } 618 | }, 619 | "outputs": [], 620 | "source": [ 621 | "# GRADED FUNCTION: gradient_check_n\n", 622 | "\n", 623 | "def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7, print_msg=False):\n", 624 | " \"\"\"\n", 625 | " Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n\n", 626 | " \n", 627 | " Arguments:\n", 628 | " parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n", 629 | " grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. 
\n", 630 | " x -- input datapoint, of shape (input size, 1)\n", 631 | " y -- true \"label\"\n", 632 | " epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n", 633 | " \n", 634 | " Returns:\n", 635 | " difference -- difference (2) between the approximated gradient and the backward propagation gradient\n", 636 | " \"\"\"\n", 637 | " \n", 638 | " # Set-up variables\n", 639 | " parameters_values, _ = dictionary_to_vector(parameters)\n", 640 | " \n", 641 | " grad = gradients_to_vector(gradients)\n", 642 | " num_parameters = parameters_values.shape[0]\n", 643 | " J_plus = np.zeros((num_parameters, 1))\n", 644 | " J_minus = np.zeros((num_parameters, 1))\n", 645 | " gradapprox = np.zeros((num_parameters, 1))\n", 646 | " \n", 647 | " # Compute gradapprox\n", 648 | " for i in range(num_parameters):\n", 649 | " \n", 650 | " # Compute J_plus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_plus[i]\".\n", 651 | " # \"_\" is used because the function you have to outputs two parameters but we only care about the first one\n", 652 | " #(approx. 3 lines)\n", 653 | " thetaplus = np.copy(parameters_values) # Step 1\n", 654 | " thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2\n", 655 | " J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3\n", 656 | " # YOUR CODE STARTS HERE\n", 657 | " \n", 658 | " \n", 659 | " # YOUR CODE ENDS HERE\n", 660 | " \n", 661 | " # Compute J_minus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_minus[i]\".\n", 662 | " #(approx. 3 lines)\n", 663 | " thetaminus = np.copy(parameters_values) # Step 1\n", 664 | " thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2\n", 665 | " J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3\n", 666 | " # YOUR CODE STARTS HERE\n", 667 | " \n", 668 | " \n", 669 | " # YOUR CODE ENDS HERE\n", 670 | " \n", 671 | " # Compute gradapprox[i]\n", 672 | " # (approx. 1 line)\n", 673 | " gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon)\n", 674 | " # YOUR CODE STARTS HERE\n", 675 | " \n", 676 | " \n", 677 | " # YOUR CODE ENDS HERE\n", 678 | " \n", 679 | " # Compare gradapprox to backward propagation gradients by computing difference.\n", 680 | " # (approx. 3 line)\n", 681 | " numerator = np.linalg.norm(grad - gradapprox) # Step 1'\n", 682 | " denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'\n", 683 | " difference = numerator/denominator # Step 3'\n", 684 | " # YOUR CODE STARTS HERE\n", 685 | " \n", 686 | " \n", 687 | " # YOUR CODE ENDS HERE\n", 688 | " if print_msg:\n", 689 | " if difference > 2e-7:\n", 690 | " print (\"\\033[93m\" + \"There is a mistake in the backward propagation! difference = \" + str(difference) + \"\\033[0m\")\n", 691 | " else:\n", 692 | " print (\"\\033[92m\" + \"Your backward propagation works perfectly fine! 
difference = \" + str(difference) + \"\\033[0m\")\n", 693 | "\n", 694 | " return difference" 695 | ] 696 | }, 697 | { 698 | "cell_type": "code", 699 | "execution_count": 13, 700 | "metadata": { 701 | "deletable": false, 702 | "editable": false, 703 | "nbgrader": { 704 | "cell_type": "code", 705 | "checksum": "e119ddabcb075e6e3391464b48e11234", 706 | "grade": true, 707 | "grade_id": "cell-0d7896ce7c954fc9", 708 | "locked": true, 709 | "points": 20, 710 | "schema_version": 3, 711 | "solution": false, 712 | "task": false 713 | } 714 | }, 715 | "outputs": [ 716 | { 717 | "name": "stdout", 718 | "output_type": "stream", 719 | "text": [ 720 | "\u001b[93mThere is a mistake in the backward propagation! difference = 0.2850931567761623\u001b[0m\n" 721 | ] 722 | } 723 | ], 724 | "source": [ 725 | "X, Y, parameters = gradient_check_n_test_case()\n", 726 | "\n", 727 | "cost, cache = forward_propagation_n(X, Y, parameters)\n", 728 | "gradients = backward_propagation_n(X, Y, cache)\n", 729 | "difference = gradient_check_n(parameters, gradients, X, Y, 1e-7, True)\n", 730 | "expected_values = [0.2850931567761623, 1.1890913024229996e-07]\n", 731 | "assert not(type(difference) == np.ndarray), \"You are not using np.linalg.norm for numerator or denominator\"\n", 732 | "assert np.any(np.isclose(difference, expected_values)), \"Wrong value. It is not one of the expected values\"\n" 733 | ] 734 | }, 735 | { 736 | "cell_type": "markdown", 737 | "metadata": {}, 738 | "source": [ 739 | "**Expected output**:\n", 740 | "\n", 741 | "\n", 742 | " \n", 743 | " \n", 744 | " \n", 745 | " \n", 746 | "
There is a mistake in the backward propagation! difference = 0.2850931567761623
" 747 | ] 748 | }, 749 | { 750 | "cell_type": "markdown", 751 | "metadata": {}, 752 | "source": [ 753 | "It seems that there were errors in the `backward_propagation_n` code! Good thing you've implemented the gradient check. Go back to `backward_propagation` and try to find/correct the errors *(Hint: check dW2 and db1)*. Rerun the gradient check when you think you've fixed it. Remember, you'll need to re-execute the cell defining `backward_propagation_n()` if you modify the code. \n", 754 | "\n", 755 | "Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, you should try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented. \n", 756 | "\n", 757 | "**Notes** \n", 758 | "- Gradient Checking is slow! Approximating the gradient with $\\frac{\\partial J}{\\partial \\theta} \\approx \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct. \n", 759 | "- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout. \n", 760 | "\n", 761 | "Congrats! Now you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :) \n", 762 | "
\n", 763 | "\n", 764 | " \n", 765 | "**What you should remember from this notebook**:\n", 766 | "- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).\n", 767 | "- Gradient checking is slow, so you don't want to run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process. " 768 | ] 769 | } 770 | ], 771 | "metadata": { 772 | "coursera": { 773 | "course_slug": "deep-neural-network", 774 | "graded_item_id": "n6NBD", 775 | "launcher_item_id": "yfOsE" 776 | }, 777 | "kernelspec": { 778 | "display_name": "Python 3", 779 | "language": "python", 780 | "name": "python3" 781 | }, 782 | "language_info": { 783 | "codemirror_mode": { 784 | "name": "ipython", 785 | "version": 3 786 | }, 787 | "file_extension": ".py", 788 | "mimetype": "text/x-python", 789 | "name": "python", 790 | "nbconvert_exporter": "python", 791 | "pygments_lexer": "ipython3", 792 | "version": "3.9.7" 793 | } 794 | }, 795 | "nbformat": 4, 796 | "nbformat_minor": 1 797 | } 798 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Deep Learning Projects in TensorFlow and Keras 2 | -------------------------------------------------------------------------------- /Tensorflow-personal.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 5, 6 | "id": "3526eac7", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "# import tensorflow as tf" 11 | ] 12 | }, 13 | { 14 | "cell_type": "raw", 15 | "id": "04084d44", 16 | "metadata": {}, 17 | "source": [ 18 | "tf.Tensor --> multidimensional arrays that contain information about computational graph" 19 | ] 20 | }, 21 | { 22 | "cell_type": "raw", 23 | "id": "ec37fca7", 24 | "metadata": {}, 25 | "source": [ 26 | "tf.Variable --> we can modify the state of the variable once created\n", 27 | "\n", 28 | "Variables can be created only once as its initial values defines the variable shape and size.\n", 29 | "dtype parameter in tf.Variable converts the variable to that data-type, else by default Tensor." 30 | ] 31 | }, 32 | { 33 | "cell_type": "raw", 34 | "id": "4170ae41", 35 | "metadata": {}, 36 | "source": [ 37 | "tf.constant --> we cannot modify the state of the constant once created" 38 | ] 39 | }, 40 | { 41 | "cell_type": "raw", 42 | "id": "195280c9", 43 | "metadata": {}, 44 | "source": [ 45 | "One-hot encoding --> It is a pre-processing technique used for conversion of categorical \n", 46 | "values to assign 0 and 1, where area utilization is more than \"label encoding\"." 
47 | ] 48 | }, 49 | { 50 | "cell_type": "raw", 51 | "id": "7b16b1ee", 52 | "metadata": {}, 53 | "source": [ 54 | "tf.GradientTape --> which records operations for differentiation\n", 55 | "TensorFlow will compute the derivatives for you, by moving backwards through the graph recorded with \"GradientTape\"" 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": null, 61 | "id": "b15f5f3e", 62 | "metadata": {}, 63 | "outputs": [], 64 | "source": [] 65 | } 66 | ], 67 | "metadata": { 68 | "kernelspec": { 69 | "display_name": "Python 3", 70 | "language": "python", 71 | "name": "python3" 72 | }, 73 | "language_info": { 74 | "codemirror_mode": { 75 | "name": "ipython", 76 | "version": 3 77 | }, 78 | "file_extension": ".py", 79 | "mimetype": "text/x-python", 80 | "name": "python", 81 | "nbconvert_exporter": "python", 82 | "pygments_lexer": "ipython3", 83 | "version": "3.9.7" 84 | } 85 | }, 86 | "nbformat": 4, 87 | "nbformat_minor": 5 88 | } 89 | --------------------------------------------------------------------------------
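To make the raw notes in this notebook concrete, here is a small illustrative snippet (assuming TensorFlow 2.x is available; none of this is part of a graded assignment). It shows an immutable `tf.constant` next to a mutable `tf.Variable`, one-hot encoding with `tf.one_hot` (which trades extra memory for the 0/1 representation, compared with label encoding), and `tf.GradientTape` recording a computation so that derivatives can be taken backwards through it:

```python
import tensorflow as tf

# tf.constant vs tf.Variable: only the Variable's state can be modified after creation.
c = tf.constant([1.0, 2.0, 3.0])
v = tf.Variable([1.0, 2.0, 3.0])        # shape and dtype are fixed by the initial value
v.assign_add(tf.ones_like(v))           # allowed for a Variable, not for a constant

# One-hot encoding: each categorical label becomes a row of 0s with a single 1.
labels = tf.constant([0, 2, 1])
print(tf.one_hot(labels, depth=3).numpy())
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]

# tf.GradientTape records the forward ops; tape.gradient then computes derivatives.
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = w * w + 2.0 * w              # a toy scalar "cost"
print(tape.gradient(loss, w).numpy())   # d(loss)/dw = 2*w + 2 = 8.0
```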