├── Course 1
│   ├── Week1_Assignment.ipynb
│   ├── Week2_Assignment.ipynb
│   ├── Week3_Assignment.ipynb
│   ├── Week4_Assignment.ipynb
│   └── Week5_Bonus Notebook.ipynb
├── Course 2
│   ├── Week1_Assignment.ipynb
│   ├── Week2_Assignment.ipynb
│   ├── Week3_Assignment.ipynb
│   └── Week4_Assignment.ipynb
├── Course 3
│   ├── Week 1
│   │   ├── Week1_Assignment.ipynb
│   │   └── birds.h5
│   ├── Week 2
│   │   ├── Week2_Assignment.ipynb
│   │   └── results.data
│   ├── Week 3
│   │   ├── Week3_Assignment.ipynb
│   │   └── model.h5
│   └── Week 4
│       ├── Week4_Assignment.ipynb
│       └── images.zip
├── Course 4
│   ├── Week 1
│   │   ├── Week1_Assignment.ipynb
│   │   └── doggo.png
│   ├── Week 2
│   │   ├── Week2_Assignment.ipynb
│   │   └── mymodel.h5
│   ├── Week 3
│   │   ├── Week3_Assignment.ipynb
│   │   └── anime.h5
│   └── Week 4
│       ├── Week4_Assignment.ipynb
│       └── mysigns.zip
└── README.md
/Course 1/Week2_Assignment.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# W2 Assignment: Creating a Custom Loss Function"
8 | ]
9 | },
10 | {
11 | "cell_type": "markdown",
12 | "metadata": {},
13 | "source": [
14 | "This short exercise will require you to write a simple linear regression neural network that is trained on two arrays: $xs$ (inputs) and $ys$ (labels), where the relationship between each corresponding element is $y=2x-1$.\n",
15 | "\n",
16 | "\n",
17 | "$xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]$\n",
18 | "\n",
19 | "$ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]$\n",
20 | "\n",
21 | "\n",
22 | "You will need to implement a custom loss function that returns the root mean square error (RMSE) between $y_{true}$ and $y_{pred}$. Let's begin!"
23 | ]
24 | },
25 | {
26 | "cell_type": "code",
27 | "execution_count": 1,
28 | "metadata": {
29 | "colab": {},
30 | "colab_type": "code",
31 | "id": "0pajvrhrInPa"
32 | },
33 | "outputs": [],
34 | "source": [
35 | "import tensorflow as tf\n",
36 | "import numpy as np\n",
37 | "from tensorflow import keras\n",
38 | "from tensorflow.keras import backend as K\n",
39 | "\n",
40 | "import utils"
41 | ]
42 | },
43 | {
44 | "cell_type": "code",
45 | "execution_count": 2,
46 | "metadata": {},
47 | "outputs": [],
48 | "source": [
49 | "# inputs\n",
50 | "xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)\n",
51 | "\n",
52 | "# labels. relationship with the inputs above is y=2x-1.\n",
53 | "ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)"
54 | ]
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "metadata": {},
59 | "source": [
60 | "### Define the custom loss function (TODO)\n",
61 | "Define the custom loss function below called `my_rmse()` that returns the RMSE between the target (`y_true`) and prediction (`y_pred`). \n",
62 | "\n",
63 | "You will return $\\sqrt{mean((y_{true} - y_{pred})^2)}$, built up in the following steps:\n",
64 | "- error: the difference between the true label and predicted label.\n",
65 | "- sqr_error: the square of the error.\n",
66 | "- mean_sqr_error: the mean of the square of the error.\n",
67 | "- sqrt_mean_sqr_error: the square root of the mean of the square of the error (the root mean squared error).\n",
68 | "- Please use `K.mean`, `K.square`, and `K.sqrt`\n",
69 | "- The steps are broken down into separate lines of code for clarity. Feel free to combine them, and just remember to return the root mean squared error."
70 | ]
71 | },
72 | {
73 | "cell_type": "code",
74 | "execution_count": 3,
75 | "metadata": {
76 | "colab": {},
77 | "colab_type": "code",
78 | "deletable": false,
79 | "id": "bXNGIkq2Azmf",
80 | "nbgrader": {
81 | "cell_type": "code",
82 | "checksum": "8301324615aba1e02e1f756b4bf1b092",
83 | "grade": false,
84 | "grade_id": "cell-31648b482908e493",
85 | "locked": false,
86 | "schema_version": 3,
87 | "solution": true,
88 | "task": false
89 | }
90 | },
91 | "outputs": [],
92 | "source": [
93 | "# Please uncomment all lines in this cell and replace those marked with `# YOUR CODE HERE`.\n",
94 | "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n",
95 | "\n",
96 | "\n",
97 | "\n",
98 | "def my_rmse(y_true, y_pred):\n",
99 | " error = y_true - y_pred\n",
100 | " sqr_error = K.square(error)\n",
101 | " mean_sqr_error = K.mean(sqr_error)\n",
102 | " sqrt_mean_sqr_error = K.sqrt(mean_sqr_error)\n",
103 | " return sqrt_mean_sqr_error"
104 | ]
105 | },
106 | {
107 | "cell_type": "code",
108 | "execution_count": 4,
109 | "metadata": {
110 | "deletable": false,
111 | "editable": false,
112 | "nbgrader": {
113 | "cell_type": "code",
114 | "checksum": "afa4ace3428496820b8b6fb542ca5117",
115 | "grade": true,
116 | "grade_id": "cell-578f76b36f8ee858",
117 | "locked": true,
118 | "points": 1,
119 | "schema_version": 3,
120 | "solution": false,
121 | "task": false
122 | }
123 | },
124 | "outputs": [
125 | {
126 | "name": "stdout",
127 | "output_type": "stream",
128 | "text": [
129 | "\u001b[92m All public tests passed\n"
130 | ]
131 | }
132 | ],
133 | "source": [
134 | "utils.test_my_rmse(my_rmse)\n"
135 | ]
136 | },
137 | {
138 | "cell_type": "markdown",
139 | "metadata": {},
140 | "source": [
141 | "### Define a model using the custom loss function (TODO)\n",
142 | "Similar to the ungraded labs, you will define a simple model and pass the function you just coded as the loss.\n",
143 | "- When compiling the model, you'll choose the `sgd` optimizer and set the `loss` parameter to the custom loss function that you just defined.\n",
144 | "- For grading purposes, please leave the other parameter values as is."
145 | ]
146 | },
147 | {
148 | "cell_type": "code",
149 | "execution_count": 6,
150 | "metadata": {
151 | "colab": {
152 | "base_uri": "https://localhost:8080/",
153 | "height": 34
154 | },
155 | "colab_type": "code",
156 | "deletable": false,
157 | "id": "2eY7fw0EHwda",
158 | "nbgrader": {
159 | "cell_type": "code",
160 | "checksum": "8af71f8408d04ff7abaf41eb3414c8f6",
161 | "grade": false,
162 | "grade_id": "cell-5a29bb71c93124fc",
163 | "locked": false,
164 | "schema_version": 3,
165 | "solution": true,
166 | "task": false
167 | },
168 | "outputId": "a3ea92e4-050e-463d-82c9-9b149554ae41"
169 | },
170 | "outputs": [
171 | {
172 | "name": "stdout",
173 | "output_type": "stream",
174 | "text": [
175 | "[[19.094328]]\n"
176 | ]
177 | }
178 | ],
179 | "source": [
180 | "# Please uncomment all lines in this cell and replace those marked with `# YOUR CODE HERE`.\n",
181 | "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n",
182 | "\n",
183 | "\n",
184 | "\n",
185 | "# define the model architecture\n",
186 | "model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])\n",
187 | "\n",
188 | "# use the function you just coded as the loss\n",
189 | "model.compile(optimizer='sgd', loss=my_rmse)\n",
190 | " \n",
191 | "# train the model \n",
192 | "model.fit(xs, ys, epochs=500,verbose=0)\n",
193 | " \n",
194 | "# test with a sample input\n",
195 | "print(model.predict([10.0]))"
196 | ]
197 | },
198 | {
199 | "cell_type": "code",
200 | "execution_count": 7,
201 | "metadata": {
202 | "deletable": false,
203 | "editable": false,
204 | "nbgrader": {
205 | "cell_type": "code",
206 | "checksum": "b8da4dc42fa87a1722251adddae9516c",
207 | "grade": true,
208 | "grade_id": "cell-e46bc4e00375b387",
209 | "locked": true,
210 | "points": 1,
211 | "schema_version": 3,
212 | "solution": false,
213 | "task": false
214 | }
215 | },
216 | "outputs": [
217 | {
218 | "name": "stdout",
219 | "output_type": "stream",
220 | "text": [
221 | "\u001b[92m All public tests passed\n"
222 | ]
223 | }
224 | ],
225 | "source": [
226 | "utils.test_model_loss(model.loss)\n"
227 | ]
228 | },
229 | {
230 | "cell_type": "code",
231 | "execution_count": null,
232 | "metadata": {},
233 | "outputs": [],
234 | "source": []
235 | }
236 | ],
237 | "metadata": {
238 | "colab": {
239 | "include_colab_link": true,
240 | "name": "exercise-answer.ipynb",
241 | "provenance": []
242 | },
243 | "kernelspec": {
244 | "display_name": "Python 3",
245 | "language": "python",
246 | "name": "python3"
247 | },
248 | "language_info": {
249 | "codemirror_mode": {
250 | "name": "ipython",
251 | "version": 3
252 | },
253 | "file_extension": ".py",
254 | "mimetype": "text/x-python",
255 | "name": "python",
256 | "nbconvert_exporter": "python",
257 | "pygments_lexer": "ipython3",
258 | "version": "3.7.6"
259 | }
260 | },
261 | "nbformat": 4,
262 | "nbformat_minor": 4
263 | }
264 |
--------------------------------------------------------------------------------
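
For reference, here is a minimal standalone sketch of the `my_rmse` loss defined in the notebook above, checked against a NumPy reference. It assumes TensorFlow 2.x with the Keras backend; the sample values are illustrative and not taken from the grader.

```python
import numpy as np
from tensorflow.keras import backend as K

def my_rmse(y_true, y_pred):
    # RMSE: the square root of the mean of the squared differences
    return K.sqrt(K.mean(K.square(y_true - y_pred)))

# Illustrative sanity check against a NumPy reference
y_true = np.array([-3.0, -1.0, 1.0], dtype=np.float32)
y_pred = np.array([-2.5, -1.0, 0.5], dtype=np.float32)
print(my_rmse(y_true, y_pred).numpy())           # ~0.4082
print(np.sqrt(np.mean((y_true - y_pred) ** 2)))  # same value
```
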
/Course 1/Week3_Assignment.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Week 3 Assignment: Implement a Quadratic Layer"
8 | ]
9 | },
10 | {
11 | "cell_type": "markdown",
12 | "metadata": {},
13 | "source": [
14 | "In this week's programming exercise, you will build a custom quadratic layer which computes $y = ax^2 + bx + c$. Similar to the ungraded lab, this layer will be plugged into a model that will be trained on the MNIST dataset. Let's get started!"
15 | ]
16 | },
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {},
20 | "source": [
21 | "### Imports"
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": 5,
27 | "metadata": {},
28 | "outputs": [],
29 | "source": [
30 | "import tensorflow as tf\n",
31 | "from tensorflow.keras.layers import Layer\n",
32 | "\n",
33 | "import utils"
34 | ]
35 | },
36 | {
37 | "cell_type": "markdown",
38 | "metadata": {},
39 | "source": [
40 | "### Define the quadratic layer (TODO)\n",
41 | "Implement a simple quadratic layer. It has 3 state variables: $a$, $b$ and $c$. The computation returned is $ax^2 + bx + c$. Make sure it can also accept an activation function.\n",
42 | "\n",
43 | "#### `__init__`\n",
44 | "- call `super(my_fun, self)` to access the base class of `my_fun`, and call the `__init__()` function to initialize that base class. In this case, `my_fun` is `SimpleQuadratic` and its base class is `Layer`.\n",
45 | "- self.units: set this using one of the function parameters.\n",
46 | "- self.activation: The function parameter `activation` will be passed in as a string. To get the tensorflow object associated with the string, please use `tf.keras.activations.get()` \n",
47 | "\n",
48 | "\n",
49 | "#### `build`\n",
50 | "The following are suggested steps for writing your code. If you prefer to use fewer lines to implement it, feel free to do so. Either way, you'll want to set `self.a`, `self.b` and `self.c`.\n",
51 | "\n",
52 | "- a_init: set this to tensorflow's `random_normal_initializer()`\n",
53 | "- a_init_val: Use the `random_normal_initializer()` that you just created and invoke it, setting the `shape` and `dtype`.\n",
54 | " - The `shape` of `a` should have its row dimension equal to the last dimension of `input_shape`, and its column dimension equal to the number of units in the layer. \n",
55 | " - This is because you'll be matrix multiplying x^2 * a, so the dimensions should be compatible.\n",
56 | " - set the dtype to 'float32'\n",
57 | "- self.a: create a tensor using tf.Variable, setting the initial_value and set trainable to True.\n",
58 | "\n",
59 | "- b_init, b_init_val, and self.b: these will be set in the same way that you implemented a_init, a_init_val and self.a\n",
60 | "- c_init: set this to `tf.zeros_initializer`.\n",
61 | "- c_init_val: Set this by calling the tf.zeros_initializer that you just instantiated, and set the `shape` and `dtype`\n",
62 | " - shape: This will be a vector equal to the number of units. This expects a tuple, and remember that a tuple `(9,)` includes a comma.\n",
63 | " - dtype: set to 'float32'.\n",
64 | "- self.c: create a tensor using tf.Variable, and set the parameters `initial_value` and `trainable`.\n",
65 | "\n",
66 | "#### `call`\n",
67 | "The following section performs the multiplication x^2*a + x*b + c. The steps are broken down for clarity, but you can also perform this calculation in fewer lines if you prefer.\n",
68 | "- x_squared: use tf.math.square()\n",
69 | "- x_squared_times_a: use tf.matmul(). \n",
70 | " - If you see an error saying `InvalidArgumentError: Matrix size-incompatible`, please check the order of the matrix multiplication to make sure that the matrix dimensions line up.\n",
71 | "- x_times_b: use tf.matmul().\n",
72 | "- x2a_plus_xb_plus_c: add the three terms together.\n",
73 | "- activated_x2a_plus_xb_plus_c: apply the class's `activation` to the sum of the three terms.\n"
74 | ]
75 | },
76 | {
77 | "cell_type": "code",
78 | "execution_count": 14,
79 | "metadata": {
80 | "colab": {},
81 | "colab_type": "code",
82 | "deletable": false,
83 | "id": "Ga20PttZFXm4",
84 | "nbgrader": {
85 | "cell_type": "code",
86 | "checksum": "0df055c519bde80c488c22be89fdb8ef",
87 | "grade": false,
88 | "grade_id": "cell-c302ddc177c098f8",
89 | "locked": false,
90 | "schema_version": 3,
91 | "solution": true,
92 | "task": false
93 | }
94 | },
95 | "outputs": [],
96 | "source": [
97 | "# Please uncomment all lines in this cell and replace those marked with `# YOUR CODE HERE`.\n",
98 | "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n",
99 | "\n",
100 | "\n",
101 | "\n",
102 | "class SimpleQuadratic(Layer):\n",
103 | "\n",
104 | " def __init__(self, units=32, activation=None):\n",
105 | " '''Initializes the class and sets up the internal variables'''\n",
106 | " super(SimpleQuadratic, self).__init__()\n",
107 | " self.units = units\n",
108 | " self.activation = tf.keras.activations.get(activation)\n",
109 | " \n",
110 | " def build(self, input_shape):\n",
111 | " '''Create the state of the layer (weights)'''\n",
112 | " # a and b should be initialized with random normal, c (or the bias) with zeros.\n",
113 | " # remember to set these as trainable.\n",
114 | " a_init = tf.random_normal_initializer()\n",
115 | " self.a = tf.Variable(name=\"kernel_a\",\n",
116 | " initial_value = a_init(shape=(input_shape[-1], self.units),\n",
117 | " dtype= 'float32'),\n",
118 | " trainable = True)\n",
119 | " \n",
120 | " b_init = tf.random_normal_initializer()\n",
121 | " self.b = tf.Variable(name=\"Kernel_b\",\n",
122 | " initial_value = b_init(shape=(input_shape[-1], self.units),\n",
123 | " dtype='float32'),\n",
124 | " trainable = True)\n",
125 | " \n",
126 | " c_init = tf.zeros_initializer()\n",
127 | " self.c = tf.Variable(name='Bias',\n",
128 | " initial_value = c_init(shape=(self.units, ), dtype='float32'),\n",
129 | " trainable=True)\n",
130 | " super().build(input_shape)\n",
131 | " \n",
132 | " def call(self, inputs):\n",
133 | " '''Defines the computation from inputs to outputs'''\n",
134 | " return self.activation(tf.matmul(tf.square(inputs), self.a) + tf.matmul(inputs, self.b) + self.c)\n",
135 | " "
136 | ]
137 | },
138 | {
139 | "cell_type": "markdown",
140 | "metadata": {},
141 | "source": [
142 | "Test your implementation"
143 | ]
144 | },
145 | {
146 | "cell_type": "code",
147 | "execution_count": 15,
148 | "metadata": {
149 | "deletable": false,
150 | "editable": false,
151 | "nbgrader": {
152 | "cell_type": "code",
153 | "checksum": "0965bec4878a263cf06b286cd0fe3b2a",
154 | "grade": true,
155 | "grade_id": "cell-c3ebc4cccbb7f454",
156 | "locked": true,
157 | "points": 1,
158 | "schema_version": 3,
159 | "solution": false,
160 | "task": false
161 | }
162 | },
163 | "outputs": [
164 | {
165 | "name": "stdout",
166 | "output_type": "stream",
167 | "text": [
168 | "\u001b[92m All public tests passed\n"
169 | ]
170 | }
171 | ],
172 | "source": [
173 | "utils.test_simple_quadratic(SimpleQuadratic)\n"
174 | ]
175 | },
176 | {
177 | "cell_type": "markdown",
178 | "metadata": {},
179 | "source": [
180 | "Train your model with the `SimpleQuadratic` layer that you just implemented."
181 | ]
182 | },
183 | {
184 | "cell_type": "code",
185 | "execution_count": 16,
186 | "metadata": {
187 | "colab": {},
188 | "colab_type": "code",
189 | "id": "14tl1CluExjJ"
190 | },
191 | "outputs": [
192 | {
193 | "name": "stdout",
194 | "output_type": "stream",
195 | "text": [
196 | "Train on 60000 samples\n",
197 | "Epoch 1/5\n",
198 | "60000/60000 [==============================] - 12s 194us/sample - loss: 0.2750 - accuracy: 0.9181\n",
199 | "Epoch 2/5\n",
200 | "60000/60000 [==============================] - 11s 187us/sample - loss: 0.1372 - accuracy: 0.9594\n",
201 | "Epoch 3/5\n",
202 | "60000/60000 [==============================] - 11s 187us/sample - loss: 0.1032 - accuracy: 0.9675\n",
203 | "Epoch 4/5\n",
204 | "60000/60000 [==============================] - 11s 186us/sample - loss: 0.0853 - accuracy: 0.9734\n",
205 | "Epoch 5/5\n",
206 | "60000/60000 [==============================] - 11s 185us/sample - loss: 0.0754 - accuracy: 0.9762\n",
207 | "10000/10000 [==============================] - 1s 69us/sample - loss: 0.0746 - accuracy: 0.9776\n"
208 | ]
209 | },
210 | {
211 | "data": {
212 | "text/plain": [
213 | "[0.07458630193537101, 0.9776]"
214 | ]
215 | },
216 | "execution_count": 16,
217 | "metadata": {},
218 | "output_type": "execute_result"
219 | }
220 | ],
221 | "source": [
222 | "# THIS CODE SHOULD RUN WITHOUT MODIFICATION\n",
223 | "# AND SHOULD RETURN TRAINING/TESTING ACCURACY at 97%+\n",
224 | "\n",
225 | "mnist = tf.keras.datasets.mnist\n",
226 | "\n",
227 | "(x_train, y_train),(x_test, y_test) = mnist.load_data()\n",
228 | "x_train, x_test = x_train / 255.0, x_test / 255.0\n",
229 | "\n",
230 | "model = tf.keras.models.Sequential([\n",
231 | " tf.keras.layers.Flatten(input_shape=(28, 28)),\n",
232 | " SimpleQuadratic(128, activation='relu'),\n",
233 | " tf.keras.layers.Dropout(0.2),\n",
234 | " tf.keras.layers.Dense(10, activation='softmax')\n",
235 | "])\n",
236 | "\n",
237 | "model.compile(optimizer='adam',\n",
238 | " loss='sparse_categorical_crossentropy',\n",
239 | " metrics=['accuracy'])\n",
240 | "\n",
241 | "model.fit(x_train, y_train, epochs=5)\n",
242 | "model.evaluate(x_test, y_test)"
243 | ]
244 | },
245 | {
246 | "cell_type": "code",
247 | "execution_count": null,
248 | "metadata": {},
249 | "outputs": [],
250 | "source": []
251 | }
252 | ],
253 | "metadata": {
254 | "colab": {
255 | "authorship_tag": "ABX9TyMTFXTWT0EUVuqg6u/LBbJK",
256 | "collapsed_sections": [],
257 | "include_colab_link": true,
258 | "name": "QuadraticLayer_Answer.ipynb",
259 | "provenance": []
260 | },
261 | "kernelspec": {
262 | "display_name": "Python 3",
263 | "language": "python",
264 | "name": "python3"
265 | },
266 | "language_info": {
267 | "codemirror_mode": {
268 | "name": "ipython",
269 | "version": 3
270 | },
271 | "file_extension": ".py",
272 | "mimetype": "text/x-python",
273 | "name": "python",
274 | "nbconvert_exporter": "python",
275 | "pygments_lexer": "ipython3",
276 | "version": "3.7.6"
277 | }
278 | },
279 | "nbformat": 4,
280 | "nbformat_minor": 4
281 | }
282 |
--------------------------------------------------------------------------------
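
As a quick aside, the `SimpleQuadratic` layer above can be exercised on its own before plugging it into the MNIST model. The sketch below, which assumes `SimpleQuadratic` is defined exactly as in the notebook, just checks that `build` creates weights of the expected shapes and that `call` returns the expected output shape.

```python
import tensorflow as tf

# Assumes SimpleQuadratic is defined as in the notebook above.
layer = SimpleQuadratic(units=128, activation='relu')

# Calling the layer on a batch triggers build() with input_shape (4, 784).
x = tf.random.normal((4, 784))
y = layer(x)

print(y.shape)                       # (4, 128)
print(layer.a.shape, layer.b.shape)  # (784, 128) (784, 128)
print(layer.c.shape)                 # (128,)
```
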
/Course 1/Week4_Assignment.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "colab_type": "text",
7 | "id": "GC7zSrbOWiz0"
8 | },
9 | "source": [
10 | "# Week 4 Assignment: Create a VGG network"
11 | ]
12 | },
13 | {
14 | "cell_type": "markdown",
15 | "metadata": {},
16 | "source": [
17 | "In this exercise, you will build a class that implements a [VGG network](https://towardsdatascience.com/vgg-neural-networks-the-next-step-after-alexnet-3f91fa9ffe2c) and then train it to classify images of cats and dogs. The model will look something like this:\n",
18 | "\n",
19 | "<!-- image: VGG network architecture diagram -->\n",
20 | "\n",
21 | "It is primarily made up of a series of Conv2D layers followed by a softmax-activated layer to classify the image. As you can see, the code would get long and repetitive if you specified each layer individually. As shown in the lectures, you can instead use model subclassing to build complex architectures: you can encapsulate the repeating parts of a network and then reuse that code when building the final model. You will get to practice that in this exercise. Let's get started!"
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": 1,
27 | "metadata": {
28 | "colab": {},
29 | "colab_type": "code",
30 | "id": "Z01I5nj0NAOu"
31 | },
32 | "outputs": [],
33 | "source": [
34 | "import tensorflow as tf\n",
35 | "import tensorflow_datasets as tfds\n",
36 | "import utils"
37 | ]
38 | },
39 | {
40 | "cell_type": "markdown",
41 | "metadata": {},
42 | "source": [
43 | "## Create named-variables dynamically\n",
44 | "\n",
45 | "In this assignment, you will see the use of the Python function `vars()`. This will allow you to use a for loop to define and set multiple variables with a similar name, such as var1, var2, var3. \n",
46 | "\n",
47 | "Please go through the following examples to get familiar with `vars()`, as you will use it when building the VGG model.\n",
48 | "- You'll start by defining a class `MyClass`\n",
49 | "- It contains one variable `var1`. \n",
50 | "- Create an object of type `MyClass`."
51 | ]
52 | },
53 | {
54 | "cell_type": "code",
55 | "execution_count": 2,
56 | "metadata": {},
57 | "outputs": [],
58 | "source": [
59 | "# Define a small class MyClass\n",
60 | "class MyClass:\n",
61 | " def __init__(self):\n",
62 | "        # One instance variable 'var1' is set to 1\n",
63 | " self.var1 = 1\n",
64 | "\n",
65 | "# Create an object of type MyClass()\n",
66 | "my_obj = MyClass()"
67 | ]
68 | },
69 | {
70 | "cell_type": "markdown",
71 | "metadata": {},
72 | "source": [
73 | "Python classes have an attribute called `__dict__`.\n",
74 | "- `__dict__` is a Python dictionary that contains the object's instance variables and their values as key-value pairs."
75 | ]
76 | },
77 | {
78 | "cell_type": "code",
79 | "execution_count": 3,
80 | "metadata": {},
81 | "outputs": [
82 | {
83 | "data": {
84 | "text/plain": [
85 | "{'var1': 1}"
86 | ]
87 | },
88 | "execution_count": 3,
89 | "metadata": {},
90 | "output_type": "execute_result"
91 | }
92 | ],
93 | "source": [
94 | "my_obj.__dict__"
95 | ]
96 | },
97 | {
98 | "cell_type": "markdown",
99 | "metadata": {},
100 | "source": [
101 | "If you call `vars()` and pass in an object, it will return the object's `__dict__` attribute, which is a Python dictionary containing the object's instance variables and their values as key-value pairs."
102 | ]
103 | },
104 | {
105 | "cell_type": "code",
106 | "execution_count": 4,
107 | "metadata": {},
108 | "outputs": [
109 | {
110 | "data": {
111 | "text/plain": [
112 | "{'var1': 1}"
113 | ]
114 | },
115 | "execution_count": 4,
116 | "metadata": {},
117 | "output_type": "execute_result"
118 | }
119 | ],
120 | "source": [
121 | "vars(my_obj)"
122 | ]
123 | },
124 | {
125 | "cell_type": "markdown",
126 | "metadata": {},
127 | "source": [
128 | "You may be familiar with adding a new variable like this:"
129 | ]
130 | },
131 | {
132 | "cell_type": "code",
133 | "execution_count": 5,
134 | "metadata": {},
135 | "outputs": [
136 | {
137 | "data": {
138 | "text/plain": [
139 | "{'var1': 1, 'var2': 2}"
140 | ]
141 | },
142 | "execution_count": 5,
143 | "metadata": {},
144 | "output_type": "execute_result"
145 | }
146 | ],
147 | "source": [
148 | "# Add a new instance variable and give it a value\n",
149 | "my_obj.var2 = 2\n",
150 | "\n",
151 | "# Calls vars() again to see the object's instance variables\n",
152 | "vars(my_obj)"
153 | ]
154 | },
155 | {
156 | "cell_type": "markdown",
157 | "metadata": {},
158 | "source": [
159 | "Here is another way that you can add an instance variable to an object, using `vars()`.\n",
160 | "- Retrieve the Python dictionary `__dict__` of the object using vars(my_obj).\n",
161 | "- Modify this `__dict__` dictionary using square bracket notation and passing in the variable's name as a string: `['var3'] = 3`"
162 | ]
163 | },
164 | {
165 | "cell_type": "code",
166 | "execution_count": 6,
167 | "metadata": {},
168 | "outputs": [
169 | {
170 | "data": {
171 | "text/plain": [
172 | "{'var1': 1, 'var2': 2, 'var3': 3}"
173 | ]
174 | },
175 | "execution_count": 6,
176 | "metadata": {},
177 | "output_type": "execute_result"
178 | }
179 | ],
180 | "source": [
181 | "# Call vars, passing in the object. Then access the __dict__ dictionary using square brackets\n",
182 | "vars(my_obj)['var3'] = 3\n",
183 | "\n",
184 | "# Call vars() to see the object's instance variables\n",
185 | "vars(my_obj)"
186 | ]
187 | },
188 | {
189 | "cell_type": "markdown",
190 | "metadata": {},
191 | "source": [
192 | "#### Why this is helpful!\n",
193 | "You may be wondering why you would need another way to access an object's instance variables. \n",
194 | "- Notice that when using `vars()`, you can now pass in the name of the variable `var3` as a string.\n",
195 | "- What if you plan to use several variables that are similarly named (`var4`, `var5` ... `var9`) and want a convenient way to access them by incrementing a number?\n",
196 | "\n",
197 | "Try this!"
198 | ]
199 | },
200 | {
201 | "cell_type": "code",
202 | "execution_count": 8,
203 | "metadata": {},
204 | "outputs": [
205 | {
206 | "data": {
207 | "text/plain": [
208 | "{'var1': 1,\n",
209 | " 'var2': 2,\n",
210 | " 'var3': 3,\n",
211 | " 'var4': 4,\n",
212 | " 'var5': 5,\n",
213 | " 'var6': 6,\n",
214 | " 'var7': 7,\n",
215 | " 'var8': 8,\n",
216 | " 'var9': 9}"
217 | ]
218 | },
219 | "execution_count": 8,
220 | "metadata": {},
221 | "output_type": "execute_result"
222 | }
223 | ],
224 | "source": [
225 | "# Use a for loop to increment the index 'i'\n",
226 | "for i in range(4,10):\n",
227 | "    # Format a string 'var{i}' and use it as the new variable's name\n",
228 | " vars(my_obj)[f'var{i}'] = i\n",
229 | " \n",
230 | "# View the object's instance variables!\n",
231 | "vars(my_obj)"
232 | ]
233 | },
234 | {
235 | "cell_type": "markdown",
236 | "metadata": {},
237 | "source": [
238 | "There are a couple of equivalent ways in Python to format a string. Here are two of them:\n",
239 | "- f-string: f\"var{i}\"\n",
240 | "- .format: \"var{}\".format(i)"
241 | ]
242 | },
243 | {
244 | "cell_type": "code",
245 | "execution_count": 9,
246 | "metadata": {},
247 | "outputs": [
248 | {
249 | "name": "stdout",
250 | "output_type": "stream",
251 | "text": [
252 | "var1\n",
253 | "var2\n"
254 | ]
255 | }
256 | ],
257 | "source": [
258 | "# Format a string using f-string notation\n",
259 | "i=1\n",
260 | "print(f\"var{i}\")\n",
261 | "\n",
262 | "# Format a string using .format notation\n",
263 | "i=2\n",
264 | "print(\"var{}\".format(i))"
265 | ]
266 | },
267 | {
268 | "cell_type": "markdown",
269 | "metadata": {},
270 | "source": [
271 | "You can also access an object's instance variables from inside the class definition, using `vars(self)`."
272 | ]
273 | },
274 | {
275 | "cell_type": "code",
276 | "execution_count": 11,
277 | "metadata": {},
278 | "outputs": [
279 | {
280 | "data": {
281 | "text/plain": [
282 | "{'var1': 1, 'var2': 2}"
283 | ]
284 | },
285 | "execution_count": 11,
286 | "metadata": {},
287 | "output_type": "execute_result"
288 | }
289 | ],
290 | "source": [
291 | "# Define a small class MyClass\n",
292 | "class MyClass:\n",
293 | " def __init__(self):\n",
294 | " # Use vars(self) to access the class's dictionary of variables\n",
295 | " vars(self)['var1'] = 1\n",
296 | " vars(self)['var2'] = 2\n",
297 | "\n",
298 | "# Create an object of type MyClass()\n",
299 | "my_obj = MyClass()\n",
300 | "vars(my_obj)"
301 | ]
302 | },
303 | {
304 | "cell_type": "markdown",
305 | "metadata": {},
306 | "source": [
307 | "You'll see this in the upcoming code. Now you'll start building the VGG network!"
308 | ]
309 | },
310 | {
311 | "cell_type": "markdown",
312 | "metadata": {
313 | "colab_type": "text",
314 | "id": "k1T1UMw5YAkp"
315 | },
316 | "source": [
317 | "## Create a generic VGG block (TODO)\n",
318 | "\n",
319 | "The VGG Network has blocks of layers, where each block has a varied number of layers.\n",
320 | "- In order to create blocks with a customizable number of conv2D layers, you'll define a class `Block` that can generate such a block of layers.\n",
321 | "\n",
322 | "\n",
323 | "### `__init__`\n",
324 | "In the constructor `__init__`, store the conv2D parameters and also define the number of conv2D layers using the parameters passed into `__init__`.\n",
325 | "- Store the filters, kernel_size, and repetitions as class variables so that they can be used later in the `call` function.\n",
326 | "- Using a for loop, define a number of [Conv2D](https://keras.io/api/layers/convolution_layers/convolution2d/) layers, based on the number of `repetitions` desired for this block.\n",
327 | "  - You can define each conv2D layer using `vars` and string formatting to create conv2D_0, conv2D_1, conv2D_2, etc.\n",
328 | " - Set these four parameters of Conv2D:\n",
329 | " - filters\n",
330 | " - kernel_size\n",
331 | " - activation: set this to 'relu'\n",
332 | "    - padding: set this to 'same' (default padding is 'valid').\n",
333 | " \n",
334 | "- Define the [MaxPool2D](https://keras.io/api/layers/pooling_layers/max_pooling2d/) layer that follows these Conv2D layers. \n",
335 | " - Set the following parameters for MaxPool2D:\n",
336 | " - pool_size: this will be a tuple with two values.\n",
337 | " - strides: this will also be a tuple with two values.\n",
338 | "\n",
339 | "### `call`\n",
340 | "In `call`, you will connect the layers together.\n",
341 | "- The 0-th conv2D layer, `conv2D_0`, immediately follows the `inputs`.\n",
342 | "- For conv2D layers 1,2 and onward, you can use a for loop to connect conv2D_1 to conv2D_0, and connect conv2D_2 to conv2D_1, and so on.\n",
343 | "- After connecting all of the conv2D_i layers, connect the max_pool layer and return its output."
344 | ]
345 | },
346 | {
347 | "cell_type": "code",
348 | "execution_count": 12,
349 | "metadata": {
350 | "colab": {},
351 | "colab_type": "code",
352 | "deletable": false,
353 | "id": "WGJGaxVjM00W",
354 | "nbgrader": {
355 | "cell_type": "code",
356 | "checksum": "7f19295d8925e1d2e60eefd42a6b4dd8",
357 | "grade": false,
358 | "grade_id": "cell-1449db9892707876",
359 | "locked": false,
360 | "schema_version": 3,
361 | "solution": true,
362 | "task": false
363 | }
364 | },
365 | "outputs": [],
366 | "source": [
367 | "# Please uncomment all lines in this cell and replace those marked with `# YOUR CODE HERE`.\n",
368 | "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n",
369 | "\n",
370 | "\n",
371 | "\n",
372 | "class Block(tf.keras.Model):\n",
373 | " def __init__(self, filters, kernel_size, repetitions, pool_size=2, strides=2):\n",
374 | " super(Block, self).__init__()\n",
375 | " self.filters = filters\n",
376 | " self.kernel_size = kernel_size\n",
377 | " self.repetitions = repetitions\n",
378 | " \n",
379 | " # Define a conv2D_0, conv2D_1, etc based on the number of repetitions\n",
380 | " for i in range(0, repetitions):\n",
381 | " \n",
382 | " # Define a Conv2D layer, specifying filters, kernel_size, activation and padding.\n",
383 | " vars(self)[f'conv2D_{i}'] = tf.keras.layers.Conv2D(filters, kernel_size, activation='relu', padding='same')\n",
384 | " \n",
385 | " # Define the max pool layer that will be added after the Conv2D blocks\n",
386 | " self.max_pool = tf.keras.layers.MaxPooling2D(pool_size=pool_size, strides=strides)\n",
387 | " \n",
388 | " def call(self, inputs):\n",
389 | " # access the class's conv2D_0 layer\n",
390 | " conv2D_0 = vars(self)['conv2D_0']\n",
391 | " \n",
392 | " # Connect the conv2D_0 layer to inputs\n",
393 | " x = conv2D_0(inputs)\n",
394 | "\n",
395 | " # for the remaining conv2D_i layers from 1 to `repetitions` they will be connected to the previous layer\n",
396 | " for i in range(1, self.repetitions):\n",
397 | " # access conv2D_i by formatting the integer `i`. (hint: check how these were saved using `vars()` earlier)\n",
398 | " conv2D_i = vars(self)[f'conv2D_{i}']\n",
399 | " \n",
400 | " # Use the conv2D_i and connect it to the previous layer\n",
401 | " x = conv2D_i(x)\n",
402 | "\n",
403 | " # Finally, add the max_pool layer\n",
404 | " max_pool = self.max_pool(x)\n",
405 | " \n",
406 | " return max_pool"
407 | ]
408 | },
409 | {
410 | "cell_type": "code",
411 | "execution_count": 13,
412 | "metadata": {
413 | "deletable": false,
414 | "editable": false,
415 | "nbgrader": {
416 | "cell_type": "code",
417 | "checksum": "4027611c9615b1f518a95d76a81bc8d1",
418 | "grade": true,
419 | "grade_id": "cell-2911e521bce8793b",
420 | "locked": true,
421 | "points": 1,
422 | "schema_version": 3,
423 | "solution": false,
424 | "task": false
425 | }
426 | },
427 | "outputs": [
428 | {
429 | "name": "stdout",
430 | "output_type": "stream",
431 | "text": [
432 | "\u001b[92m All public tests passed\n"
433 | ]
434 | }
435 | ],
436 | "source": [
437 | "utils.test_block_class(Block)\n"
438 | ]
439 | },
440 | {
441 | "cell_type": "markdown",
442 | "metadata": {
443 | "colab_type": "text",
444 | "id": "peM2GP6uYT0U"
445 | },
446 | "source": [
447 | "## Create the Custom VGG network (TODO)\n",
448 | "This model stack has a series of VGG blocks, which can be created using the `Block` class that you defined earlier.\n",
449 | "\n",
450 | "### `__init__`\n",
451 | "- Recall that the `__init__` constructor of `Block` takes several function parameters, \n",
452 | " - filters, kernel_size, repetitions: you'll set these.\n",
453 | "  - pool_size and strides: you can use the default values.\n",
454 | "- For blocks a through e, build the blocks according to the following specifications:\n",
455 | "- block_a: 64 filters, kernel_size 3, repetitions 2\n",
456 | "- block_b: 128 filters, kernel_size 3, repetitions 2\n",
457 | "- block_c: 256 filters, kernel_size 3, repetitions 3\n",
458 | "- block_d: 512 filters, kernel_size 3, repetitions 3\n",
459 | "- block_e: 512 filters, kernel_size 3, repetitions 3\n",
460 | "\n",
461 | "After block 'e', add the following layers:\n",
462 | "- flatten: use [Flatten](https://keras.io/api/layers/reshaping_layers/flatten/).\n",
463 | "- fc: create a fully connected layer using [Dense](https://keras.io/api/layers/core_layers/dense/). Give this 256 units, and a `'relu'` activation.\n",
464 | "- classifier: create the classifier using a Dense layer. The number of units equals the number of classes. For multi-class classification, use a `'softmax'` activation.\n",
465 | "\n",
466 | "### `call`\n",
467 | "Connect these layers together using the functional API syntax:\n",
468 | "- inputs\n",
469 | "- block_a\n",
470 | "- block_b\n",
471 | "- block_c\n",
472 | "- block_d\n",
473 | "- block_e\n",
474 | "- flatten\n",
475 | "- fc\n",
476 | "- classifier\n",
477 | "\n",
478 | "Return the classifier layer."
479 | ]
480 | },
481 | {
482 | "cell_type": "code",
483 | "execution_count": 17,
484 | "metadata": {
485 | "colab": {},
486 | "colab_type": "code",
487 | "deletable": false,
488 | "id": "yD-paeGiNGvz",
489 | "nbgrader": {
490 | "cell_type": "code",
491 | "checksum": "523346a38f53bc31e080114e98e8eca6",
492 | "grade": false,
493 | "grade_id": "cell-d9e90af0898eb47f",
494 | "locked": false,
495 | "schema_version": 3,
496 | "solution": true,
497 | "task": false
498 | }
499 | },
500 | "outputs": [],
501 | "source": [
502 | "# Please uncomment all lines in this cell and replace those marked with `# YOUR CODE HERE`.\n",
503 | "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n",
504 | "\n",
505 | "\n",
506 | "\n",
507 | "class MyVGG(tf.keras.Model):\n",
508 | "\n",
509 | " def __init__(self, num_classes):\n",
510 | " super(MyVGG, self).__init__()\n",
511 | "\n",
512 | " # Creating blocks of VGG with the following \n",
513 | " # (filters, kernel_size, repetitions) configurations\n",
514 | " self.block_a = Block(filters=64, kernel_size=3, repetitions=2)\n",
515 | " self.block_b = Block(filters=128, kernel_size=3, repetitions=2)\n",
516 | " self.block_c = Block(filters=256, kernel_size=3, repetitions=3)\n",
517 | " self.block_d = Block(filters=512, kernel_size=3, repetitions=3)\n",
518 | " self.block_e = Block(filters=512, kernel_size=3, repetitions=3)\n",
519 | "\n",
520 | " # Classification head\n",
521 | " # Define a Flatten layer\n",
522 | " self.flatten = tf.keras.layers.Flatten()\n",
523 | " # Create a Dense layer with 256 units and ReLU as the activation function\n",
524 | " self.fc = tf.keras.layers.Dense(256, activation='relu')\n",
525 | " # Finally add the softmax classifier using a Dense layer\n",
526 | " self.classifier = tf.keras.layers.Dense(num_classes,activation='softmax')\n",
527 | " def call(self, inputs):\n",
528 | " # Chain all the layers one after the other\n",
529 | " x = self.block_a(inputs)\n",
530 | " x = self.block_b(x)\n",
531 | " x = self.block_c(x)\n",
532 | " x = self.block_d(x)\n",
533 | " x = self.block_e(x)\n",
534 | " x = self.flatten(x)\n",
535 | " x = self.fc(x)\n",
536 | " x = self.classifier(x)\n",
537 | " return x"
538 | ]
539 | },
540 | {
541 | "cell_type": "code",
542 | "execution_count": 18,
543 | "metadata": {
544 | "deletable": false,
545 | "editable": false,
546 | "nbgrader": {
547 | "cell_type": "code",
548 | "checksum": "79d77a2aa7ee7f82d707558cf5206868",
549 | "grade": true,
550 | "grade_id": "cell-559ac19437f4f2b2",
551 | "locked": true,
552 | "points": 1,
553 | "schema_version": 3,
554 | "solution": false,
555 | "task": false
556 | }
557 | },
558 | "outputs": [
559 | {
560 | "name": "stdout",
561 | "output_type": "stream",
562 | "text": [
563 | "\u001b[92m All public tests passed\n"
564 | ]
565 | }
566 | ],
567 | "source": [
568 | "utils.test_myvgg_class(MyVGG, Block)"
569 | ]
570 | },
571 | {
572 | "cell_type": "markdown",
573 | "metadata": {},
574 | "source": [
575 | "### Load data and train the VGG network (Optional)\n",
576 | "You can now load the dataset and proceed to train your VGG network. \n",
577 | "- This will take a few minutes to complete and is **not required to complete the assignment**.\n",
578 | "- You can submit your work before starting the training."
579 | ]
580 | },
581 | {
582 | "cell_type": "code",
583 | "execution_count": 19,
584 | "metadata": {
585 | "colab": {},
586 | "colab_type": "code",
587 | "id": "MaF763OKNJxU"
588 | },
589 | "outputs": [
590 | {
591 | "name": "stdout",
592 | "output_type": "stream",
593 | "text": [
594 | "Epoch 1/10\n",
595 | " 1/Unknown - 19s 19s/step"
596 | ]
597 | },
598 | {
599 | "ename": "KeyboardInterrupt",
600 | "evalue": "",
601 | "output_type": "error",
602 | "traceback": [
603 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
604 | "\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
605 | "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 17\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 18\u001b[0m \u001b[0;31m# Train the custom VGG model\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 19\u001b[0;31m \u001b[0mvgg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdataset\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mepochs\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
606 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py\u001b[0m in \u001b[0;36mfit\u001b[0;34m(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)\u001b[0m\n\u001b[1;32m 817\u001b[0m \u001b[0mmax_queue_size\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmax_queue_size\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 818\u001b[0m \u001b[0mworkers\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mworkers\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 819\u001b[0;31m use_multiprocessing=use_multiprocessing)\n\u001b[0m\u001b[1;32m 820\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 821\u001b[0m def evaluate(self,\n",
607 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py\u001b[0m in \u001b[0;36mfit\u001b[0;34m(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)\u001b[0m\n\u001b[1;32m 340\u001b[0m \u001b[0mmode\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mModeKeys\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mTRAIN\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 341\u001b[0m \u001b[0mtraining_context\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtraining_context\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 342\u001b[0;31m total_epochs=epochs)\n\u001b[0m\u001b[1;32m 343\u001b[0m \u001b[0mcbks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmake_logs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mepoch_logs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtraining_result\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mModeKeys\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mTRAIN\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 344\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
608 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py\u001b[0m in \u001b[0;36mrun_one_epoch\u001b[0;34m(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)\u001b[0m\n\u001b[1;32m 126\u001b[0m step=step, mode=mode, size=current_batch_size) as batch_logs:\n\u001b[1;32m 127\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 128\u001b[0;31m \u001b[0mbatch_outs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mexecution_function\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0miterator\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 129\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mStopIteration\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0merrors\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mOutOfRangeError\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 130\u001b[0m \u001b[0;31m# TODO(kaftan): File bug about tf function and errors.OutOfRangeError?\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
609 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py\u001b[0m in \u001b[0;36mexecution_function\u001b[0;34m(input_fn)\u001b[0m\n\u001b[1;32m 96\u001b[0m \u001b[0;31m# `numpy` translates Tensors to values in Eager mode.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 97\u001b[0m return nest.map_structure(_non_none_constant_value,\n\u001b[0;32m---> 98\u001b[0;31m distributed_function(input_fn))\n\u001b[0m\u001b[1;32m 99\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 100\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mexecution_function\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
610 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *args, **kwds)\u001b[0m\n\u001b[1;32m 566\u001b[0m \u001b[0mxla_context\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mExit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 567\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 568\u001b[0;31m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 569\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 570\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mtracing_count\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_get_tracing_count\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
611 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py\u001b[0m in \u001b[0;36m_call\u001b[0;34m(self, *args, **kwds)\u001b[0m\n\u001b[1;32m 630\u001b[0m \u001b[0;31m# Lifting succeeded, so variables are initialized and we can run the\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 631\u001b[0m \u001b[0;31m# stateless function.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 632\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_stateless_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 633\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 634\u001b[0m \u001b[0mcanon_args\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcanon_kwds\u001b[0m \u001b[0;34m=\u001b[0m\u001b[0;31m \u001b[0m\u001b[0;31m\\\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
612 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 2361\u001b[0m \u001b[0;32mwith\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_lock\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2362\u001b[0m \u001b[0mgraph_function\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkwargs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_maybe_define_function\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2363\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mgraph_function\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_filtered_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# pylint: disable=protected-access\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2364\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2365\u001b[0m \u001b[0;34m@\u001b[0m\u001b[0mproperty\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
613 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py\u001b[0m in \u001b[0;36m_filtered_call\u001b[0;34m(self, args, kwargs)\u001b[0m\n\u001b[1;32m 1609\u001b[0m if isinstance(t, (ops.Tensor,\n\u001b[1;32m 1610\u001b[0m resource_variable_ops.BaseResourceVariable))),\n\u001b[0;32m-> 1611\u001b[0;31m self.captured_inputs)\n\u001b[0m\u001b[1;32m 1612\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1613\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_call_flat\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcaptured_inputs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcancellation_manager\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
614 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py\u001b[0m in \u001b[0;36m_call_flat\u001b[0;34m(self, args, captured_inputs, cancellation_manager)\u001b[0m\n\u001b[1;32m 1690\u001b[0m \u001b[0;31m# No tape is watching; skip to running the function.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1691\u001b[0m return self._build_call_outputs(self._inference_function.call(\n\u001b[0;32m-> 1692\u001b[0;31m ctx, args, cancellation_manager=cancellation_manager))\n\u001b[0m\u001b[1;32m 1693\u001b[0m forward_backward = self._select_forward_and_backward_functions(\n\u001b[1;32m 1694\u001b[0m \u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
615 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py\u001b[0m in \u001b[0;36mcall\u001b[0;34m(self, ctx, args, cancellation_manager)\u001b[0m\n\u001b[1;32m 543\u001b[0m \u001b[0minputs\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 544\u001b[0m \u001b[0mattrs\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"executor_type\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mexecutor_type\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"config_proto\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mconfig\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 545\u001b[0;31m ctx=ctx)\n\u001b[0m\u001b[1;32m 546\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 547\u001b[0m outputs = execute.execute_with_cancellation(\n",
616 | "\u001b[0;32m/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py\u001b[0m in \u001b[0;36mquick_execute\u001b[0;34m(op_name, num_outputs, inputs, attrs, ctx, name)\u001b[0m\n\u001b[1;32m 59\u001b[0m tensors = pywrap_tensorflow.TFE_Py_Execute(ctx._handle, device_name,\n\u001b[1;32m 60\u001b[0m \u001b[0mop_name\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minputs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mattrs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 61\u001b[0;31m num_outputs)\n\u001b[0m\u001b[1;32m 62\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mcore\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_NotOkStatusException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 63\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mname\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
617 | "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
618 | ]
619 | }
620 | ],
621 | "source": [
622 | "# dataset = tfds.load('cats_vs_dogs', split=tfds.Split.TRAIN, data_dir='data/')\n",
623 | "\n",
624 | "# # Initialize VGG with the number of classes \n",
625 | "# vgg = MyVGG(num_classes=2)\n",
626 | "\n",
627 | "# # Compile with losses and metrics\n",
628 | "# vgg.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
629 | "\n",
630 | "# # Define preprocessing function\n",
631 | "# def preprocess(features):\n",
632 | "# # Resize and normalize\n",
633 | "# image = tf.image.resize(features['image'], (224, 224))\n",
634 | "# return tf.cast(image, tf.float32) / 255., features['label']\n",
635 | "\n",
636 | "# # Apply transformations to dataset\n",
637 | "# dataset = dataset.map(preprocess).batch(32)\n",
638 | "\n",
639 | "# # Train the custom VGG model\n",
640 | "# vgg.fit(dataset, epochs=10)"
641 | ]
642 | }
643 | ],
644 | "metadata": {
645 | "colab": {
646 | "collapsed_sections": [],
647 | "include_colab_link": true,
648 | "name": "ExerciseAnswer.ipynb",
649 | "provenance": []
650 | },
651 | "kernelspec": {
652 | "display_name": "Python 3",
653 | "language": "python",
654 | "name": "python3"
655 | },
656 | "language_info": {
657 | "codemirror_mode": {
658 | "name": "ipython",
659 | "version": 3
660 | },
661 | "file_extension": ".py",
662 | "mimetype": "text/x-python",
663 | "name": "python",
664 | "nbconvert_exporter": "python",
665 | "pygments_lexer": "ipython3",
666 | "version": "3.7.6"
667 | }
668 | },
669 | "nbformat": 4,
670 | "nbformat_minor": 4
671 | }
672 |
--------------------------------------------------------------------------------
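
Before moving on, it can help to see the `Block` pattern exercised outside the grader. The sketch below, which assumes `Block` is defined as in the notebook above, confirms that the 'same'-padded convolutions preserve the spatial size while each block's max pooling halves it.

```python
import tensorflow as tf

# Assumes Block is defined as in the notebook above.
block_a = Block(filters=64, kernel_size=3, repetitions=2)

# 'same' padding keeps the 224x224 spatial size through both convolutions;
# the 2x2 max pool with stride 2 then halves each spatial dimension.
x = tf.random.normal((1, 224, 224, 3))
print(block_a(x).shape)  # (1, 112, 112, 64)

# Five such blocks halve 224 five times: 224 -> 112 -> 56 -> 28 -> 14 -> 7,
# which is why MyVGG's Flatten sees a 7x7x512 volume for 224x224 inputs.
```
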
/Course 1/Week5_Bonus Notebook.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "colab_type": "text",
7 | "id": "view-in-github"
8 | },
9 | "source": [
10 | "<!-- Open in Colab badge -->"
11 | ]
12 | },
13 | {
14 | "cell_type": "markdown",
15 | "metadata": {
16 | "colab_type": "text",
17 | "id": "0c_TYhQOUe1j"
18 | },
19 | "source": [
20 | "# Ungraded Lab: Introduction to Keras callbacks\n",
21 | "\n",
22 | "In Keras, `Callback` is a Python class meant to be subclassed to provide specific functionality, with a set of methods called at various stages of training (including the start and end of each batch and epoch), testing, and predicting. Callbacks are useful for getting a view of the model's internal states and statistics during training. Keras provides several built-in [callbacks](https://keras.io/api/callbacks/), and we'll show how you can use them in the following sections. Please click the **Open in Colab** badge above to complete this exercise in Colab. This will allow you to take advantage of the free GPU runtime (for faster training) and compatibility with all the packages needed in this notebook."
23 | ]
24 | },
25 | {
26 | "cell_type": "markdown",
27 | "metadata": {
28 | "colab_type": "text",
29 | "id": "Uyl69EyRQx-f"
30 | },
31 | "source": [
32 | "## Model methods that take callbacks\n",
33 | "Users can supply a list of callbacks to the following `tf.keras.Model` methods:\n",
34 | "* [`fit()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit), [`fit_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit_generator)\n",
35 | "Trains the model for a fixed number of epochs (iterations over a dataset, or data yielded batch-by-batch by a Python generator).\n",
36 | "* [`evaluate()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#evaluate), [`evaluate_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#evaluate_generator)\n",
37 | "Evaluates the model for given data or data generator. Outputs the loss and metric values from the evaluation.\n",
38 | "* [`predict()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#predict), [`predict_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#predict_generator)\n",
39 | "Generates output predictions for the input data or data generator."
40 | ]
41 | },
42 | {
43 | "cell_type": "markdown",
44 | "metadata": {},
45 | "source": [
46 | "## Imports"
47 | ]
48 | },
49 | {
50 | "cell_type": "code",
51 | "execution_count": null,
52 | "metadata": {
53 | "colab": {},
54 | "colab_type": "code",
55 | "id": "AlT1Kh3uA9lZ"
56 | },
57 | "outputs": [],
58 | "source": [
59 | "from __future__ import absolute_import, division, print_function, unicode_literals\n",
60 | "\n",
61 | "try:\n",
62 | " # %tensorflow_version only exists in Colab.\n",
63 | " %tensorflow_version 2.x\n",
64 | "except Exception:\n",
65 | " pass\n",
66 | "\n",
67 | "import tensorflow as tf\n",
68 | "import tensorflow_datasets as tfds\n",
69 | "import matplotlib.pyplot as plt\n",
70 | "import io\n",
71 | "from PIL import Image\n",
72 | "\n",
73 | "from tensorflow.keras.callbacks import TensorBoard, EarlyStopping, LearningRateScheduler, ModelCheckpoint, CSVLogger, ReduceLROnPlateau\n",
74 | "%load_ext tensorboard\n",
75 | "\n",
76 | "import os\n",
77 | "import matplotlib.pylab as plt\n",
78 | "import numpy as np\n",
79 | "import math\n",
80 | "import datetime\n",
81 | "import pandas as pd\n",
82 | "\n",
83 | "print(\"Version: \", tf.__version__)\n",
84 | "tf.get_logger().setLevel('INFO')"
85 | ]
86 | },
87 | {
88 | "cell_type": "markdown",
89 | "metadata": {
90 | "colab_type": "text",
91 | "id": "HnSljqtsXKfb"
92 | },
93 | "source": [
94 | "# Examples of Keras callback applications\n",
95 | "The following section will guide you through creating simple [Callback](https://keras.io/api/callbacks/) applications."
96 | ]
97 | },
98 | {
99 | "cell_type": "code",
100 | "execution_count": null,
101 | "metadata": {
102 | "colab": {},
103 | "colab_type": "code",
104 | "id": "spskRuxvCYQE"
105 | },
106 | "outputs": [],
107 | "source": [
108 | "# Download and prepare the horses or humans dataset\n",
109 | "\n",
110 | "splits, info = tfds.load('horses_or_humans', as_supervised=True, with_info=True, split=['train[:80%]', 'train[80%:]', 'test'])\n",
111 | "\n",
112 | "(train_examples, validation_examples, test_examples) = splits\n",
113 | "\n",
114 | "num_examples = info.splits['train'].num_examples\n",
115 | "num_classes = info.features['label'].num_classes"
116 | ]
117 | },
118 | {
119 | "cell_type": "code",
120 | "execution_count": null,
121 | "metadata": {
122 | "colab": {},
123 | "colab_type": "code",
124 | "id": "veIsubKTCZsN"
125 | },
126 | "outputs": [],
127 | "source": [
128 | "SIZE = 150 #@param {type:\"slider\", min:64, max:300, step:1}\n",
129 | "IMAGE_SIZE = (SIZE, SIZE)"
130 | ]
131 | },
132 | {
133 | "cell_type": "code",
134 | "execution_count": null,
135 | "metadata": {
136 | "colab": {},
137 | "colab_type": "code",
138 | "id": "faajLlErCb1S"
139 | },
140 | "outputs": [],
141 | "source": [
142 | "def format_image(image, label):\n",
143 | " image = tf.image.resize(image, IMAGE_SIZE) / 255.0\n",
144 | " return image, label"
145 | ]
146 | },
147 | {
148 | "cell_type": "code",
149 | "execution_count": null,
150 | "metadata": {
151 | "colab": {},
152 | "colab_type": "code",
153 | "id": "AVXPuU12Cdka"
154 | },
155 | "outputs": [],
156 | "source": [
157 | "BATCH_SIZE = 32 #@param {type:\"integer\"}"
158 | ]
159 | },
160 | {
161 | "cell_type": "code",
162 | "execution_count": null,
163 | "metadata": {
164 | "colab": {},
165 | "colab_type": "code",
166 | "id": "0lHDkFVaCe48"
167 | },
168 | "outputs": [],
169 | "source": [
170 | "train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)\n",
171 | "validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)\n",
172 | "test_batches = test_examples.map(format_image).batch(1)"
173 | ]
174 | },
175 | {
176 | "cell_type": "code",
177 | "execution_count": null,
178 | "metadata": {
179 | "colab": {},
180 | "colab_type": "code",
181 | "id": "DxsCqEIkCgUt"
182 | },
183 | "outputs": [],
184 | "source": [
185 | "for image_batch, label_batch in train_batches.take(1):\n",
186 | " pass\n",
187 | "\n",
188 | "image_batch.shape"
189 | ]
190 | },
191 | {
192 | "cell_type": "code",
193 | "execution_count": null,
194 | "metadata": {
195 | "colab": {},
196 | "colab_type": "code",
197 | "id": "iDBpWvHXCh2A"
198 | },
199 | "outputs": [],
200 | "source": [
201 | "def build_model(dense_units, input_shape=IMAGE_SIZE + (3,)):\n",
202 | " model = tf.keras.models.Sequential([\n",
203 | " tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=input_shape),\n",
204 | " tf.keras.layers.MaxPooling2D(2, 2),\n",
205 | " tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),\n",
206 | " tf.keras.layers.MaxPooling2D(2, 2),\n",
207 | " tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),\n",
208 | " tf.keras.layers.MaxPooling2D(2, 2),\n",
209 | " tf.keras.layers.Flatten(),\n",
210 | " tf.keras.layers.Dense(dense_units, activation='relu'),\n",
211 | " tf.keras.layers.Dense(2, activation='softmax')\n",
212 | " ])\n",
213 | " return model"
214 | ]
215 | },
216 | {
217 | "cell_type": "markdown",
218 | "metadata": {
219 | "colab_type": "text",
220 | "id": "0ZKGkjagENSw"
221 | },
222 | "source": [
223 | "## [TensorBoard](https://keras.io/api/callbacks/tensorboard/)\n",
224 | "\n",
225 | "Enable visualizations for TensorBoard."
226 | ]
227 | },
228 | {
229 | "cell_type": "code",
230 | "execution_count": null,
231 | "metadata": {
232 | "colab": {},
233 | "colab_type": "code",
234 | "id": "CeiD2WVEHbex"
235 | },
236 | "outputs": [],
237 | "source": [
238 | "!rm -rf logs"
239 | ]
240 | },
241 | {
242 | "cell_type": "code",
243 | "execution_count": null,
244 | "metadata": {
245 | "colab": {},
246 | "colab_type": "code",
247 | "id": "PpLwPLnAEOzv"
248 | },
249 | "outputs": [],
250 | "source": [
251 | "model = build_model(dense_units=256)\n",
252 | "model.compile(\n",
253 | " optimizer='sgd',\n",
254 | " loss='sparse_categorical_crossentropy', \n",
255 | " metrics=['accuracy'])\n",
256 | " \n",
257 | "logdir = os.path.join(\"logs\", datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\n",
258 | "tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir)\n",
259 | "\n",
260 | "model.fit(train_batches, \n",
261 | " epochs=10, \n",
262 | " validation_data=validation_batches, \n",
263 | " callbacks=[tensorboard_callback])"
264 | ]
265 | },
266 | {
267 | "cell_type": "code",
268 | "execution_count": null,
269 | "metadata": {
270 | "colab": {},
271 | "colab_type": "code",
272 | "id": "iJunWOjZE0ir"
273 | },
274 | "outputs": [],
275 | "source": [
276 | "%tensorboard --logdir logs"
277 | ]
278 | },
279 | {
280 | "cell_type": "markdown",
281 | "metadata": {
282 | "colab_type": "text",
283 | "id": "wv9H4Pc2Mfl7"
284 | },
285 | "source": [
286 | "## [Model Checkpoint](https://keras.io/api/callbacks/model_checkpoint/)\n",
287 | "\n",
288 | "Callback to save the Keras model or model weights at some frequency."
289 | ]
290 | },
291 | {
292 | "cell_type": "code",
293 | "execution_count": null,
294 | "metadata": {
295 | "colab": {},
296 | "colab_type": "code",
297 | "id": "PYV4FJ8iMmDq"
298 | },
299 | "outputs": [],
300 | "source": [
301 | "model = build_model(dense_units=256)\n",
302 | "model.compile(\n",
303 | " optimizer='sgd',\n",
304 | " loss='sparse_categorical_crossentropy', \n",
305 | " metrics=['accuracy'])\n",
306 | " \n",
307 | "model.fit(train_batches, \n",
308 | " epochs=5, \n",
309 | " validation_data=validation_batches, \n",
310 | " verbose=2,\n",
311 | " callbacks=[ModelCheckpoint('weights.{epoch:02d}-{val_loss:.2f}.h5', verbose=1),\n",
312 | " ])"
313 | ]
314 | },
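
The filename template above is filled in from the epoch number and the logged metrics, so this run saves one file per epoch with names along the lines of `weights.01-0.69.h5`, `weights.02-0.52.h5`, and so on (the loss values shown are illustrative). Passing a plain directory name or an `.h5` path instead, as the next two cells do, saves in the SavedModel or HDF5 format respectively.
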
315 | {
316 | "cell_type": "code",
317 | "execution_count": null,
318 | "metadata": {
319 | "colab": {},
320 | "colab_type": "code",
321 | "id": "oGvjQ8IlMmK6"
322 | },
323 | "outputs": [],
324 | "source": [
325 | "model = build_model(dense_units=256)\n",
326 | "model.compile(\n",
327 | " optimizer='sgd',\n",
328 | " loss='sparse_categorical_crossentropy', \n",
329 | " metrics=['accuracy'])\n",
330 | " \n",
331 | "model.fit(train_batches, \n",
332 | " epochs=1, \n",
333 | " validation_data=validation_batches, \n",
334 | " verbose=2,\n",
335 | " callbacks=[ModelCheckpoint('saved_model', verbose=1)\n",
336 | " ])"
337 | ]
338 | },
339 | {
340 | "cell_type": "code",
341 | "execution_count": null,
342 | "metadata": {
343 | "colab": {},
344 | "colab_type": "code",
345 | "id": "Y1ConwoB0EjD"
346 | },
347 | "outputs": [],
348 | "source": [
349 | "model = build_model(dense_units=256)\n",
350 | "model.compile(\n",
351 | " optimizer='sgd',\n",
352 | " loss='sparse_categorical_crossentropy', \n",
353 | " metrics=['accuracy'])\n",
354 | " \n",
355 | "model.fit(train_batches, \n",
356 | " epochs=2, \n",
357 | " validation_data=validation_batches, \n",
358 | " verbose=2,\n",
359 | " callbacks=[ModelCheckpoint('model.h5', verbose=1)\n",
360 | " ])"
361 | ]
362 | },
363 | {
364 | "cell_type": "markdown",
365 | "metadata": {
366 | "colab_type": "text",
367 | "id": "kptNF0--Lznv"
368 | },
369 | "source": [
370 | "## [Early stopping](https://keras.io/api/callbacks/early_stopping/)\n",
371 | "\n",
372 | "Stop training when a monitored metric has stopped improving."
373 | ]
374 | },
375 | {
376 | "cell_type": "code",
377 | "execution_count": null,
378 | "metadata": {
379 | "colab": {},
380 | "colab_type": "code",
381 | "id": "KJOJTJYdCkdY"
382 | },
383 | "outputs": [],
384 | "source": [
385 | "model = build_model(dense_units=256)\n",
386 | "model.compile(\n",
387 | " optimizer='sgd',\n",
388 | " loss='sparse_categorical_crossentropy', \n",
389 | " metrics=['accuracy'])\n",
390 | " \n",
391 | "model.fit(train_batches, \n",
392 | " epochs=50, \n",
393 | " validation_data=validation_batches, \n",
394 | " verbose=2,\n",
395 | " callbacks=[EarlyStopping(\n",
396 | " patience=3,\n",
397 | " min_delta=0.05,\n",
398 | " baseline=0.8,\n",
399 | " mode='min',\n",
400 | " monitor='val_loss',\n",
401 | " restore_best_weights=True,\n",
402 | " verbose=1)\n",
403 | " ])"
404 | ]
405 | },
406 | {
407 | "cell_type": "markdown",
408 | "metadata": {
409 | "colab_type": "text",
410 | "id": "8mDzWUD4Pqq5"
411 | },
412 | "source": [
413 | "## [CSV Logger](https://keras.io/api/callbacks/csv_logger/)\n",
414 | "\n",
415 | "Callback that streams epoch results to a CSV file."
416 | ]
417 | },
418 | {
419 | "cell_type": "code",
420 | "execution_count": null,
421 | "metadata": {
422 | "colab": {},
423 | "colab_type": "code",
424 | "id": "cffnMpmGPtMh"
425 | },
426 | "outputs": [],
427 | "source": [
428 | "model = build_model(dense_units=256)\n",
429 | "model.compile(\n",
430 | " optimizer='sgd',\n",
431 | " loss='sparse_categorical_crossentropy', \n",
432 | " metrics=['accuracy'])\n",
433 | " \n",
434 | "csv_file = 'training.csv'\n",
435 | "\n",
436 | "model.fit(train_batches, \n",
437 | " epochs=5, \n",
438 | " validation_data=validation_batches, \n",
439 | " callbacks=[CSVLogger(csv_file)\n",
440 | " ])"
441 | ]
442 | },
443 | {
444 | "cell_type": "code",
445 | "execution_count": null,
446 | "metadata": {
447 | "colab": {},
448 | "colab_type": "code",
449 | "id": "B9tkYi03QV7R"
450 | },
451 | "outputs": [],
452 | "source": [
453 | "pd.read_csv(csv_file).head()"
454 | ]
455 | },
456 | {
457 | "cell_type": "markdown",
458 | "metadata": {
459 | "colab_type": "text",
460 | "id": "Dt9C2Y9fRBKN"
461 | },
462 | "source": [
463 | "## [Learning Rate Scheduler](https://keras.io/api/callbacks/learning_rate_scheduler/)\n",
464 | "\n",
465 | "Updates the learning rate during training."
466 | ]
467 | },
468 | {
469 | "cell_type": "code",
470 | "execution_count": null,
471 | "metadata": {
472 | "colab": {},
473 | "colab_type": "code",
474 | "id": "aJi-xY2VRC03"
475 | },
476 | "outputs": [],
477 | "source": [
478 | "model = build_model(dense_units=256)\n",
479 | "model.compile(\n",
480 | " optimizer='sgd',\n",
481 | " loss='sparse_categorical_crossentropy', \n",
482 | " metrics=['accuracy'])\n",
483 | " \n",
484 | "def step_decay(epoch):\n",
485 | "\tinitial_lr = 0.01\n",
486 | "\tdrop = 0.5\n",
487 | "\tepochs_drop = 1\n",
488 | "\tlr = initial_lr * math.pow(drop, math.floor((1+epoch)/epochs_drop))\n",
489 | "\treturn lr\n",
490 | "\n",
491 | "model.fit(train_batches, \n",
492 | " epochs=5, \n",
493 | " validation_data=validation_batches, \n",
494 | " callbacks=[LearningRateScheduler(step_decay, verbose=1),\n",
495 | " TensorBoard(log_dir='./log_dir')])"
496 | ]
497 | },
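
With the values used above (`initial_lr=0.01`, `drop=0.5`, `epochs_drop=1`), the schedule halves the learning rate every epoch. A quick check of the formula outside the callback:

```python
import math

def step_decay(epoch, initial_lr=0.01, drop=0.5, epochs_drop=1):
    # lr = initial_lr * drop ** floor((1 + epoch) / epochs_drop)
    return initial_lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

print([step_decay(e) for e in range(5)])
# [0.005, 0.0025, 0.00125, 0.000625, 0.0003125]
```
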
498 | {
499 | "cell_type": "code",
500 | "execution_count": null,
501 | "metadata": {
502 | "colab": {},
503 | "colab_type": "code",
504 | "id": "M2S4n8nrbV91"
505 | },
506 | "outputs": [],
507 | "source": [
508 | "%tensorboard --logdir log_dir"
509 | ]
510 | },
511 | {
512 | "cell_type": "markdown",
513 | "metadata": {
514 | "colab_type": "text",
515 | "id": "y0wcuQyJE_UK"
516 | },
517 | "source": [
518 | "## [ReduceLROnPlateau](https://keras.io/api/callbacks/reduce_lr_on_plateau/)\n",
519 | "\n",
520 | "Reduce learning rate when a metric has stopped improving."
521 | ]
522 | },
523 | {
524 | "cell_type": "code",
525 | "execution_count": null,
526 | "metadata": {
527 | "colab": {},
528 | "colab_type": "code",
529 | "id": "4naxZ-eCFB27"
530 | },
531 | "outputs": [],
532 | "source": [
533 | "model = build_model(dense_units=256)\n",
534 | "model.compile(\n",
535 | " optimizer='sgd',\n",
536 | " loss='sparse_categorical_crossentropy', \n",
537 | " metrics=['accuracy'])\n",
538 | " \n",
539 | "model.fit(train_batches, \n",
540 | " epochs=50, \n",
541 | " validation_data=validation_batches, \n",
542 | " callbacks=[ReduceLROnPlateau(monitor='val_loss', \n",
543 | " factor=0.2, verbose=1,\n",
544 | " patience=1, min_lr=0.001),\n",
545 | " TensorBoard(log_dir='./log_dir')])"
546 | ]
547 | },
548 | {
549 | "cell_type": "code",
550 | "execution_count": null,
551 | "metadata": {
552 | "colab": {},
553 | "colab_type": "code",
554 | "id": "isfTWP4NYudk"
555 | },
556 | "outputs": [],
557 | "source": [
558 | "%tensorboard --logdir log_dir"
559 | ]
560 | }
561 | ],
562 | "metadata": {
563 | "colab": {
564 | "collapsed_sections": [],
565 | "include_colab_link": true,
566 | "name": "ExploringCallbacks.ipynb",
567 | "provenance": []
568 | },
569 | "kernelspec": {
570 | "display_name": "Python 3",
571 | "language": "python",
572 | "name": "python3"
573 | },
574 | "language_info": {
575 | "codemirror_mode": {
576 | "name": "ipython",
577 | "version": 3
578 | },
579 | "file_extension": ".py",
580 | "mimetype": "text/x-python",
581 | "name": "python",
582 | "nbconvert_exporter": "python",
583 | "pygments_lexer": "ipython3",
584 | "version": "3.7.6"
585 | }
586 | },
587 | "nbformat": 4,
588 | "nbformat_minor": 4
589 | }
590 |
--------------------------------------------------------------------------------
/Course 2/Week1_Assignment.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Basic Tensor operations and GradientTape.\n",
8 | "\n",
9 | "In this graded assignment, you will perform different tensor operations as well as use [GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape). These are important building blocks for the next parts of this course so it's important to master the basics. Let's begin!"
10 | ]
11 | },
12 | {
13 | "cell_type": "code",
14 | "execution_count": 1,
15 | "metadata": {
16 | "colab": {},
17 | "colab_type": "code",
18 | "id": "jqev488WJ9-R"
19 | },
20 | "outputs": [],
21 | "source": [
22 | "import tensorflow as tf\n",
23 | "import numpy as np"
24 | ]
25 | },
26 | {
27 | "cell_type": "markdown",
28 | "metadata": {},
29 | "source": [
30 | "## Exercise 1 - [tf.constant]((https://www.tensorflow.org/api_docs/python/tf/constant))\n",
31 | "\n",
32 | "Creates a constant tensor from a tensor-like object. "
33 | ]
34 | },
35 | {
36 | "cell_type": "code",
37 | "execution_count": 2,
38 | "metadata": {
39 | "colab": {},
40 | "colab_type": "code",
41 | "id": "MYdVyiSoLPgO"
42 | },
43 | "outputs": [],
44 | "source": [
45 | "# Convert NumPy array to Tensor using `tf.constant`\n",
46 | "def tf_constant(array):\n",
47 | " \"\"\"\n",
48 | " Args:\n",
49 | " array (numpy.ndarray): tensor-like array.\n",
50 | "\n",
51 | " Returns:\n",
52 | " tensorflow.python.framework.ops.EagerTensor: tensor.\n",
53 | " \"\"\"\n",
54 | " ### START CODE HERE ###\n",
55 | " tf_constant_array = tf.constant(array)\n",
56 | " ### END CODE HERE ###\n",
57 | " return tf_constant_array"
58 | ]
59 | },
60 | {
61 | "cell_type": "code",
62 | "execution_count": 3,
63 | "metadata": {},
64 | "outputs": [
65 | {
66 | "output_type": "execute_result",
67 | "data": {
68 | "text/plain": [
69 | ""
70 | ]
71 | },
72 | "metadata": {},
73 | "execution_count": 3
74 | }
75 | ],
76 | "source": [
77 | "tmp_array = np.arange(1,10)\n",
78 | "x = tf_constant(tmp_array)\n",
79 | "x\n",
80 | "\n",
81 | "# Expected output:\n",
82 | "# "
83 | ]
84 | },
85 | {
86 | "cell_type": "markdown",
87 | "metadata": {},
88 | "source": [
89 | "Note that for future docstrings, the type `EagerTensor` will be used as a shortened version of `tensorflow.python.framework.ops.EagerTensor`."
90 | ]
91 | },
92 | {
93 | "cell_type": "markdown",
94 | "metadata": {},
95 | "source": [
96 | "## Exercise 2 - [tf.square](https://www.tensorflow.org/api_docs/python/tf/math/square)\n",
97 | "\n",
98 | "Computes the square of a tensor element-wise."
99 | ]
100 | },
101 | {
102 | "cell_type": "code",
103 | "execution_count": 4,
104 | "metadata": {
105 | "colab": {},
106 | "colab_type": "code",
107 | "id": "W6BTwNJCLjV8"
108 | },
109 | "outputs": [],
110 | "source": [
111 | "# Square the input tensor\n",
112 | "def tf_square(array):\n",
113 | " \"\"\"\n",
114 | " Args:\n",
115 | " array (numpy.ndarray): tensor-like array.\n",
116 | "\n",
117 | " Returns:\n",
118 | " EagerTensor: tensor.\n",
119 | " \"\"\"\n",
120 | " # make sure it's a tensor\n",
121 | " array = tf.constant(array)\n",
122 | " \n",
123 | " ### START CODE HERE ###\n",
124 | " tf_squared_array = tf.square(array)\n",
125 | " ### END CODE HERE ###\n",
126 | " return tf_squared_array"
127 | ]
128 | },
129 | {
130 | "cell_type": "code",
131 | "execution_count": 5,
132 | "metadata": {},
133 | "outputs": [
134 | {
135 | "data": {
136 | "text/plain": [
137 | ""
138 | ]
139 | },
140 | "execution_count": 5,
141 | "metadata": {},
142 | "output_type": "execute_result"
143 | }
144 | ],
145 | "source": [
146 | "tmp_array = tf.constant(np.arange(1, 10))\n",
147 | "x = tf_square(tmp_array)\n",
148 | "x\n",
149 | "\n",
150 | "# Expected output:\n",
151 | "# "
152 | ]
153 | },
154 | {
155 | "cell_type": "markdown",
156 | "metadata": {},
157 | "source": [
158 | "## Exercise 3 - [tf.reshape](https://www.tensorflow.org/api_docs/python/tf/reshape)\n",
159 | "\n",
160 | "Reshapes a tensor."
161 | ]
162 | },
163 | {
164 | "cell_type": "code",
165 | "execution_count": 6,
166 | "metadata": {
167 | "colab": {},
168 | "colab_type": "code",
169 | "id": "7nzBSX8-L0Xt"
170 | },
171 | "outputs": [],
172 | "source": [
173 | "# Reshape tensor into the given shape parameter\n",
174 | "def tf_reshape(array, shape):\n",
175 | " \"\"\"\n",
176 | " Args:\n",
177 | " array (EagerTensor): tensor to reshape.\n",
178 | " shape (tuple): desired shape.\n",
179 | "\n",
180 | " Returns:\n",
181 | " EagerTensor: reshaped tensor.\n",
182 | " \"\"\"\n",
183 | " # make sure it's a tensor\n",
184 | " array = tf.constant(array)\n",
185 | " ### START CODE HERE ###\n",
186 | " tf_reshaped_array = tf.reshape(array, shape)\n",
187 | " ### END CODE HERE ###\n",
188 | " return tf_reshaped_array"
189 | ]
190 | },
191 | {
192 | "cell_type": "code",
193 | "execution_count": 9,
194 | "metadata": {},
195 | "outputs": [
196 | {
197 | "data": {
198 | "text/plain": [
199 | ""
203 | ]
204 | },
205 | "execution_count": 9,
206 | "metadata": {},
207 | "output_type": "execute_result"
208 | }
209 | ],
210 | "source": [
211 | "# Check your function\n",
212 | "tmp_array = np.array([1,2,3,4,5,6,7,8,9])\n",
213 | "# Check that your function reshapes a vector into a matrix\n",
214 | "x = tf_reshape(tmp_array, (3, 3))\n",
215 | "x\n",
216 | "\n",
217 | "# Expected output:\n",
218 | "# "
271 | ]
272 | },
273 | "execution_count": 11,
274 | "metadata": {},
275 | "output_type": "execute_result"
276 | }
277 | ],
278 | "source": [
279 | "# Check your function\n",
280 | "tmp_array = [1,2,3,4]\n",
281 | "x = tf_cast(tmp_array, tf.float32)\n",
282 | "x\n",
283 | "\n",
284 | "# Expected output:\n",
285 | "# "
286 | ]
287 | },
288 | {
289 | "cell_type": "markdown",
290 | "metadata": {},
291 | "source": [
292 | "## Exercise 5 - [tf.multiply](https://www.tensorflow.org/api_docs/python/tf/multiply)\n",
293 | "\n",
294 | "Returns an element-wise x * y."
295 | ]
296 | },
297 | {
298 | "cell_type": "code",
299 | "execution_count": 12,
300 | "metadata": {
301 | "colab": {},
302 | "colab_type": "code",
303 | "id": "ivepGtD5MKP5"
304 | },
305 | "outputs": [],
306 | "source": [
307 | "# Multiply tensor1 and tensor2\n",
308 | "def tf_multiply(tensor1, tensor2):\n",
309 | " \"\"\"\n",
310 | " Args:\n",
311 | " tensor1 (EagerTensor): a tensor.\n",
312 | " tensor2 (EagerTensor): another tensor.\n",
313 | "\n",
314 | " Returns:\n",
315 | " EagerTensor: resulting tensor.\n",
316 | " \"\"\"\n",
317 | " # make sure these are tensors\n",
318 | " tensor1 = tf.constant(tensor1)\n",
319 | " tensor2 = tf.constant(tensor2)\n",
320 | " \n",
321 | " ### START CODE HERE ###\n",
322 | " product = tf.multiply(tensor1, tensor2)\n",
323 | " ### END CODE HERE ###\n",
324 | " return product\n"
325 | ]
326 | },
327 | {
328 | "cell_type": "code",
329 | "execution_count": 13,
330 | "metadata": {},
331 | "outputs": [
332 | {
333 | "data": {
334 | "text/plain": [
335 | ""
338 | ]
339 | },
340 | "execution_count": 13,
341 | "metadata": {},
342 | "output_type": "execute_result"
343 | }
344 | ],
345 | "source": [
346 | "# Check your function\n",
347 | "tmp_1 = tf.constant(np.array([[1,2],[3,4]]))\n",
348 | "tmp_2 = tf.constant(np.array(2))\n",
349 | "result = tf_multiply(tmp_1, tmp_2)\n",
350 | "result\n",
351 | "\n",
352 | "# Expected output:\n",
353 | "# "
356 | ]
357 | },
358 | {
359 | "cell_type": "markdown",
360 | "metadata": {},
361 | "source": [
362 | "## Exercise 6 - [tf.add](https://www.tensorflow.org/api_docs/python/tf/add)\n",
363 | "\n",
364 | "Returns x + y element-wise."
365 | ]
366 | },
367 | {
368 | "cell_type": "code",
369 | "execution_count": 14,
370 | "metadata": {
371 | "colab": {},
372 | "colab_type": "code",
373 | "id": "BVlntdYnMboh"
374 | },
375 | "outputs": [],
376 | "source": [
377 | "# Add tensor1 and tensor2\n",
378 | "def tf_add(tensor1, tensor2):\n",
379 | " \"\"\"\n",
380 | " Args:\n",
381 | " tensor1 (EagerTensor): a tensor.\n",
382 | " tensor2 (EagerTensor): another tensor.\n",
383 | "\n",
384 | " Returns:\n",
385 | " EagerTensor: resulting tensor.\n",
386 | " \"\"\"\n",
387 | " # make sure these are tensors\n",
388 | " tensor1 = tf.constant(tensor1)\n",
389 | " tensor2 = tf.constant(tensor2)\n",
390 | " \n",
391 | " ### START CODE HERE ###\n",
392 | " total = tf.add(tensor1, tensor2)\n",
393 | " ### END CODE HERE ###\n",
394 | " return total"
395 | ]
396 | },
397 | {
398 | "cell_type": "code",
399 | "execution_count": 15,
400 | "metadata": {},
401 | "outputs": [
402 | {
403 | "data": {
404 | "text/plain": [
405 | ""
406 | ]
407 | },
408 | "execution_count": 15,
409 | "metadata": {},
410 | "output_type": "execute_result"
411 | }
412 | ],
413 | "source": [
414 | "# Check your function\n",
415 | "tmp_1 = tf.constant(np.array([1, 2, 3]))\n",
416 | "tmp_2 = tf.constant(np.array([4, 5, 6]))\n",
417 | "tf_add(tmp_1, tmp_2)\n",
418 | "\n",
419 | "# Expected output:\n",
420 | "# "
421 | ]
422 | },
423 | {
424 | "cell_type": "markdown",
425 | "metadata": {
426 | "colab_type": "text",
427 | "id": "9EN0W15EWNjD"
428 | },
429 | "source": [
430 | "## Exercise 7 - Gradient Tape\n",
431 | "\n",
432 | "Implement the function `tf_gradient_tape` by replacing the instances of `None` in the code below. The instructions are given in the code comments.\n",
433 | "\n",
434 | "You can review the [docs](https://www.tensorflow.org/api_docs/python/tf/GradientTape) or revisit the lectures to complete this task."
435 | ]
436 | },
437 | {
438 | "cell_type": "code",
439 | "execution_count": 18,
440 | "metadata": {
441 | "colab": {},
442 | "colab_type": "code",
443 | "id": "p3K94BWZM6nW"
444 | },
445 | "outputs": [],
446 | "source": [
447 | "def tf_gradient_tape(x):\n",
448 | " \"\"\"\n",
449 | " Args:\n",
450 | " x (EagerTensor): a tensor.\n",
451 | "\n",
452 | " Returns:\n",
453 | " EagerTensor: Derivative of z with respect to the input tensor x.\n",
454 | " \"\"\"\n",
455 | " with tf.GradientTape() as t:\n",
456 | " \n",
457 | " ### START CODE HERE ###\n",
458 | " # Record the actions performed on tensor x with `watch`\n",
459 | " t.watch(x) \n",
460 | "\n",
461 | " # Define a polynomial of form 3x^3 - 2x^2 + x\n",
462 | " y = (3 * x ** 3) - (2 * x ** 2) + x\n",
463 | "\n",
464 | " # Obtain the sum of the elements in variable y\n",
465 | " z = tf.reduce_sum(y)\n",
466 | " \n",
467 | " # Get the derivative of z with respect to the original input tensor x\n",
468 | " dz_dx = t.gradient(z, x)\n",
469 | " ### END CODE HERE\n",
470 | " \n",
471 | " return dz_dx"
472 | ]
473 | },
474 | {
475 | "cell_type": "code",
476 | "execution_count": 19,
477 | "metadata": {},
478 | "outputs": [
479 | {
480 | "data": {
481 | "text/plain": [
482 | "29.0"
483 | ]
484 | },
485 | "execution_count": 19,
486 | "metadata": {},
487 | "output_type": "execute_result"
488 | }
489 | ],
490 | "source": [
491 | "# Check your function\n",
492 | "tmp_x = tf.constant(2.0)\n",
493 | "dz_dx = tf_gradient_tape(tmp_x)\n",
494 | "result = dz_dx.numpy()\n",
495 | "result\n",
496 | "\n",
497 | "# Expected output:\n",
498 | "# 29.0"
499 | ]
500 | },
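
A quick check of the expected value by hand: for $y = 3x^3 - 2x^2 + x$ (and $z$ the sum of $y$'s elements, which for a scalar input is just $y$), the derivative is $\frac{dz}{dx} = 9x^2 - 4x + 1$, so at $x = 2$ the gradient is $9(4) - 4(2) + 1 = 36 - 8 + 1 = 29$, matching the output above.
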
501 | {
502 | "cell_type": "markdown",
503 | "metadata": {},
504 | "source": [
505 | "**Congratulations on finishing this week's assignment!**\n",
506 | "\n",
507 | "**Keep it up!**"
508 | ]
509 | }
510 | ],
511 | "metadata": {
512 | "coursera": {
513 | "schema_names": [
514 | "TF3C2W1-1",
515 | "TF3C2W1-2",
516 | "TF3C2W1-3",
517 | "TF3C2W1-4",
518 | "TF3C2W1-5",
519 | "TF3C2W1-6",
520 | "TF3C2W1-7"
521 | ]
522 | },
523 | "kernelspec": {
524 | "display_name": "Python 3",
525 | "language": "python",
526 | "name": "python3"
527 | },
528 | "language_info": {
529 | "codemirror_mode": {
530 | "name": "ipython",
531 | "version": 3
532 | },
533 | "file_extension": ".py",
534 | "mimetype": "text/x-python",
535 | "name": "python",
536 | "nbconvert_exporter": "python",
537 | "pygments_lexer": "ipython3",
538 | "version": "3.8.5-final"
539 | }
540 | },
541 | "nbformat": 4,
542 | "nbformat_minor": 4
543 | }
--------------------------------------------------------------------------------
/Course 2/Week3_Assignment.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Horse or Human? In-graph training loop Assignment\n",
8 | "\n",
9 | "This assignment lets you practice how to train a Keras model on the [horses_or_humans](https://www.tensorflow.org/datasets/catalog/horses_or_humans) dataset with the entire training process performed in graph mode. These steps include:\n",
10 | "- loading batches\n",
11 | "- calculating gradients\n",
12 | "- updating parameters\n",
13 | "- calculating validation accuracy\n",
14 | "- repeating the loop until convergence"
15 | ]
16 | },
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "colab_type": "text",
21 | "id": "n4EKOpw9mObL"
22 | },
23 | "source": [
24 | "## Setup\n",
25 | "\n",
26 | "Import TensorFlow 2.0:"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": 1,
32 | "metadata": {
33 | "colab": {},
34 | "colab_type": "code",
35 | "id": "V9oECvVSI1Kj"
36 | },
37 | "outputs": [],
38 | "source": [
39 | "from __future__ import absolute_import, division, print_function, unicode_literals\n",
40 | "import numpy as np"
41 | ]
42 | },
43 | {
44 | "cell_type": "code",
45 | "execution_count": 2,
46 | "metadata": {
47 | "colab": {},
48 | "colab_type": "code",
49 | "id": "mT7meGqrZTz9"
50 | },
51 | "outputs": [],
52 | "source": [
53 | "import tensorflow as tf\n",
54 | "import tensorflow_datasets as tfds\n",
55 | "import tensorflow_hub as hub\n",
56 | "import matplotlib.pyplot as plt"
57 | ]
58 | },
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {
62 | "colab_type": "text",
63 | "id": "Em5dzSUOtLRP"
64 | },
65 | "source": [
66 | "### Prepare the dataset\n",
67 | "\n",
68 | "Load the horses to human dataset, splitting 80% for the training set and 20% for the test set."
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [
76 | {
77 | "name": "stdout",
78 | "output_type": "stream",
79 | "text": [
80 | "1027 2\n"
81 | ]
82 | }
83 | ],
84 | "source": [
85 | "splits, info = tfds.load('horses_or_humans', as_supervised=True, with_info=True, split=['train[:80%]', 'train[80%:]', 'test'], data_dir='./data')\n",
86 | "\n",
87 | "(train_examples, validation_examples, test_examples) = splits\n",
88 | "\n",
89 | "num_examples = info.splits['train'].num_examples\n",
90 | "num_classes = info.features['label'].num_classes\n",
91 | "print(num_examples, num_classes)"
92 | ]
93 | },
94 | {
95 | "cell_type": "code",
96 | "execution_count": 4,
97 | "metadata": {
98 | "colab": {},
99 | "colab_type": "code",
100 | "id": "cJdruxxGhBi5"
101 | },
102 | "outputs": [],
103 | "source": [
104 | "BATCH_SIZE = 32\n",
105 | "IMAGE_SIZE = 224"
106 | ]
107 | },
108 | {
109 | "cell_type": "markdown",
110 | "metadata": {},
111 | "source": [
112 | "## Pre-process an image (please complete this section)\n",
113 | "\n",
114 | "You'll define a mapping function that resizes the image to a height of 224 by 224, and normalizes the pixels to the range of 0 to 1. Note that pixels range from 0 to 255.\n",
115 | "\n",
116 | "- You'll use the following function: [tf.image.resize](https://www.tensorflow.org/api_docs/python/tf/image/resize) and pass in the (height,width) as a tuple (or list).\n",
117 | "- To normalize, divide by a floating value so that the pixel range changes from [0,255] to [0,1]."
118 | ]
119 | },
120 | {
121 | "cell_type": "code",
122 | "execution_count": 5,
123 | "metadata": {
124 | "colab": {},
125 | "colab_type": "code",
126 | "id": "qpQi4Jo9cFq0"
127 | },
128 | "outputs": [],
129 | "source": [
130 | "# Create a autograph pre-processing function to resize and normalize an image\n",
131 | "### START CODE HERE ###\n",
132 | "@tf.function\n",
133 | "def map_fn(img, label):\n",
134 | " image_height = 224\n",
135 | " image_width = 224\n",
136 | "### START CODE HERE ###\n",
137 | " # resize the image\n",
138 | " img = tf.image.resize(img, (image_height, image_width))\n",
139 | " # normalize the image\n",
140 | " img /= 255.\n",
141 | "### END CODE HERE\n",
142 | " return img, label"
143 | ]
144 | },
145 | {
146 | "cell_type": "code",
147 | "execution_count": 6,
148 | "metadata": {},
149 | "outputs": [
150 | {
151 | "name": "stdout",
152 | "output_type": "stream",
153 | "text": [
154 | "(224, 224, 3)\n",
155 | "()\n"
156 | ]
157 | }
158 | ],
159 | "source": [
160 | "## TEST CODE:\n",
161 | "\n",
162 | "test_image, test_label = list(train_examples)[0]\n",
163 | "\n",
164 | "test_result = map_fn(test_image, test_label)\n",
165 | "\n",
166 | "print(test_result[0].shape)\n",
167 | "print(test_result[1].shape)\n",
168 | "\n",
169 | "del test_image, test_label, test_result"
170 | ]
171 | },
172 | {
173 | "cell_type": "markdown",
174 | "metadata": {},
175 | "source": [
176 | "**Expected Output:**\n",
177 | "\n",
178 | "```\n",
179 | "(224, 224, 3)\n",
180 | "()\n",
181 | "```"
182 | ]
183 | },
184 | {
185 | "cell_type": "markdown",
186 | "metadata": {},
187 | "source": [
188 | "## Apply pre-processing to the datasets (please complete this section)\n",
189 | "\n",
190 | "Apply the following steps to the training_examples:\n",
191 | "- Apply the `map_fn` to the training_examples\n",
192 | "- Shuffle the training data using `.shuffle(buffer_size=)` and set the buffer size to the number of examples.\n",
193 | "- Group these into batches using `.batch()` and set the batch size given by the parameter.\n",
194 | "\n",
195 | "Hint: You can look at how validation_examples and test_examples are pre-processed to get a sense of how to chain together multiple function calls."
196 | ]
197 | },
198 | {
199 | "cell_type": "code",
200 | "execution_count": 7,
201 | "metadata": {
202 | "colab": {},
203 | "colab_type": "code",
204 | "id": "sv5bEYhaeUUO"
205 | },
206 | "outputs": [],
207 | "source": [
208 | "# Prepare train dataset by using preprocessing with map_fn, shuffling and batching\n",
209 | "def prepare_dataset(train_examples, validation_examples, test_examples, num_examples, map_fn, batch_size):\n",
210 | " ### START CODE HERE ###\n",
211 | " train_ds = train_examples.map(map_fn).shuffle(buffer_size=num_examples).batch(batch_size)\n",
212 | " ### END CODE HERE ###\n",
213 | " valid_ds = validation_examples.map(map_fn).batch(batch_size)\n",
214 | " test_ds = test_examples.map(map_fn).batch(batch_size)\n",
215 | " \n",
216 | " return train_ds, valid_ds, test_ds"
217 | ]
218 | },
219 | {
220 | "cell_type": "code",
221 | "execution_count": 8,
222 | "metadata": {},
223 | "outputs": [],
224 | "source": [
225 | "train_ds, valid_ds, test_ds = prepare_dataset(train_examples, validation_examples, test_examples, num_examples, map_fn, BATCH_SIZE)"
226 | ]
227 | },
228 | {
229 | "cell_type": "code",
230 | "execution_count": 9,
231 | "metadata": {},
232 | "outputs": [
233 | {
234 | "name": "stdout",
235 | "output_type": "stream",
236 | "text": [
237 | "26\n",
238 | "(32, 224, 224, 3)\n"
239 | ]
240 | }
241 | ],
242 | "source": [
243 | "## TEST CODE:\n",
244 | "\n",
245 | "test_train_ds = list(train_ds)\n",
246 | "print(len(test_train_ds))\n",
247 | "print(test_train_ds[0][0].shape)\n",
248 | "\n",
249 | "del test_train_ds"
250 | ]
251 | },
252 | {
253 | "cell_type": "markdown",
254 | "metadata": {},
255 | "source": [
256 | "**Expected Output:**\n",
257 | "\n",
258 | "```\n",
259 | "26\n",
260 | "(32, 224, 224, 3)\n",
261 | "```"
262 | ]
263 | },
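
The 26 follows from the split sizes: 80% of the 1027 examples in the original train split is about 822, and 822 / 32 ≈ 25.7, so `.batch(32)` yields 26 batches, the last of which is partial.
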
264 | {
265 | "cell_type": "markdown",
266 | "metadata": {
267 | "colab_type": "text",
268 | "id": "znmy4l8ntMvW"
269 | },
270 | "source": [
271 | "### Define the model"
272 | ]
273 | },
274 | {
275 | "cell_type": "code",
276 | "execution_count": 10,
277 | "metadata": {
278 | "colab": {},
279 | "colab_type": "code",
280 | "id": "ltxyJVWTqNAO"
281 | },
282 | "outputs": [
283 | {
284 | "name": "stdout",
285 | "output_type": "stream",
286 | "text": [
287 | "Model: \"sequential\"\n",
288 | "_________________________________________________________________\n",
289 | "Layer (type) Output Shape Param # \n",
290 | "=================================================================\n",
291 | "keras_layer (KerasLayer) (None, 2048) 23561152 \n",
292 | "_________________________________________________________________\n",
293 | "dense (Dense) (None, 2) 4098 \n",
294 | "=================================================================\n",
295 | "Total params: 23,565,250\n",
296 | "Trainable params: 4,098\n",
297 | "Non-trainable params: 23,561,152\n",
298 | "_________________________________________________________________\n"
299 | ]
300 | }
301 | ],
302 | "source": [
303 | "MODULE_HANDLE = 'data/resnet_50_feature_vector'\n",
304 | "model = tf.keras.Sequential([\n",
305 | " hub.KerasLayer(MODULE_HANDLE, input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3)),\n",
306 | " tf.keras.layers.Dense(num_classes, activation='softmax')\n",
307 | "])\n",
308 | "model.summary()"
309 | ]
310 | },
311 | {
312 | "cell_type": "markdown",
313 | "metadata": {
314 | "colab_type": "text",
315 | "id": "Ikb79EzkjpPk"
316 | },
317 | "source": [
318 | "## Define optimizer: (please complete these sections)\n",
319 | "Define the [Adam optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam) that is in the tf.keras.optimizers module."
320 | ]
321 | },
322 | {
323 | "cell_type": "code",
324 | "execution_count": 11,
325 | "metadata": {},
326 | "outputs": [],
327 | "source": [
328 | "def set_adam_optimizer():\n",
329 | " ### START CODE HERE ###\n",
330 | " # Define the adam optimizer\n",
331 | " optimizer = tf.keras.optimizers.Adam()\n",
332 | " ### END CODE HERE ###\n",
333 | " return optimizer"
334 | ]
335 | },
336 | {
337 | "cell_type": "code",
338 | "execution_count": 12,
339 | "metadata": {},
340 | "outputs": [
341 | {
342 | "name": "stdout",
343 | "output_type": "stream",
344 | "text": [
345 | "\n"
346 | ]
347 | }
348 | ],
349 | "source": [
350 | "## TEST CODE:\n",
351 | "\n",
352 | "test_optimizer = set_adam_optimizer()\n",
353 | "\n",
354 | "print(type(test_optimizer))\n",
355 | "\n",
356 | "del test_optimizer"
357 | ]
358 | },
359 | {
360 | "cell_type": "markdown",
361 | "metadata": {},
362 | "source": [
363 | "**Expected Output:**\n",
364 | "```\n",
365 | "\n",
366 | "```"
367 | ]
368 | },
369 | {
370 | "cell_type": "markdown",
371 | "metadata": {},
372 | "source": [
373 | "## Define the loss function (please complete this section)\n",
374 | "\n",
375 | "Define the loss function as the [sparse categorical cross entropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy) that's in the tf.keras.losses module. Use the same function for both training and validation."
376 | ]
377 | },
378 | {
379 | "cell_type": "code",
380 | "execution_count": 13,
381 | "metadata": {},
382 | "outputs": [],
383 | "source": [
384 | "def set_sparse_cat_crossentropy_loss():\n",
385 | " ### START CODE HERE ###\n",
386 | " # Define object oriented metric of Sparse categorical crossentropy for train and val loss\n",
387 | " train_loss = tf.keras.losses.SparseCategoricalCrossentropy()\n",
388 | " val_loss = tf.keras.losses.SparseCategoricalCrossentropy()\n",
389 | " ### END CODE HERE ###\n",
390 | " return train_loss, val_loss"
391 | ]
392 | },
393 | {
394 | "cell_type": "code",
395 | "execution_count": 14,
396 | "metadata": {},
397 | "outputs": [
398 | {
399 | "name": "stdout",
400 | "output_type": "stream",
401 | "text": [
402 | "\n",
403 | "\n"
404 | ]
405 | }
406 | ],
407 | "source": [
408 | "## TEST CODE:\n",
409 | "\n",
410 | "test_train_loss, test_val_loss = set_sparse_cat_crossentropy_loss()\n",
411 | "\n",
412 | "print(type(test_train_loss))\n",
413 | "print(type(test_val_loss))\n",
414 | "\n",
415 | "del test_train_loss, test_val_loss"
416 | ]
417 | },
418 | {
419 | "cell_type": "markdown",
420 | "metadata": {},
421 | "source": [
422 | "**Expected Output:**\n",
423 | "```\n",
424 | "\n",
425 | "\n",
426 | "```"
427 | ]
428 | },
429 | {
430 | "cell_type": "markdown",
431 | "metadata": {},
432 | "source": [
433 | "## Define the acccuracy function (please complete this section)\n",
434 | "Define the accuracy function as the [spare categorical accuracy](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/SparseCategoricalAccuracy) that's contained in the tf.keras.metrics module. Use the same function for both training and validation."
435 | ]
436 | },
437 | {
438 | "cell_type": "code",
439 | "execution_count": 15,
440 | "metadata": {},
441 | "outputs": [],
442 | "source": [
443 | "def set_sparse_cat_crossentropy_accuracy():\n",
444 | " ### START CODE HERE ###\n",
445 | " # Define object oriented metric of Sparse categorical accuracy for train and val accuracy\n",
446 | " train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()\n",
447 | " val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()\n",
448 | " ### END CODE HERE ###\n",
449 | " return train_accuracy, val_accuracy"
450 | ]
451 | },
452 | {
453 | "cell_type": "code",
454 | "execution_count": 16,
455 | "metadata": {},
456 | "outputs": [
457 | {
458 | "name": "stdout",
459 | "output_type": "stream",
460 | "text": [
461 | "\n",
462 | "\n"
463 | ]
464 | }
465 | ],
466 | "source": [
467 | "## TEST CODE:\n",
468 | "\n",
469 | "test_train_accuracy, test_val_accuracy = set_sparse_cat_crossentropy_accuracy()\n",
470 | "\n",
471 | "print(type(test_train_accuracy))\n",
472 | "print(type(test_val_accuracy))\n",
473 | "\n",
474 | "del test_train_accuracy, test_val_accuracy"
475 | ]
476 | },
477 | {
478 | "cell_type": "markdown",
479 | "metadata": {},
480 | "source": [
481 | "**Expected Output:**\n",
482 | "```\n",
483 | "\n",
484 | "\n",
485 | "```"
486 | ]
487 | },
488 | {
489 | "cell_type": "markdown",
490 | "metadata": {},
491 | "source": [
492 | "Call the three functions that you defined to set the optimizer, loss and accuracy"
493 | ]
494 | },
495 | {
496 | "cell_type": "code",
497 | "execution_count": 17,
498 | "metadata": {
499 | "colab": {},
500 | "colab_type": "code",
501 | "id": "j92oDYGCjnBh"
502 | },
503 | "outputs": [],
504 | "source": [
505 | "optimizer = set_adam_optimizer()\n",
506 | "train_loss, val_loss = set_sparse_cat_crossentropy_loss()\n",
507 | "train_accuracy, val_accuracy = set_sparse_cat_crossentropy_accuracy()"
508 | ]
509 | },
510 | {
511 | "cell_type": "markdown",
512 | "metadata": {
513 | "colab_type": "text",
514 | "id": "oeYV6mKnJGMr"
515 | },
516 | "source": [
517 | "### Define the training loop (please complete this section)\n",
518 | "\n",
519 | "In the training loop:\n",
520 | "- Get the model predictions: use the model, passing in the input `x`\n",
521 | "- Get the training loss: Call `train_loss`, passing in the true `y` and the predicted `y`.\n",
522 | "- Calculate the gradient of the loss with respect to the model's variables: use `tape.gradient` and pass in the loss and the model's `trainable_variables`.\n",
523 | "- Optimize the model variables using the gradients: call `optimizer.apply_gradients` and pass in a `zip()` of the two lists: the gradients and the model's `trainable_variables`.\n",
524 | "- Calculate accuracy: Call `train_accuracy`, passing in the true `y` and the predicted `y`."
525 | ]
526 | },
527 | {
528 | "cell_type": "code",
529 | "execution_count": 18,
530 | "metadata": {
531 | "colab": {},
532 | "colab_type": "code",
533 | "id": "3xtg_MMhJETd"
534 | },
535 | "outputs": [],
536 | "source": [
537 | "# this code uses the GPU if available, otherwise uses a CPU\n",
538 | "device = '/gpu:0' if tf.config.list_physical_devices('GPU') else '/cpu:0'\n",
539 | "EPOCHS = 2\n",
540 | "\n",
541 | "# Custom training step\n",
542 | "def train_one_step(model, optimizer, x, y, train_loss, train_accuracy):\n",
543 | " '''\n",
544 | " Trains on a batch of images for one step.\n",
545 | " \n",
546 | " Args:\n",
547 | " model (keras Model) -- image classifier\n",
548 | " optimizer (keras Optimizer) -- optimizer to use during training\n",
549 | " x (Tensor) -- training images\n",
550 | " y (Tensor) -- training labels\n",
551 | " train_loss (keras Loss) -- loss object for training\n",
552 | " train_accuracy (keras Metric) -- accuracy metric for training\n",
553 | " '''\n",
554 | " with tf.GradientTape() as tape:\n",
555 | " ### START CODE HERE ###\n",
556 | " # Run the model on input x to get predictions\n",
557 | " predictions = model(x)\n",
558 | " # Compute the training loss using `train_loss`, passing in the true y and the predicted y\n",
559 | " loss = train_loss(y, predictions)\n",
560 | "\n",
561 | " # Using the tape and loss, compute the gradients on model variables using tape.gradient\n",
562 | " grads = tape.gradient(loss, model.trainable_weights)\n",
563 | " \n",
564 | " # Zip the gradients and model variables, and then apply the result on the optimizer\n",
565 | " optimizer.apply_gradients(zip(grads, model.trainable_weights))\n",
566 | " # Call the train accuracy object on ground truth and predictions\n",
567 | " train_accuracy(y, predictions)\n",
568 | " ### END CODE HERE\n",
569 | " return loss"
570 | ]
571 | },
572 | {
573 | "cell_type": "code",
574 | "execution_count": 19,
575 | "metadata": {},
576 | "outputs": [
577 | {
578 | "name": "stdout",
579 | "output_type": "stream",
580 | "text": [
581 | "tf.Tensor(0.6931472, shape=(), dtype=float32)\n"
582 | ]
583 | }
584 | ],
585 | "source": [
586 | "## TEST CODE:\n",
587 | "\n",
588 | "def base_model():\n",
589 | " inputs = tf.keras.layers.Input(shape=(2))\n",
590 | " x = tf.keras.layers.Dense(64, activation='relu')(inputs)\n",
591 | " outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)\n",
592 | " model = tf.keras.Model(inputs=inputs, outputs=outputs)\n",
593 | " return model\n",
594 | "\n",
595 | "test_model = base_model()\n",
596 | "\n",
597 | "test_optimizer = set_adam_optimizer()\n",
598 | "test_image = tf.ones((2,2))\n",
599 | "test_label = tf.ones((1,))\n",
600 | "test_train_loss, _ = set_sparse_cat_crossentropy_loss()\n",
601 | "test_train_accuracy, _ = set_sparse_cat_crossentropy_accuracy()\n",
602 | "\n",
603 | "test_result = train_one_step(test_model, test_optimizer, test_image, test_label, test_train_loss, test_train_accuracy)\n",
604 | "print(test_result)\n",
605 | "\n",
606 | "del test_result, test_model, test_optimizer, test_image, test_label, test_train_loss, test_train_accuracy"
607 | ]
608 | },
609 | {
610 | "cell_type": "markdown",
611 | "metadata": {},
612 | "source": [
613 | "**Expected Output:**\n",
614 | "\n",
615 | "You will see a Tensor with the same shape and dtype. The value might be different.\n",
616 | "\n",
617 | "```\n",
618 | "tf.Tensor(0.6931472, shape=(), dtype=float32)\n",
619 | "```"
620 | ]
621 | },
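
The value 0.6931472 is $\ln 2$, i.e. $-\ln(0.5)$: a freshly initialized classifier that effectively assigns probability 0.5 to the correct class produces exactly this cross-entropy, which is why a number near 0.69 is the typical starting loss for a balanced two-class problem.
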
622 | {
623 | "cell_type": "markdown",
624 | "metadata": {},
625 | "source": [
626 | "## Define the 'train' function (please complete this section)\n",
627 | "\n",
628 | "You'll first loop through the training batches to train the model. (Please complete these sections)\n",
629 | "- The `train` function will use a for loop to iteratively call the `train_one_step` function that you just defined.\n",
630 | "- You'll use `tf.print` to print the step number, loss, and train_accuracy.result() at each step. Remember to use tf.print when you plan to generate autograph code.\n",
631 | "\n",
632 | "Next, you'll loop through the batches of the validation set to calculation the validation loss and validation accuracy. (This code is provided for you). At each iteration of the loop:\n",
633 | "- Use the model to predict on x, where x is the input from the validation set.\n",
634 | "- Use val_loss to calculate the validation loss between the true validation 'y' and predicted y.\n",
635 | "- Use val_accuracy to calculate the accuracy of the predicted y compared to the true y.\n",
636 | "\n",
637 | "Finally, you'll print the validation loss and accuracy using tf.print. (Please complete this section)\n",
638 | "- print the final `loss`, which is the validation loss calculated by the last loop through the validation dataset.\n",
639 | "- Also print the val_accuracy.result().\n",
640 | "\n",
641 | "**HINT**\n",
642 | "If you submit your assignment and see this error for your stderr output: \n",
643 | "```\n",
644 | "Cannot convert 1e-07 to EagerTensor of dtype int64\n",
645 | "```\n",
646 | "Please check your calls to train_accuracy and val_accuracy to make sure that you pass in the true and predicted values in the correct order (check the documentation to verify the order of parameters)."
647 | ]
648 | },
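
Why `tf.print` rather than Python's `print`? Inside a `@tf.function`, a plain `print` runs only once, while the function is being traced into a graph, whereas `tf.print` becomes an op in the graph itself and therefore executes on every training step.
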
649 | {
650 | "cell_type": "code",
651 | "execution_count": 20,
652 | "metadata": {},
653 | "outputs": [],
654 | "source": [
655 | "# Decorate this function with tf.function to enable autograph on the training loop\n",
656 | "@tf.function\n",
657 | "def train(model, optimizer, epochs, device, train_ds, train_loss, train_accuracy, valid_ds, val_loss, val_accuracy):\n",
658 | " '''\n",
659 | " Performs the entire training loop. Prints the loss and accuracy per step and epoch.\n",
660 | " \n",
661 | " Args:\n",
662 | " model (keras Model) -- image classifier\n",
663 | " optimizer (keras Optimizer) -- optimizer to use during training\n",
664 | " epochs (int) -- number of epochs\n",
665 | " train_ds (tf Dataset) -- the train set containing image-label pairs\n",
666 | " train_loss (keras Loss) -- loss function for training\n",
667 | " train_accuracy (keras Metric) -- accuracy metric for training\n",
668 | " valid_ds (Tensor) -- the val set containing image-label pairs\n",
669 | " val_loss (keras Loss) -- loss object for validation\n",
670 | " val_accuracy (keras Metric) -- accuracy metric for validation\n",
671 | " '''\n",
672 | " step = 0\n",
673 | " loss = 0.0\n",
674 | " for epoch in range(epochs):\n",
675 | " for x, y in train_ds:\n",
676 | " # training step number increments at each iteration\n",
677 | " step += 1\n",
678 | " with tf.device(device_name=device):\n",
679 | " ### START CODE HERE ###\n",
680 | " # Run one training step by passing appropriate model parameters\n",
681 | " # required by the function and finally get the loss to report the results\n",
682 | " loss = train_one_step(model, optimizer, x, y, train_loss, train_accuracy)\n",
683 | " ### END CODE HERE ###\n",
684 | " # Use tf.print to report your results.\n",
685 | " # Print the training step number, loss and accuracy\n",
686 | " tf.print('Step', step, \n",
687 | " ': train loss', loss, \n",
688 | " '; train accuracy', train_accuracy.result())\n",
689 | "\n",
690 | " with tf.device(device_name=device):\n",
691 | " for x, y in valid_ds:\n",
692 | " # Call the model on the batches of inputs x and get the predictions\n",
693 | " y_pred = model(x)\n",
694 | " loss = val_loss(y, y_pred)\n",
695 | " val_accuracy(y, y_pred)\n",
696 | " \n",
697 | " # Print the validation loss and accuracy\n",
698 | " ### START CODE HERE ###\n",
699 | " tf.print('val loss', loss, '; val accuracy', val_accuracy.result())\n",
700 | " ### END CODE HERE ###"
701 | ]
702 | },
703 | {
704 | "cell_type": "markdown",
705 | "metadata": {},
706 | "source": [
707 | "Run the `train` function to train your model! You should see the loss generally decreasing and the accuracy increasing.\n",
708 | "\n",
709 | "**Note**: **Please let the training finish before submitting** and **do not** modify the next cell. It is required for grading. This will take around 5 minutes to run. "
710 | ]
711 | },
712 | {
713 | "cell_type": "code",
714 | "execution_count": 21,
715 | "metadata": {
716 | "colab": {},
717 | "colab_type": "code",
718 | "graded": true,
719 | "id": "6iDWgg977wb9",
720 | "name": "train"
721 | },
722 | "outputs": [
723 | {
724 | "name": "stdout",
725 | "output_type": "stream",
726 | "text": [
727 | "Step 1 : train loss 1.21917498 ; train accuracy 0.5\n",
728 | "Step 2 : train loss 0.917052567 ; train accuracy 0.5\n",
729 | "Step 3 : train loss 0.825171947 ; train accuracy 0.489583343\n",
730 | "Step 4 : train loss 0.686144173 ; train accuracy 0.515625\n",
731 | "Step 5 : train loss 0.415853709 ; train accuracy 0.58125\n",
732 | "Step 6 : train loss 0.284147114 ; train accuracy 0.640625\n",
733 | "Step 7 : train loss 0.185585767 ; train accuracy 0.6875\n",
734 | "Step 8 : train loss 0.121098533 ; train accuracy 0.7265625\n",
735 | "Step 9 : train loss 0.0811917931 ; train accuracy 0.756944418\n",
736 | "Step 10 : train loss 0.055098258 ; train accuracy 0.78125\n",
737 | "Step 11 : train loss 0.0380168855 ; train accuracy 0.801136374\n",
738 | "Step 12 : train loss 0.0399652459 ; train accuracy 0.817708313\n",
739 | "Step 13 : train loss 0.0557225086 ; train accuracy 0.831730783\n",
740 | "Step 14 : train loss 0.0181157868 ; train accuracy 0.84375\n",
741 | "Step 15 : train loss 0.0291922893 ; train accuracy 0.854166687\n",
742 | "Step 16 : train loss 0.0109908851 ; train accuracy 0.86328125\n",
743 | "Step 17 : train loss 0.00853151083 ; train accuracy 0.871323526\n",
744 | "Step 18 : train loss 0.00965600647 ; train accuracy 0.878472209\n",
745 | "Step 19 : train loss 0.0313087478 ; train accuracy 0.883223712\n",
746 | "Step 20 : train loss 0.00720000081 ; train accuracy 0.889062524\n",
747 | "Step 21 : train loss 0.0120384349 ; train accuracy 0.894345224\n",
748 | "Step 22 : train loss 0.00890160725 ; train accuracy 0.899147749\n",
749 | "Step 23 : train loss 0.0044059949 ; train accuracy 0.903532624\n",
750 | "Step 24 : train loss 0.00872336328 ; train accuracy 0.907552063\n",
751 | "Step 25 : train loss 0.0103128813 ; train accuracy 0.91125\n",
752 | "Step 26 : train loss 0.00879616942 ; train accuracy 0.9136253\n",
753 | "val loss 0.0068729911 ; val accuracy 1\n",
754 | "Step 27 : train loss 0.00395290926 ; train accuracy 0.916861832\n",
755 | "Step 28 : train loss 0.0051707495 ; train accuracy 0.919864535\n",
756 | "Step 29 : train loss 0.00393836945 ; train accuracy 0.922657967\n",
757 | "Step 30 : train loss 0.00472681085 ; train accuracy 0.925263166\n",
758 | "Step 31 : train loss 0.00774505502 ; train accuracy 0.927698553\n",
759 | "Step 32 : train loss 0.00496031251 ; train accuracy 0.929980278\n",
760 | "Step 33 : train loss 0.00350904535 ; train accuracy 0.93212235\n",
761 | "Step 34 : train loss 0.00432588765 ; train accuracy 0.934137285\n",
762 | "Step 35 : train loss 0.00301667536 ; train accuracy 0.93603605\n",
763 | "Step 36 : train loss 0.0731941238 ; train accuracy 0.93695271\n",
764 | "Step 37 : train loss 0.00497959927 ; train accuracy 0.938671231\n",
765 | "Step 38 : train loss 0.00578634 ; train accuracy 0.940298498\n",
766 | "Step 39 : train loss 0.00199790439 ; train accuracy 0.941841662\n",
767 | "Step 40 : train loss 0.0018982949 ; train accuracy 0.943307102\n",
768 | "Step 41 : train loss 0.0038810533 ; train accuracy 0.94470048\n",
769 | "Step 42 : train loss 0.00942570902 ; train accuracy 0.946027\n",
770 | "Step 43 : train loss 0.00231171679 ; train accuracy 0.947291374\n",
771 | "Step 44 : train loss 0.00359165063 ; train accuracy 0.948497832\n",
772 | "Step 45 : train loss 0.00170328189 ; train accuracy 0.949650347\n",
773 | "Step 46 : train loss 0.00207206211 ; train accuracy 0.950752378\n",
774 | "Step 47 : train loss 0.00310707465 ; train accuracy 0.951807201\n",
775 | "Step 48 : train loss 0.00271400716 ; train accuracy 0.952817798\n",
776 | "Step 49 : train loss 0.00190765085 ; train accuracy 0.95378691\n",
777 | "Step 50 : train loss 0.00228788611 ; train accuracy 0.954717\n",
778 | "Step 51 : train loss 0.00411130534 ; train accuracy 0.955610335\n",
779 | "Step 52 : train loss 0.00402449211 ; train accuracy 0.956204355\n",
780 | "val loss 0.00347637222 ; val accuracy 1\n"
781 | ]
782 | }
783 | ],
784 | "source": [
785 | "train(model, optimizer, EPOCHS, device, train_ds, train_loss, train_accuracy, valid_ds, val_loss, val_accuracy)"
786 | ]
787 | },
788 | {
789 | "cell_type": "markdown",
790 | "metadata": {
791 | "colab_type": "text",
792 | "id": "N8m3iJgx7SV1"
793 | },
794 | "source": [
795 | "# Evaluation\n",
796 | "\n",
797 | "You can now see how your model performs on test images. First, let's load the test dataset and generate predictions:"
798 | ]
799 | },
800 | {
801 | "cell_type": "code",
802 | "execution_count": 22,
803 | "metadata": {
804 | "colab": {},
805 | "colab_type": "code",
806 | "id": "HwFx4Nbh25p5"
807 | },
808 | "outputs": [],
809 | "source": [
810 | "test_imgs = []\n",
811 | "test_labels = []\n",
812 | "\n",
813 | "predictions = []\n",
814 | "with tf.device(device_name=device):\n",
815 | " for images, labels in test_ds:\n",
816 | " preds = model(images)\n",
817 | " preds = preds.numpy()\n",
818 | " predictions.extend(preds)\n",
819 | "\n",
820 | " test_imgs.extend(images.numpy())\n",
821 | " test_labels.extend(labels.numpy())"
822 | ]
823 | },
824 | {
825 | "cell_type": "markdown",
826 | "metadata": {},
827 | "source": [
828 | "Let's define a utility function for plotting an image and its prediction."
829 | ]
830 | },
831 | {
832 | "cell_type": "code",
833 | "execution_count": 23,
834 | "metadata": {
835 | "cellView": "form",
836 | "colab": {},
837 | "colab_type": "code",
838 | "id": "IiutdErSpRH_"
839 | },
840 | "outputs": [],
841 | "source": [
842 | "# Utilities for plotting\n",
843 | "\n",
844 | "class_names = ['horse', 'human']\n",
845 | "\n",
846 | "def plot_image(i, predictions_array, true_label, img):\n",
847 | " predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n",
848 | " plt.grid(False)\n",
849 | " plt.xticks([])\n",
850 | " plt.yticks([])\n",
851 | "\n",
852 | " img = np.squeeze(img)\n",
853 | "\n",
854 | " plt.imshow(img, cmap=plt.cm.binary)\n",
855 | "\n",
856 | " predicted_label = np.argmax(predictions_array)\n",
857 | " \n",
858 | " # green-colored annotations will mark correct predictions. red otherwise.\n",
859 | " if predicted_label == true_label:\n",
860 | " color = 'green'\n",
861 | " else:\n",
862 | " color = 'red'\n",
863 | " \n",
864 | " # print the true label first\n",
865 | " print(true_label)\n",
866 | " \n",
867 | " # show the image and overlay the prediction\n",
868 | " plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n",
869 | " 100*np.max(predictions_array),\n",
870 | " class_names[true_label]),\n",
871 | " color=color)\n",
872 | "\n"
873 | ]
874 | },
875 | {
876 | "cell_type": "markdown",
877 | "metadata": {},
878 | "source": [
879 | "### Plot the result of a single image\n",
880 | "\n",
881 | "Choose an index and display the model's prediction for that image."
882 | ]
883 | },
884 | {
885 | "cell_type": "code",
886 | "execution_count": 24,
887 | "metadata": {
888 | "cellView": "form",
889 | "colab": {},
890 | "colab_type": "code",
891 | "id": "aVknjW4A11uz"
892 | },
893 | "outputs": [
894 | {
895 | "name": "stdout",
896 | "output_type": "stream",
897 | "text": [
898 | "1\n"
899 | ]
900 | },
901 | {
902 | "data": {
903 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAKoAAAC0CAYAAAAEqrdpAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO29eZRc133n97n3bbV09d5YiIUgAZIACHERSW0kZdmRaYnUZmecWJKVkWfGmvh4ZOecxM5y4tOD48xMkjNOHMc58YnHnjlzJicjj2zLlpfRTloSKYkSJXERCYgEF+xb77W89+6SP+591YVmdXc1gAbQQH3Pqa7qqrfcevV9v/vbr7DW0kcf1zrk1R5AH330gj5R+9gQ6BO1jw2BPlH72BDoE7WPDYE+UfvYEAjXsvH4+LjdtWvXOg2ljxsdr732GufOnRPdPlsTUXft2sV3v/vdyzOqPvpYgvvvv3/Zz9ZE1D4uDcsFV4ToKkT66ECfqFcI1lq01qStFlmWIqWkVCoTxTHQJ+tq6BN1nVFI0bm5Ob737W/xwg9+QGthlpHBQW6+dQ+77tjLzt27iZMS0CfscugTdZ1hjOHIK6/w53/yJ3zjK1+iMT9DJQ4ZGazx4ugom8bH2Hff/bz9vY8xsfWmPlGXQZ+o6whrLSeOH+MPfu93efqb36TZbCAx5JFE5y0a9TnOnT3JieOvM78wz4c+8cvUBoeu9rCvSfT9qOsIlec8/uUv8a2/e5zpmWmarSZZlqFzhVEKjEFpxbnpab7/9a/x0g+fwRhztYd9TaJP1HWCtZbp6Wme+rsnqC8soLVGAFIIwiAgjiIqpTK1ao0gCEmbDX70na+TtprLegduZPSn/nXE9NQU506fQgqBFIIkiogDSRzFlEtlhgYHGagNIGcFxijOnTxGfX6OUrlytYd+zaEvUdcRrVaTtNVCAKUoplapUi2VCIKAIAgoxRGVOGR4oEocBmRpyuz01NUe9jWJPlHXEQInSaNAEoUBWEMgJVEQEAURAkuz0SD1ftVQQLO+gDGmP/0vQX/qX0ckpRLlSgUZBDTSjFxrTCkhiUKEgGaryfFzMzSyjPHBMlu2buH1Qz8iScrs3n+g76rqQJ+o6wBrLc1mk/rCAklSwmhNwxtJreYCKi0ToKkvWM7OzFMZGKAUx2StJoee/Ar12Rl23b4X6aNWffSJum449KMX+Nf/x7/k/KmTRIFkeKCMFhE6T8mUYrgao4whKQ+wY+tWJiohAsn5qSnCY0fRSrXDq330ibpuyLOMN159hWa9SRhGhEGACELqWKzKMCpnx8QIlbJmy2CMwJK2GiRJTJQkV3v41xz6RF0n1AYHiasDNBpNtDEEQhBJQRKGGCEwRjA0UCGXmjTTCCmxGLaMD/ETj7yfuFS62l/hmkLf6l8n3LR9Bz/1yPsolSsYC7mx5MpgtAYsOoiZT5XLplKaVGmMEGy6ZS877rizb0gtQZ+o64TBoSE+8gsf58Gfei9JpYzRikwrZ2hlGUdOneXHp6ZRSDRgrCaIS9x6z9upDA73iboEfaKuA4QQCCHYsm07H//Ur3D7gbsx1qC1QSEgTDBhgo0qqCBivpWRa4MIE4bHJ5Cy/7MsRf+KrCOEEGzavJl77rsfGSWkKqeROX+qNppUKZQ2ZFnGfCtFRDFDI6NXe9jXJPpEXWcEQcBNO3YSJAmZ1gQSqlGAVYpWs8n4QImRgQpKG+YXFvj+d5+mXq/3I1NL0CfqOsJay/nz5/ncX/wF56ZmyLXBWIijiKHqAKODNQYGhzFhRKYt0zMz/P7v/u888cQTfaIuQZ+olxHW2vZDa02WZZw6dYpnn3+e83MLzDdTGmlOPctppi3OT0/zyrETnJmdp5lrZubrPP/SS3z+839JmqZX++tcU+j7US8BS6VenuecP3+eN944ypEjRzh+/DiHDh/i9JlTKGvIM02Wa1pZSiAEc80SURwhpeT87Dz1Rsrs/AJTU1OkaUqSJG+y/m9Ub0CfqBeJQnLWGw2mp6Y4evQo33n6ab74xS9y+PBh6vU6Wmv3UBkCi7XODWVSQygltapECsnU7AJT03M00wxjLK+8fJg//uN/xb79+9m5YydjY+MMDw8Td4RUbzTC9om6BhQSVCnFzMwMhw8f5qtf+xrPPfccr7/2OidPnqDRaGA6JK31KXvWFiUmEisAAbnSnJmaoZVmKK3IlSIMA6anz/Nv/+0fUylXGJ+YYNv2HRy48y727t3H9u3bGBoaZnBwkCRJ3uTKul4J3CfqGpBlGS+++CLf/OaTfOGLX+TFl15kdmYWpRTGaEfkgqPCk8ZahAgwFrAaaxUQkMQxRhuarRRrNMKTuVxOkFKQpSnNRoMzZ8/ywgs/4qtf+QpxHFOtDjA6Os6+ffvYv38/t9xyK5s2bWLTpk2Mjo4SBMF1SdY+UXtAIUl/+Oyz/E+//du88KMXmZ+fI89zV4xnLW+y0a1/4FQEIaT/V2OMQWsNUYDRikAIAl+uUq0OgDAoZchy5ZKogTzLaTWbzM/Nc/r0KQ4fepH/+Ld/zdjYOOMT49y5/wAf/djH2bt3L3D9SdY+UXtEo9HgM//+MzzzzDNOgmp9gZXfRsEPCwjr33CfSymxUmC1QQiX6R/4KFYgBWEYIKQgCAKUtu4GsAZrLMYU3NdIKQikJs9zsixleuocJ04co1Qu8elP/zq1Wu3KXpwrgD5Re8T09Aw/euklcqXaemcnCpkqrFj83wqEAIREYJFSgJAgA0pJiTiUSARSCoRwhlUYhgSRwGQKpzm4G8G0bwowGoyUSCkRAgIpUXnOyy+/zNzcXJ+oNyqstRw7foyzZ886sgBCCoRxhHSk9NtiXazf/wduGpZCIgPpSqaDkKRUJs0bYC2VJCaQgkBKpBSEYYhWFikDsApraR/TCjCeuFiLNu1bhHp9nmazceUv0BVA3+HfA4wxvPHGURqNJo4fXrphESwmoQAIUTw63vN/BW5aT+KYJEkc4YwhzRXGWqIowdgAC8gwcOcwBflBBOKC41pbqAdOKcjSFq1W6ypcofVHX6KuAmstSimOHTtGmrrSZ2sXK0ytt+wtBTkXDZlCQxWimN7dNF2pDrB1x63oxmZU1kIGITIsE1eHSPN56gunsCbDWDD+KMLfAVYYnAvB3RBSCozRtFpNZmdnqNcXrtKVWl/0idoDsizj+PFj5FkKfmrvNKQKdQAA6wjcaXQXJHPPUB0Y5C33PczY+E0opWk1mywszDM9dZa56WNoNUueZhijMdq0p3YpA6fLBp74gDEalVuaRjM7M82ZM2e8l6Fv9d9waLVSTp8+jcpzwCKkxGj8xE8HSb3Oaq2XoAJnc4kOySqpDY4wMjbBQG0IYyxJqYJF0Gq1yFqDBDJ07ixrMdb4AEKAxSCs04/DICAIBCrPUSrHWkmappw8ceK6TGjpE7UH1OsLnD59Bm00YJBCojvcUm3TSbj/DRZhBYHAR468jumn8drgCNVqjaQUOzVCSirZAPNzsySlKgZneMk
gcNZ+YbAZg7aGgNC7tCRKuOgXgUQrxfnz59BaEwTB1blY64S+MdUDZmZmOH/+HNZorI/dW6O8buriocVf4V9bazGmIPPiNByEIVu2bqdarRLIoG0cRWFIqVSiVKpQKlUIgpAkjr0uLAmLUKl1vlRjNMY4V1lxeGMNrVarL1FvVMzOzrFQr2O0Rivlp3bpjClnRbUJ2gmLm7oDEbjtA0GpXGHnrluoVitkuQEtiCJLGMUkpTJBGBHHZaQQlMtlhJxFAEEgUUq3XWFa5wgCfzOAMc6DcL2WsfSJ2gOmp6dpNVtorVFKY6whDANn3LS3utCA8u8U8tVZ/IGkWh1g08QmyqUEbVKsdVUASSlxRA0igjDCWLuYdGKtP/aiO0ori1a6faMEPjtrcHDoupv2oU/UVaGU4scvv0yWZS5G7+P0FogjR0TvnMLaRR9nEQMQctH3KUTA4OAwW7duoVQu02zlWAmEbp9SUiJOEuKkghASMN7pb3A5A6atYihtUMqFU6MwwCIRQjA8PHxdStU+UVeAtZYzZ87wjW8+iVJq8X1Aa4MJLN5WKqiKdVrl4pZ20Q8qpGB4ZJShoSGEiAhDidauw58NAuIkJgpjwrBEGMZkWYs4jgCL1fkF7oU0zdHaEIUBgZR+6ndFg9ejjnr93XqXCYWj/1vf/g6HDx8GUxDVyTTrdUKfWko7mGotBicBi+MUDykkw8MjBIGTlta4hhQC01YNglC6IwqJ0ZrB2gC1gQqF+oBw48oyl7mltHtgDMYYXnv1CHmeXenLte7oS9QVkKYpz7/wAo36vLP42+6ojrTTIkxqjfeZetkqBLJTsuJcVZXqAK1W6tL4spQsywEwBlTmclOlEAjpXFODgzUa9YULpGmR/ieERAiLNUVkTHLy5EmazSalUnndr8+VRJ+oK2B2bo4Xf/QCrWYdbbTPDfWMaYdKBUJYnzXlPi0IbTFgBVbgCCgDkqTM/Pw8xjqJmecZWZpitKHVaqKyJngS5nlOuRSTZSnWOKmLgTzXPlHFYjGY9k0imJ6Zpl6vM3Kd9QfoT/0r4MTx47z84x+TZ6mz9jvS+wQuNQ8hFnOkxaKLyvqEk8Kfqo1BBiFBmNBsZeS5AiEJApd8kquMPGuS5xnaKAQCrXLq9TqNZgNjXc8ql3Rt/A2BV0H87SNgamqKmZmZ605P7RN1GSil+Po3vsnJkydcgZ4xizmhRfTd54MCbWvc5Z56vbXTXWUtUZxQqQ4gZdAmVxjFlMoVoigBJForjNKASzZpNOqukqAtrRdDt8WZC0oaY5g6f45Xjxy57pYB6hO1C6y1TE1P8+Uvf4lWq9nO/aSd4oeb+hEuQRTa+X1FPF/6ROgiowok5VKVWm3IW+nGewQgCGOCMEYEEdYKlHaGm8WVYFsLgQwIgsVk6QvHi3ObKU2z2eCFHz1PlmXXlVTtE3UJiuYRzzzzfV45csS1ifTzattWAk/Cxcx+wJNVAtI9C+nVAxe3HxkdZ2hoyMfw3bQtfD5AEIZIGWCFxBIig4A4jimVyiRJiTB0gQAnxS9UMaxxHgRwM8Hzzz/H+fPnr9g1uxLoE7ULTpw4yWc/+6dMT025mLpLQO1IiKadX1pMvEIUUShHViu8m8n7UMMwZGLzZmq1gUVPgSxcUbgSlDBEytDrriG12hATmzYxUBsiiBLCMKYyMEClUqUQq22XmDYorclyxQ+f/SHf/d53r6vpv0/UDhSLRHzmTz7D449/DZVnXfJOC4t+0bp/8zSPD3X65GYBURQxODRCEIRYawm8ZKQjS1/KoN0DQEgXBjXGbVuc4+677+Ghhx4iDMK2LLcWcu2qVvNcMTs7w+f/6vPt0pnrAX2idkBrzZNPPsUf/dEfMz01hTXOqGlHRTt1VAqiFnqpQAYXpqa0y1KkIIpiqtVBn5wv3BQu3QZGG18waBBCtCVhK83I85wgDLwqAeOjY3z0ox9jfGKifR5jXdw/z1X7GM89+wMef/xxr+NufLL2iepRdN777J/9KWfOnEFp5cKk1nhX1IXSsl3PJ+gwoISfya3zrbajUxCGMVFcIsty8lyhlUYp0w59qjz36YMapRRKaecpiEIq5TKlUoIMQpRSHDhwgHvvvacj+cTfPIUurTTN+jxf/eqXOH369JW+lOuCPlE7cOjQIZ568ily5cKT2hpXXFeQwBMhjGLiJAFsu+ykkFmyk8x+J2usT99LyHNNK81opRm67Zv1JSVKodSiuoEQhGGEEIKBaoU4jkizlGq1wsMPP0ylXO5IeFmUzmmWMTc3xwsvPMePf3z4il/H9UA/MuWhlOLpp7/LmTOnXXsdwFnvblrW3nkvhOBtb3uAMAx55ntP+72FD2kWwVX7JkdqEIYEQUSWK4QyaIMPlTpZoY0hzzMfp9dI7/4KAok2GiFdLVajUSdNM2677XYGBwddeXRhWBnr+g74tMDpqfMcPfoGxpgNn/rXl6i4af/c+fN84YtfbNfFL0opyeKcD8PDw3zyk7/EXXfd5VLwOiL/rquJS0jpLKMWQhAEERaBUpo8N+2IkjGuRt9oTdpqofPMxfytaeu4YRCilUKpnNmZGWZnptmzZw/btm0DnKPfJajotrpijCHPMl5//bULMr82KvpExRH10EuHOHT4sDNkBB0G1KLxJIRg3759vPWtb6VSqThjqNMPUDSGYLFhROF+csnQLj3QWoM2PtplDFo7EuZZSpqlaJ23+1OpXKGUIk1TtNbU6wucOXuGiYkJbt2927UJ6uiismg3uS9x+vQpsmzjZ1P1iQo0my2efPJJFhbmfc2+g7UGY4pOfVAqlXnve3+a7du3uwhR4Tf1s70xtq2sLrqy3HsyCLxlnre3s8aglSLPUrIs9RUECmOFJ2lGq9WkXl8gz3OwLuPq2LFj1GoDvPOd76RULnf4b4tzL6rUZ86eodHY+N1TbniiWmuZnp7m2eeeRSsFi/Ry6XNm0X9688038+ijj1IqlWg2m+1ElLY/c/GgiwaR11fDIOroH+XcTy6Ln3Zo1lgQIiQIY5+c7WL9aauFEII4ThBC8uyzP6TVSnnooYe5aetNF1QV+NP7SgTDzPQM8/NzG95F1Seqtbxy5BUOHT7ssuPpzEryqSBCMFCt8slPfpIDBw74zinHvcW+pLZfXPDU7qoSeZIVN4HTXYsQa/HaWfkyiP0x3RjCIKBarVIdqBGEET/4wTO88srL7N69mwcfeogoivwJi/UDXNw/zTJmZqY4d/bslbqc64YbmqjWWvI851vf+hanTp3CGnvBZ+B6PQZBwMMPv5uf/dmfJY5jlFLUFxZYZKVsM7XTVdRG283k4/sdSdCFAWY6S1YEYLUzzKzzGCRJQlJKsECj2eTIkSPEccx73vOTjI6O+hLtjlxYazEqp1mvc+r0qb5E3eg4efIkX/jiF9t63GIDMh8nFzA+NsbHPvYxtm/fDkAYBgzUqs7BD04/tfiElE6SinbNFMLpnUYrtFZeH83bnU6Myh05rQZrvF7sLPg8z129vpeuRmvyLEUIwf3338+BAwd8p+ni1LZN3CzLeP01Z/lvZL
Le0ETVWvPtb3+Hw4cPgTW+n1MnUQVRGPHBD36I9773vURR1NYVd2zf4fqdtp3zeLtpMbGa4pWvBMjzHKWVe+SZJ22OyjPyPHW5BSYH6xoFF0nSaZZRr9eZn58nzzOs1QwODSGEYNu2bTzyM+9joDpwgXFnvQ6jlOLIkVc2fDvKG5ao1lpm5+b45je/SdpKkT7bXkqJMS58GUjJfffdz6/8yq8wNjbW3jeKIu7Yu5dSqewz7zvCVx3JK50+f2sMKs8xWmGN9pGoDJVnZFmLLGuh8iZ52nSd/IzGGKeatJotGo0GCwsLNBp1wiBkbGwcKSWlUonHHnuMOw8cuMATYXFS3iI4ceIE09MzV/waX07csEQFePXIEX7wwx9grUEG0rfYoT1d77z5Zn7t1z7N3r17F3VPnM66f/9+xsbG2vmg2G5dqPHENSjl2pinqXNF5XlGmqakaYu01aRZX6DVbJDnKVor1/RXOJeX1ro9/ed5ThBIBgcH2+fZtWsXP//zP8/gYI2iY6A1LvVPCsHMzBSnTp68Ytd1PXDDElUpxbPPPsvJkyc78kutW6FEwOjoKJ/+9K/xyCM/057yOzE+PsHw8HA7stTu7HdBTRUU7X6U0rRaLZrNJmmrRZamZGlKK01Js5wsy1HKoLRFa7c4RWcfq2LNKoyLahUr+7moV8Ajj/wMb3vbO1yo1PtRtTEYLK1WixMnjy/q3RsQN2Ssvyg1+dKXv8Lc3BzWgJS0I0VDQ0N86lOf4uMf/zi1Wu1NJAW8a8r4174RRSfEogdAyIA0bXHi6GucCSxJHLlqUhNhZUQotKseXaiT5y2k1ITSIHzmf+EXtdairSHNWhw9epR7731rO4Z/880388lf+iVeeOF5Tp486QMQhjxzRtSJEydQShPHGzPmf8MS9YUXXuD7zzyDUq7nqTHOuJJS8uCDD/OLv/iJZUkKrhVlYaC4SlPaaX6LKsCiupCmKbOzsxjdAmMRsgxBBYRA2oxGY8G1CjKKOHaWmZDSLUghF6NOrhWV5ZVXfuzVgKBdQfDwww/zzne9i8/9+Z+3Q6p5rkizlBMnjpNl2QWr/20k3HBTv7WWmZkZ/urzn2dq+nw7icT4fL7du3fzG7/xG+zatWtZkhatfubnFy4ooba+iK9w3hfTthDCrRPValIqDTI0uoOBoXGiAKRpkLYaxJFkaHgApQ0yCAjDECndKimFD7VUdrVT2ghefvkV5ufn22MSQjA6OsoHPvBBhoYG22PXWjM/v8Crrx5hampqw7qobkiivvTSIZ586ql2B2npHfalcpn3v/8xDhQW9ApEnZ6eJs2ydhEfHVEmlzK9KF2ttQRRCSti5hfmabZSZBjTUjC7kNHMBc0MlM/GNwjCKCKOIsrlMpVKhXK5QikpkZRcX6pz58/53IRF4kkpueeee9i16xanc/ueVPV6g9def51Dh17qCO1uLNxwRG00GvzN3/4tb7zxhsve78jO37//Tn7u536u66rOnbB20ZiRomhE4apPRUdaYHuNKGMIw5Da4Ajl8iBhFBMGIXEUE8dlgiAkzzVZqqhWKlgDSVKiVC5TLlcYGKhRq9UYHBxkeGiYiYkx7ty/363ytwTbtt3EO975ThdW9UZYo9Hk3NmzfPVrX2Vubm5drut644YiqrWWY8eO8cQTj7tED9/tpChNfuzRx9i3b9+KJAUnue6//35+4Rc+ytabbmr7X4Us4va+ErUzo8lYrHYFfJGf1l2dlSQMXYVpK3M6atVLzTCM/dhKDA+Pcs+99/H+Rz/IP/pH/5hPfeq/ZGxs7IKxCiGoVKq8610PMjw83C5AdO2CWhw69CJHj76xjld4/XBDGVNKKZ588ileffU1ivVJiwz7pFRi7759q0pTcITYv/9OfvM3/1vK5QE+85k/IW3V0VqRK5e87M6XY4x2bc6TEkkpwWhFGEXemjdEUYwQgevYZ3OajVmU1kCJWm2EiYlxRsc2s2fPXj78oZ9hfHyMKIoIw7DrOIub6M4DBzj3+OOLSdW54vy5M7z88o/Zv//ODZfxf8MQtSjee/yJx8nS1mJ9vgCrNXv27OGuu+5alaSw6HYqlUrUauPccssB8qxBljZR2hKXShhrmZ9fIMtaGGMpl6sEQUgYuDVPq9UKKh9ifqEJFqIQ4rjM8MgwURwyUI6Jo4Db9+xmx47tbNo0ztDQsGuXvsIYhRBs2bKFBx98iKeefJJWq4WxFqUVc/PzvPHGG2it+kS9FlH0On3qySf5wfe/D/hepNI5+mu1Gh/5yM+yZcuWnohaoNlMOXZqFpmMEcejlAYNcRwSBhJjDeObNbkypK0UY0CIAKMNcSgIA0GtWiJJYrSBJI4YqJYpV8oMDFSpDdYoJSVqQ8NQqjGfShYaGR0BqWURRRF33XU35UqZVsutjZUrxcLCAq+9/ippmhLHyUVezauDG4KoAHNzc3zjicdpzc4QYtp19WEU8I53Pshjjz1GGPZ+Oay1zMzOMTvfolwdJIoCkiRGCMh8TF8gEFmGNgHCugUlWs06QSCplBNCcmQusDJhYGCAaiWmlMSun39cojY4xOjoCAO1GkIEpFlvkSUhBDfdtJWhwWGmp6bbYd5mo8nxo0ep1xvUaj0w/hrCDUFUYwzfe/ppXnruWaJAUokjatUBauWEobFR/sEv/UO2b9+xpmO6hXxPoZQmSWLiOCIMA4IwIAhDVJ6T5xptciDESoEMYypVKCcQYBgYGECmAhHGlMsJIpBujSnpWvtIIYijgFolAd/Yt+i+shq2bdvOfffd51YcVAopJNYYTp8+xfT0FJs3b17T7HG1cd1b/dZazp49y1/86X9g+twZypFkbGiQLRMT7Lt5B7/0iV/k7W9/ezvC0+sx81zx6qvHCANJFAZEceQlYUKlUiEpJQShK7wT3iOQ55pWptDNefLmPFFcYWh4iJHhKknslo9UWpNlmXObBYIolMRRQCWJFktbesDIyAgf+vCHmZiYcE3YfBChvjDPyRMnLuGKXh1c1xK16CX1/W89iZ45x/bRISTD3PeT76Xeyjhz5BC7995J3IOlvxRz8/OcOHnWRYqKchSjIZAEUhBHroWkEIKFhYYvk87JUs1sZikPDzBXTymVBXlDI6SkXCm1nfFuHVSLUm7JoDiKULnunlewBIVv+IEHHuCxRx/lb//yL8FqBgZKDMaCqZPHUHlO2CXZ5lrFdS1RjTH88Olv8/p3v8HWwRLVOGRiYpz3PvoBtm7fTqp9NspF4PTp88zP1wkDSRyFlJLQrb4XhwQCcqXIsow0zX2//pw8y9HG0sgsUzPzzM4tMDu7wNzcAnmWIa0mjsLFnFZ/HKW0+z5FpWuP2LZtO//kVz/Nw/fdy1v33MKtmyeIjOWZJ5/g+LGjF/W9rxaua6LW63WOPPsMSRRw/NwUQyMjBHFClqbEYYjNMrKLyHzX2nDixGlaaYaUEAVuXdIwDAgD3+LHL7arlMvWt1a7EhSV0Ww2aTbrlGKJVi1arTpKpVir3aIVON+nUW4JSYtp+307cwtWQuFCGx0bY2RsjEq5xFCpTIDk1
dff4PjR1zdUKPW6nfqttczNTFOOAkb2vxX7wxfYe+BOkuHNPPf0t5EyYue2HVQHams+dp5nHD12AqM1UgTtJOckDolCSa4Uea7JM1eLH4QCIWPCIKDZgrnZOtNTDZ8obVwAQEASCeJSmXKp6vNPFRJ3ExhPYKUNvTiWisLFs6dPMzs7y9CmLejZKW4aHSUVsh2U2Ci4bokKUJ+dZWR0nJFb9lAZqBEA73j3uzlz+gznT51m1/sfZduuW9d0TGstc/MLnDkz5d4QridUIN0aftpYcuWkXpam1Ot1331FkDYbzM3NMD97jlISkWcZQgbEpkyzWWduTlLVvgeA0e1s/bYU9Z1WehljfWGBZ779LZ7++hOQtdi+by+HnzlNGAaMjIxQCvsO/2sGAti0bSe37NvPgz/9KC899TjHX3qBux/+KWoPvZswjtccobHW8uqRo8zOzrWL+axfAtIYjUVgrMBYJ1i52BYAAB1BSURBVHm1zklbLfJc0WzMMz8/jcqatHQLrCWKK5jQuJVX8oRmSxJGriTbLXKhMdq19zHGoFRvkjAplbht/37mps4xGElkqUwgIDOaUAhExxoCGwHXtY46sfUmtu+9k3KlwiMf/DD3v/cxXjn8Mt/83GeZP3+27ZJai1uq2Wxx6PARrK/3F75BRWFpF4V9eZahdZGd5bryWQy1gTLv+Ym3s2nTKHmekucttyqgr8mXAteLKs98TyqF9sWGxliU1qvyq92KfWIT23ftplmvc+7kcapDIyBDkJIwSXpzyF4juK4l6nBH5ejQ0BA//YEPMfvQw5w/dZLS4NBFHfPUqbOcOHUWIaVPuvYl08YQBJK0ldFqtGg2mq4Rmidaq7WAlIZf+Hsf5CMfeh+/8zu/x9ee+LZfI8DV8AfeKDPa9UlVmWvsa/1ibMaYtlrR2dJyOTQaDU6fOM74rbczPDzMs9/6Bnfcez/NmWnGt2zdMK4puI4lamfLx+IRxTFjE5u4/S13M3ARRNXacOLkGVrN1HVVsSCEddOzddLOGG/pW9cAzVg3bWdZyuhwjZ9497sYHR2lVC75suyiw5/FaIXRCqxBZa7Dn1auYYWDbddO9fL9B2o13vO+93PPA2+jMTPF29/zUzz8yKMMjYwyULu4G/Vq4bqWqN1wKVIkz3OOHz+DyrXvuY/XSS1B4Iwp5YlVRJGMNaRpC6UUu26+iR3btxGGIdXKANYI18/U9+z3/U0IpMAYT1hPfnz7c6NNz6qllJIojhkYHOKdj7yfiS1baTTq3LTrVsqVjbVW6g1H1EtBo9Hk9JnzGKvRxvcjFbRXic4t3q1k2s0nVJajlEIA++7Yw+Cgc4eNjI5gfYWqlO5nkBR1TgallGvR7suk3cO1rFyL/1MIwY5bbmm/DqOIB97znxAnpQ019feJ2iNcV+oZZmYXMGaxv5TRfjlIa7HaddBzmVPOYZ/nGcZYBgbKvO2Be4njGGMMExNjBN74Ar/EedGSx3fly7OcLE2dCqB9L6oep/4CS8kohKBaW7vv+GqjT9QeYa3l1Kmzrk7eLy1Z5LlqCzILkFKQprmz+pUizTPSLEXlKdt338zuPbcifch2201biaKITFm/zirIYLEGy1n4bjkeZ/Hrdte/S3EqbSQp2onr1pi6nCg6lZw5M+3KqvFLm1vTXjdKaUOWOcMp971JnRR0EvOuA3sZHnIGjBCCsbFRojDA6kLftbS7VnvfrLAdRmF7icuN4/u8nOgTtUe00oypmXm0csuRu5natZkMAld/b6VEaUuj0UTlijzLUEpRKSc8cP/drjLUY3h4mEq17BqZWde7yklWR9pAynZUyvhVWVyrH7GR3J+XDX2i9ohms0WjXnfS1eDr+F3fp1LiIlwqd71NhXAteFxCimLXzq3cftueC45XrVYZHR3CemI6Qrb7UYN3p0kh2s3aLa50Rt6ATO0TtUfMz9dJ0xwhBMU6TsJ1V0OGYZuYsm1k5RijCKTk3Q+/nc2bN11wvEqlzG17bvGZVi4/IM216+MvXfl2HIVY31vVSVlBGCzfGON6Rp+oPcBay/x8g1zlrveTMe0+pNLrj0Xvqfb0LgTGWMbGhnnXOx/w9VTCfySI45i9e+8gCkOscQaVVhprJUEgSZKIMAidpPWhWBca7b0S4XpCn6g9wBhX+pznOda3kyz0Rve5QeBWx7N+e60UKlfcd+9+9uy+5U3HlFKye/ctDA0NAdKTPGzru2EUI4MIKQOCUPqGFY7ENyBP+0RdDYX+OL/QQGt7QfaSMdY70QOM9StGC2eduxaPEQ/cfzeVSvda/LHRUdd8V7qElkCKDt8qUKy1KkOCIHDLoUu/TsANhj5Re4AxhmYzBeFT+oRL5Wun9rksv3ZLSOMXNtu5Ywv33LV/2am6VhtgaGgQvJQW0htO1rdYL9YE9MsILS77c6W++bWDPlF7gDaGVivDzfROO/XeI/DdpJXSRGFMEIRt/+lb797L1q1blj1uqVxicHDAS22XgWX80pNWCATSL4xWLMTbe0ri9YY+UXuAVppWK6PwmSIkxrpsfiElSru1TYVbkASlNIMDFd75jvuorJD8US6VGB8b9QLUuamUT1LpbApsrWl7E25QnvaJ2gtcml6OAJ+MYjHa9e13XZ21WzrdGox2a5tu3TrOHbfvXlYKOss/YmxspC2di0Ur3FLmtBOxi65/Qsgb0ocKfaL2hCxTNFsZ2nT2PNUIDGEg28vxGK1oNVtopbnrwO2Mj4+tOFWHYcj4+AhhIADj1181SNziaVJI37kvIpCRqyiQNyZR+0kpPaDZSmmlbilx16oSpDFICUEAxrgkZ6UCFuoNokjywH1vIUlW7pcfBAGjo8Pe0jcIFvVUY7Rb7DcM27qvaK8jdeOhT9QeMDe3QJblGGudZW4NUewunTGGVpZjjCGbqzM/N8fuW25i397beuqzOjoyjAyCdrgVXGBAewMNIZBBiJQBUgZrakBxPaFP1FVgreX81Kzzn1onUYNAthtNzC80XWl0rpmfWyBNm7z1nr2Mjg63idqZP7q0Q/TwyBBhGJKmqS/cc1LVGVW+2bBPU3X1/TcmUfs66iow1jI9Pd9uCCGwRGHgyqy9oWQtLCw0WFhoMFAtceDOOy7IlFoJtYEBSqXEtU73D+ONKqVUO3vK+VLhRnT2wzoRtTA4VstE73W7qwVrLVppZmYX0DqnWO05jiMQtPvwp2lO2srJlWLz5nF23bzjTZJzOeu/VEooJbELIAhBrjVKK7Q2vuS66BWw2MnvGr1c64pLnvqXkqzbj9FednEDulayLKder6OVqwSVwruKjMBYQytTNJtufVOtMjZvGqU22Huph9YaXbi2jJPYGNd6smiQpov+VWZjteG5nFizRL1UKdi573KSZrlzXC4JvPQYKx03y1ynE+MX3sUaJG4qrjdS5ust0parbdJ5xsjwEEkct9MB24ulLXP8PHfJK8ZClum20z/zBloRjnW9pzrrq24sXNLU3+3Cd3tvuWmvkyAXQ8DlCHc5oYrFcn03vThxPUW11qSpIkszWmmLLHPGUJJE7bqoXpCU
EsrlivPN+kx/MGido/LcdQD0unERBLgRse7fejkiLkeotb5/MeNZ0/Y+Idq2HfwCbQxZlmGNImu1yPPMV5sqkjhya5ey/A3aiYFqla1bxhBCtjuuxH4tgdz3nzJFUZ+P91/s975WbYFecM27p3oh7ko/4Eo/Tm8/vIWit6kx5LlrW679GqpplpFnmYvVG00UdV//qZurCiBJYm66yWX/CynQflVoYYv+VS4hReeOtDfgrA9sAKL2ikvRmVd+r5CKPntJBm7RM2NotZo0Gw3yPPdNJ1xodekxVroZZBCwefNEOxm7kOChzslz13BCSNqLCl8sNqIh24nrUuHpRedd6j7q3PeC7aRo+y+FgDCMyJWm0UiZm6vTSltYipY8oPXaziuFYHhoECHwflPt6/kBa9pZVW0jSmzc6ftScN1I1LWiF33ZfUhHtChHqZxGvcH01DQzs/Nkee4NIABBrnRXadrNRVe8J/0q0FmWkyvfIRBXzSplQCAD1z3QXihUe5XaK333jSJpr0uJuhIu5oeRvjRaCEGWZbRaKdpHkKzRPhfVdTjJMvWmm2AlX3PRuqdouqa9O6ro8Of3IBC91UpdCaPpahhmN5xEXfMFFovlH87az31Wk3RNItxRXRsea8lVvqoBt/T/ufl5jHWx/DAIsFikb24hvXqAkL6hWnejbLn/lzPiLockvRQvxFqxrhJ1LWHUaxUCFp33xrYz+YsiPufXtD7uXyQ2914yYozh/NQMIJBBgAzcVN+WpaKDbMauOc1vuet7Kb/N1VAX1ixRew2ZruXL9LLtasdcr7tbeMmGkARRiBQSYxzBZBB4P6fFGrdicxzHPhWwt5tPa8358zPuO5jFxhbWd1zhAom4vlGp1VyB3aTzpYbHe71OlyxRlwsR9jqA1ci32oVY7zwCGUiC0OmooTdojO+yp7XyTdMoRB/Dw4MdKsHqUFozN7dAkfBirXE9/6VsN/gtnoW4MbukwGXSUXsl6HKfdd6pKyW1FNsuzRXo3Kbz885jXyzCwDd9gPYiZVnujB2jXftza11tahhIRkcG1xTmFLgl2d13KPoIWLQtxu9cZPhiv8uNXl1pywmNK6W2XRFjqhddqNvrlaaa5Y672ntrUTOstYRhQBJHWOv6Qllj0BZypcnyzEWRvF4axzGbN42v6eYIw5CRkSGK/lIgXT2W1QQywGhH3OKIV1Kb71UVuNhtevm8wDVt9V/q3bqadO7l3EEQ+i7Rvoe+daTJM02WKV8e4gyrcqXkSXfhcVb6MYIgYOvWzUjpVvzTRmOMcIGDonaq6Je6hvFfDaynGnZd+1Evh0dBSkmplCCF8KuWdHR+FrKtN0ohGBup+c4ni1iqinQ7/pbN40jhOqykaYb266jmSiFlYeeLdl+ra5ms64WLykft9l4vYcu1HnOt+6wX3GJmAJYwjMD3iYraC6oFgGTTpjEq5XJ7fN0MzG6qS5IkvhRaopX2PVFB+Z6p7eYW7R6q14Zk7UUlW8qNldxlK+Gipv5eLfECq0mVbvss995qx1zOwb10v6VG2fLjcoV2hd9Ua+1cVkIgZYCQrr15GAbcvHMbpVL3EumVdGspi35Szh9rres4HQjhOwUCCE/aq0/Qq4FL0lHXqu9dDiwl2VrP1elO62UfY1yFqZQSIQKMNeR56qx1CYEM0cIZXVu3TBCGF3dJ3RB8v1W52GbS+NCqMcavFuh0ZSFWNjw3mhtrtfFeUat/JQm8VMKt9COsFi3pxY3SKVlXgvERqSAIkdIZOWEYUiqXyJUm8D2pqpUy27ZuarfhWWlsS79PGEWuuZpvOoHxLdZxi1co7XMJ8C2FeHOE6lpQBdYTlyxRVyJYt+1X+mw5adnLj7BW1aGT9EtJewHRrVsRBZ+Bj/eZCikR0jXdFSJjaGiQTT24prp9Xi6VCMMAYwK3aIUVXr1wOnAYugYUCLfSnzXAKotir3SzXAtY6421JqKuNm1ejrt6Jed+5+er3RQr6ardigm7beuc78Z3LMEli/iOz1IIwiBsZ02Njw0zOtp9fdHVdPbBwQEqlTJ5nhEWK6z4HFcpA8IgdAaWdTqqsZbgIr3/S2/Ia43Ay+GSY/1r2a8Xo2q5c61Vyl6sZO/2I7p6KevzQkOEUFiTeSNIIqRg8+ZxBqrVC47fq7FZKZcplxOmpn27dWHbnn1r3c1hsX65SXVRv8FKOv1GIOtl01F7IeJaP+tmoV8KegmvdnOtCBlgLW5Zc+1i8sb3nxJSEAQh27ZOkCTxm8bbCxHiJKJUNFSzFvw+Rak03tByHQN1YXld9DXo/J4bgaRwCUS9GF9Y5zarWe5Lf+zO95eeq5d4fy/j7aofA8biVy1RtHIXOQqjyGVQyYAkidl96w6iKHzTeLuNayk5kjimXCk5694WXDXew5D5/Z0aorXues16wXLlNxsB6zL1r4Uka3Uz9UK45f5fiZii8Fku+QHbOirSJzZDEAbYTJHlmiSOGRqscsuubav6ZrsRF1wl6vDQoNNDjVtuXZvEn9vV9hcrWF/KxLLS73KtE/eyWv3LbbOW7ZfbtxOXSxXoxRAUeIc8gjguYaXCIJ1rSkriOGTXzs2Mj492NVRWI64QgqSUsHnzuJPSfqnJLMsJA9EumTbWEgiJseKSa/w3Ii5776nVPrddLvJaiLuUpCvptr1ipe2V1mgjUFYQhDHSt4JM4gSLQKmcm3dupZQkK36X5YhlrSWOIjZtGieOS0SBWycg8IUDcVyiVK4QRTFRlCCWpBBeLGGvex11rVN15+fd9M5etl9tv17f6yWitfTzc+fnmKtnICJXbkLaTpZWeYbEsO+OWwnDYMUbsNu5i22DIGBiYpRqtYqUgijyLrBAEoQh1YFBSuUqgQwoJdGb6qYuhXSr6c/XCtbNPdWLXtqr2rCY4b42Q2rpdsVxehkruAZmrx89SyvTBHEJTIYMQrRWtFoN5mZnueXmm7htz82r5hh0olukamxk2LefhCAUiECCCElbGY1myrCxBIHzMCxXN3WpRLtWSQoX4fDvZnVeTr/e5dx3Oeu/V9Vganqewy8fJwhCqtWYLK2jspCs2WBufp60lXLg9p2MjQy3VZrOc3WO1wUGugcahBCMjQ1TqZSoNzOCICRKSshAkuaW8+enGBwcpFopIyivmOnfTU/uFctJ/GsBayZqnufL/ti9TOfd9l1OGi2Vkkt/hNXOV2zb+VyUiawa4zeGo0dPMjU978KkGLS0NJoNmmlOHMXsvHWQ+956J1EctW/gblPo0veWPgOMjo0wOjpE88Q5wjBCyIAwKZGUKuS5Jms1SeIA0BckUK/mC+4Vl8M4XU+smahpmnbVxXrRXVd63UnG5bZZ7f+lr4sEkUKaFUTtfA1v/rFbzSZHfnyYx7/wJeZVjdLgBNpkNJsLhNKy//btvGX/bvbcejNbtmwmz/MLJOZanovXtYEq99x9B6dOn/fLoCfESYlypUwchYAllDBQiegc7nKE7XZzrxXXksF10VN/LwTs9n8v+y73XLxe7v3VnpdKZLjwR9BaMzM9zQ+f/jbPf/c7NGdmiaK
EmaiCkQHlUsKdb72X+x64j6Fht0jEzOysX/XZZYl0kl92pOsVN0jnjbKUsPfcvZdvP/08zZYhjCKiMKBUKlGtVkmSmMFahUp55SWBlruulxLfvxaMrDUbU8sZJNBdsvZikfcyha8mtZdK5c5H5zbtxRuW/K/ynJdeeJ7vfP3vOPnGa4RCUC2VSEJByWrCMGHHHXdw655dpFnK+fPnAWexB0HQJmAx3k6iBoHLiup87iRr8RgeqnHb7u386NBRBBYZhERR7Pv8J5Ti8ILvvJKH4XKj2/muJHnXTNRuF2cl10Yvlv1KUq7bsTqjSN32XzrWTmIW2xXhSOPbPJ44epTH/+avOXfmtOuoJ0OS2FIqlxgfrjK4ZTvjO3ehtKHZbLYJuJy07Dau4vOl6ogQLpUvDCPufsttaK049sZJ6jNN5mJLcxjOnmnRqCZUH3oPteFhXGOMN/8mS6/p0vcuhVxXU49dM1GLKe5iJOVy+6ymOnSz3ItxFBJ+qQTtbN5Q6KrFe53JzUVTsvNnzjIzNUUYBBAEWKMZHqwyNjLAyPAQtU1bieKkTezIx/qL43dLmF4qzZfeIMXrYpmeYpq/797bidQcTz7+OAuvB2xp7aZaCjiZ5zTrc9zzEz/N6KYtLie2i25anHM5L8PSbS8W6y3FO7EmohZ3/lIp2etU3k23XO6z1QynQqp2I0MnKYrttM+SL75Hp3TRSmG1oRSX0ColCiSBkIxUAwZKAbXhYcpDoySl0gVSdKlhVpB1OSn/JnVDKZRStFot0jRldnaWOI6pVCrctHM746NDzJw9zfT0FKZWJstSznz9y5w9e5qHP/ifsWX7DoLgzYGG4juuNkNdDqJdqel/zUTtttDXalNCL8Rcbr/V9l16nM5pvnNqLwhbfN5JNtdHSrJlYhOlwDBaS8iylMpAhdpNN1PbeRvV4THiJLmA5Et/6KXkXCrdpZRu3agOYhlj2iv3pWnK/Pw8p06dconZ5TKagDOzDRqtFgvNJhjDmaeepJlZ3vfzH2Xztu0X3ICdunrne8Xr9cJ6n+OiJCosr0yvpOR327Y4buePu/TidmY1LUfwpUZeNz1UKUWe56RpCuDLPixxkjA4Psbw1s1kM2cZHRliZGyC6s49DG/bycBArW0AdZ67U6J3qhqdN8tSAy4Mw7baoJTyZSbyAoncbDaZnZ2l0WigtOL01Cz1yDK1kDE2VCUKFa//6IccfmY3Y5s2E8XxstduOel+JdSCy0naNRO107q9WOV6JQNoLZKgc9/CLbQcgTun2zRNfd99tzRPuVymVCpRq9WYmzrPyGCNTZs3MzSxmbioue/iJ+08/lJSduqgi8159QWPMAzbz3mee4PKqVZKKeqVCrMYWlnGLTdtZ7DWQCnFQLXCzm3bqZUS3wB4ZZJ1I+hy13Y143el32ClMVwq1mxMLXWid6Kb0dMNS0nYi49v6Tbdfpxu0rwwuornOI4pl8tv0h2NMdx2221tgnXzdXZD53g6x9hNH83z/IJHQdg4jtvPcRy31atWo87s9HliZQijKvtu2U6eK5LKADtv28/W2/chu+ioy0nOXsizdGZb+rrX/TvPdzmk60W5p3qZ8ottu2233Gfd9L5u23R73e3HWou0WKrvdjMWuxlKK12Lzu07pWtB1DRN25LWGEOSJCRJQhRF7f2VUjTPnWWu1WTBbmLTrXsYGZ9gy569DE5saaf9dbthOse+3Ay1EonWIlm7XdvLOf2LNQ7mLPD6ZTlzH328GTdbaye6fbAmovbRx9XCdd3Nr4/rB32i9rEh0CdqHxsC10zHaXFQ7AL+yk7aA1d5KG+COCh+HfhlXG79H9pJ+7v+/buBPwAGgNeAj9tJOycOigeB/xtIgY/aSfuyOCiGgc8A77OT3Q0DcVB8FvhNO2mPiINiwU7agfX+bqtBHBQx8GXgp+ykVVdrHH2JugrEQXEAR9K3AXcDHxAHxW3+438F/Hd20r4F+HPgN/z7/zXwnwL/A/Ar/r3fAv75CiS9EwjspD2yLl/kImEnbQZ8BfjPr+Y4rhmJ6hGIg+IPgXcBx4EP20nbFAfF48B/Yyftd8VBMQ58107aXeKg+CTwEVxvuwPA7wAx8AmcNHvUTtopcVD8MvAp/9nLwCfspG2Ig+LfAHPA/cAWnDT77JIx7QO+ZSdtA0AcFE8APwv8r8AdwN/57b4EfAFHyBwoAxUgFwfFbmCbnbRPrPDdPw78Recb4qD4Z8AHgKa/Fqf9mP+qGGchecVB8R7gIHAauAf4M+A54Nf9WD5iJ+0r4qD4IPA/+mtxHjcLnBYHxT8FdgK3+ufftZP29/xQPgf8C+D/XWH864prTaLeBvxfdtLeCczgpNJqOAB8DCfx/hnQsJP2XuAp4L/w2/yZnbQP2El7N/Ai8A879t8KPIQjxP/c5fjPA+8WB8WYOCgqwKPAjo7PPuRf/3zH+/8C+H+A/wr4fT+u31rlezwIfK/j/yruBrkbdzP88ir7g5P4vw68BXez3m4n7dtwkv/TfptvAO/w1+jfA7/Zsf9e4Gdw13JSHBRFBtLzwAM9nH/dcK0R9VU7aX/gX38P2NXDPl+zk3beTtqzwCzwef/+cx37HxAHxdfFQfEcTnLd2bH/5+ykNXbS/gjYvPTgdtK+CPwvOIn5H4EfAoWu9g+AXxUHxfeAGpD5fX5gJ+077KT9SZyEOgEIcVB8RhwU/04cFG86D+6GOdvxfwb81RqvxdN20p60kzYFXgG+2OVabAe+4K/Fbyy5Fn9tJ21qJ+054ExxPeyk1UAmDopaD2NYF1xrRE07XmsWVRPF4lhLK+xjOv43Hfv/G+CfeF3y4JJjdO7fNd5nJ+0f2Un7Vjtp3w1MAT/2779kJ+0jdtLeB/x/OHIsHuygELhp9reBSf/4d8CvdTlNc8m48g59tuu18MfvLKLq5Vr8n8Dv+2vxj1n+WnSeEyABWl3GfUVwrRF1ObwG3Odf/72L2L8GnPRT2cfXurM4KDb5553Az+FI2fm+xBHyD5bs+vdxUmoap68a/6h0Oc2LwJ4ehvMai9fiw8CbE4RXxhBO/y/GtyrEQTEGnLWTNl/juS4brjVjajn8S+BPxEHxCeCrF7H/bwHfxuUpPIcj7lrwp/7HyoFf9cQD+Kg4KH7Vv/4z4F8XO3h99u8Dj/i3/jfgT3FT+ke7nOOvgffgXEEr4Q+BvxAHxXdw1nh9jd/lnwL/QRwUx4FvAbf0sM9PAn+zxvNcVvRj/dcIxEFRBr4GPOh1wmsG4qD4M+C/t5P20NUaw0aZ+q972EnbxOmw2672WDrhHf6fu5okhb5E7WODoC9R+9gQ6BO1jw2BPlH72BDoE7WPDYE+UfvYEPj/Ac64cDeB5VRSAAAAAElFTkSuQmCC\n",
904 | "text/plain": [
905 | ""
906 | ]
907 | },
908 | "metadata": {},
909 | "output_type": "display_data"
910 | }
911 | ],
912 | "source": [
913 | "# Visualize the outputs \n",
914 | "\n",
915 | "# you can modify the index value here from 0 to 255 to test different images\n",
916 | "index = 58\n",
917 | "plt.figure(figsize=(6,3))\n",
918 | "plt.subplot(1,2,1)\n",
919 | "plot_image(index, predictions, test_labels, test_imgs)\n",
920 | "plt.show()"
921 | ]
922 | },
923 | {
924 | "cell_type": "code",
925 | "execution_count": null,
926 | "metadata": {},
927 | "outputs": [],
928 | "source": []
929 | }
930 | ],
931 | "metadata": {
932 | "coursera": {
933 | "schema_names": [
934 | "TF3C2W3-1",
935 | "TF3C2W3-2",
936 | "TF3C2W3-3",
937 | "TF3C2W3-4",
938 | "TF3C2W3-5",
939 | "TF3C2W3-6",
940 | "TF3C2W3-7"
941 | ]
942 | },
943 | "jupytext": {
944 | "encoding": "# -*- coding: utf-8 -*-"
945 | },
946 | "kernelspec": {
947 | "display_name": "Python 3",
948 | "language": "python",
949 | "name": "python3"
950 | },
951 | "language_info": {
952 | "codemirror_mode": {
953 | "name": "ipython",
954 | "version": 3
955 | },
956 | "file_extension": ".py",
957 | "mimetype": "text/x-python",
958 | "name": "python",
959 | "nbconvert_exporter": "python",
960 | "pygments_lexer": "ipython3",
961 | "version": "3.7.6"
962 | }
963 | },
964 | "nbformat": 4,
965 | "nbformat_minor": 4
966 | }
967 |
--------------------------------------------------------------------------------
/Course 2/Week4_Assignment.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "jYysdyb-CaWM"
7 | },
8 | "source": [
9 | "# Week 4 Assignment: Custom training with tf.distribute.Strategy\n",
10 | "\n",
11 | "Welcome to the final assignment of this course! For this week, you will implement a distribution strategy to train on the [Oxford Flowers 102](https://www.tensorflow.org/datasets/catalog/oxford_flowers102) dataset. As the name suggests, distribution strategies allow you to setup training across multiple devices. We are just using a single device in this lab but the syntax you'll apply should also work when you have a multi-device setup. Let's begin!"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "## Imports"
19 | ]
20 | },
21 | {
22 | "cell_type": "code",
23 | "execution_count": 1,
24 | "metadata": {
25 | "id": "dzLKpmZICaWN"
26 | },
27 | "outputs": [],
28 | "source": [
29 | "from __future__ import absolute_import, division, print_function, unicode_literals\n",
30 | "\n",
31 | "import tensorflow as tf\n",
32 | "import tensorflow_hub as hub\n",
33 | "\n",
34 | "# Helper libraries\n",
35 | "import numpy as np\n",
36 | "import os\n",
37 | "from tqdm import tqdm"
38 | ]
39 | },
40 | {
41 | "cell_type": "markdown",
42 | "metadata": {
43 | "id": "MM6W__qraV55"
44 | },
45 | "source": [
46 | "## Download the dataset"
47 | ]
48 | },
49 | {
50 | "cell_type": "code",
51 | "execution_count": 2,
52 | "metadata": {
53 | "id": "7NsM-Bma5wNw"
54 | },
55 | "outputs": [],
56 | "source": [
57 | "import tensorflow_datasets as tfds\n",
58 | "tfds.disable_progress_bar()"
59 | ]
60 | },
61 | {
62 | "cell_type": "code",
63 | "execution_count": 3,
64 | "metadata": {
65 | "id": "7MqDQO0KCaWS"
66 | },
67 | "outputs": [],
68 | "source": [
69 | "splits = ['train[:80%]', 'train[80%:90%]', 'train[90%:]']\n",
70 | "\n",
71 | "(train_examples, validation_examples, test_examples), info = tfds.load('oxford_flowers102', with_info=True, as_supervised=True, split = splits, data_dir='data/')\n",
72 | "\n",
73 | "num_examples = info.splits['train'].num_examples\n",
74 | "num_classes = info.features['label'].num_classes"
75 | ]
76 | },
77 | {
78 | "cell_type": "markdown",
79 | "metadata": {
80 | "id": "4AXoHhrsbdF3"
81 | },
82 | "source": [
83 | "## Create a strategy to distribute the variables and the graph"
84 | ]
85 | },
86 | {
87 | "cell_type": "markdown",
88 | "metadata": {
89 | "id": "5mVuLZhbem8d"
90 | },
91 | "source": [
92 | "How does `tf.distribute.MirroredStrategy` strategy work?\n",
93 | "\n",
94 | "* All the variables and the model graph are replicated on the replicas.\n",
95 | "* Input is evenly distributed across the replicas.\n",
96 | "* Each replica calculates the loss and gradients for the input it received.\n",
97 | "* The gradients are synced across all the replicas by summing them.\n",
98 | "* After the sync, the same update is made to the copies of the variables on each replica."
99 | ]
100 | },
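A minimal sketch of the replicate-then-reduce mechanism described above (assuming only TensorFlow 2.x; this is an illustrative aside, not a cell from the assignment): it runs a trivial function on every replica and sums the per-replica results with `strategy.reduce`, which is the same operation that syncs gradients during training.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def replica_fn():
    # Stand-in for a per-replica computation; a real step returns a loss.
    return tf.constant(1.0)

# Run the function once on every replica.
per_replica_values = strategy.run(replica_fn)

# Summing across replicas is the same mechanism that syncs gradients:
# with N replicas this prints N.0; on a single device it prints 1.0.
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_values, axis=None)
print(total.numpy())
```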
101 | {
102 | "cell_type": "code",
103 | "execution_count": 4,
104 | "metadata": {
105 | "id": "F2VeZUWUj5S4"
106 | },
107 | "outputs": [
108 | {
109 | "name": "stdout",
110 | "output_type": "stream",
111 | "text": [
112 | "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n"
113 | ]
114 | },
115 | {
116 | "name": "stderr",
117 | "output_type": "stream",
118 | "text": [
119 | "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n"
120 | ]
121 | },
122 | {
123 | "name": "stdout",
124 | "output_type": "stream",
125 | "text": [
126 | "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)\n"
127 | ]
128 | },
129 | {
130 | "name": "stderr",
131 | "output_type": "stream",
132 | "text": [
133 | "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)\n"
134 | ]
135 | }
136 | ],
137 | "source": [
138 | "# If the list of devices is not specified in the\n",
139 | "# `tf.distribute.MirroredStrategy` constructor, it will be auto-detected.\n",
140 | "strategy = tf.distribute.MirroredStrategy()"
141 | ]
142 | },
143 | {
144 | "cell_type": "code",
145 | "execution_count": 5,
146 | "metadata": {
147 | "id": "ZngeM_2o0_JO"
148 | },
149 | "outputs": [
150 | {
151 | "name": "stdout",
152 | "output_type": "stream",
153 | "text": [
154 | "Number of devices: 1\n"
155 | ]
156 | }
157 | ],
158 | "source": [
159 | "print('Number of devices: {}'.format(strategy.num_replicas_in_sync))"
160 | ]
161 | },
162 | {
163 | "cell_type": "markdown",
164 | "metadata": {
165 | "id": "k53F5I_IiGyI"
166 | },
167 | "source": [
168 | "## Setup input pipeline"
169 | ]
170 | },
171 | {
172 | "cell_type": "markdown",
173 | "metadata": {
174 | "id": "0Qb6nDgxiN_n"
175 | },
176 | "source": [
177 | "Set some constants, including the buffer size, number of epochs, and the image size."
178 | ]
179 | },
180 | {
181 | "cell_type": "code",
182 | "execution_count": 6,
183 | "metadata": {
184 | "id": "jwJtsCQhHK-E"
185 | },
186 | "outputs": [
187 | {
188 | "name": "stdout",
189 | "output_type": "stream",
190 | "text": [
191 | "Using data/resnet_50_feature_vector with input size (224, 224)\n"
192 | ]
193 | }
194 | ],
195 | "source": [
196 | "BUFFER_SIZE = num_examples\n",
197 | "EPOCHS = 10\n",
198 | "pixels = 224\n",
199 | "MODULE_HANDLE = 'data/resnet_50_feature_vector'\n",
200 | "IMAGE_SIZE = (pixels, pixels)\n",
201 | "print(\"Using {} with input size {}\".format(MODULE_HANDLE, IMAGE_SIZE))"
202 | ]
203 | },
204 | {
205 | "cell_type": "markdown",
206 | "metadata": {
207 | "id": "rWUl3kUk8D5d"
208 | },
209 | "source": [
210 | "Define a function to format the image (resizes the image and scales the pixel values to range from [0,1]."
211 | ]
212 | },
213 | {
214 | "cell_type": "code",
215 | "execution_count": 7,
216 | "metadata": {
217 | "id": "RHGFit478BWD"
218 | },
219 | "outputs": [],
220 | "source": [
221 | "def format_image(image, label):\n",
222 | " image = tf.image.resize(image, IMAGE_SIZE) / 255.0\n",
223 | " return image, label"
224 | ]
225 | },
226 | {
227 | "cell_type": "markdown",
228 | "metadata": {},
229 | "source": [
230 | "## Set the global batch size (please complete this section)\n",
231 | "\n",
232 | "Given the batch size per replica and the strategy, set the global batch size. \n",
233 | "- The global batch size is the batch size per replica times the number of replicas in the strategy.\n",
234 | "\n",
235 | "Hint: You'll want to use the `num_replicas_in_sync` stored in the [strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy)."
236 | ]
237 | },
238 | {
239 | "cell_type": "code",
240 | "execution_count": 8,
241 | "metadata": {},
242 | "outputs": [],
243 | "source": [
244 | "# GRADED FUNCTION\n",
245 | "def set_global_batch_size(batch_size_per_replica, strategy):\n",
246 | " '''\n",
247 | " Args:\n",
248 | " batch_size_per_replica (int) - batch size per replica\n",
249 | " strategy (tf.distribute.Strategy) - distribution strategy\n",
250 | " '''\n",
251 | " \n",
252 | " # set the global batch size\n",
253 | " ### START CODE HERE ###\n",
254 | " global_batch_size = batch_size_per_replica * strategy.num_replicas_in_sync\n",
255 | " ### END CODD HERE ###\n",
256 | " \n",
257 | " return global_batch_size"
258 | ]
259 | },
260 | {
261 | "cell_type": "markdown",
262 | "metadata": {},
263 | "source": [
264 | "Set the GLOBAL_BATCH_SIZE with the function that you just defined"
265 | ]
266 | },
267 | {
268 | "cell_type": "code",
269 | "execution_count": 9,
270 | "metadata": {},
271 | "outputs": [
272 | {
273 | "name": "stdout",
274 | "output_type": "stream",
275 | "text": [
276 | "64\n"
277 | ]
278 | }
279 | ],
280 | "source": [
281 | "BATCH_SIZE_PER_REPLICA = 64\n",
282 | "GLOBAL_BATCH_SIZE = set_global_batch_size(BATCH_SIZE_PER_REPLICA, strategy)\n",
283 | "\n",
284 | "print(GLOBAL_BATCH_SIZE)"
285 | ]
286 | },
287 | {
288 | "cell_type": "markdown",
289 | "metadata": {},
290 | "source": [
291 | "**Expected Output:**\n",
292 | "```\n",
293 | "64\n",
294 | "```"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {
300 | "id": "J7fj3GskHC8g"
301 | },
302 | "source": [
303 | "Create the datasets using the global batch size and distribute the batches for training, validation and test batches"
304 | ]
305 | },
306 | {
307 | "cell_type": "code",
308 | "execution_count": 10,
309 | "metadata": {
310 | "id": "WYrMNNDhAvVl"
311 | },
312 | "outputs": [],
313 | "source": [
314 | "train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE_PER_REPLICA).prefetch(1)\n",
315 | "validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE_PER_REPLICA).prefetch(1)\n",
316 | "test_batches = test_examples.map(format_image).batch(1)"
317 | ]
318 | },
319 | {
320 | "cell_type": "markdown",
321 | "metadata": {},
322 | "source": [
323 | "## Define the distributed datasets (please complete this section)\n",
324 | "\n",
325 | "Create the distributed datasets using `experimental_distribute_dataset()` of the [Strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy) class and pass in the training batches.\n",
326 | "- Do the same for the validation batches and test batches."
327 | ]
328 | },
329 | {
330 | "cell_type": "code",
331 | "execution_count": 11,
332 | "metadata": {},
333 | "outputs": [],
334 | "source": [
335 | "# GRADED FUNCTION\n",
336 | "def distribute_datasets(strategy, train_batches, validation_batches, test_batches):\n",
337 | " \n",
338 | " ### START CODE HERE ###\n",
339 | " train_dist_dataset = strategy.experimental_distribute_dataset(train_batches)\n",
340 | " val_dist_dataset = strategy.experimental_distribute_dataset(validation_batches)\n",
341 | " test_dist_dataset = strategy.experimental_distribute_dataset(test_batches)\n",
342 | " ### END CODE HERE ###\n",
343 | " \n",
344 | " return train_dist_dataset, val_dist_dataset, test_dist_dataset"
345 | ]
346 | },
347 | {
348 | "cell_type": "markdown",
349 | "metadata": {},
350 | "source": [
351 | "Call the function that you just defined to get the distributed datasets."
352 | ]
353 | },
354 | {
355 | "cell_type": "code",
356 | "execution_count": 12,
357 | "metadata": {},
358 | "outputs": [],
359 | "source": [
360 | "train_dist_dataset, val_dist_dataset, test_dist_dataset = distribute_datasets(strategy, train_batches, validation_batches, test_batches)"
361 | ]
362 | },
363 | {
364 | "cell_type": "markdown",
365 | "metadata": {},
366 | "source": [
367 | "Take a look at the type of the train_dist_dataset"
368 | ]
369 | },
370 | {
371 | "cell_type": "code",
372 | "execution_count": 13,
373 | "metadata": {},
374 | "outputs": [
375 | {
376 | "name": "stdout",
377 | "output_type": "stream",
378 | "text": [
379 | "\n",
380 | "\n",
381 | "\n"
382 | ]
383 | }
384 | ],
385 | "source": [
386 | "print(type(train_dist_dataset))\n",
387 | "print(type(val_dist_dataset))\n",
388 | "print(type(test_dist_dataset))"
389 | ]
390 | },
391 | {
392 | "cell_type": "markdown",
393 | "metadata": {},
394 | "source": [
395 | "**Expected Output:**\n",
396 | "```\n",
397 | "\n",
398 | "\n",
399 | "\n",
400 | "```"
401 | ]
402 | },
403 | {
404 | "cell_type": "markdown",
405 | "metadata": {},
406 | "source": [
407 | "Also get familiar with a single batch from the train_dist_dataset:\n",
408 | "- Each batch has 64 features and labels"
409 | ]
410 | },
411 | {
412 | "cell_type": "code",
413 | "execution_count": 14,
414 | "metadata": {},
415 | "outputs": [
416 | {
417 | "name": "stdout",
418 | "output_type": "stream",
419 | "text": [
420 | "WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.\n",
421 | "Instructions for updating:\n",
422 | "Use `tf.data.Iterator.get_next_as_optional()` instead.\n"
423 | ]
424 | },
425 | {
426 | "name": "stderr",
427 | "output_type": "stream",
428 | "text": [
429 | "WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.\n",
430 | "Instructions for updating:\n",
431 | "Use `tf.data.Iterator.get_next_as_optional()` instead.\n"
432 | ]
433 | },
434 | {
435 | "name": "stdout",
436 | "output_type": "stream",
437 | "text": [
438 | "x is a tuple that contains 2 values \n",
439 | "x[0] contains the features, and has shape (64, 224, 224, 3)\n",
440 | " so it has 64 examples in the batch, each is an image that is (224, 224, 3)\n",
441 | "x[1] contains the labels, and has shape (64,)\n"
442 | ]
443 | }
444 | ],
445 | "source": [
446 | "# Take a look at a single batch from the train_dist_dataset\n",
447 | "x = iter(train_dist_dataset).get_next()\n",
448 | " \n",
449 | "print(f\"x is a tuple that contains {len(x)} values \")\n",
450 | "print(f\"x[0] contains the features, and has shape {x[0].shape}\")\n",
451 | "print(f\" so it has {x[0].shape[0]} examples in the batch, each is an image that is {x[0].shape[1:]}\")\n",
452 | "print(f\"x[1] contains the labels, and has shape {x[1].shape}\")"
453 | ]
454 | },
455 | {
456 | "cell_type": "markdown",
457 | "metadata": {
458 | "id": "bAXAo_wWbWSb"
459 | },
460 | "source": [
461 | "## Create the model\n",
462 | "\n",
463 | "Use the Model Subclassing API to create model `ResNetModel` as a subclass of `tf.keras.Model`."
464 | ]
465 | },
466 | {
467 | "cell_type": "code",
468 | "execution_count": 15,
469 | "metadata": {
470 | "id": "9ODch-OFCaW4"
471 | },
472 | "outputs": [],
473 | "source": [
474 | "class ResNetModel(tf.keras.Model):\n",
475 | " def __init__(self, classes):\n",
476 | " super(ResNetModel, self).__init__()\n",
477 | " self._feature_extractor = hub.KerasLayer(MODULE_HANDLE,\n",
478 | " trainable=False) \n",
479 | " self._classifier = tf.keras.layers.Dense(classes, activation='softmax')\n",
480 | "\n",
481 | " def call(self, inputs):\n",
482 | " x = self._feature_extractor(inputs)\n",
483 | " x = self._classifier(x)\n",
484 | " return x"
485 | ]
486 | },
487 | {
488 | "cell_type": "markdown",
489 | "metadata": {},
490 | "source": [
491 | "Create a checkpoint directory to store the checkpoints (the model's weights during training)."
492 | ]
493 | },
494 | {
495 | "cell_type": "code",
496 | "execution_count": 16,
497 | "metadata": {
498 | "id": "9iagoTBfijUz"
499 | },
500 | "outputs": [],
501 | "source": [
502 | "# Create a checkpoint directory to store the checkpoints.\n",
503 | "checkpoint_dir = './training_checkpoints'\n",
504 | "checkpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")"
505 | ]
506 | },
507 | {
508 | "cell_type": "markdown",
509 | "metadata": {
510 | "id": "e-wlFFZbP33n"
511 | },
512 | "source": [
513 | "## Define the loss function\n",
514 | "\n",
515 | "You'll define the `loss_object` and `compute_loss` within the `strategy.scope()`.\n",
516 | "- `loss_object` will be used later to calculate the loss on the test set.\n",
517 | "- `compute_loss` will be used later to calculate the average loss on the training data.\n",
518 | "\n",
519 | "You will be using these two loss calculations later."
520 | ]
521 | },
522 | {
523 | "cell_type": "code",
524 | "execution_count": 17,
525 | "metadata": {
526 | "id": "R144Wci782ix"
527 | },
528 | "outputs": [],
529 | "source": [
530 | "with strategy.scope():\n",
531 | " # Set reduction to `NONE` so we can do the reduction afterwards and divide by\n",
532 | " # global batch size.\n",
533 | " loss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n",
534 | " reduction=tf.keras.losses.Reduction.NONE)\n",
535 | " # or loss_fn = tf.keras.losses.sparse_categorical_crossentropy\n",
536 | " def compute_loss(labels, predictions):\n",
537 | " per_example_loss = loss_object(labels, predictions)\n",
538 | " return tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)\n",
539 | "\n",
540 | " test_loss = tf.keras.metrics.Mean(name='test_loss')"
541 | ]
542 | },
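As a quick numeric sanity check on this scaling (a sketch with made-up losses, not part of the assignment): dividing by the *global* batch size, rather than the per-replica batch size, means that summing the scaled losses from all replicas yields the true mean over the whole global batch.

```python
import tensorflow as tf

# Suppose one replica received 4 of the 8 examples in a global batch.
per_example_loss = tf.constant([1.0, 2.0, 3.0, 4.0])

# Dividing by the global batch size (8), not the local count (4), keeps
# the replica-summed loss equal to the mean over all 8 examples.
scaled = tf.nn.compute_average_loss(per_example_loss, global_batch_size=8)
print(scaled.numpy())  # (1 + 2 + 3 + 4) / 8 = 1.25
```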
543 | {
544 | "cell_type": "markdown",
545 | "metadata": {
546 | "id": "w8y54-o9T2Ni"
547 | },
548 | "source": [
549 | "## Define the metrics to track loss and accuracy\n",
550 | "\n",
551 | "These metrics track the test loss and training and test accuracy. \n",
552 | "- You can use `.result()` to get the accumulated statistics at any time, for example, `train_accuracy.result()`."
553 | ]
554 | },
555 | {
556 | "cell_type": "code",
557 | "execution_count": 18,
558 | "metadata": {
559 | "id": "zt3AHb46Tr3w"
560 | },
561 | "outputs": [],
562 | "source": [
563 | "with strategy.scope():\n",
564 | " train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n",
565 | " name='train_accuracy')\n",
566 | " test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n",
567 | " name='test_accuracy')"
568 | ]
569 | },
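A small standalone example of the metric lifecycle (illustrative values only): a Keras metric accumulates state across `update_state` calls until it is reset, which is why the training loop further below resets the metrics at the end of every epoch.

```python
import tensorflow as tf

acc = tf.keras.metrics.SparseCategoricalAccuracy()
acc.update_state([0, 1], [[0.9, 0.1], [0.2, 0.8]])  # two correct predictions
print(acc.result().numpy())  # 1.0 -- accumulated over all updates so far
acc.reset_states()           # clear the state, e.g. at the end of an epoch
```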
570 | {
571 | "cell_type": "markdown",
572 | "metadata": {
573 | "id": "iuKuNXPORfqJ"
574 | },
575 | "source": [
576 | "## Instantiate the model, optimizer, and checkpoints\n",
577 | "\n",
578 | "This code is given to you. Just remember that they are created within the `strategy.scope()`.\n",
579 | "- Instantiate the ResNetModel, passing in the number of classes\n",
580 | "- Create an instance of the Adam optimizer.\n",
581 | "- Create a checkpoint for this model and its optimizer."
582 | ]
583 | },
584 | {
585 | "cell_type": "code",
586 | "execution_count": 19,
587 | "metadata": {
588 | "id": "OrMmakq5EqeQ"
589 | },
590 | "outputs": [],
591 | "source": [
592 | "# model and optimizer must be created under `strategy.scope`.\n",
593 | "with strategy.scope():\n",
594 | " model = ResNetModel(classes=num_classes)\n",
595 | " optimizer = tf.keras.optimizers.Adam()\n",
596 | " checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)"
597 | ]
598 | },
599 | {
600 | "cell_type": "markdown",
601 | "metadata": {},
602 | "source": [
603 | "## Training loop (please complete this section)\n",
604 | "\n",
605 | "You will define a regular training step and test step, which could work without a distributed strategy. You can then use `strategy.run` to apply these functions in a distributed manner.\n",
606 | "- Notice that you'll define `train_step` and `test_step` inside another function `train_testp_step_fns`, which will then return these two functions.\n",
607 | "\n",
608 | "### Define train_step\n",
609 | "Within the strategy's scope, define `train_step(inputs)`\n",
610 | "- `inputs` will be a tuple containing `(images, labels)`.\n",
611 | "- Create a gradient tape block.\n",
612 | "- Within the gradient tape block: \n",
613 | " - Call the model, passing in the images and setting training to be `True` (complete this part).\n",
614 | " - Call the `compute_loss` function (defined earlier) to compute the training loss (complete this part).\n",
615 | " - Use the gradient tape to calculate the gradients.\n",
616 | " - Use the optimizer to update the weights using the gradients.\n",
617 | " \n",
618 | "### Define test_step\n",
619 | "Also within the strategy's scope, define `test_step(inputs)`\n",
620 | "- `inputs` is a tuple containing `(images, labels)`.\n",
621 | " - Call the model, passing in the images and set training to `False`, because the model is not going to train on the test data. (complete this part).\n",
622 | " - Use the `loss_object`, which will compute the test loss. Check `compute_loss`, defined earlier, to see what parameters to pass into `loss_object`. (complete this part).\n",
623 | " - Next, update `test_loss` (the running test loss) with the `t_loss` (the loss for the current batch).\n",
624 | " - Also update the `test_accuracy`."
625 | ]
626 | },
627 | {
628 | "cell_type": "code",
629 | "execution_count": 20,
630 | "metadata": {
631 | "id": "zUQ_nAP1MtA9"
632 | },
633 | "outputs": [],
634 | "source": [
635 | "# GRADED FUNCTION\n",
636 | "def train_test_step_fns(strategy, model, compute_loss, optimizer, train_accuracy, loss_object, test_loss, test_accuracy):\n",
637 | " with strategy.scope():\n",
638 | " def train_step(inputs):\n",
639 | " images, labels = inputs\n",
640 | "\n",
641 | " with tf.GradientTape() as tape:\n",
642 | " ### START CODE HERE ###\n",
643 | " predictions = model(images, training = True)\n",
644 | " loss = compute_loss(lables, predictions)\n",
645 | " ### END CODE HERE ###\n",
646 | "\n",
647 | " gradients = tape.gradient(loss, model.trainable_variables)\n",
648 | " optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n",
649 | "\n",
650 | " train_accuracy.update_state(labels, predictions)\n",
651 | " return loss \n",
652 | "\n",
653 | " def test_step(inputs):\n",
654 | " images, labels = inputs\n",
655 | " \n",
656 | " ### START CODE HERE ###\n",
657 | " predictions = model(images, training = False)\n",
658 | " t_loss = compute_loss(labels, predictions)\n",
659 | " ### END CODE HERE ###\n",
660 | "\n",
661 | " test_loss.update_state(t_loss)\n",
662 | " test_accuracy.update_state(labels, predictions)\n",
663 | " \n",
664 | " return train_step, test_step"
665 | ]
666 | },
667 | {
668 | "cell_type": "markdown",
669 | "metadata": {},
670 | "source": [
671 | "Use the `train_test_step_fns` function to produce the `train_step` and `test_step` functions."
672 | ]
673 | },
674 | {
675 | "cell_type": "code",
676 | "execution_count": 21,
677 | "metadata": {},
678 | "outputs": [],
679 | "source": [
680 | "train_step, test_step = train_test_step_fns(strategy, model, compute_loss, optimizer, train_accuracy, loss_object, test_loss, test_accuracy)"
681 | ]
682 | },
683 | {
684 | "cell_type": "markdown",
685 | "metadata": {},
686 | "source": [
687 | "## Distributed training and testing (please complete this section)\n",
688 | "\n",
689 | "The `train_step` and `test_step` could be used in a non-distributed, regular model training. To apply them in a distributed way, you'll use [strategy.run](https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy#run).\n",
690 | "\n",
691 | "`distributed_train_step`\n",
692 | "- Call the `run` function of the `strategy`, passing in the train step function (which you defined earlier), as well as the arguments that go in the train step function.\n",
693 | "- The run function is defined like this `run(fn, args=() )`. \n",
694 | " - `args` will take in the dataset inputs\n",
695 | "\n",
696 | "`distributed_test_step`\n",
697 | "- Similar to training, the distributed test step will use the `run` function of your strategy, taking in the test step function as well as the dataset inputs that go into the test step function."
698 | ]
699 | },
700 | {
701 | "cell_type": "markdown",
702 | "metadata": {},
703 | "source": [
704 | "#### Hint:\n",
705 | "- You saw earlier that each batch in `train_dist_dataset` is tuple with two values:\n",
706 | " - a batch of features\n",
707 | " - a batch of labels.\n",
708 | "\n",
709 | "Let's think about how you'll want to pass in the dataset inputs into `args` by running this next cell of code:"
710 | ]
711 | },
712 | {
713 | "cell_type": "code",
714 | "execution_count": 22,
715 | "metadata": {},
716 | "outputs": [
717 | {
718 | "name": "stdout",
719 | "output_type": "stream",
720 | "text": [
721 | "When passing in args=list_of_inputs:\n",
722 | "number of arguments passed is 2\n",
723 | "\n",
724 | "When passing in args=(list_of_inputs)\n",
725 | "number of arguments passed is 2\n",
726 | "\n",
727 | "When passing in args=(list_of_inputs,)\n",
728 | "number of arguments passed is 1\n"
729 | ]
730 | }
731 | ],
732 | "source": [
733 | "#See various ways of passing in the inputs \n",
734 | "\n",
735 | "def fun1(args=()):\n",
736 | " print(f\"number of arguments passed is {len(args)}\")\n",
737 | " \n",
738 | " \n",
739 | "list_of_inputs = [1,2]\n",
740 | "print(\"When passing in args=list_of_inputs:\")\n",
741 | "fun1(args=list_of_inputs)\n",
742 | "print()\n",
743 | "print(\"When passing in args=(list_of_inputs)\")\n",
744 | "fun1(args=(list_of_inputs))\n",
745 | "print()\n",
746 | "print(\"When passing in args=(list_of_inputs,)\")\n",
747 | "fun1(args=(list_of_inputs,))"
748 | ]
749 | },
750 | {
751 | "cell_type": "markdown",
752 | "metadata": {},
753 | "source": [
754 | "Notice that depending on how `list_of_inputs` is passed to `args` affects whether `fun1` sees one or two positional arguments. \n",
755 | "- If you see an error message about positional arguments when running the training code later, please come back to check how you're passing in the inputs to `run`.\n",
756 | "\n",
757 | "Please complete the following function."
758 | ]
759 | },
760 | {
761 | "cell_type": "code",
762 | "execution_count": 23,
763 | "metadata": {},
764 | "outputs": [],
765 | "source": [
766 | "def distributed_train_test_step_fns(strategy, train_step, test_step, model, compute_loss, optimizer, train_accuracy, loss_object, test_loss, test_accuracy):\n",
767 | " with strategy.scope():\n",
768 | " @tf.function\n",
769 | " def distributed_train_step(dataset_inputs):\n",
770 | " ### START CODE HERE ###\n",
771 | " per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))\n",
772 | " ### END CODE HERE ###\n",
773 | " return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,\n",
774 | " axis=None)\n",
775 | "\n",
776 | " @tf.function\n",
777 | " def distributed_test_step(dataset_inputs):\n",
778 | " ### START CODE HERE ###\n",
779 | " return strategy.run(test_step, args=(dataset_inputs,))\n",
780 | " ### END CODE HERE ###\n",
781 | " \n",
782 | " return distributed_train_step, distributed_test_step"
783 | ]
784 | },
785 | {
786 | "cell_type": "markdown",
787 | "metadata": {},
788 | "source": [
789 | "Call the function that you just defined to get the distributed train step function and distributed test step function."
790 | ]
791 | },
792 | {
793 | "cell_type": "code",
794 | "execution_count": 24,
795 | "metadata": {},
796 | "outputs": [],
797 | "source": [
798 | "distributed_train_step, distributed_test_step = distributed_train_test_step_fns(strategy, train_step, test_step, model, compute_loss, optimizer, train_accuracy, loss_object, test_loss, test_accuracy)"
799 | ]
800 | },
801 | {
802 | "cell_type": "markdown",
803 | "metadata": {},
804 | "source": [
805 | "**An important note before you continue:** \n",
806 | "\n",
807 | "The following sections will guide you through how to train your model and save it to a .zip file. These sections are **not** required for you to pass this assignment but you are encouraged to continue anyway. If you consider no more work is needed in previous sections, please submit now and carry on.\n",
808 | "\n",
809 | "After training your model, you can download it as a .zip file and upload it back to the platform to know how well it performed. However, training your model takes around 20 minutes within the Coursera environment. Because of this, there are two methods to train your model:\n",
810 | "\n",
811 | "**Method 1**\n",
812 | "\n",
813 | "If 20 mins is too long for you, we recommend to download this notebook (after submitting it for grading) and upload it to [Colab](https://colab.research.google.com/) to finish the training in a GPU-enabled runtime. If you decide to do this, these are the steps to follow:\n",
814 | "\n",
815 | "- Save this notebok.\n",
816 | "- Click the `jupyter` logo on the upper left corner of the window. This will take you to the Jupyter workspace.\n",
817 | "- Select this notebook (C2W4_Assignment.ipynb) and click `Shutdown`.\n",
818 | "- Once the notebook is shutdown, you can go ahead and download it.\n",
819 | "- Head over to [Colab](https://colab.research.google.com/) and select the `upload` tab and upload your notebook.\n",
820 | "- Before running any cell go into `Runtime` --> `Change Runtime Type` and make sure that `GPU` is enabled.\n",
821 | "- Run all of the cells in the notebook. After training, follow the rest of the instructions of the notebook to download your model.\n",
822 | "\n",
823 | "**Method 2**\n",
824 | "\n",
825 | "If you prefer to wait the 20 minutes and not leave Coursera, keep going through this notebook. Once you are done, follow these steps:\n",
826 | "- Click the `jupyter` logo on the upper left corner of the window. This will take you to the jupyter filesystem.\n",
827 | "- In the filesystem you should see a file named `mymodel.zip`. Go ahead and download it.\n",
828 | "\n",
829 | "Independent of the method you choose, you should end up with a `mymodel.zip` file which can be uploaded for evaluation after this assignment. Once again, this is optional but we strongly encourage you to do it as it is a lot of fun. \n",
830 | "\n",
831 | "With this out of the way, let's continue.\n",
832 | "\n",
833 | "\n",
834 | "\n",
835 | "## Run the distributed training in a loop\n",
836 | "\n",
837 | "You'll now use a for-loop to go through the desired number of epochs and train the model in a distributed manner.\n",
838 | "In each epoch:\n",
839 | "- Loop through each distributed training set\n",
840 | " - For each training batch, call `distributed_train_step` and get the loss.\n",
841 | "- After going through all training batches, calculate the training loss as the average of the batch losses.\n",
842 | "- Loop through each batch of the distributed test set.\n",
843 | " - For each test batch, run the distributed test step. The test loss and test accuracy are updated within the test step function.\n",
844 | "- Print the epoch number, training loss, training accuracy, test loss and test accuracy.\n",
845 | "- Reset the losses and accuracies before continuing to another epoch."
846 | ]
847 | },
848 | {
849 | "cell_type": "code",
850 | "execution_count": null,
851 | "metadata": {
852 | "id": "gX975dMSNw0e"
853 | },
854 | "outputs": [],
855 | "source": [
856 | "# Running this cell in Coursera takes around 20 mins\n",
857 | "with strategy.scope():\n",
858 | " for epoch in range(EPOCHS):\n",
859 | " # TRAIN LOOP\n",
860 | " total_loss = 0.0\n",
861 | " num_batches = 0\n",
862 | " for x in tqdm(train_dist_dataset):\n",
863 | " total_loss += distributed_train_step(x)\n",
864 | " num_batches += 1\n",
865 | " train_loss = total_loss / num_batches\n",
866 | "\n",
867 | " # TEST LOOP\n",
868 | " for x in test_dist_dataset:\n",
869 | " distributed_test_step(x)\n",
870 | "\n",
871 | " template = (\"Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, \"\n",
872 | " \"Test Accuracy: {}\")\n",
873 | " print (template.format(epoch+1, train_loss,\n",
874 | " train_accuracy.result()*100, test_loss.result(),\n",
875 | " test_accuracy.result()*100))\n",
876 | "\n",
877 | " test_loss.reset_states()\n",
878 | " train_accuracy.reset_states()\n",
879 | " test_accuracy.reset_states()"
880 | ]
881 | },
882 | {
883 | "cell_type": "markdown",
884 | "metadata": {
885 | "id": "Z1YvXqOpwy08"
886 | },
887 | "source": [
888 | "Things to note in the example above:\n",
889 | "\n",
890 | "* We are iterating over the `train_dist_dataset` and `test_dist_dataset` using a `for x in ...` construct.\n",
891 | "* The scaled loss is the return value of the `distributed_train_step`. This value is aggregated across replicas using the `tf.distribute.Strategy.reduce` call and then across batches by summing the return value of the `tf.distribute.Strategy.reduce` calls.\n",
892 | "* `tf.keras.Metrics` should be updated inside `train_step` and `test_step` that gets executed by `tf.distribute.Strategy.experimental_run_v2`.\n",
893 | "*`tf.distribute.Strategy.experimental_run_v2` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can do `tf.distribute.Strategy.reduce` to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results` to get the list of values contained in the result, one per local replica.\n"
894 | ]
895 | },
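To illustrate the last bullet, here is a minimal sketch (illustrative values, not part of the assignment) contrasting the two ways of consuming per-replica results:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def step_fn():
    return tf.constant(0.5)  # stand-in for a per-replica loss

per_replica_losses = strategy.run(step_fn)

# Option 1: aggregate across replicas (what the training loop above does).
summed = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

# Option 2: inspect the raw values, one entry per local replica.
local_values = strategy.experimental_local_results(per_replica_losses)
print(summed.numpy(), [v.numpy() for v in local_values])
```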
896 | {
897 | "cell_type": "markdown",
898 | "metadata": {
899 | "id": "WEaNCzYQvFqo"
900 | },
901 | "source": [
902 | "# Save the Model for submission (Optional)\n",
903 | "\n",
904 | "You'll get a saved model of this trained model. You'll then need to zip that to upload it to the testing infrastructure. We provide the code to help you with that here:\n",
905 | "\n",
906 | "## Step 1: Save the model as a SavedModel\n",
907 | "This code will save your model as a SavedModel"
908 | ]
909 | },
910 | {
911 | "cell_type": "code",
912 | "execution_count": null,
913 | "metadata": {
914 | "id": "1zAlTlRxrqFu"
915 | },
916 | "outputs": [],
917 | "source": [
918 | "model_save_path = \"./tmp/mymodel/1/\"\n",
919 | "tf.saved_model.save(model, model_save_path)"
920 | ]
921 | },
922 | {
923 | "cell_type": "markdown",
924 | "metadata": {
925 | "id": "e0Zfmx6LvTJA"
926 | },
927 | "source": [
928 | "## Step 2: Zip the SavedModel Directory into /mymodel.zip\n",
929 | "\n",
930 | "This code will zip your saved model directory contents into a single file.\n",
931 | "\n",
932 | "If you are on colab, you can use the file browser pane to the left of colab to find `mymodel.zip`. Right click on it and select 'Download'.\n",
933 | "\n",
934 | "If the download fails because you aren't allowed to download multiple files from colab, check out the guidance here: https://ccm.net/faq/32938-google-chrome-allow-websites-to-perform-simultaneous-downloads\n",
935 | "\n",
936 | "If you are in Coursera, follow the instructions previously provided.\n",
937 | "\n",
938 | "It's a large file, so it might take some time to download."
939 | ]
940 | },
941 | {
942 | "cell_type": "code",
943 | "execution_count": null,
944 | "metadata": {
945 | "id": "gMuo2wQls41l"
946 | },
947 | "outputs": [],
948 | "source": [
949 | "import os\n",
950 | "import zipfile\n",
951 | "\n",
952 | "def zipdir(path, ziph):\n",
953 | " # ziph is zipfile handle\n",
954 | " for root, dirs, files in os.walk(path):\n",
955 | " for file in files:\n",
956 | " ziph.write(os.path.join(root, file))\n",
957 | "\n",
958 | "zipf = zipfile.ZipFile('./mymodel.zip', 'w', zipfile.ZIP_DEFLATED)\n",
959 | "zipdir('./tmp/mymodel/1/', zipf)\n",
960 | "zipf.close()"
961 | ]
962 | }
963 | ],
964 | "metadata": {
965 | "accelerator": "GPU",
966 | "colab": {
967 | "collapsed_sections": [],
968 | "name": "ExerciseAnswer.ipynb",
969 | "private_outputs": true,
970 | "provenance": []
971 | },
972 | "coursera": {
973 | "schema_names": [
974 | "TF3C2W4-1",
975 | "TF3C2W4-2",
976 | "TF3C2W4-3",
977 | "TF3C2W4-4"
978 | ]
979 | },
980 | "kernelspec": {
981 | "display_name": "Python 3",
982 | "language": "python",
983 | "name": "python3"
984 | },
985 | "language_info": {
986 | "codemirror_mode": {
987 | "name": "ipython",
988 | "version": 3
989 | },
990 | "file_extension": ".py",
991 | "mimetype": "text/x-python",
992 | "name": "python",
993 | "nbconvert_exporter": "python",
994 | "pygments_lexer": "ipython3",
995 | "version": "3.7.6"
996 | }
997 | },
998 | "nbformat": 4,
999 | "nbformat_minor": 4
1000 | }
1001 |
--------------------------------------------------------------------------------
/Course 3/Week 1/birds.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/fd6bcab0d58517f07e4423064a592889c28fc462/Course 3/Week 1/birds.h5
--------------------------------------------------------------------------------
/Course 3/Week 2/results.data:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/fd6bcab0d58517f07e4423064a592889c28fc462/Course 3/Week 2/results.data
--------------------------------------------------------------------------------
/Course 3/Week 3/model.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/fd6bcab0d58517f07e4423064a592889c28fc462/Course 3/Week 3/model.h5
--------------------------------------------------------------------------------
/Course 3/Week 4/Week4_Assignment.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "accelerator": "GPU",
6 | "colab": {
7 | "name": "Copy of C3W4_Assignment.ipynb",
8 | "private_outputs": true,
9 | "provenance": [],
10 | "collapsed_sections": []
11 | },
12 | "kernelspec": {
13 | "display_name": "Python 3",
14 | "language": "python",
15 | "name": "python3"
16 | },
17 | "language_info": {
18 | "codemirror_mode": {
19 | "name": "ipython",
20 | "version": 3
21 | },
22 | "file_extension": ".py",
23 | "mimetype": "text/x-python",
24 | "name": "python",
25 | "nbconvert_exporter": "python",
26 | "pygments_lexer": "ipython3",
27 | "version": "3.7.4"
28 | }
29 | },
30 | "cells": [
31 | {
32 | "cell_type": "markdown",
33 | "metadata": {
34 | "id": "vNQiSujBfjWj"
35 | },
36 | "source": [
37 | "# **Week 4 Assignment: Saliency Maps**\n",
38 | "\n",
39 | "Welcome to the final programming exercise of this course! For this week, your task is to adapt the [Cats vs Dogs](https://www.tensorflow.org/datasets/catalog/cats_vs_dogs) Class Activation Map ungraded lab (the second ungraded lab of this week) and make it generate saliency maps instead.\n",
40 | "\n",
41 | "As discussed in the lectures, a saliency map shows the pixels which greatly impacts the classification of an image. \n",
42 | "- This is done by getting the gradient of the loss with respect to changes in the pixel values, then plotting the results. \n",
43 | "- From there, you can see if your model is looking at the correct features when classifying an image. \n",
44 | " - For example, if you're building a dog breed classifier, you should be wary if your saliency map shows strong pixels outside the dog itself (e.g. sky, grass, dog house, etc...).\n",
45 | "\n",
46 | "In this assignment, you will be given prompts but less starter code to fill in. \n",
47 | "- It's good practice for you to try to write as much of this code as you can from memory or by searching the web.\n",
48 | "- **Whenever you feel stuck**, please refer back to the labs of this week to see how to write the code. In particular, look at:\n",
49 | " - **Ungraded Lab 2: Cats vs Dogs CAM**\n",
50 | " - **Ungraded Lab 3: Saliency**\n",
51 | "\n",
52 | "\n"
53 | ]
54 | },
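55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "As a quick preview, the core of the technique fits in a few lines. The sketch below is illustrative only; it assumes a `model`, a preprocessed and batched `img_tensor`, and a one-hot `label`, all of which you will create later in this notebook:\n",
60 | "\n",
61 | "```Python\n",
62 | "import tensorflow as tf\n",
63 | "\n",
64 | "with tf.GradientTape() as tape:\n",
65 | "    inputs = tf.cast(img_tensor, tf.float32)\n",
66 | "    tape.watch(inputs)  # watch the pixels so we can differentiate w.r.t. them\n",
67 | "    predictions = model(inputs)\n",
68 | "    loss = tf.keras.losses.categorical_crossentropy(label, predictions)\n",
69 | "\n",
70 | "# gradient of the loss w.r.t. the pixels; large magnitudes mark salient pixels\n",
71 | "saliency = tf.reduce_sum(tf.abs(tape.gradient(loss, inputs)), axis=-1)\n",
72 | "```"
73 | ]
74 | },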
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {
58 | "id": "wDHISSfBq40T"
59 | },
60 | "source": [
61 | "### Download test files and weights\n",
62 | "\n",
63 | "Let's begin by first downloading files we will be using for this lab."
64 | ]
65 | },
66 | {
67 | "cell_type": "code",
68 | "metadata": {
69 | "id": "Laatr1c6lr1w"
70 | },
71 | "source": [
72 | "# Download the same test files from the Cats vs Dogs ungraded lab\n",
73 | "!wget -O cat1.jpg https://storage.googleapis.com/laurencemoroney-blog.appspot.com/MLColabImages/cat1.jpg\n",
74 | "!wget -O cat2.jpg https://storage.googleapis.com/laurencemoroney-blog.appspot.com/MLColabImages/cat2.jpg\n",
75 | "!wget -O catanddog.jpg https://storage.googleapis.com/laurencemoroney-blog.appspot.com/MLColabImages/catanddog.jpg\n",
76 | "!wget -O dog1.jpg https://storage.googleapis.com/laurencemoroney-blog.appspot.com/MLColabImages/dog1.jpg\n",
77 | "!wget -O dog2.jpg https://storage.googleapis.com/laurencemoroney-blog.appspot.com/MLColabImages/dog2.jpg\n",
78 | "\n",
79 | "# Download prepared weights\n",
80 | "!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1kipXTxesGJKGY1B8uSPRvxROgOH90fih' -O 0_epochs.h5\n",
81 | "!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1oiV6tjy5k7h9OHGTQaf0Ohn3FmF-uOs1' -O 15_epochs.h5\n"
82 | ],
83 | "execution_count": null,
84 | "outputs": []
85 | },
86 | {
87 | "cell_type": "markdown",
88 | "metadata": {
89 | "id": "g24L3lKwqb3E"
90 | },
91 | "source": [
92 | "### Import the required packages\n",
93 | "\n",
94 | "Please import:\n",
95 | "\n",
96 | " * Tensorflow\n",
97 | " * Tensorflow Datasets\n",
98 | " * Numpy\n",
99 | " * Matplotlib's PyPlot\n",
100 | " * Keras plot_model utility\n",
101 | " * Keras Models API classes you will be using\n",
102 | " * Keras layers you will be using\n",
103 | " * OpenCV (cv2)"
104 | ]
105 | },
106 | {
107 | "cell_type": "code",
108 | "metadata": {
109 | "id": "X86LKLvpBO2S"
110 | },
111 | "source": [
112 | "import tensorflow as tf\r\n",
113 | "import tensorflow_datasets as tfds\r\n",
114 | "import numpy as np\r\n",
115 | "import matplotlib.pyplot as plt\r\n",
116 | "from tensorflow.keras.utils import plot_model\r\n",
117 | "import cv2"
118 | ],
119 | "execution_count": null,
120 | "outputs": []
121 | },
122 | {
123 | "cell_type": "markdown",
124 | "metadata": {
125 | "id": "th4dA3I8-9Ue"
126 | },
127 | "source": [
128 | "### Download and prepare the dataset\n",
129 | "\n"
130 | ]
131 | },
132 | {
133 | "cell_type": "markdown",
134 | "metadata": {
135 | "id": "y1hujOK9rDyU"
136 | },
137 | "source": [
138 | "#### Load Cats vs Dogs \n",
139 | "\n",
140 | "* Required: Use Tensorflow Datasets to fetch the `cats_vs_dogs` dataset. \n",
141 | " * Use the first 80% of the *train* split of the said dataset to create your training set.\n",
142 | " * Set the `as_supervised` flag to create `(image, label)` pairs.\n",
143 | " \n",
144 | "* Optional: You can create validation and test sets from the remaining 20% of the *train* split of `cats_vs_dogs` (i.e. you already used 80% for the train set). This is if you intend to train the model beyond what is required for submission."
145 | ]
146 | },
147 | {
148 | "cell_type": "code",
149 | "metadata": {
150 | "id": "7w5HNdoHBQv_"
151 | },
152 | "source": [
153 | "# Load the data and create the train set (optional: val and test sets)\n",
154 | "\n",
155 | "train_set = tfds.load('cats_vs_dogs', as_supervised=True, split='train[:80%]')"
156 | ],
157 | "execution_count": null,
158 | "outputs": []
159 | },
160 | {
161 | "cell_type": "markdown",
162 | "metadata": {
163 | "id": "tXp0mV5Rbo76"
164 | },
165 | "source": [
166 | "#### Create preprocessing function\n",
167 | "\n",
168 | "Define a function that takes in an image and label. This will:\n",
169 | " * cast the image to float32\n",
170 | " * normalize the pixel values to [0, 1]\n",
171 | " * resize the image to 300 x 300\n"
172 | ]
173 | },
174 | {
175 | "cell_type": "code",
176 | "metadata": {
177 | "id": "pRkrL2aK2_UZ"
178 | },
179 | "source": [
180 | "def augmentimages(image, label):\n",
181 | " image = tf.cast(image, dtype=tf.float32)\n",
182 | " image /= 255\n",
183 | " image = tf.image.resize(image, (300,300))\n",
184 | " return image, label"
185 | ],
186 | "execution_count": null,
187 | "outputs": []
188 | },
189 | {
190 | "cell_type": "markdown",
191 | "metadata": {
192 | "id": "pzvF61GV32k_"
193 | },
194 | "source": [
195 | "#### Preprocess the training set\n",
196 | "\n",
197 | "Use the `map()` and pass in the method that you just defined to preprocess the training set.\n"
198 | ]
199 | },
200 | {
201 | "cell_type": "code",
202 | "metadata": {
203 | "id": "vpNEfDKM353a"
204 | },
205 | "source": [
206 | "augmented_training_data = train_set.map(augmentimages)"
207 | ],
208 | "execution_count": null,
209 | "outputs": []
210 | },
211 | {
212 | "cell_type": "markdown",
213 | "metadata": {
214 | "id": "Y4nFaMIMbrvA"
215 | },
216 | "source": [
217 | "#### Create batches of the training set\n",
218 | "\n",
219 | "This is already provided for you. Normally, you will want to shuffle the training set. But for predictability in the grading, we will simply create the batches.\n",
220 | "\n",
221 | "```Python\n",
222 | "# Shuffle the data if you're working on your own personal project \n",
223 | "train_batches = augmented_training_data.shuffle(1024).batch(32)\n",
224 | "```"
225 | ]
226 | },
227 | {
228 | "cell_type": "code",
229 | "metadata": {
230 | "id": "POhDDPBY3vnL"
231 | },
232 | "source": [
233 | "train_batches = augmented_training_data.batch(32)"
234 | ],
235 | "execution_count": null,
236 | "outputs": []
237 | },
238 | {
239 | "cell_type": "markdown",
240 | "metadata": {
241 | "id": "za5HxgT1_Cw6"
242 | },
243 | "source": [
244 | "### Build the Cats vs Dogs classifier \n",
245 | "\n",
246 | "You'll define a model that is nearly the same as the one in the Cats vs. Dogs CAM lab.\n",
247 | "* Please preserve the architecture of the model in the Cats vs Dogs CAM lab (this week's second lab) except for the final `Dense` layer.\n",
248 | "* You should modify the Cats vs Dogs model at the last dense layer to output 2 neurons instead of 1. \n",
249 | " - This is because you will adapt the `do_salience()` function from the lab and that works with one-hot encoded labels. \n",
250 | " - You can do this by changing the `units` argument of the output Dense layer from 1 to 2, with one for each of the classes (i.e. cats and dogs).\n",
251 | " - You should choose an activation that outputs a probability for each of the 2 classes (i.e. categories), where the sum of the probabilities adds up to 1."
252 | ]
253 | },
254 | {
255 | "cell_type": "code",
256 | "metadata": {
257 | "id": "IoyCA80GBSlG"
258 | },
259 | "source": [
260 | "# YOUR CODE HERE\n",
261 | "from tensorflow.keras.models import Sequential, Model\n",
262 | "from tensorflow.keras.layers import Dense, Conv2D, Flatten, MaxPooling2D, GlobalAveragePooling2D\n",
263 | "\n",
264 | "\n",
265 | "model = Sequential()\n",
266 | "model.add(Conv2D(16,input_shape=(300,300,3),kernel_size=(3,3),activation='relu',padding='same'))\n",
267 | "model.add(MaxPooling2D(pool_size=(2,2)))\n",
268 | "\n",
269 | "model.add(Conv2D(32,kernel_size=(3,3),activation='relu',padding='same'))\n",
270 | "model.add(MaxPooling2D(pool_size=(2,2)))\n",
271 | "\n",
272 | "model.add(Conv2D(64,kernel_size=(3,3),activation='relu',padding='same'))\n",
273 | "model.add(MaxPooling2D(pool_size=(2,2)))\n",
274 | "\n",
275 | "model.add(Conv2D(128,kernel_size=(3,3),activation='relu',padding='same'))\n",
276 | "model.add(GlobalAveragePooling2D())\n",
277 | "model.add(Dense(2,activation='softmax'))\n",
278 | "\n",
279 | "model.summary()"
280 | ],
281 | "execution_count": null,
282 | "outputs": []
283 | },
284 | {
285 | "cell_type": "markdown",
286 | "metadata": {
287 | "id": "ktnATyllHXC4"
288 | },
289 | "source": [
290 | "**Expected Output:**\n",
291 | "\n",
292 | "```txt\n",
293 | "Model: \"sequential\"\n",
294 | "_________________________________________________________________\n",
295 | "Layer (type) Output Shape Param # \n",
296 | "=================================================================\n",
297 | "conv2d (Conv2D) (None, 300, 300, 16) 448 \n",
298 | "_________________________________________________________________\n",
299 | "max_pooling2d (MaxPooling2D) (None, 150, 150, 16) 0 \n",
300 | "_________________________________________________________________\n",
301 | "conv2d_1 (Conv2D) (None, 150, 150, 32) 4640 \n",
302 | "_________________________________________________________________\n",
303 | "max_pooling2d_1 (MaxPooling2 (None, 75, 75, 32) 0 \n",
304 | "_________________________________________________________________\n",
305 | "conv2d_2 (Conv2D) (None, 75, 75, 64) 18496 \n",
306 | "_________________________________________________________________\n",
307 | "max_pooling2d_2 (MaxPooling2 (None, 37, 37, 64) 0 \n",
308 | "_________________________________________________________________\n",
309 | "conv2d_3 (Conv2D) (None, 37, 37, 128) 73856 \n",
310 | "_________________________________________________________________\n",
311 | "global_average_pooling2d (Gl (None, 128) 0 \n",
312 | "_________________________________________________________________\n",
313 | "dense (Dense) (None, 2) 258 \n",
314 | "=================================================================\n",
315 | "Total params: 97,698\n",
316 | "Trainable params: 97,698\n",
317 | "Non-trainable params: 0\n",
318 | "_________________________________________________________________\n",
319 | "```"
320 | ]
321 | },
322 | {
323 | "cell_type": "markdown",
324 | "metadata": {
325 | "id": "J6nou82P_b5d"
326 | },
327 | "source": [
328 | "### Create a function to generate the saliency map\n",
329 | "\n",
330 | "Complete the `do_salience()` function below to save the **normalized_tensor** image. \n",
331 | "- The major steps are listed as comments below.\n",
332 | " - Each section may involve multiple lines of code.\n",
333 | "- Try your best to write the code from memory or by performing web searches.\n",
334 | " - Whenever you get stuck, you can review the \"saliency\" lab (the third lab of this week) to help remind you of what code to write"
335 | ]
336 | },
337 | {
338 | "cell_type": "code",
339 | "metadata": {
340 | "id": "sKbvh3bl9vnG"
341 | },
342 | "source": [
343 | "def do_salience(image, model, label, prefix):\n",
344 | " '''\n",
345 | " Generates the saliency map of a given image.\n",
346 | "\n",
347 | " Args:\n",
348 | " image (file) -- picture that the model will classify\n",
349 | " model (keras Model) -- your cats and dogs classifier\n",
350 | " label (int) -- ground truth label of the image\n",
351 | " prefix (string) -- prefix to add to the filename of the saliency map\n",
352 | " '''\n",
353 | "\n",
354 | " # Read the image and convert channel order from BGR to RGB\n",
355 | " img = cv2.imread(image)\n",
356 | " img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
357 | "\n",
358 | " # Resize the image to 300 x 300 and normalize pixel values to the range [0, 1]\n",
359 | " img = cv2.resize(img, (300, 300)) / 255.0\n",
360 | "\n",
361 | " # Add an additional dimension (for the batch), and save this in a new variable\n",
362 | " exp_image = np.expand_dims(img, axis=0)\n",
363 | "\n",
364 | "\n",
365 | " # Declare the number of classes\n",
366 | " num_classes = 2\n",
367 | "\n",
368 | "\n",
369 | " # Define the expected output array by one-hot encoding the label\n",
370 | " # The length of the array is equal to the number of classes\n",
371 | " expected_output = tf.one_hot([label] * exp_image.shape[0], num_classes)\n",
372 | "\n",
373 | "\n",
374 | "  # Within the GradientTape block:\n",
375 | " # Cast the image as a tf.float32\n",
376 | " # Use the tape to watch the float32 image\n",
377 | " # Get the model's prediction by passing in the float32 image\n",
378 | " # Compute an appropriate loss\n",
379 | " # between the expected output and model predictions.\n",
380 | "  # you may want to print the predictions to see if the probabilities add up to 1\n",
381 | " # YOUR CODE HERE\n",
382 | "\n",
383 | " with tf.GradientTape() as tape:\n",
384 | " inputs = tf.cast(exp_image, dtype=tf.float32)\n",
385 | " tape.watch(inputs)\n",
386 | " prediction = model(inputs)\n",
387 | " loss = tf.keras.losses.categorical_crossentropy(expected_output, prediction)\n",
388 | "\n",
389 | " print(prediction)\n",
390 | "\n",
391 | "\n",
392 | " # get the gradients of the loss with respect to the model's input image\n",
393 | " # YOUR CODE HERE\n",
394 | "\n",
395 | " gradients = tape.gradient(loss, inputs)\n",
396 | "\n",
397 | " # generate the grayscale tensor\n",
398 | " # YOUR CODE HERE\n",
399 | " grayscale_tensor = tf.reduce_sum(tf.abs(gradients), axis = -1)\n",
400 | "\n",
401 | " # normalize the pixel values to be in the range [0, 255].\n",
402 | " # the max value in the grayscale tensor will be pushed to 255.\n",
403 | " # the min value will be pushed to 0.\n",
404 | " # Use the formula: 255 * (x - min) / (max - min)\n",
405 | " # Use tf.reduce_max, tf.reduce_min\n",
406 | " # Cast the tensor as a tf.uint8\n",
407 | " # YOUR CODE HERE\n",
408 | "\n",
409 | " normalized_tensor = tf.cast(\n",
410 | " 255 * (grayscale_tensor - tf.reduce_min(grayscale_tensor)) / (tf.reduce_max(grayscale_tensor) - tf.reduce_min(grayscale_tensor)), \n",
411 | " tf.uint8)\n",
412 | "\n",
413 | " \n",
414 | " # Remove dimensions that are size 1\n",
415 | " # YOUR CODE HERE\n",
416 | " normalized_tensor = tf.squeeze(normalized_tensor)\n",
417 | "\n",
418 | " \n",
419 | " # plot the normalized tensor\n",
420 | " # Set the figure size to 8 by 8\n",
421 | " # do not display the axis\n",
422 | " # use the 'gray' colormap\n",
423 | " # This code is provided for you.\n",
424 | " plt.figure(figsize=(8, 8))\n",
425 | " plt.axis('off')\n",
426 | " plt.imshow(normalized_tensor, cmap='gray')\n",
427 | " plt.show()\n",
428 | "\n",
429 | " # optional: superimpose the saliency map with the original image, then display it.\n",
430 | " # we encourage you to do this to visualize your results better\n",
431 | " # YOUR CODE HERE\n",
432 | " gradient_color = cv2.applyColorMap(normalized_tensor.numpy(), cv2.COLORMAP_HOT)\n",
433 | " gradient_color = gradient_color / 255.0\n",
434 | " super_imposed = cv2.addWeighted(img, 0.5, gradient_color, 0.5, 0.0)\n",
435 | "\n",
436 | " plt.figure(figsize=(8, 8))\n",
437 | " plt.imshow(super_imposed)\n",
438 | " plt.axis('off')\n",
439 | " plt.show()\n",
440 | "\n",
441 | "\n",
442 | " # save the normalized tensor image to a file. this is already provided for you.\n",
443 | " salient_image_name = prefix + image\n",
444 | " normalized_tensor = tf.expand_dims(normalized_tensor, -1)\n",
445 | " normalized_tensor = tf.io.encode_jpeg(normalized_tensor, quality=100, format='grayscale')\n",
446 | " writer = tf.io.write_file(salient_image_name, normalized_tensor)"
447 | ],
448 | "execution_count": null,
449 | "outputs": []
450 | },
451 | {
452 | "cell_type": "markdown",
453 | "metadata": {
454 | "id": "li1idRy-parp"
455 | },
456 | "source": [
457 | "### Generate saliency maps with untrained model\n",
458 | "\n",
459 | "As a sanity check, you will load initialized (i.e. untrained) weights and use the function you just implemented. \n",
460 | "- This will check if you built the model correctly and are able to create a saliency map. \n",
461 | "\n",
462 | "If an error pops up when loading the weights or the function does not run, please check your implementation for bugs.\n",
463 | "- You can check the ungraded labs of this week.\n",
464 | "\n",
465 | "Please apply your `do_salience()` function on the following image files:\n",
466 | "\n",
467 | "* `cat1.jpg`\n",
468 | "* `cat2.jpg`\n",
469 | "* `catanddog.jpg`\n",
470 | "* `dog1.jpg`\n",
471 | "* `dog2.jpg`\n",
472 | "\n",
473 | "Cats will have the label `0` while dogs will have the label `1`. \n",
474 | "- For `catanddog.jpg`, please use `0`. \n",
475 | "- For the prefix of the salience images that will be generated, please use the prefix `epoch0_salient`."
476 | ]
477 | },
478 | {
479 | "cell_type": "code",
480 | "metadata": {
481 | "id": "k39fF4n8fgG0"
482 | },
483 | "source": [
484 | "# load initial weights\n",
485 | "model.load_weights('0_epochs.h5')\n",
486 | "\n",
487 | "# generate the saliency maps for the 5 test images\n",
488 | "img_list = ['cat1.jpg', 'cat2.jpg', 'catanddog.jpg', 'dog1.jpg', 'dog2.jpg']\n",
489 | "img_label = [0, 0, 0, 1, 1]\n",
490 | "for i, l in zip(img_list, img_label):\n",
491 | " do_salience(i, model, l, 'epoch0_salient')"
492 | ],
493 | "execution_count": null,
494 | "outputs": []
495 | },
496 | {
497 | "cell_type": "markdown",
498 | "metadata": {
499 | "id": "8kcdyut5E2Tk"
500 | },
501 | "source": [
502 | "With untrained weights, you will see something like this in the output. \n",
503 | "- You will see strong pixels outside the cat that the model uses when classifying the image. \n",
504 | "- After training, these will slowly start to localize to features inside the pet.\n",
505 | "\n",
506 | "*(image: example saliency map produced with untrained weights)*\n"
507 | ]
508 | },
509 | {
510 | "cell_type": "markdown",
511 | "metadata": {
512 | "id": "-ZhZgd0x_JvN"
513 | },
514 | "source": [
515 | "### Configure the model for training\n",
516 | "\n",
517 | "Use `model.compile()` to define the loss, metrics and optimizer. \n",
518 | "\n",
519 | "* Choose a loss function for the model to use when training. \n",
520 | " - For `model.compile()` the ground truth labels from the training set are passed to the model as **integers** (i.e. 0 or 1) as opposed to one-hot encoded vectors.\n",
521 | " - The model predictions are class probabilities. \n",
522 | " - You can browse the [tf.keras.losses](https://www.tensorflow.org/api_docs/python/tf/keras/losses) and determine which one is best used for this case. \n",
523 | " - Remember that you can pass the function as a string (e.g. `loss = 'loss_function_a'`). \n",
524 | "\n",
525 | "* For metrics, you can measure `accuracy`. \n",
526 | "* For the optimizer, please use [RMSProp](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/RMSprop).\n",
527 | " - Please use the default learning rate of `0.001`."
528 | ]
529 | },
530 | {
531 | "cell_type": "code",
532 | "metadata": {
533 | "id": "DkyWZ5KdBo-z"
534 | },
535 | "source": [
536 | "# YOUR CODE HERE\r\n",
537 | "model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss='sparse_categorical_crossentropy', metrics = ['accuracy'])"
538 | ],
539 | "execution_count": null,
540 | "outputs": []
541 | },
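542 | {
543 | "cell_type": "markdown",
544 | "metadata": {},
545 | "source": [
546 | "To see why the *sparse* variant fits here, note that the training labels are plain integers while `do_salience()` worked with one-hot vectors. A small illustrative check (the values below are made up):\n",
547 | "\n",
548 | "```Python\n",
549 | "import tensorflow as tf\n",
550 | "\n",
551 | "y_true_int = tf.constant([0, 1])                 # integer labels, as in the training set\n",
552 | "y_true_onehot = tf.one_hot(y_true_int, depth=2)  # one-hot form, as in do_salience()\n",
553 | "y_pred = tf.constant([[0.9, 0.1], [0.2, 0.8]])   # softmax class probabilities\n",
554 | "\n",
555 | "# the two losses agree when the labels describe the same classes\n",
556 | "print(tf.keras.losses.sparse_categorical_crossentropy(y_true_int, y_pred))\n",
557 | "print(tf.keras.losses.categorical_crossentropy(y_true_onehot, y_pred))\n",
558 | "```"
559 | ]
560 | },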
542 | {
543 | "cell_type": "markdown",
544 | "metadata": {
545 | "id": "otIoJJw7_ZFN"
546 | },
547 | "source": [
548 | "### Train your model\n",
549 | "\n",
550 | "Please pass in the training batches and train your model for just **3** epochs. \n",
551 | "- **Note:** Please do not exceed 3 epochs because the grader will expect 3 epochs when grading your output.\n",
552 | " - After submitting your zipped folder for grading, feel free to continue training to improve your model.\n",
553 | "\n",
554 | "We have loaded pre-trained weights for 15 epochs so you can get a better output when you visualize the saliency maps."
555 | ]
556 | },
557 | {
558 | "cell_type": "code",
559 | "metadata": {
560 | "id": "5YSNp7k7BqfL"
561 | },
562 | "source": [
563 | "# load pre-trained weights\n",
564 | "model.load_weights('15_epochs.h5')\n",
565 | "\n",
566 | "# train the model for just 3 epochs\n",
567 | "history = model.fit(train_batches, epochs=3)"
568 | ],
569 | "execution_count": null,
570 | "outputs": []
571 | },
572 | {
573 | "cell_type": "markdown",
574 | "metadata": {
575 | "id": "2tTqtLN3tQJx"
576 | },
577 | "source": [
578 | "### Generate saliency maps at 18 epochs\n",
579 | "\n",
580 | "You will now use your `do_salience()` function again on the same test images. Please use the same parameters as before but this time, use the prefix `salient`."
581 | ]
582 | },
583 | {
584 | "cell_type": "code",
585 | "metadata": {
586 | "id": "bXFtabyVhIKN"
587 | },
588 | "source": [
589 | "# YOUR CODE HERE\r\n",
590 | "for i, l in zip(img_list, img_label):\r\n",
591 | " do_salience(i, model, l, 'salient')"
592 | ],
593 | "execution_count": null,
594 | "outputs": []
595 | },
596 | {
597 | "cell_type": "markdown",
598 | "metadata": {
599 | "id": "wGTFcfEgM6aV"
600 | },
601 | "source": [
602 | "You should see far fewer strong pixels now than in the maps you generated earlier. Moreover, most of them are now found on features within the pet."
603 | ]
604 | },
605 | {
606 | "cell_type": "markdown",
607 | "metadata": {
608 | "id": "rPtx-u4u_jL5"
609 | },
610 | "source": [
611 | "### Zip the images for grading\n",
612 | "\n",
613 | "Please run the cell below to zip the normalized tensor images you generated at 18 epochs. If you get an error, please check that you have files named:\n",
614 | "\n",
615 | "* salientcat1.jpg\n",
616 | "* salientcat2.jpg\n",
617 | "* salientcatanddog.jpg\n",
618 | "* salientdog1.jpg\n",
619 | "* salientdog2.jpg\n",
620 | "\n",
621 | "Afterwards, please download the **images.zip** from the Files bar on the left."
622 | ]
623 | },
624 | {
625 | "cell_type": "code",
626 | "metadata": {
627 | "id": "b-MhcA8Uh8H_"
628 | },
629 | "source": [
630 | "from zipfile import ZipFile\n",
631 | "\n",
632 | "#!rm images.zip\n",
633 | "\n",
634 | "filenames = ['cat1.jpg', 'cat2.jpg', 'catanddog.jpg', 'dog1.jpg', 'dog2.jpg']\n",
635 | "\n",
636 | "# writing files to a zipfile \n",
637 | "with ZipFile('images.zip','w') as zip:\n",
638 | " for file in filenames:\n",
639 | " zip.write('salient' + file)\n",
640 | "\n",
641 | "print(\"images.zip generated!\")"
642 | ],
643 | "execution_count": null,
644 | "outputs": []
645 | },
646 | {
647 | "cell_type": "markdown",
648 | "metadata": {
649 | "id": "SMOgx-N55A6p"
650 | },
651 | "source": [
652 | "### Optional: Saliency Maps at 95 epochs\n",
653 | "\n",
654 | "We have pre-trained weights generated at 95 epochs and you can see the difference between the maps you generated at 18 epochs."
655 | ]
656 | },
657 | {
658 | "cell_type": "code",
659 | "metadata": {
660 | "id": "elUfhSmMvJZh"
661 | },
662 | "source": [
663 | "!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=14vFpBJsL_TNQeugX8vUTv8dYZxn__fQY' -O 95_epochs.h5\n",
664 | "\n",
665 | "model.load_weights('95_epochs.h5')\n",
666 | "\n",
667 | "do_salience('cat1.jpg', model, 0, \"epoch95_salient\")\n",
668 | "do_salience('cat2.jpg', model, 0, \"epoch95_salient\")\n",
669 | "do_salience('catanddog.jpg', model, 0, \"epoch95_salient\")\n",
670 | "do_salience('dog1.jpg', model, 1, \"epoch95_salient\")\n",
671 | "do_salience('dog2.jpg', model, 1, \"epoch95_salient\")"
672 | ],
673 | "execution_count": null,
674 | "outputs": []
675 | },
676 | {
677 | "cell_type": "markdown",
678 | "metadata": {
679 | "id": "HuKLdQhvAaTd"
680 | },
681 | "source": [
682 | "**Congratulations on completing this week's assignment! Please go back to the Coursera classroom and upload the zipped folder to be graded.**"
683 | ]
684 | }
685 | ]
686 | }
--------------------------------------------------------------------------------
/Course 3/Week 4/images.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/fd6bcab0d58517f07e4423064a592889c28fc462/Course 3/Week 4/images.zip
--------------------------------------------------------------------------------
/Course 4/Week 1/doggo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/fd6bcab0d58517f07e4423064a592889c28fc462/Course 4/Week 1/doggo.png
--------------------------------------------------------------------------------
/Course 4/Week 2/Week2_Assignment.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "accelerator": "GPU",
6 | "kernelspec": {
7 | "display_name": "Python 3",
8 | "language": "python",
9 | "name": "python3"
10 | },
11 | "language_info": {
12 | "codemirror_mode": {
13 | "name": "ipython",
14 | "version": 3
15 | },
16 | "file_extension": ".py",
17 | "mimetype": "text/x-python",
18 | "name": "python",
19 | "nbconvert_exporter": "python",
20 | "pygments_lexer": "ipython3",
21 | "version": "3.7.4"
22 | },
23 | "colab": {
24 | "name": "Copy of C4W2_Assignment.ipynb",
25 | "private_outputs": true,
26 | "provenance": [],
27 | "collapsed_sections": [],
28 | "toc_visible": true
29 | }
30 | },
31 | "cells": [
32 | {
33 | "cell_type": "markdown",
34 | "metadata": {
35 | "id": "L6S2HVAkSt0p"
36 | },
37 | "source": [
38 | "# Week 2 Assignment: CIFAR-10 Autoencoder\n",
39 | "\n",
40 | "For this week, you will create a convolutional autoencoder for the [CIFAR10](https://www.tensorflow.org/datasets/catalog/cifar10) dataset. You are free to choose the architecture of your autoencoder provided that the output image has the same dimensions as the input image.\n",
41 | "\n",
42 | "After training, your model should meet loss and accuracy requirements when evaluated with the test dataset. You will then download the model and upload it to the classroom for grading. \n",
43 | "\n",
44 | "Let's begin!"
45 | ]
46 | },
47 | {
48 | "cell_type": "markdown",
49 | "metadata": {
50 | "id": "6r4iPr2jyisR"
51 | },
52 | "source": [
53 | "***Important:*** *This colab notebook has read-only access so you won't be able to save your changes. If you want to save your work periodically, please click `File -> Save a Copy in Drive` to create a copy in your account, then work from there.* "
54 | ]
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "metadata": {
59 | "id": "g1mzy2J8_nc1"
60 | },
61 | "source": [
62 | "## Imports"
63 | ]
64 | },
65 | {
66 | "cell_type": "code",
67 | "metadata": {
68 | "id": "3EXwoz-KHtWO"
69 | },
70 | "source": [
71 | "try:\n",
72 | " # %tensorflow_version only exists in Colab.\n",
73 | " %tensorflow_version 2.x\n",
74 | "except Exception:\n",
75 | " pass\n",
76 | "\n",
77 | "import tensorflow as tf\n",
78 | "import tensorflow_datasets as tfds\n",
79 | "\n",
80 | "from tensorflow.keras.models import Sequential"
81 | ],
82 | "execution_count": null,
83 | "outputs": []
84 | },
85 | {
86 | "cell_type": "markdown",
87 | "metadata": {
88 | "id": "n2Gs6Lyc_pd0"
89 | },
90 | "source": [
91 | "## Load and prepare the dataset\n",
92 | "\n",
93 | "The [CIFAR 10](https://www.tensorflow.org/datasets/catalog/cifar10) dataset already has train and test splits and you can use those in this exercise. Here are the general steps:\n",
94 | "\n",
95 | "* Load the train/test split from TFDS. Set `as_supervised` to `True` so it will be convenient to use the preprocessing function we provided.\n",
96 | "* Normalize the pixel values to the range [0,1], then return `image, image` pairs for training instead of `image, label`. This is because you will check if the output image is successfully regenerated after going through your autoencoder.\n",
97 | "* Shuffle and batch the train set. Batch the test set (no need to shuffle).\n"
98 | ]
99 | },
100 | {
101 | "cell_type": "code",
102 | "metadata": {
103 | "id": "t9F7YsCNIKSA"
104 | },
105 | "source": [
106 | "# preprocessing function\n",
107 | "def map_image(image, label):\n",
108 | " image = tf.cast(image, dtype=tf.float32)\n",
109 | " image = image / 255.0\n",
110 | "\n",
111 | " return image, image # dataset label is not used. replaced with the same image input.\n",
112 | "\n",
113 | "# parameters\n",
114 | "BATCH_SIZE = 128\n",
115 | "SHUFFLE_BUFFER_SIZE = 1024\n",
116 | "\n",
117 | "\n",
118 | "### START CODE HERE (Replace instances of `None` with your code) ###\n",
119 | "\n",
120 | "# use tfds.load() to fetch the 'train' split of CIFAR-10\n",
121 | "train_dataset = tfds.load('cifar10', split='train', as_supervised=True)\n",
122 | "\n",
123 | "# preprocess the dataset with the `map_image()` function above\n",
124 | "train_dataset = train_dataset.map(map_image)\n",
125 | "\n",
126 | "# shuffle and batch the dataset\n",
127 | "train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)\n",
128 | "\n",
129 | "\n",
130 | "# use tfds.load() to fetch the 'test' split of CIFAR-10\n",
131 | "test_dataset = tfds.load('cifar10', split='test', as_supervised=True)\n",
132 | "\n",
133 | "# preprocess the dataset with the `map_image()` function above\n",
134 | "test_dataset = test_dataset.map(map_image)\n",
135 | "\n",
136 | "# batch the dataset\n",
137 | "test_dataset = test_dataset.batch(BATCH_SIZE)\n",
138 | "\n",
139 | "### END CODE HERE ###"
140 | ],
141 | "execution_count": null,
142 | "outputs": []
143 | },
144 | {
145 | "cell_type": "markdown",
146 | "metadata": {
147 | "id": "rPyOgGJs_t98"
148 | },
149 | "source": [
150 | "## Build the Model\n",
151 | "\n",
152 | "Create the autoencoder model. As shown in the lectures, you will want to downsample the image in the encoder layers then upsample it in the decoder path. Note that the output layer should be the same dimensions as the original image. Your input images will have the shape `(32, 32, 3)`. If you deviate from this, your model may not be recognized by the grader and may fail. \n",
153 | "\n",
154 | "We included a few hints for using the Sequential API below, but feel free to remove them and use the Functional API just like in the ungraded labs if you're more comfortable with it. Another reason to use the latter is if you want to visualize the encoder output: as shown in the ungraded labs, it is easier to indicate multiple outputs with the Functional API. That is not required for this assignment though, so you can just stack layers sequentially if you want a simpler solution (a Functional API sketch is included after the solution cell below)."
155 | ]
156 | },
157 | {
158 | "cell_type": "code",
159 | "metadata": {
160 | "id": "Wr-Bok3lRgA3"
161 | },
162 | "source": [
163 | "# suggested layers to use. feel free to add or remove as you see fit.\n",
164 | "from tensorflow.keras.layers import Conv2D, UpSampling2D\n",
165 | "\n",
166 | "# use the Sequential API (you can remove if you want to use the Functional API)\n",
167 | "model = Sequential()\n",
168 | "\n",
169 | "### START CODE HERE ###\n",
170 | "# use `model.add()` to add layers (if using the Sequential API)\n",
171 | "model.add(Conv2D(16, kernel_size=3, strides=1, padding='same', activation='relu', input_shape=(32, 32, 3)))\n",
172 | "model.add(tf.keras.layers.BatchNormalization()) \n",
173 | "\n",
174 | "model.add(Conv2D(32, kernel_size=3, strides=2, padding='same', activation='relu')) \n",
175 | "\n",
176 | "model.add(Conv2D(32, kernel_size=3, strides=1, padding='same', activation='relu')) \n",
177 | "model.add(tf.keras.layers.BatchNormalization()) \n",
178 | "\n",
179 | "model.add(UpSampling2D())\n",
180 | "model.add(Conv2D(16, kernel_size=3, strides=1, padding='same', activation='relu')) \n",
181 | "model.add(tf.keras.layers.BatchNormalization()) \n",
182 | "\n",
183 | "model.add(Conv2D(3, kernel_size=1, strides=1, padding='same', activation='sigmoid')) \n",
184 | "\n",
185 | "### END CODE HERE ###\n",
186 | "\n",
187 | "model.summary()"
188 | ],
189 | "execution_count": null,
190 | "outputs": []
191 | },
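192 | {
193 | "cell_type": "markdown",
194 | "metadata": {},
195 | "source": [
196 | "If you would rather use the Functional API (for instance, to expose the encoder output for visualization), a minimal sketch of an equivalent architecture could look like the following. The layer sizes simply mirror the Sequential solution above; for grading, keep a single reconstruction output.\n",
197 | "\n",
198 | "```Python\n",
199 | "from tensorflow.keras.layers import Input, Conv2D, UpSampling2D, BatchNormalization\n",
200 | "from tensorflow.keras.models import Model\n",
201 | "\n",
202 | "inputs = Input(shape=(32, 32, 3))\n",
203 | "x = Conv2D(16, 3, padding='same', activation='relu')(inputs)\n",
204 | "x = BatchNormalization()(x)\n",
205 | "x = Conv2D(32, 3, strides=2, padding='same', activation='relu')(x)\n",
206 | "encoder_output = Conv2D(32, 3, padding='same', activation='relu')(x)\n",
207 | "x = BatchNormalization()(encoder_output)\n",
208 | "x = UpSampling2D()(x)\n",
209 | "x = Conv2D(16, 3, padding='same', activation='relu')(x)\n",
210 | "x = BatchNormalization()(x)\n",
211 | "outputs = Conv2D(3, 1, padding='same', activation='sigmoid')(x)\n",
212 | "\n",
213 | "# inspect the bottleneck with Model(inputs, encoder_output) if you are curious\n",
214 | "functional_model = Model(inputs=inputs, outputs=outputs)\n",
215 | "```"
216 | ]
217 | },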
192 | {
193 | "cell_type": "markdown",
194 | "metadata": {
195 | "id": "jRWTAijKEVUC"
196 | },
197 | "source": [
198 | "## Configure training parameters\n",
199 | "\n",
200 | "We have already provided the optimizer, metrics, and loss in the code below."
201 | ]
202 | },
203 | {
204 | "cell_type": "code",
205 | "metadata": {
206 | "id": "iHIeD9eDETSk"
207 | },
208 | "source": [
209 | "# Please do not change the model.compile() parameters\n",
210 | "model.compile(optimizer='adam', metrics=['accuracy'], loss='mean_squared_error')"
211 | ],
212 | "execution_count": null,
213 | "outputs": []
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {
218 | "id": "tLQPhm1W_8dC"
219 | },
220 | "source": [
221 | "## Training\n",
222 | "\n",
223 | "You can now use [model.fit()](https://keras.io/api/models/model_training_apis/#fit-method) to train your model. You will pass in the `train_dataset` and you are free to configure the other parameters. As with any training, you should see the loss generally going down and the accuracy going up with each epoch. If not, please revisit the previous sections to find possible bugs."
224 | ]
225 | },
226 | {
227 | "cell_type": "code",
228 | "metadata": {
229 | "id": "AMBimOnsRvg0"
230 | },
231 | "source": [
232 | "# parameters\n",
233 | "train_steps = len(train_dataset)  # the dataset is already batched, so len() gives batches per epoch\n",
234 | "val_steps = len(test_dataset)\n",
235 | "\n",
236 | "### START CODE HERE ###\n",
237 | "model.fit(train_dataset, validation_data=test_dataset, epochs=5)\n",
238 | "### END CODE HERE ###"
239 | ],
240 | "execution_count": null,
241 | "outputs": []
242 | },
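243 | {
244 | "cell_type": "markdown",
245 | "metadata": {},
246 | "source": [
247 | "If you want the `train_steps` and `val_steps` computed above to actually take effect, pass them explicitly. This optional variant of the call above is equivalent when the steps cover the whole dataset:\n",
248 | "\n",
249 | "```Python\n",
250 | "model.fit(train_dataset,\n",
251 | "          steps_per_epoch=train_steps,\n",
252 | "          validation_data=test_dataset,\n",
253 | "          validation_steps=val_steps,\n",
254 | "          epochs=5)\n",
255 | "```"
256 | ]
257 | },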
243 | {
244 | "cell_type": "markdown",
245 | "metadata": {
246 | "id": "PT2l1c-SAaF4"
247 | },
248 | "source": [
249 | "## Model evaluation\n",
250 | "\n",
251 | "You can use this code to test your model locally before uploading to the grader. To pass, your model needs to satisfy these two requirements:\n",
252 | "\n",
253 | "* loss must be less than 0.01 \n",
254 | "* accuracy must be greater than 0.6"
255 | ]
256 | },
257 | {
258 | "cell_type": "code",
259 | "metadata": {
260 | "id": "vFncgqahSQhA"
261 | },
262 | "source": [
263 | "result = model.evaluate(test_dataset, steps=10)"
264 | ],
265 | "execution_count": null,
266 | "outputs": []
267 | },
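268 | {
269 | "cell_type": "markdown",
270 | "metadata": {},
271 | "source": [
272 | "A quick local sanity check against the thresholds above (this assumes `result` is the `[loss, accuracy]` list returned by `model.evaluate()` in the previous cell):\n",
273 | "\n",
274 | "```Python\n",
275 | "loss, accuracy = result\n",
276 | "\n",
277 | "print(f'loss: {loss:.4f} (must be < 0.01)')\n",
278 | "print(f'accuracy: {accuracy:.4f} (must be > 0.6)')\n",
279 | "\n",
280 | "if loss < 0.01 and accuracy > 0.6:\n",
281 | "    print('Looks good! You can save the model and submit it for grading.')\n",
282 | "else:\n",
283 | "    print('Not there yet. Consider training for more epochs.')\n",
284 | "```"
285 | ]
286 | },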
268 | {
269 | "cell_type": "markdown",
270 | "metadata": {
271 | "id": "di6VOHGwIsVM"
272 | },
273 | "source": [
274 | "If you did some visualization like in the ungraded labs, then you might see something like the gallery below. This part is not required."
275 | ]
276 | },
277 | {
278 | "cell_type": "markdown",
279 | "metadata": {
280 | "id": "wmpI4skkIA5L"
281 | },
282 | "source": [
283 | "*(image: gallery of input images and their autoencoder reconstructions)*"
284 | ]
285 | },
286 | {
287 | "cell_type": "markdown",
288 | "metadata": {
289 | "id": "uaRSkQPNAPT0"
290 | },
291 | "source": [
292 | "## Save your model\n",
293 | "\n",
294 | "Once you are satisfied with the results, you can now save your model. Please download it from the Files window on the left and go back to the Submission portal in Coursera for grading."
295 | ]
296 | },
297 | {
298 | "cell_type": "code",
299 | "metadata": {
300 | "id": "pLFpLP-c7rDR"
301 | },
302 | "source": [
303 | "model.save('mymodel.h5')"
304 | ],
305 | "execution_count": null,
306 | "outputs": []
307 | },
308 | {
309 | "cell_type": "markdown",
310 | "metadata": {
311 | "id": "QArMiXJTDxDe"
312 | },
313 | "source": [
314 | "**Congratulations on completing this week's assignment!**"
315 | ]
316 | }
317 | ]
318 | }
--------------------------------------------------------------------------------
/Course 4/Week 2/mymodel.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/fd6bcab0d58517f07e4423064a592889c28fc462/Course 4/Week 2/mymodel.h5
--------------------------------------------------------------------------------
/Course 4/Week 3/anime.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/fd6bcab0d58517f07e4423064a592889c28fc462/Course 4/Week 3/anime.h5
--------------------------------------------------------------------------------
/Course 4/Week 4/mysigns.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/fd6bcab0d58517f07e4423064a592889c28fc462/Course 4/Week 4/mysigns.zip
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | 
3 |
4 | # Tensorflow : Advanced Techniques Solutions
5 |
6 | This repository contains my solutions for the Coursera course [TensorFlow: Advanced Techniques Specialization.](https://www.coursera.org/specializations/tensorflow-advanced-techniques)
7 |
8 | In case any of the notebooks are not visible on GitHub, open them in Colab:
9 |
10 | [](https://colab.research.google.com/github/shreyasvedpathak/Tensorflow-Advanced-Techniques-Solutions/blob/master)
11 |
12 | ## Course 1: Custom Models, Layers, and Loss Functions with TensorFlow
13 |
14 | * Week 1: [Multiple Outputs Model using Keras Functional API.](Course-1/../Course%201/Week1_Assignment.ipynb)
15 | * Week 2: [Creating Custom Loss Functions.](Course-1/../Course%201/Week2_Assignment.ipynb)
16 | * Week 3: [Implement a Quadratic Layer.](Course-1/../Course%201/Week3_Assignment.ipynb)
17 | * Week 4: [Create a VGG Network.](Course-1/../Course%201/Week4_Assignment.ipynb)
18 | * Week 5: [Custom Callbacks.](Course-1/../Course%201/../Course%201/Week5_Bonus%20Notebook.ipynb)
19 |
20 | ## Course 2: Custom and Distributed Training with TensorFlow
21 |
22 | * Week 1: [Basic Tensor Operations.](Course-2/../Course%202/Week1_Assignment.ipynb)
23 | * Week 2: [Breast Cancer Prediction.](Course-2/../Course%202/Week2_Assignment.ipynb)
24 | * Week 3: [Horse or Human?](Course-2/../Course%202/Week3_Assignment.ipynb)
25 | * Week 4: [Distributed Training.](Course-2/../Course%202/Week4_Assignment.ipynb)
26 |
27 | ## Course 3: Advanced Computer Vision with TensorFlow
28 |
29 | * Week 1: [Bird Boxes.](Course-3/../Course%203/Week%201/)
30 | [](Course-3/../Course%203/Week%201/Week1_Assignment.ipynb)
31 | [](Course-3/../Course%203/Week%201/birds.h5)
32 | * Week 2: [Zombie Detector.](Course-3/../Course%203/Week%202/)
33 | [](Course-3/../Course%203/Week%202/Week2_Assignment.ipynb)
34 | [](Course-3/../Course%203/Week%202/results.data)
35 | * Week 3: [Image Segmentation of Handwritten Digits.](Course-3/../Course%203/Week%203/)
36 | [](Course-3/../Course%203/Week%203/Week3_Assignment.ipynb)
37 | [](Course-3/../Course%203/Week%203/model.h5)
38 | * Week 4: [Dogs vs Cats Saliency Maps.](Course-3/../Course%203/Week%204/)
39 | [](Course-3/../Course%203/Week%204/Week4_Assignment.ipynb)
40 | [](Course-3/../Course%203/Week%204/images.zip)
41 |
42 | ## Course 4: Generative Deep Learning with TensorFlow
43 |
44 | * Week 1: [Style Transfer Dog.](Course-4/../Course%204/Week%201/)
45 | [](Course-4/../Course%204/Week%201/Week1_Assignment.ipynb)
46 | [](Course-4/../Course%204/Week%201/doggo.png)
47 | * Week 2: [AutoEncoder Model Loss and Accuracy.](Course-4/../Course%204/Week%202/)
48 | [](Course-4/../Course%204/Week%202/Week2_Assignment.ipynb)
49 | [](Course-4/../Course%204/Week%202/mymodel.h5)
50 | * Week 3: [Anime Faces.](Course-4/../Course%204/Week%203/)
51 | [](Course-4/../Course%204/Week%203/Week3_Assignment.ipynb)
52 | [](Course-4/../Course%204/Week%203/anime.h5)
53 | * Week 4: [Generated Hands.](Course-4/../Course%204/Week%204/)
54 | [](Course-4/../Course%204/Week%204/Week4_Assignment.ipynb)
55 | [](Course-4/../Course%204/Week%204/mysigns.zip)
--------------------------------------------------------------------------------