├── LICENSE
├── README.md
├── Week1 Quiz Answers.txt
├── Week1-Exercise_1_TF_Lite_Question.ipynb
├── Week2 Quiz Answers.txt
├── Week2_Assignment_Solution.ipynb
├── Week3 Quiz Answers.txt
├── Week3_Assignment_Solution.ipynb
├── Week4 Quiz Answers.txt
└── Week4_Assignment_Solution.py
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Device-based-Models-with-TensorFlow-Lite
2 | This repository contains quiz answers and assignment solutions for the course "Device-based Models with TensorFlow Lite", part of Coursera's "TensorFlow: Data and Deployment" Specialization.
3 |
--------------------------------------------------------------------------------
/Week1 Quiz Answers.txt:
--------------------------------------------------------------------------------
1 | Week 1 Quiz Answers
2 |
3 | 1.
4 | Question 1
5 | What platforms are supported by TensorFlow Lite? (Check all that apply)
6 | iOS - Correct
7 | Some Microcontrollers - Correct
8 | Raspberry Pi - Correct
9 | Android - Correct
10 | Windows Phone - Incorrect
11 |
12 | 2.
13 | Question 2
14 | What is Quantization?
15 | A technique that increases precision to ensure your model works better on mobile
16 | A technique that reduces precision and model size to work better on mobile - Correct
17 | A technique to optimize the size of a model for the memory map of a mobile device
18 | A technique to ensure compatibility across all supported platforms
19 |
20 |
21 | 3.
22 | Question 3
23 | The TFLite file format is an example of what?
24 | A checkpoint
25 | A concrete function
26 | A flatbuffer - Correct
27 | A savedmodel
28 |
29 | 4.
30 | Question 4
31 | Which types of input does the TF Lite Converter API accept? (Check all that apply)
32 | A set of concrete functions - Correct
33 | A Keras HDF5 file - Correct
34 | A list of checkpoints - Incorrect
35 | A SavedModel - Correct
36 | A model object - Incorrect
37 |
38 | 5.
39 | Question 5
40 | True or False: The SavedModel format supports model Versioning
41 | True - Correct
42 | False
43 |
44 | 6.
45 | Question 6
46 | If I want to save an existing Keras model, what’s the API signature:
47 | tf.saved_model.path=path
48 | tf.saved_model.save(model, path) - Correct
49 | Tf.model.save(path)
50 | tf.save(model, path)
51 |
52 | 7.
53 | Question 7
54 | If I want to use the TensorFlow Lite Converter to convert a saved model to TF Lite, what’s the API signature?
55 | converter = tf.lite.TFLiteConverter.from_saved_model(path)
56 | newModel = converter.convert() - Correct
57 |
58 | newModel = tf.lite.TFLiteConverter.fromModel(myModel).convert()
59 |
60 | newModel = tf.lite.TFLiteConverter.convert(model_path)
61 |
62 | converter = tf.lite.TFLiteConverter.convert()
63 | newModel = converter.Convert(model_path)
64 |
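A minimal reference sketch tying Questions 6 and 7 together (the model, export_dir, and
layer sizes below are placeholders for illustration, not part of the quiz):

    import tensorflow as tf

    # Any trained Keras model would do; this is a stand-in.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

    # Question 6: save an existing Keras model in the SavedModel format.
    export_dir = 'saved_model/1'
    tf.saved_model.save(model, export_dir)

    # Question 7: convert the SavedModel to TensorFlow Lite.
    converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
    tflite_model = converter.convert()
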
65 | 8.
66 | Question 8
67 | If I have a Keras model and want to convert it, what’s the method signature on TFLiteConverter?
68 | convert(model)
69 | from_keras_model(model) - Correct
70 | convert_keras_model(model)
71 | from_keras(model)
72 |
73 | 9.
74 | Question 9
75 | If I want to convert using a command line tool, what’s the name of the tool?
76 | Tflite_to_model
77 | tfliteconvert
78 | tflite_convert - Correct
79 | tf_convert_lite
80 |
81 | 10.
82 | Question 10
83 | If I want to do post-training quantization, what are the optimization options available? (Check all that apply)
84 | OPTIMIZE_FOR_SIZE - Correct
85 | OPTIMIZE_FOR_IOS
86 | OPTIMIZE_FOR_ANDROID
87 | OPTIMIZE_FOR_LATENCY - Correct
88 | OPTIMIZE_FOR_PERFORMANCE
89 |
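A minimal sketch of how the Question 10 optimization options are applied for post-training
quantization, continuing the converter sketch after Question 7 (OPTIMIZE_FOR_LATENCY could
be substituted for OPTIMIZE_FOR_SIZE):

    # Set the optimization flag before calling convert() to get a
    # post-training-quantized TFLite model.
    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
    quantized_tflite_model = converter.convert()
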
--------------------------------------------------------------------------------
/Week1-Exercise_1_TF_Lite_Question.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "##### Copyright 2018 The TensorFlow Authors."
8 | ]
9 | },
10 | {
11 | "cell_type": "code",
12 | "execution_count": 1,
13 | "metadata": {},
14 | "outputs": [],
15 | "source": [
16 | "# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated\n",
17 | "# ATTENTION: Please do not add or remove any cells in the exercise. The grader will check specific cells based on the cell position.\n",
18 | "# ATTENTION: Please use the provided epoch values when training.\n",
19 | "\n",
20 | "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
21 | "# you may not use this file except in compliance with the License.\n",
22 | "# You may obtain a copy of the License at\n",
23 | "#\n",
24 | "# https://www.apache.org/licenses/LICENSE-2.0\n",
25 | "#\n",
26 | "# Unless required by applicable law or agreed to in writing, software\n",
27 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
28 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
29 | "# See the License for the specific language governing permissions and\n",
30 | "# limitations under the License."
31 | ]
32 | },
33 | {
34 | "cell_type": "markdown",
35 | "metadata": {
36 | "colab_type": "text",
37 | "id": "Ka96-ajYzxVU"
38 | },
39 | "source": [
40 | "# Train Your Own Model and Convert It to TFLite\n",
41 | "\n",
42 | "This notebook uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:\n",
43 | "\n",
44 | "
\n",
45 | " \n",
46 | " \n",
48 | " \n",
49 | " \n",
50 | " Figure 1. Fashion-MNIST samples (by Zalando, MIT License). \n",
51 | " \n",
52 | "
\n",
53 | "\n",
54 | "Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the \"Hello, World\" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing we'll use here.\n",
55 | "\n",
56 | "This uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.\n",
57 | "\n",
58 | "We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:"
59 | ]
60 | },
61 | {
62 | "cell_type": "markdown",
63 | "metadata": {
64 | "colab_type": "text",
65 | "id": "rjOAfhgd__Sp"
66 | },
67 | "source": [
68 | "# Setup"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 1,
74 | "metadata": {
75 | "colab": {
76 | "base_uri": "https://localhost:8080/",
77 | "height": 34
78 | },
79 | "colab_type": "code",
80 | "id": "pfyZKowNAQ4j",
81 | "outputId": "8a94ac17-d4e7-474f-e984-a5ed389f5352"
82 | },
83 | "outputs": [
84 | {
85 | "name": "stdout",
86 | "output_type": "stream",
87 | "text": [
88 | "• Using TensorFlow Version: 2.0.0\n",
89 | "• GPU Device Found.\n"
90 | ]
91 | }
92 | ],
93 | "source": [
94 | "# TensorFlow\n",
95 | "import tensorflow as tf\n",
96 | "\n",
97 | "# TensorFlow Datsets\n",
98 | "import tensorflow_datasets as tfds\n",
99 | "tfds.disable_progress_bar()\n",
100 | "\n",
101 | "# Helper Libraries\n",
102 | "import numpy as np\n",
103 | "import matplotlib.pyplot as plt\n",
104 | "import pathlib\n",
105 | "\n",
106 | "from os import getcwd\n",
107 | "\n",
108 | "print('\\u2022 Using TensorFlow Version:', tf.__version__)\n",
109 | "print('\\u2022 GPU Device Found.' if tf.test.is_gpu_available() else '\\u2022 GPU Device Not Found. Running on CPU')"
110 | ]
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "metadata": {
115 | "colab_type": "text",
116 | "id": "tadPBTEiAprt"
117 | },
118 | "source": [
119 | "# Download Fashion MNIST Dataset\n",
120 | "\n",
121 | "We will use TensorFlow Datasets to load the Fashion MNIST dataset. "
122 | ]
123 | },
124 | {
125 | "cell_type": "code",
126 | "execution_count": 2,
127 | "metadata": {
128 | "colab": {
129 | "base_uri": "https://localhost:8080/",
130 | "height": 156
131 | },
132 | "colab_type": "code",
133 | "id": "XcNwi6nFKneZ",
134 | "outputId": "8e0d8173-6dbd-4ef5-a70b-efc8e9d33802"
135 | },
136 | "outputs": [],
137 | "source": [
138 | "splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))\n",
139 | "\n",
140 | "filePath = f\"{getcwd()}/../tmp2/\"\n",
141 | "splits, info = tfds.load('fashion_mnist', with_info=True, as_supervised=True, split=splits, data_dir=filePath)\n",
142 | "\n",
143 | "(train_examples, validation_examples, test_examples) = splits\n",
144 | "\n",
145 | "num_examples = info.splits['train'].num_examples\n",
146 | "num_classes = info.features['label'].num_classes"
147 | ]
148 | },
149 | {
150 | "cell_type": "markdown",
151 | "metadata": {},
152 | "source": [
153 | "The class names are not included with the dataset, so we will specify them here."
154 | ]
155 | },
156 | {
157 | "cell_type": "code",
158 | "execution_count": 3,
159 | "metadata": {
160 | "colab": {},
161 | "colab_type": "code",
162 | "id": "-eAv71FRm4JE"
163 | },
164 | "outputs": [],
165 | "source": [
166 | "class_names = ['T-shirt_top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n",
167 | " 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']"
168 | ]
169 | },
170 | {
171 | "cell_type": "code",
172 | "execution_count": 4,
173 | "metadata": {
174 | "colab": {},
175 | "colab_type": "code",
176 | "id": "hXe6jNokqX3_"
177 | },
178 | "outputs": [],
179 | "source": [
180 | "# Create a labels.txt file with the class names\n",
181 | "with open('labels.txt', 'w') as f:\n",
182 | " f.write('\\n'.join(class_names))"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {
189 | "colab": {},
190 | "colab_type": "code",
191 | "id": "iubWCThbdN8K"
192 | },
193 | "outputs": [],
194 | "source": [
195 | "# The images in the dataset are 28 by 28 pixels.\n",
196 | "IMG_SIZE = 28"
197 | ]
198 | },
199 | {
200 | "cell_type": "markdown",
201 | "metadata": {
202 | "colab_type": "text",
203 | "id": "ZAkuq0V0Aw2X"
204 | },
205 | "source": [
206 | "# Preprocessing Data"
207 | ]
208 | },
209 | {
210 | "cell_type": "markdown",
211 | "metadata": {
212 | "colab_type": "text",
213 | "id": "_5SIivkunKCC"
214 | },
215 | "source": [
216 | "## Preprocess"
217 | ]
218 | },
219 | {
220 | "cell_type": "code",
221 | "execution_count": 6,
222 | "metadata": {},
223 | "outputs": [],
224 | "source": [
225 | "# EXERCISE: Write a function to normalize the images.\n",
226 | "\n",
227 | "def format_example(image, label):\n",
228 | " # Cast image to float32\n",
229 | " image = tf.cast(x=image, dtype = tf.float32)\n",
230 | " \n",
231 | " # Normalize the image in the range [0, 1]\n",
232 | " image = image * 1.0/255.0\n",
233 | " \n",
234 | " return image, label"
235 | ]
236 | },
237 | {
238 | "cell_type": "code",
239 | "execution_count": 7,
240 | "metadata": {
241 | "colab": {},
242 | "colab_type": "code",
243 | "id": "HAlBlXOUMwqe"
244 | },
245 | "outputs": [],
246 | "source": [
247 | "# Specify the batch size\n",
248 | "BATCH_SIZE = 256"
249 | ]
250 | },
251 | {
252 | "cell_type": "markdown",
253 | "metadata": {
254 | "colab_type": "text",
255 | "id": "JM4HfIJtnNEk"
256 | },
257 | "source": [
258 | "## Create Datasets From Images and Labels"
259 | ]
260 | },
261 | {
262 | "cell_type": "code",
263 | "execution_count": 8,
264 | "metadata": {},
265 | "outputs": [],
266 | "source": [
267 | "# Create Datasets\n",
268 | "train_batches = train_examples.cache().shuffle(num_examples//4).batch(BATCH_SIZE).map(format_example).prefetch(1)\n",
269 | "validation_batches = validation_examples.cache().batch(BATCH_SIZE).map(format_example)\n",
270 | "test_batches = test_examples.map(format_example).batch(1)"
271 | ]
272 | },
273 | {
274 | "cell_type": "markdown",
275 | "metadata": {
276 | "colab_type": "text",
277 | "id": "M-topQaOm_LM"
278 | },
279 | "source": [
280 | "# Building the Model"
281 | ]
282 | },
283 | {
284 | "cell_type": "markdown",
285 | "metadata": {},
286 | "source": [
287 | "```\n",
288 | "Model: \"sequential\"\n",
289 | "_________________________________________________________________\n",
290 | "Layer (type) Output Shape Param # \n",
291 | "=================================================================\n",
292 | "conv2d (Conv2D) (None, 26, 26, 16) 160 \n",
293 | "_________________________________________________________________\n",
294 | "max_pooling2d (MaxPooling2D) (None, 13, 13, 16) 0 \n",
295 | "_________________________________________________________________\n",
296 | "conv2d_1 (Conv2D) (None, 11, 11, 32) 4640 \n",
297 | "_________________________________________________________________\n",
298 | "flatten (Flatten) (None, 3872) 0 \n",
299 | "_________________________________________________________________\n",
300 | "dense (Dense) (None, 64) 247872 \n",
301 | "_________________________________________________________________\n",
302 | "dense_1 (Dense) (None, 10) 650 \n",
303 | "=================================================================\n",
304 | "Total params: 253,322\n",
305 | "Trainable params: 253,322\n",
306 | "Non-trainable params: 0\n",
307 | "```"
308 | ]
309 | },
310 | {
311 | "cell_type": "code",
312 | "execution_count": 9,
313 | "metadata": {},
314 | "outputs": [],
315 | "source": [
316 | "# EXERCISE: Build and compile the model shown in the previous cell.\n",
317 | "\n",
318 | "model = tf.keras.Sequential([\n",
319 | " # Set the input shape to (28, 28, 1), kernel size=3, filters=16 and use ReLU activation,\n",
320 | " tf.keras.layers.Conv2D(16,(3,3), activation='relu', input_shape=(28,28,1)),\n",
321 | " \n",
322 | " tf.keras.layers.MaxPooling2D(),\n",
323 | " \n",
324 | " # Set the number of filters to 32, kernel size to 3 and use ReLU activation \n",
325 | " tf.keras.layers.Conv2D(32,(3,3), activation = 'relu'),\n",
326 | " \n",
327 | " # Flatten the output layer to 1 dimension\n",
328 | " tf.keras.layers.Flatten(),\n",
329 | " \n",
330 | " # Add a fully connected layer with 64 hidden units and ReLU activation\n",
331 | " tf.keras.layers.Dense(units = 64, activation = 'relu'),\n",
332 | " \n",
333 | " # Attach a final softmax classification head\n",
334 | " tf.keras.layers.Dense(units=10, activation='softmax')])\n",
335 | "\n",
336 | "# Set the appropriate loss function and use accuracy as your metric\n",
337 | "model.compile(optimizer='adam',\n",
338 | " loss= 'sparse_categorical_crossentropy',\n",
339 | " metrics= ['accuracy'])"
340 | ]
341 | },
342 | {
343 | "cell_type": "markdown",
344 | "metadata": {
345 | "colab_type": "text",
346 | "id": "zEMOz-LDnxgD"
347 | },
348 | "source": [
349 | "## Train"
350 | ]
351 | },
352 | {
353 | "cell_type": "code",
354 | "execution_count": 10,
355 | "metadata": {
356 | "colab": {},
357 | "colab_type": "code",
358 | "id": "JGlNoRtzCP4_"
359 | },
360 | "outputs": [
361 | {
362 | "name": "stdout",
363 | "output_type": "stream",
364 | "text": [
365 | "Epoch 1/10\n",
366 | "219/219 [==============================] - 96s 437ms/step - loss: 0.5861 - accuracy: 0.7946 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\n",
367 | "Epoch 2/10\n",
368 | "219/219 [==============================] - 4s 18ms/step - loss: 0.3802 - accuracy: 0.8654 - val_loss: 0.3507 - val_accuracy: 0.8720\n",
369 | "Epoch 3/10\n",
370 | "219/219 [==============================] - 4s 16ms/step - loss: 0.3296 - accuracy: 0.8819 - val_loss: 0.3177 - val_accuracy: 0.8879\n",
371 | "Epoch 4/10\n",
372 | "219/219 [==============================] - 3s 16ms/step - loss: 0.3013 - accuracy: 0.8924 - val_loss: 0.2878 - val_accuracy: 0.8980\n",
373 | "Epoch 5/10\n",
374 | "219/219 [==============================] - 4s 17ms/step - loss: 0.2763 - accuracy: 0.9001 - val_loss: 0.2742 - val_accuracy: 0.9026\n",
375 | "Epoch 6/10\n",
376 | "219/219 [==============================] - 4s 16ms/step - loss: 0.2598 - accuracy: 0.9067 - val_loss: 0.2741 - val_accuracy: 0.9006\n",
377 | "Epoch 7/10\n",
378 | "219/219 [==============================] - 4s 16ms/step - loss: 0.2453 - accuracy: 0.9119 - val_loss: 0.2655 - val_accuracy: 0.9044\n",
379 | "Epoch 8/10\n",
380 | "219/219 [==============================] - 3s 16ms/step - loss: 0.2304 - accuracy: 0.9166 - val_loss: 0.2417 - val_accuracy: 0.9139\n",
381 | "Epoch 9/10\n",
382 | "219/219 [==============================] - 4s 16ms/step - loss: 0.2185 - accuracy: 0.9192 - val_loss: 0.2509 - val_accuracy: 0.9099\n",
383 | "Epoch 10/10\n",
384 | "219/219 [==============================] - 3s 15ms/step - loss: 0.2072 - accuracy: 0.9239 - val_loss: 0.2389 - val_accuracy: 0.9134\n"
385 | ]
386 | }
387 | ],
388 | "source": [
389 | "history = model.fit(train_batches, epochs=10, validation_data=validation_batches)"
390 | ]
391 | },
392 | {
393 | "cell_type": "markdown",
394 | "metadata": {
395 | "colab_type": "text",
396 | "id": "TZT9-7w9n4YO"
397 | },
398 | "source": [
399 | "# Exporting to TFLite\n",
400 | "\n",
401 | "You will now save the model to TFLite. We should note, that you will probably see some warning messages when running the code below. These warnings have to do with software updates and should not cause any errors or prevent your code from running. "
402 | ]
403 | },
404 | {
405 | "cell_type": "code",
406 | "execution_count": 11,
407 | "metadata": {},
408 | "outputs": [
409 | {
410 | "name": "stdout",
411 | "output_type": "stream",
412 | "text": [
413 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
414 | "Instructions for updating:\n",
415 | "If using Keras pass *_constraint arguments to layers.\n"
416 | ]
417 | },
418 | {
419 | "name": "stderr",
420 | "output_type": "stream",
421 | "text": [
422 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
423 | "Instructions for updating:\n",
424 | "If using Keras pass *_constraint arguments to layers.\n"
425 | ]
426 | },
427 | {
428 | "name": "stdout",
429 | "output_type": "stream",
430 | "text": [
431 | "INFO:tensorflow:Assets written to: saved_model/1/assets\n"
432 | ]
433 | },
434 | {
435 | "name": "stderr",
436 | "output_type": "stream",
437 | "text": [
438 | "INFO:tensorflow:Assets written to: saved_model/1/assets\n"
439 | ]
440 | }
441 | ],
442 | "source": [
443 | "# EXERCISE: Use the tf.saved_model API to save your model in the SavedModel format. \n",
444 | "export_dir = 'saved_model/1'\n",
445 | "\n",
446 | "tf.saved_model.save(model,export_dir=export_dir)"
447 | ]
448 | },
449 | {
450 | "cell_type": "code",
451 | "execution_count": 12,
452 | "metadata": {
453 | "cellView": "form",
454 | "colab": {},
455 | "colab_type": "code",
456 | "id": "EDGiYrBdE6fl"
457 | },
458 | "outputs": [],
459 | "source": [
460 | "# Select mode of optimization\n",
461 | "mode = \"Speed\" \n",
462 | "\n",
463 | "if mode == 'Storage':\n",
464 | " optimization = tf.lite.Optimize.OPTIMIZE_FOR_SIZE\n",
465 | "elif mode == 'Speed':\n",
466 | " optimization = tf.lite.Optimize.OPTIMIZE_FOR_LATENCY\n",
467 | "else:\n",
468 | " optimization = tf.lite.Optimize.DEFAULT"
469 | ]
470 | },
471 | {
472 | "cell_type": "code",
473 | "execution_count": 13,
474 | "metadata": {},
475 | "outputs": [],
476 | "source": [
477 | "# EXERCISE: Use the TFLiteConverter SavedModel API to initialize the converter\n",
478 | "\n",
479 | "converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)\n",
480 | "\n",
481 | "# Set the optimzations\n",
482 | "converter.optimizations = [optimization]\n",
483 | "\n",
484 | "# Invoke the converter to finally generate the TFLite model\n",
485 | "tflite_model = converter.convert();"
486 | ]
487 | },
488 | {
489 | "cell_type": "code",
490 | "execution_count": 14,
491 | "metadata": {
492 | "colab": {
493 | "base_uri": "https://localhost:8080/",
494 | "height": 34
495 | },
496 | "colab_type": "code",
497 | "id": "q5PWCDsTC3El",
498 | "outputId": "97349e68-0bff-41cd-ad48-90a6abb85f11"
499 | },
500 | "outputs": [
501 | {
502 | "data": {
503 | "text/plain": [
504 | "258656"
505 | ]
506 | },
507 | "execution_count": 14,
508 | "metadata": {},
509 | "output_type": "execute_result"
510 | }
511 | ],
512 | "source": [
513 | "tflite_model_file = pathlib.Path('./model.tflite')\n",
514 | "tflite_model_file.write_bytes(tflite_model)"
515 | ]
516 | },
517 | {
518 | "cell_type": "markdown",
519 | "metadata": {
520 | "colab_type": "text",
521 | "id": "SR6wFcQ1Fglm"
522 | },
523 | "source": [
524 | "# Test the Model with TFLite Interpreter "
525 | ]
526 | },
527 | {
528 | "cell_type": "code",
529 | "execution_count": 15,
530 | "metadata": {
531 | "colab": {},
532 | "colab_type": "code",
533 | "id": "rKcToCBEC-Bu"
534 | },
535 | "outputs": [],
536 | "source": [
537 | "# Load TFLite model and allocate tensors.\n",
538 | "interpreter = tf.lite.Interpreter(model_content=tflite_model)\n",
539 | "interpreter.allocate_tensors()\n",
540 | "\n",
541 | "input_index = interpreter.get_input_details()[0][\"index\"]\n",
542 | "output_index = interpreter.get_output_details()[0][\"index\"]"
543 | ]
544 | },
545 | {
546 | "cell_type": "code",
547 | "execution_count": 16,
548 | "metadata": {
549 | "colab": {},
550 | "colab_type": "code",
551 | "id": "E8EpFpIBFkq8"
552 | },
553 | "outputs": [],
554 | "source": [
555 | "# Gather results for the randomly sampled test images\n",
556 | "predictions = []\n",
557 | "test_labels = []\n",
558 | "test_images = []\n",
559 | "\n",
560 | "for img, label in test_batches.take(50):\n",
561 | " interpreter.set_tensor(input_index, img)\n",
562 | " interpreter.invoke()\n",
563 | " predictions.append(interpreter.get_tensor(output_index))\n",
564 | " test_labels.append(label[0])\n",
565 | " test_images.append(np.array(img))"
566 | ]
567 | },
568 | {
569 | "cell_type": "code",
570 | "execution_count": 17,
571 | "metadata": {
572 | "cellView": "form",
573 | "colab": {},
574 | "colab_type": "code",
575 | "id": "kSjTmi05Tyod"
576 | },
577 | "outputs": [],
578 | "source": [
579 | "# Utilities functions for plotting\n",
580 | "\n",
581 | "def plot_image(i, predictions_array, true_label, img):\n",
582 | " predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n",
583 | " plt.grid(False)\n",
584 | " plt.xticks([])\n",
585 | " plt.yticks([])\n",
586 | " \n",
587 | " img = np.squeeze(img)\n",
588 | " \n",
589 | " plt.imshow(img, cmap=plt.cm.binary)\n",
590 | " \n",
591 | " predicted_label = np.argmax(predictions_array)\n",
592 | " \n",
593 | " if predicted_label == true_label.numpy():\n",
594 | " color = 'green'\n",
595 | " else:\n",
596 | " color = 'red'\n",
597 | " \n",
598 | " plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n",
599 | " 100*np.max(predictions_array),\n",
600 | " class_names[true_label]),\n",
601 | " color=color)\n",
602 | "\n",
603 | "def plot_value_array(i, predictions_array, true_label):\n",
604 | " predictions_array, true_label = predictions_array[i], true_label[i]\n",
605 | " plt.grid(False)\n",
606 | " plt.xticks(list(range(10)))\n",
607 | " plt.yticks([])\n",
608 | " thisplot = plt.bar(range(10), predictions_array[0], color=\"#777777\")\n",
609 | " plt.ylim([0, 1])\n",
610 | " predicted_label = np.argmax(predictions_array[0])\n",
611 | " \n",
612 | " thisplot[predicted_label].set_color('red')\n",
613 | " thisplot[true_label].set_color('blue')"
614 | ]
615 | },
616 | {
617 | "cell_type": "code",
618 | "execution_count": 18,
619 | "metadata": {
620 | "cellView": "form",
621 | "colab": {
622 | "base_uri": "https://localhost:8080/",
623 | "height": 201
624 | },
625 | "colab_type": "code",
626 | "id": "ZZwg0wFaVXhZ",
627 | "outputId": "f9676edc-f305-4115-938b-389286d2228d"
628 | },
629 | "outputs": [
630 | {
631 | "data": {
632 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAWAAAADCCAYAAAB3whgdAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAVNUlEQVR4nO3deZBdZZnH8e9DFsgeyAZJSBqDGQSGRWIAxV0hgBh01IJRC1zAKZYBrcKJSHk5ljU6QA1aNQ4uIIsEGUAWBWQrQIkCmsgWSIgsIQsgSQghgZCkk2f+OKexc89zuu/tTudNOr9PVRd9n/u895x7mzz33He75u6IiMjWt1PqExAR2VGpAIuIJKICLCKSiAqwiEgiKsAiIomoAIuIJNI39QmIpDZy5EhvaWlJfRqyjXnsMWhtbTy/b1848MByfM6cOcvdfVTYpqsnJ9JbtLS0MHv27NSnIdsYs+byW1sh+t/IzF6oaqMuCBGRRFSARUQSUQEWEUmkqT5gDVZIT1q4cCHLly9vsudNZPvVVAHWYIX0pClTpqQ+BZGtSl0QIiKJqACLiCSiAiwikogKsIhIIirAIiKJqACLiCSiAiwikogKsIhIIirAIiKJqACLiCSiAiwikogKsIhIIirAIiKJqACLiCSiAiwikogKsIhIIirAIiKJqACLiCSiAiwikogKsIhIIirAIiKJqACLiCSiAiwikogKsIhIIn1Tn8CO6LzzzivF1q1bF+YOHjy4FDOzMHf48OGl2KhRo8LcIUOGlGITJkwIc0ePHl2K7b777mGuiDROV8AiIomoAIuIJKICLCKSiAqwiEgiKsAiIoloFkQXtLa2lmJ9+zb+Ut52222l2NKlS8PcDRs2lGIbN24Mc929oViVqtkVAwYMKMWiGRcAI0aMKMXWrl0b5t5///2b3a56XiK9la6ARUQSUQEWEUlEBVhEJBEVYBGRRDQI1wVVg1X15s6d2/Bjjhs3LoyvWrWqFKsaWNu0aVNDsarHqHpeUfz1118Pc6Pz3Wmn+H2+/jE0CCc7Gl0Bi4gkogIsIpKICrCISCIqwCIiiagAi4gkolkQhe7OCojceuutYTzaZL1K//79S7GFCxeGudFsg6FDh4a5y5Yta+hYADvvvHMpVrX0ul+/fqXYsGHDwtz6DeCrji/SW+kKWEQkERVgEZFEVIBFRBJRARYRSUSDcF1QtbS23oIFC8L4a6+9VoqNHTs2zI2+vbhq7+AZM2aUYuvXrw9zo0G4qkHDlStXlmJVg3BR/JVXXglz68+tmb2LRXoDXQGLiCSiAiwikogKsIhIIirAIiKJqACLiCSiWRCFRpcXN+N3v/tdGK9fggvw5JNPhrnRtyLvtttuYW707cXnnHNOmDty5MhSbOrUqWHurFmzSrE+ffqEuWvWrCnFqmZiLFq0qKE8kd5KV8AiIomoAIuIJKICLCKSiAqwiEgiGoTbQi699NKGc0ePHl2KRUuDIR4cXLJkSZj7wgsvlGJHHXVUmBt9q/HixYvD3LVr15Zi5557bph71llnlWIHHnhgmLv33ntvdjvad1ikN9MVsIhIIirAIiKJqACLiCSiAiwikogKsIhIIlt1FkQzG273xNLgjjTzrchR7plnnlmKTZo0KWw/f/78ho81fvz4Uixa7gswe/bsUmzixIlhbmtrayl23333hbnRBvQvvfRSmPv000+XYqtXrw5z62dibNy4McwT6a10BSwikogKsIhIIirAIiKJqACLiCTS9CBc/QDUtjBYVjW4F51b1flu2rSpFKva83bmzJml2NChQ0uxfv36he2jfXurcqNvUJ48eXKY+8EPfrAUu/jii8Pc4447rhQ75ZRTwtxrrrmmFLvlllvC3AsuuKAUq/r7vPzyy5vdjvY+FunNdAUsIpKICrCISCIqwCIiiagAi4gkst2thGtm0K+Z3GjA7Y033ghzo1Vv0SBatOcuxHv/Dh48OMyNVof17Rv/2QYNGlSKDRkyJMyNVreNGTMmzI2e2yOPPBLm7rLLLqXYnnvuGebWr5CLBkJFejNdAYuIJKICLCKSiAqwiEgiKsAiIomoAIuIJNJjsyCiGQ/RyPuWsCWWQ5966qmlWNVy2+h5RLMNqva3jV6bqm8Ejr6RuH4Jb5uHH364FDv66KPD3Gg/36uuuirMjZx22mlhPNpn+MUXXwxz161bt9ltzYKQHY2ugEVEElEBFhFJRAVYRCQRFWARkUSaHoTrzoBX1SBLTw3ORaZPnx7GH3zwwVJs1113DXOjQbRo8KlqKXO07Dna9xdg+PDhDR0L4KabbirFqgYCoyXDVaLzvfzyy8PcaEl1tEQaYMGCBZvdrh+UE+ntdAUsIpKICrCISCIqwCIiiagAi4gkogIsIpJI07Mg6mcyNLNxek99g/LVV18dxi+66KJSrGomxujRo0uxqg3Vow3Kp02bVoo99NBDYfvFixeXYgMHDmz4WFUbsu+xxx6lWNVMjGi2wsqVK8PcaBZE9C3QAMOGDSvFqmZ41P8tmtmwX6Q30BWwiEgiKsAiIomoAIuIJKICLCKSSLeXIjczsFY1AHbvvfeWYrfffnuYO2vWrFKsan/cSZMmlWJVA0KrVq0qxaJlwACHHXZYKbZ06dJS7NVXXw3bH3744aXYokWLwtzly5eXYtEewRAvOx4wYECYu2HDhlKsaslwtNdx1UBgtK9x9NoC7LPPPpvdbmZ5tEhvoCtgEZFEVIBFRBJRARYRSUQFWEQkERVgEZFEuj0Lomqkf8aMGaXYU089FeZGy22rZgVES2gPPfTQMHfZsmWl2IQJE8LcaPP1aFktxDMTnn766YbbR0t+V6xYEeZGS4n322+/MHfNmjWlWNXMk2jGQdVG79Hfp+pxoxkaVTNPqo4nsqPQFbCISCIqwCIiiagAi4gkogIsIpJI04Nw9S688MIwHg1URXvuArz11lul2MSJE8PcaAArWp4MMHny5FJs9erVYe6jjz5aikXLaiEeyIuWLY8dOzZsHy0ZfuCBB8Lc733ve6VYrVYLc8eMGVOKRXsEQzwA1r9//zA3UrVsONrXuOp1rI/31H7RItsqXQGLiCSiAiwikogKsIhIIirAIiKJqACLiCTS1CyIFStWcMUVV2wWu/nmm8PcaAbAyJEjw9zufJNuVQxgwYIFpVj0Db8Qb0Ze9S290QyPaGbEvHnzwvbRBvJVG9BHG71XbZx+3HHHlWJ/+tOfwtxoSXfV6xgtRa56HdevX1+KtbS0hLmaBSE7Ol0Bi4gkogIsIpKICrCISCIqwCIiiTQ1CDdixAhOPvnkzWLRslqIl9bOnz8/zI32vK1aFht9G2/V4E30bb5Vg0fR0tpoiTTEy3ife+65UmzEiBFh+2hQ6vrrrw9zzzjjjFLskksuCXOjgcvoG5gB1q1bF8Yj0eBcVfsot2pAtX7v4KpBT5HeSlfAIiKJqACLiCSiAiwikogKsIhIIirAIiKJdHtD9q985SsNx6NlqgD33HNPKXbnnXeGudHsimh
pL8Tf0Fu13DaaSTFu3Lgwd9999y3Fjj322FLspJNOCttXbVDeXXPnzi3FqmaeHHnkkaXY3/72tzA3miESLWWGePbKqFGjwtz6JehVM1REeitdAYuIJKICLCKSiAqwiEgiKsAiIol0exCuGVXLi4855piGYs1asmRJKfbmm2+GuUOHDi3Fdt99926fQ6OqlnRv2LChFKv6RuIjjjiiFNuelvdWPS+R3kpXwCIiiagAi4gkogIsIpKICrCISCIqwCIiiWzVWRBb2/jx41OfQsOqluFqea5I76UrYBGRRFSARUQSUQEWEUlEBVhEJBEVYBGRRFSARUQSUQEWEUlEBVhEJBEVYBGRRFSARUQSUQEWEUlEBVhEJBEVYBGRRFSARUQSUQEWEUmkqf2A58yZs9zMXuipk5Ed3sTUJyCyNTVVgN19VE+diIjIjkZdECIiiagAi4gkogIsIpJIlwuwZXa8ZeaW2T4N5i+0zEYG8TVNHrep/A4e52TLbGzFfZ+1zJ60zDZZZlPq7vuWZfaMZfa0ZXZUu/i0IvaMZTajXXymZfa4Zfaf7WLnWWbHd3BuB1tml9XFbrbMHmrwuX3IMru14jn/TyOP0ZX8Dh5nuGV2WrvboyyzO7r7uCLbu+58K/KJwKziv7Utczpb1cnAXODF4L65wKeBn7YPWmb7AicA+wFjgXsss8nF3T8GPg4sAf5imf2G/PVd6zU/wDK72zIbBgwEDvWaf6+DczsXePt+y2w4cAiwxjJ7h9f8uWafbGLDgdOA/wXwmi+zzF6yzN7nNf9j2lPr8uyekcDyLhxO7badY27xdmZhfuXsni4VYMtsMHAE8GHgtxQF2DL7EHB+cXL7A3OAL3jNvV3bAcCNwI1e85/XPe45wOeAnYGbvOZhYbfMLgaOBF4GTij+QR8E/IS8wD0LfNlrvjKKAx8FpgAzLbO1wOFe87Vtj+81n1ccp/7Q04FrvebrgOcts2eAqcV9z7QVRsvs2iL3ZmCAZbYT0A/YCHyXDt6wLLMhwAFe88fahT9N/jr/nfwN4D+L3CuA14vnsjvwTa/5DXWP9x7gZ8Bn6uKjitdlQhE6u6IY7mmZ3Q+MA672mmdF+2+Qv5YAl3rNf9hB/AfAJMvsUeBur/k5xWvzeSB5Ae7K7B4zm+3uUzrPVLtt9ZgpnmO9rnZBTAfu8JovAFZYZoe0u+9g4GxgX+AdwPva3TeYvJD8Kii+RwLvJC9oBwGHWGYfCI49CJjtNd8P+D3/KGZXAf/hNT8AeKKjeFGkZgOf95of1L74dmIcsLjd7SVFLIwXhXwZ8Nfiee8N7OQ1/2sHx5hCfgXe3onAr4qfE+vu24P8zfAT5IXubZbZe8mL7HSv+bN17X4EXOw1fw/wL8ClFecztbj/AOCzltmU4u/9JeBQ4DDglKLbJIwDM4Bni9f6nOJxZwPv7+B1EOn1utoFcSL5P2CAa4vbc4rbf/aaLwEornhayLsqAG4BLvCazwwe88ji55Hi9mDygvyHurxNwP8Vv18N3Fh8tB/uNf99Eb8SuL4q3txT7R6v+dltv1tmvwW+Zpl9GziQ/Grw53VN9iAv2m1txpC/DrO85m6ZbbDM9veatxXpm73mm4Cnitw27yK/8j3Sax51s3wM2LfdVf5Qy2yw17y+j/1ur/mK4lxuJC/2Tv4J5Y128fcDVhH/TXD8V8i7cUR2WE0XYMtsN+AjwD9bZg70AbzoPgBY1y59Y90x/ghMs8yuad8t0fbQwPe95j+lOfWP05OWAnu2uz2+iNFBHADLbDr5m9RgYJLX/HOW2Z2W2Uyv+ZvtUtcCu7S7/TlgV/IuD4Ch5G943y7ub/96t+8zeal4nIOJ+7l3Ag7zmr8VP9W31b++W+r13oX8uW6vfqZ2W7RdimOmeI6b6UoXxGeAX3rNJ3rNW7zmewLP09jHye8AK8kHrOrdCXy56F/GMhtnmY2uOOe2/sx/Jb8yXAWstMzazuGLwO+r4sXvq4EhDZxze78BTrDMdrbM9iK/Mv0z8BfgnZbZXpZZf/J+2rev+iyzfuTdMhcAA/hHEesD9K87xjzyroo2JwLTite6hXww7oQGzvU14Fjg+0XffL27gDPbneNBFY/zcctst6Lv/njyN9EHgOMts4GW2SDgU0WsKh691pMpd7VsN9y9S/8I1W7bOWaK51ivKwX4ROCmutivKfdNVjmLfGDqgvZBr/ldwDXAg5bZE8ANxAXyDWCqZTaX/Er8u0X8JOBCy+xx8j7kzuJXAD+xzB4tisvbLLNPWWZLgMOB2yyzO4tzfBK4DngKuAM43Wu+0WveCpxB/iYyD7iuyG1zOnBlcaX7ODCweI5zvOav1b0O84FhltkQy6yFfAT1oXb3Pw+ssswODV6bzXjN/07eN/zjIP/fgSnFFLmngH+reJg/k/99Hwd+7TWfXfRhX1Hc9zD5YNsjHcRXAH+0zOZaZhcWj/th4LbOnoNIb2Ze6gmQ1CyzrwOrveZVA2PbPcvsD+SDgytTn0szzGwa+fhHH+BSd/9BJ03a2v2C/M3wFXffv4nj7Uk+kDyG/JPTz9z9Rx23AjPbhXz8ZGfybsAb3ONZRRXt+5APlC5190802GYh+aedjUBrozMFzGw4+SDw/uTP8cvu/mAnbf6Jf4wFQT7g/x33fDZOJ22/Dny1ONYTwJfcO+2Kw8zOAk4h7+r7eSPH6pS762cb++F8duF8vpj6PHrw+Y3ifI5PfR5Nn3dedJ8l/8feH3gM2LfBth8A3g3MbfKYewDvLn4fAixo5JhFkRhc/N6P/BPJYU0c9xvkn0hvbaLNQmBkF17XK4GvFr/3B4Z34e/yMjCxgdxx5F2mA4rb1wEnN9Buf/Ius4Hkb2j3AHt39/8pLUXeBnnN3/Ka/zL1efQUr/kyr/nNqc+jC6YCz7j7c+6+nnwG0PRGGrr7H4BXmz2gu7/knk9bdPfV5F1c4xpo5+5vz2jpV/w09HHXzMaTjx/0+CcwMxtG/uZ0GYC7r3ffvFuuAR8FnnX3RhfT9AUGmFlf8oIaDVLXexfwsLu/6e6t5GNJn27yPEtUgEUaVzUPfKswsxbyWS0PN5jfx8weJZ/yd7e7N9QO+CHwTfIpn81w4C4zm2NmpzbYZi/yaZeXm9kjZnapmQ1q8rgnkM+R7/wE3ZcCFwGLyGcKrXL3uxpoOhd4v5mNMLOBwDFsPvOpS1SARbYDZjaYfDD0bHd/vZE27r7R3Q8inxY51cw67Xs2s7Z+6jmd5QaOcPd3A0cDp5uFC6nq9SXvmrnE3Q8mH2Sf0XGTzc63P/BJGpzfb2a7kn9q2Yt8HvogM/tCZ+3cfR7wX+Szh+4AHiXv6+4WFWCRxnU0D7zHmFk/8uI7091vbLZ98ZH+PmBaA+nvAz5ZDKhdC3zEzK5u8DhLi/++Qj5TamrHLYD8U8SSdlfnN5AX5EYdDfzV3f/eYP7HgOfdfZm7byDfFuG9jTR098vc/RB3/wD5dNoFTZxnSAVYpHH5fG+zvYorr83me/cEMzPy/tF57v7fTbQbVcwuwMwGkG8UNb+zdu7+LXcf7+4t5M
/vXnfv9ArRzAaZ2ZC238lXtXY6z9vdXwYWF7MaIO/Pfaqzdu20LdNv1CLgMDMbWLy2HyXvV++UWb4uwcwmkPf/XtPEcUPd2Q1NZIfi7q1m1jbfuw/wC/fN5ntXMrNfAR8CRprZEqDm7pd13ArIr0i/CDxR9OcCnOvut3fSbg/gymI62U7Ade5e2qJ0CxoD3JTXNPoC17h7o1uOngnMLN7UniPfT6RTRaH/OPC1Rk/S3R82sxvI92dpJd/6oNGFFb82sxHABuD0LgwWlmgesIhIIuqCEBFJRAVYRCQRFWARkURUgEVEElEBFhFJRAVYRCQRFWARkURUgEVEEvl/fds6OJu21mEAAAAASUVORK5CYII=\n",
633 | "text/plain": [
634 | ""
635 | ]
636 | },
637 | "metadata": {
638 | "needs_background": "light"
639 | },
640 | "output_type": "display_data"
641 | }
642 | ],
643 | "source": [
644 | "# Visualize the outputs\n",
645 | "\n",
646 | "# Select index of image to display. Minimum index value is 1 and max index value is 50. \n",
647 | "index = 49 \n",
648 | "\n",
649 | "plt.figure(figsize=(6,3))\n",
650 | "plt.subplot(1,2,1)\n",
651 | "plot_image(index, predictions, test_labels, test_images)\n",
652 | "plt.subplot(1,2,2)\n",
653 | "plot_value_array(index, predictions, test_labels)\n",
654 | "plt.show()"
655 | ]
656 | },
657 | {
658 | "cell_type": "markdown",
659 | "metadata": {},
660 | "source": [
661 | "# Click the Submit Assignment Button Above\n",
662 | "\n",
663 | "You should now click the Submit Assignment button above to submit your notebook for grading. Once you have submitted your assignment, you can continue with the optinal section below. \n",
664 | "\n",
665 | "## If you are done, please **don't forget to run the last two cells of this notebook** to save your work and close the Notebook to free up resources for your fellow learners. "
666 | ]
667 | },
668 | {
669 | "cell_type": "markdown",
670 | "metadata": {
671 | "colab_type": "text",
672 | "id": "H8t7_jRiz9Vw"
673 | },
674 | "source": [
675 | "# Prepare the Test Images for Download (Optional)"
676 | ]
677 | },
678 | {
679 | "cell_type": "code",
680 | "execution_count": null,
681 | "metadata": {
682 | "colab": {},
683 | "colab_type": "code",
684 | "id": "Fi09nIps0gBu"
685 | },
686 | "outputs": [],
687 | "source": [
688 | "!mkdir -p test_images"
689 | ]
690 | },
691 | {
692 | "cell_type": "code",
693 | "execution_count": null,
694 | "metadata": {
695 | "colab": {},
696 | "colab_type": "code",
697 | "id": "sF7EZ63J0hZs"
698 | },
699 | "outputs": [],
700 | "source": [
701 | "from PIL import Image\n",
702 | "\n",
703 | "for index, (image, label) in enumerate(test_batches.take(50)):\n",
704 | " image = tf.cast(image * 255.0, tf.uint8)\n",
705 | " image = tf.squeeze(image).numpy()\n",
706 | " pil_image = Image.fromarray(image)\n",
707 | " pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]].lower(), index))"
708 | ]
709 | },
710 | {
711 | "cell_type": "code",
712 | "execution_count": null,
713 | "metadata": {
714 | "colab": {},
715 | "colab_type": "code",
716 | "id": "uM35O-uv0iWS"
717 | },
718 | "outputs": [],
719 | "source": [
720 | "!ls test_images"
721 | ]
722 | },
723 | {
724 | "cell_type": "code",
725 | "execution_count": null,
726 | "metadata": {
727 | "colab": {},
728 | "colab_type": "code",
729 | "id": "aR20r4qW0jVm"
730 | },
731 | "outputs": [],
732 | "source": [
733 | "!tar --create --file=fmnist_test_images.tar test_images"
734 | ]
735 | },
736 | {
737 | "cell_type": "code",
738 | "execution_count": null,
739 | "metadata": {},
740 | "outputs": [],
741 | "source": [
742 | "!ls"
743 | ]
744 | },
745 | {
746 | "cell_type": "markdown",
747 | "metadata": {},
748 | "source": [
749 | "# When you're done/would like to take a break, please run the two cells below to save your work and close the Notebook. This frees up resources for your fellow learners."
750 | ]
751 | },
752 | {
753 | "cell_type": "code",
754 | "execution_count": null,
755 | "metadata": {},
756 | "outputs": [],
757 | "source": [
758 | "%%javascript\n",
759 | "\n",
760 | "IPython.notebook.save_checkpoint();"
761 | ]
762 | },
763 | {
764 | "cell_type": "code",
765 | "execution_count": null,
766 | "metadata": {},
767 | "outputs": [],
768 | "source": [
769 | "%%javascript\n",
770 | "\n",
771 | "window.onbeforeunload = null\n",
772 | "window.close();\n",
773 | "IPython.notebook.session.delete();"
774 | ]
775 | }
776 | ],
777 | "metadata": {
778 | "accelerator": "GPU",
779 | "colab": {
780 | "collapsed_sections": [],
781 | "name": "TF Lite Week 1 Exercise - Answer",
782 | "provenance": [],
783 | "toc_visible": true,
784 | "version": "0.3.2"
785 | },
786 | "coursera": {
787 | "course_slug": "device-based-models-tensorflow",
788 | "graded_item_id": "sCFzO",
789 | "launcher_item_id": "fJyaf"
790 | },
791 | "kernelspec": {
792 | "display_name": "Python 3",
793 | "language": "python",
794 | "name": "python3"
795 | },
796 | "language_info": {
797 | "codemirror_mode": {
798 | "name": "ipython",
799 | "version": 3
800 | },
801 | "file_extension": ".py",
802 | "mimetype": "text/x-python",
803 | "name": "python",
804 | "nbconvert_exporter": "python",
805 | "pygments_lexer": "ipython3",
806 | "version": "3.6.8"
807 | }
808 | },
809 | "nbformat": 4,
810 | "nbformat_minor": 1
811 | }
812 |
--------------------------------------------------------------------------------
/Week2 Quiz Answers.txt:
--------------------------------------------------------------------------------
1 | Question 1
2 | To what file do you add the TensorFlow Lite dependency when building an Android app?
3 | aar.gradle
4 | build.aar
5 | build.gradle - correct
6 | gradle.build
7 |
8 |
9 | Question 2
10 | If the Android Neural Networks API is available and you want to use it, how would you do that?
11 | You can’t use the Neural Networks API with a TensorFlow Lite model
12 | Call the setUseNNAPI method on the interpreter and set its parameter to true - correct
13 | Invoke the NNAPI object, and pass the tflite interpreter to it
14 | Do nothing, it will work automatically
15 |
16 |
17 | Question 3
18 | If you want to configure the number of threads the interpreter uses, how would you do that?
19 | Do nothing, it’s always single threaded
20 | Do nothing, it automatically picks the appropriate number of threads
21 | Call the useThreads() method, and it will apportion the correct number of threads
22 | Call setNumThreads and pass it the number of threads you want to use - correct
23 |
24 | Question 4
25 | Where’s the best place in an Android app to keep your model?
26 | It can really be anywhere, but for consistency use the assets folder - correct
27 | In the same folder as the activity that calls it
28 | You don’t keep your model in your Android app; it should be downloaded at runtime
29 | In the resources folder
30 |
31 | Question 5
32 | If you tested your converted model and know it’s valid, but the interpreter cannot load it at runtime on Android, what’s the most likely reason?
33 | You haven’t converted the model to Java or Kotlin format
34 | You haven’t quantized your model
35 | You converted your model to iOS format by accident
36 | You didn’t specify that the model should not be compressed in the build.gradle file - correct
37 |
38 |
39 | Question 6
40 | What is the method signature of the interpreter when you want to do inference?
41 | predictions = interpreter.run(inputs)
42 | interpreter.predict(inputs, predictions)
43 | predictions = interpreter.predict(inputs)
44 | interpreter.run(inputs, predictions) - correct
45 |
46 | Question 7
47 | What Android data structure is most commonly used to feed image input to the interpreter?
48 | A Tensor
49 | An Array
50 | A TensorArray
51 | A ByteBuffer - correct
52 |
53 | Question 8
54 | How many classes of object can a model trained on the COCO dataset recognize?
55 | 10
56 | 80 - correct
57 | 800
58 | 1000
59 |
60 | Question 9
61 | When performing object recognition, how many dimensions of output tensors are there?
62 | 80
63 | 4 - correct
64 | 10
65 | 1
66 |
67 | Question 10
68 | How do you get the coordinates of the bounding boxes from the object detection model?
69 | The coordinates are in the first tensor, but arranged differently; you have to sort them before you can plot them - correct
70 | The coordinates are in the first tensor, read them and simply plot
71 | The coordinates are in tensors 0, 1, 2 and 3
72 | The coordinates are in the first four tensors, read them and simply plot
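
The Android questions above use the Java interpreter, but the same four output tensors can be
inspected with the Python tf.lite.Interpreter. A minimal sketch, assuming a standard SSD-style
COCO detection model saved as detect.tflite (a placeholder path; the output ordering shown is
the usual boxes/classes/scores/count layout and can differ between models):

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path='detect.tflite')
    interpreter.allocate_tensors()

    # Feed a dummy image of the right shape and dtype just to show the call sequence.
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
    interpreter.invoke()

    # Question 9: four output tensors: boxes, classes, scores, detection count.
    out = interpreter.get_output_details()
    boxes   = interpreter.get_tensor(out[0]['index'])  # [1, N, 4], each box [ymin, xmin, ymax, xmax]
    classes = interpreter.get_tensor(out[1]['index'])  # [1, N]
    scores  = interpreter.get_tensor(out[2]['index'])  # [1, N]
    count   = interpreter.get_tensor(out[3]['index'])  # [1]

    # Question 10: the coordinates all live in the first tensor, but in
    # [ymin, xmin, ymax, xmax] order, so rearrange them before plotting.
    ymin, xmin, ymax, xmax = boxes[0][0]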
--------------------------------------------------------------------------------
/Week2_Assignment_Solution.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "accelerator": "GPU",
6 | "colab": {
7 | "name": "C2_W2_Assignment_Solution.ipynb",
8 | "private_outputs": true,
9 | "provenance": [],
10 | "collapsed_sections": [],
11 | "toc_visible": true,
12 | "include_colab_link": true
13 | },
14 | "kernelspec": {
15 | "display_name": "Python 3",
16 | "language": "python",
17 | "name": "python3"
18 | },
19 | "language_info": {
20 | "codemirror_mode": {
21 | "name": "ipython",
22 | "version": 3
23 | },
24 | "file_extension": ".py",
25 | "mimetype": "text/x-python",
26 | "name": "python",
27 | "nbconvert_exporter": "python",
28 | "pygments_lexer": "ipython3",
29 | "version": "3.8.5"
30 | }
31 | },
32 | "cells": [
33 | {
34 | "cell_type": "markdown",
35 | "metadata": {
36 | "id": "view-in-github",
37 | "colab_type": "text"
38 | },
39 | "source": [
40 | " "
41 | ]
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "metadata": {
46 | "id": "Za8-Nr5k11fh"
47 | },
48 | "source": [
49 | "##### Copyright 2018 The TensorFlow Authors."
50 | ]
51 | },
52 | {
53 | "cell_type": "code",
54 | "metadata": {
55 | "cellView": "form",
56 | "id": "Eq10uEbw0E4l"
57 | },
58 | "source": [
59 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
60 | "# you may not use this file except in compliance with the License.\n",
61 | "# You may obtain a copy of the License at\n",
62 | "#\n",
63 | "# https://www.apache.org/licenses/LICENSE-2.0\n",
64 | "#\n",
65 | "# Unless required by applicable law or agreed to in writing, software\n",
66 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
67 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
68 | "# See the License for the specific language governing permissions and\n",
69 | "# limitations under the License."
70 | ],
71 | "execution_count": null,
72 | "outputs": []
73 | },
74 | {
75 | "cell_type": "markdown",
76 | "metadata": {
77 | "id": "oYM61xrTsP5d"
78 | },
79 | "source": [
80 | "# Rock, Paper & Scissors with TensorFlow Hub - TFLite"
81 | ]
82 | },
83 | {
84 | "cell_type": "markdown",
85 | "metadata": {
86 | "id": "xWFpUd1yy3gt"
87 | },
88 | "source": [
89 | "## Setup"
90 | ]
91 | },
92 | {
93 | "cell_type": "code",
94 | "metadata": {
95 | "id": "110fGB18UNJn"
96 | },
97 | "source": [
98 | "try:\n",
99 | " %tensorflow_version 2.x\n",
100 | "except:\n",
101 | " pass"
102 | ],
103 | "execution_count": null,
104 | "outputs": []
105 | },
106 | {
107 | "cell_type": "code",
108 | "metadata": {
109 | "id": "dlauq-4FWGZM"
110 | },
111 | "source": [
112 | "import numpy as np\n",
113 | "import matplotlib.pylab as plt\n",
114 | "\n",
115 | "import tensorflow as tf\n",
116 | "import tensorflow_hub as hub\n",
117 | "\n",
118 | "from tqdm import tqdm\n",
119 | "\n",
120 | "print(\"\\u2022 Using TensorFlow Version:\", tf.__version__)\n",
121 | "print(\"\\u2022 Using TensorFlow Hub Version: \", hub.__version__)\n",
122 | "print('\\u2022 GPU Device Found.' if tf.test.is_gpu_available() else '\\u2022 GPU Device Not Found. Running on CPU')"
123 | ],
124 | "execution_count": null,
125 | "outputs": []
126 | },
127 | {
128 | "cell_type": "markdown",
129 | "metadata": {
130 | "id": "mmaHHH7Pvmth"
131 | },
132 | "source": [
133 | "## Select the Hub/TF2 Module to Use\n",
134 | "\n",
135 | "Hub modules for TF 1.x won't work here, please use one of the selections provided."
136 | ]
137 | },
138 | {
139 | "cell_type": "code",
140 | "metadata": {
141 | "id": "FlsEcKVeuCnf"
142 | },
143 | "source": [
144 | "module_selection = (\"mobilenet_v2\", 224, 1280) #@param [\"(\\\"mobilenet_v2\\\", 224, 1280)\", \"(\\\"inception_v3\\\", 299, 2048)\"] {type:\"raw\", allow-input: true}\n",
145 | "handle_base, pixels, FV_SIZE = module_selection\n",
146 | "MODULE_HANDLE =\"https://tfhub.dev/google/tf2-preview/{}/feature_vector/4\".format(handle_base)\n",
147 | "IMAGE_SIZE = (pixels, pixels)\n",
148 | "print(\"Using {} with input size {} and output dimension {}\".format(MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))"
149 | ],
150 | "execution_count": null,
151 | "outputs": []
152 | },
153 | {
154 | "cell_type": "markdown",
155 | "metadata": {
156 | "id": "sYUsgwCBv87A"
157 | },
158 | "source": [
159 | "## Data Preprocessing"
160 | ]
161 | },
162 | {
163 | "cell_type": "markdown",
164 | "metadata": {
165 | "id": "8nqVX3KYwGPh"
166 | },
167 | "source": [
168 | "Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the `rock_paper_scissors` dataset.\n",
169 | "\n",
170 | "This `tfds` package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see [loading image data](../load_data/images.ipynb)\n"
171 | ]
172 | },
173 | {
174 | "cell_type": "code",
175 | "metadata": {
176 | "id": "jGvpkDj4wBup"
177 | },
178 | "source": [
179 | "import tensorflow_datasets as tfds\n",
180 | "tfds.disable_progress_bar()"
181 | ],
182 | "execution_count": null,
183 | "outputs": []
184 | },
185 | {
186 | "cell_type": "markdown",
187 | "metadata": {
188 | "id": "YkF4Boe5wN7N"
189 | },
190 | "source": [
191 | "The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.\n",
192 | "\n",
193 | "Dividing the `train` split of this dataset into (train, validation, test) with 80%, 10%, 10% of the data respectively."
194 | ]
195 | },
196 | {
197 | "cell_type": "code",
198 | "metadata": {
199 | "id": "SQ9xK9F2wGD8"
200 | },
201 | "source": [
202 | "(train_examples, validation_examples, test_examples), info = tfds.load('rock_paper_scissors',\n",
203 | " with_info=True, \n",
204 | " as_supervised=True, \n",
205 | " split=['train[:80%]',\n",
206 | " 'train[80%:90%]',\n",
207 | " 'train[90%:]'])\n",
208 | "\n",
209 | "num_examples = info.splits['train'].num_examples\n",
210 | "num_classes = info.features['label'].num_classes"
211 | ],
212 | "execution_count": null,
213 | "outputs": []
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {
218 | "id": "pmXQYXNWwf19"
219 | },
220 | "source": [
221 | "### Format the Data\n",
222 | "\n",
223 | "Use the `tf.image` module to format the images for the task.\n",
224 | "\n",
225 | "Resize the images to a fixes input size, and rescale the input channels"
226 | ]
227 | },
228 | {
229 | "cell_type": "code",
230 | "metadata": {
231 | "id": "y7UyXblSwkUS"
232 | },
233 | "source": [
234 | "def format_image(image, label):\n",
235 | " image = tf.image.resize(image, IMAGE_SIZE) / 255.0\n",
236 | " return image, label"
237 | ],
238 | "execution_count": null,
239 | "outputs": []
240 | },
241 | {
242 | "cell_type": "markdown",
243 | "metadata": {
244 | "id": "1nrDR8CnwrVk"
245 | },
246 | "source": [
247 | "Now shuffle and batch the data\n"
248 | ]
249 | },
250 | {
251 | "cell_type": "code",
252 | "metadata": {
253 | "id": "zAEUG7vawxLm"
254 | },
255 | "source": [
256 | "BATCH_SIZE = 32 #@param {type:\"integer\"}"
257 | ],
258 | "execution_count": null,
259 | "outputs": []
260 | },
261 | {
262 | "cell_type": "code",
263 | "metadata": {
264 | "id": "fHEC9mbswxvM"
265 | },
266 | "source": [
267 | "train_batches = train_examples.shuffle(num_examples // 4).batch(BATCH_SIZE).map(format_image).prefetch(1)\n",
268 | "validation_batches = validation_examples.batch(BATCH_SIZE).map(format_image).prefetch(1)\n",
269 | "test_batches = test_examples.batch(1).map(format_image)"
270 | ],
271 | "execution_count": null,
272 | "outputs": []
273 | },
274 | {
275 | "cell_type": "markdown",
276 | "metadata": {
277 | "id": "ghQhZjgEw1cK"
278 | },
279 | "source": [
280 | "Inspect a batch"
281 | ]
282 | },
283 | {
284 | "cell_type": "code",
285 | "metadata": {
286 | "id": "gz0xsMCjwx54"
287 | },
288 | "source": [
289 | "for image_batch, label_batch in train_batches.take(1):\n",
290 | " pass\n",
291 | "\n",
292 | "image_batch.shape"
293 | ],
294 | "execution_count": null,
295 | "outputs": []
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {
300 | "id": "FS_gVStowW3G"
301 | },
302 | "source": [
303 | "## Defining the Model\n",
304 | "\n",
305 | "All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.\n",
306 | "\n",
307 | "For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy."
308 | ]
309 | },
310 | {
311 | "cell_type": "code",
312 | "metadata": {
313 | "cellView": "form",
314 | "id": "RaJW3XrPyFiF"
315 | },
316 | "source": [
317 | "do_fine_tuning = False #@param {type:\"boolean\"}"
318 | ],
319 | "execution_count": null,
320 | "outputs": []
321 | },
322 | {
323 | "cell_type": "code",
324 | "metadata": {
325 | "id": "90DUQi59NXd4"
326 | },
327 | "source": [
328 | "feature_extractor = hub.KerasLayer(MODULE_HANDLE,\n",
329 | " input_shape=IMAGE_SIZE + (3,), \n",
330 | " output_shape=[FV_SIZE],\n",
331 | " trainable=do_fine_tuning)"
332 | ],
333 | "execution_count": null,
334 | "outputs": []
335 | },
336 | {
337 | "cell_type": "code",
338 | "metadata": {
339 | "id": "3MoRX4MsNXd5"
340 | },
341 | "source": [
342 | "print(\"Building model with\", MODULE_HANDLE)\n",
343 | "\n",
344 | "model = tf.keras.Sequential([\n",
345 | " feature_extractor,\n",
346 | " tf.keras.layers.Dense(num_classes, activation='softmax')\n",
347 | "])\n",
348 | "\n",
349 | "model.summary()"
350 | ],
351 | "execution_count": null,
352 | "outputs": []
353 | },
354 | {
355 | "cell_type": "code",
356 | "metadata": {
357 | "id": "2SaXakrGNXd5"
358 | },
359 | "source": [
360 | "#@title (Optional) Unfreeze some layers\n",
361 | "NUM_LAYERS = 10 #@param {type:\"slider\", min:1, max:50, step:1}\n",
362 | " \n",
363 | "if do_fine_tuning:\n",
364 | " feature_extractor.trainable = True\n",
365 | " \n",
366 | " for layer in model.layers[-NUM_LAYERS:]:\n",
367 | " layer.trainable = True\n",
368 | "\n",
369 | "else:\n",
370 | " feature_extractor.trainable = False"
371 | ],
372 | "execution_count": null,
373 | "outputs": []
374 | },
375 | {
376 | "cell_type": "markdown",
377 | "metadata": {
378 | "id": "u2e5WupIw2N2"
379 | },
380 | "source": [
381 | "## Training the Model"
382 | ]
383 | },
384 | {
385 | "cell_type": "code",
386 | "metadata": {
387 | "id": "9f3yBUvkd_VJ"
388 | },
389 | "source": [
390 | "if do_fine_tuning:\n",
391 | " model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),\n",
392 | " loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n",
393 | " metrics=['accuracy'])\n",
394 | "else:\n",
395 | " model.compile(optimizer='adam',\n",
396 | " loss='sparse_categorical_crossentropy',\n",
397 | " metrics=['accuracy'])"
398 | ],
399 | "execution_count": null,
400 | "outputs": []
401 | },
402 | {
403 | "cell_type": "code",
404 | "metadata": {
405 | "id": "w_YKX2Qnfg6x"
406 | },
407 | "source": [
408 | "EPOCHS = 5\n",
409 | "\n",
410 | "hist = model.fit(train_batches,\n",
411 | " epochs=EPOCHS,\n",
412 | " validation_data=validation_batches)"
413 | ],
414 | "execution_count": null,
415 | "outputs": []
416 | },
417 | {
418 | "cell_type": "markdown",
419 | "metadata": {
420 | "id": "u_psFoTeLpHU"
421 | },
422 | "source": [
423 | "## Export the Model"
424 | ]
425 | },
426 | {
427 | "cell_type": "code",
428 | "metadata": {
429 | "id": "XaSb5nVzHcVv"
430 | },
431 | "source": [
432 | "RPS_SAVED_MODEL = \"rps_saved_model\""
433 | ],
434 | "execution_count": null,
435 | "outputs": []
436 | },
437 | {
438 | "cell_type": "markdown",
439 | "metadata": {
440 | "id": "fZqRAg1uz1Nu"
441 | },
442 | "source": [
443 | "Export the SavedModel"
444 | ]
445 | },
446 | {
447 | "cell_type": "code",
448 | "metadata": {
449 | "id": "yJMue5YgnwtN"
450 | },
451 | "source": [
452 | "tf.saved_model.save(model, RPS_SAVED_MODEL)"
453 | ],
454 | "execution_count": null,
455 | "outputs": []
456 | },
457 | {
458 | "cell_type": "code",
459 | "metadata": {
460 | "id": "SOQF4cOan0SY"
461 | },
462 | "source": [
463 | "%%bash -s $RPS_SAVED_MODEL\n",
464 | "saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default"
465 | ],
466 | "execution_count": null,
467 | "outputs": []
468 | },
469 | {
470 | "cell_type": "code",
471 | "metadata": {
472 | "id": "FY7QGBgBytwX"
473 | },
474 | "source": [
475 | "loaded = tf.saved_model.load(RPS_SAVED_MODEL)"
476 | ],
477 | "execution_count": null,
478 | "outputs": []
479 | },
480 | {
481 | "cell_type": "code",
482 | "metadata": {
483 | "id": "tIhPyMISz952"
484 | },
485 | "source": [
486 | "print(list(loaded.signatures.keys()))\n",
487 | "infer = loaded.signatures[\"serving_default\"]\n",
488 | "print(infer.structured_input_signature)\n",
489 | "print(infer.structured_outputs)"
490 | ],
491 | "execution_count": null,
492 | "outputs": []
493 | },
494 | {
495 | "cell_type": "markdown",
496 | "metadata": {
497 | "id": "XxLiLC8n0H16"
498 | },
499 | "source": [
500 | "## Convert Using TFLite's Converter"
501 | ]
502 | },
503 | {
504 | "cell_type": "code",
505 | "metadata": {
506 | "id": "WmSr2-yZoUhz"
507 | },
508 | "source": [
509 | "converter = tf.lite.TFLiteConverter.from_saved_model(RPS_SAVED_MODEL)\n",
510 | "converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]\n",
511 | "tflite_model = converter.convert()"
512 | ],
513 | "execution_count": null,
514 | "outputs": []
515 | },
516 | {
517 | "cell_type": "code",
518 | "metadata": {
519 | "id": "gMSyVdSdNXed"
520 | },
521 | "source": [
522 | "tflite_model_file = 'converted_model.tflite'\n",
523 | "\n",
524 | "with open(tflite_model_file, \"wb\") as f:\n",
525 | " f.write(tflite_model)"
526 | ],
527 | "execution_count": null,
528 | "outputs": []
529 | },
530 | {
531 | "cell_type": "markdown",
532 | "metadata": {
533 | "id": "BbTF6nd1KG2o"
534 | },
535 | "source": [
536 | "## Test the TFLite Model Using the Python Interpreter"
537 | ]
538 | },
539 | {
540 | "cell_type": "code",
541 | "metadata": {
542 | "id": "dg2NkVTmLUdJ"
543 | },
544 | "source": [
545 | "# Load TFLite model and allocate tensors.\n",
546 | "with open(tflite_model_file, 'rb') as fid:\n",
547 | " tflite_model = fid.read()\n",
548 | " \n",
549 | "interpreter = tf.lite.Interpreter(model_content=tflite_model)\n",
550 | "interpreter.allocate_tensors()\n",
551 | "\n",
552 | "input_index = interpreter.get_input_details()[0][\"index\"]\n",
553 | "output_index = interpreter.get_output_details()[0][\"index\"]"
554 | ],
555 | "execution_count": null,
556 | "outputs": []
557 | },
558 | {
559 | "cell_type": "code",
560 | "metadata": {
561 | "id": "snJQVs9JNglv"
562 | },
563 | "source": [
564 | "# Gather results for the randomly sampled test images\n",
565 | "predictions = []\n",
566 | "\n",
567 | "test_labels, test_imgs = [], []\n",
568 | "for img, label in tqdm(test_batches.take(10)):\n",
569 | " interpreter.set_tensor(input_index, img)\n",
570 | " interpreter.invoke()\n",
571 | " predictions.append(interpreter.get_tensor(output_index))\n",
572 | " \n",
573 | " test_labels.append(label.numpy()[0])\n",
574 | " test_imgs.append(img)"
575 | ],
576 | "execution_count": null,
577 | "outputs": []
578 | },
579 | {
580 | "cell_type": "code",
581 | "metadata": {
582 | "cellView": "form",
583 | "id": "YMTWNqPpNiAI"
584 | },
585 | "source": [
586 | "#@title Utility functions for plotting\n",
587 | "# Utilities for plotting\n",
588 | "\n",
589 | "class_names = ['rock', 'paper', 'scissors']\n",
590 | "\n",
591 | "def plot_image(i, predictions_array, true_label, img):\n",
592 | " predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n",
593 | " plt.grid(False)\n",
594 | " plt.xticks([])\n",
595 | " plt.yticks([])\n",
596 | " \n",
597 | " img = np.squeeze(img)\n",
598 | " \n",
599 | " plt.imshow(img, cmap=plt.cm.binary)\n",
600 | " \n",
601 | " predicted_label = np.argmax(predictions_array)\n",
602 | " \n",
603 | " print(type(predicted_label), type(true_label))\n",
604 | " \n",
605 | " if predicted_label == true_label:\n",
606 | " color = 'green'\n",
607 | " else:\n",
608 | " color = 'red'\n",
609 | " \n",
610 | " plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n",
611 | " 100*np.max(predictions_array),\n",
612 | " class_names[true_label]), color=color)"
613 | ],
614 | "execution_count": null,
615 | "outputs": []
616 | },
617 | {
618 | "cell_type": "code",
619 | "metadata": {
620 | "id": "1-lbnicPNkZs"
621 | },
622 | "source": [
623 | "#@title Visualize the outputs { run: \"auto\" }\n",
624 | "index = 0 #@param {type:\"slider\", min:0, max:9, step:1}\n",
625 | "plt.figure(figsize=(6,3))\n",
626 | "plt.subplot(1,2,1)\n",
627 | "plot_image(index, predictions, test_labels, test_imgs)\n",
628 | "plt.show()"
629 | ],
630 | "execution_count": null,
631 | "outputs": []
632 | },
633 | {
634 | "cell_type": "markdown",
635 | "metadata": {
636 | "id": "5i2T2NnnNXek"
637 | },
638 | "source": [
639 | "Create a file to save the labels."
640 | ]
641 | },
642 | {
643 | "cell_type": "code",
644 | "metadata": {
645 | "id": "5RD7Sk5VNXek"
646 | },
647 | "source": [
648 | "with open('labels.txt', 'w') as f:\n",
649 | " f.write('\\n'.join(class_names))"
650 | ],
651 | "execution_count": null,
652 | "outputs": []
653 | },
654 | {
655 | "cell_type": "markdown",
656 | "metadata": {
657 | "id": "PmZRieHmKLY5"
658 | },
659 | "source": [
660 | "If you are running this notebook in a Colab, you can run the cell below to download the model and labels to your local disk.\n",
661 | "\n",
662 | "**Note**: If the files do not download when you run the cell, try running the cell a second time. Your browser might prompt you to allow multiple files to be downloaded. "
663 | ]
664 | },
665 | {
666 | "cell_type": "code",
667 | "metadata": {
668 | "id": "0jJAxrQB2VFw"
669 | },
670 | "source": [
671 | "try:\n",
672 | " from google.colab import files\n",
673 | " files.download('converted_model.tflite')\n",
674 | " files.download('labels.txt')\n",
675 | "except:\n",
676 | " pass"
677 | ],
678 | "execution_count": null,
679 | "outputs": []
680 | },
681 | {
682 | "cell_type": "markdown",
683 | "metadata": {
684 | "id": "BDlmpjC6VnFZ"
685 | },
686 | "source": [
687 | "# Prepare the Test Images for Download (Optional)"
688 | ]
689 | },
690 | {
691 | "cell_type": "markdown",
692 | "metadata": {
693 | "id": "_1ja_WA0WZOH"
694 | },
695 | "source": [
696 | "This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples"
697 | ]
698 | },
699 | {
700 | "cell_type": "code",
701 | "metadata": {
702 | "id": "fzLKEBrfTREA"
703 | },
704 | "source": [
705 | "!mkdir -p test_images"
706 | ],
707 | "execution_count": null,
708 | "outputs": []
709 | },
710 | {
711 | "cell_type": "code",
712 | "metadata": {
713 | "id": "Qn7ukNQCSewb"
714 | },
715 | "source": [
716 | "from PIL import Image\n",
717 | "\n",
718 | "for index, (image, label) in enumerate(test_batches.take(50)):\n",
719 | " image = tf.cast(image * 255.0, tf.uint8)\n",
720 | " image = tf.squeeze(image).numpy()\n",
721 | " pil_image = Image.fromarray(image)\n",
722 | " pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))"
723 | ],
724 | "execution_count": null,
725 | "outputs": []
726 | },
727 | {
728 | "cell_type": "code",
729 | "metadata": {
730 | "id": "xVKKWUG8UMO5"
731 | },
732 | "source": [
733 | "!ls test_images"
734 | ],
735 | "execution_count": null,
736 | "outputs": []
737 | },
738 | {
739 | "cell_type": "code",
740 | "metadata": {
741 | "id": "l_w_-UdlS9Vi"
742 | },
743 | "source": [
744 | "!zip -qq rps_test_images.zip -r test_images/"
745 | ],
746 | "execution_count": null,
747 | "outputs": []
748 | },
749 | {
750 | "cell_type": "markdown",
751 | "metadata": {
752 | "id": "j_7FfE5NNXem"
753 | },
754 | "source": [
755 | "If you are running this notebook in a Colab, you can run the cell below to download the Zip file with the images to your local disk. \n",
756 | "\n",
757 | "**Note**: If the Zip file does not download when you run the cell, try running the cell a second time."
758 | ]
759 | },
760 | {
761 | "cell_type": "code",
762 | "metadata": {
763 | "id": "Giva6EHwWm6Y"
764 | },
765 | "source": [
766 | "try:\n",
767 | " files.download('rps_test_images.zip')\n",
768 | "except:\n",
769 | " pass"
770 | ],
771 | "execution_count": null,
772 | "outputs": []
773 | }
774 | ]
775 | }
--------------------------------------------------------------------------------
/Week3 Quiz Answers.txt:
--------------------------------------------------------------------------------
1 | Week 3 Quiz
2 |
3 | Question 1
4 | What technology is used to deploy addons like TensorFlow Lite to iOS applications?
5 | Gradle
6 | Cocoapods - Correct
7 | Applepods
8 | VSNs
9 |
10 | Question 2
11 | What is the name of the pod that you use to add TF Lite to an iOS app with the Swift language?
12 | TensorFlowLite
13 | TensorFlowSwift
14 | TensorFlowLiteSwift - Correct
15 | LiteSwift
16 |
17 | Question 3
18 | How do you deploy a model to iOS for offline use?
19 | You convert it to swift code and use it as an activity
20 | You add it as part of the app bundle - Correct
21 | You bundle model and interpreter as a resource file
22 | You download it at runtime
23 |
24 | Question 4
25 | How does iOS represent images in memory?
26 | An NSPixelArray
27 | An array of bytes
28 | A CVPixelBuffer - Correct
29 | Image class
30 |
31 | Question 5
32 | How do you do inference with a TF Lite interpreter in Swift?
33 | Copy values to input tensor, call interpreter.invoke(), copy outputs to output Tensor - Correct
34 | interpreter.run(input, output)
35 | Copy values to input tensor, call interpreter.run(), copy outputs to output Tensor
36 | interpreter.invoke(input, output)
37 |
38 | Question 6
39 | How do you specify the number of threads that the interpreter should use?
40 | Call the setThreads() method on an InterpreterOptions object and specify the desired amount
41 | Use an InterpreterOptions object and set its threads property to the desired amount
42 | Use an InterpreterOptions object and set its useThreads property to true
43 | Use an InterpreterOptions object and set its threadCount property to the desired amount - Correct
44 |
45 | Question 7
46 | What format is an image in a CVPixelBuffer?
47 | RGB_32
48 | BGRA_32 - Correct
49 | RGBA_32
50 | BGR_32
51 |
52 | Question 8
53 | How can you tell if a model is quantized at runtime?
54 | Check the inputTensor Data type. If it’s uInt8, the model is quantized - Correct
55 | Check the isQuantized property on the interpreter. If it is true, the model is quantized
56 | Check the isQuantized property on the input tensor. If it is true, the model is quantized
57 | Check the modelFormat data type. If it’s uInt8, the model is quantized
58 |
59 | Question 9
60 | In what order does the object detection model report the bounding box parameters?
61 | x, y, width, height
62 | y, x, width, height
63 | x, y, height, width
64 | y, x, height, width - Correct
--------------------------------------------------------------------------------
/Week3_Assignment_Solution.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "accelerator": "GPU",
6 | "colab": {
7 | "name": "C2_W3_Assignment_Solution.ipynb",
8 | "private_outputs": true,
9 | "provenance": [],
10 | "collapsed_sections": [],
11 | "toc_visible": true,
12 | "include_colab_link": true
13 | },
14 | "kernelspec": {
15 | "display_name": "Python 3",
16 | "language": "python",
17 | "name": "python3"
18 | },
19 | "language_info": {
20 | "codemirror_mode": {
21 | "name": "ipython",
22 | "version": 3
23 | },
24 | "file_extension": ".py",
25 | "mimetype": "text/x-python",
26 | "name": "python",
27 | "nbconvert_exporter": "python",
28 | "pygments_lexer": "ipython3",
29 | "version": "3.8.5"
30 | }
31 | },
32 | "cells": [
33 | {
34 | "cell_type": "markdown",
35 | "metadata": {
36 | "id": "view-in-github",
37 | "colab_type": "text"
38 | },
39 | "source": [
40 | " "
41 | ]
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "metadata": {
46 | "id": "Za8-Nr5k11fh"
47 | },
48 | "source": [
49 | "##### Copyright 2018 The TensorFlow Authors."
50 | ]
51 | },
52 | {
53 | "cell_type": "code",
54 | "metadata": {
55 | "cellView": "form",
56 | "id": "Eq10uEbw0E4l"
57 | },
58 | "source": [
59 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
60 | "# you may not use this file except in compliance with the License.\n",
61 | "# You may obtain a copy of the License at\n",
62 | "#\n",
63 | "# https://www.apache.org/licenses/LICENSE-2.0\n",
64 | "#\n",
65 | "# Unless required by applicable law or agreed to in writing, software\n",
66 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
67 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
68 | "# See the License for the specific language governing permissions and\n",
69 | "# limitations under the License."
70 | ],
71 | "execution_count": null,
72 | "outputs": []
73 | },
74 | {
75 | "cell_type": "markdown",
76 | "metadata": {
77 | "id": "oYM61xrTsP5d"
78 | },
79 | "source": [
80 | "# Rock, Paper & Scissors with TensorFlow Hub - TFLite"
81 | ]
82 | },
83 | {
84 | "cell_type": "markdown",
85 | "metadata": {
86 | "id": "xWFpUd1yy3gt"
87 | },
88 | "source": [
89 | "## Setup"
90 | ]
91 | },
92 | {
93 | "cell_type": "code",
94 | "metadata": {
95 | "id": "110fGB18UNJn"
96 | },
97 | "source": [
98 | "try:\n",
99 | " %tensorflow_version 2.x\n",
100 | "except:\n",
101 | " pass"
102 | ],
103 | "execution_count": null,
104 | "outputs": []
105 | },
106 | {
107 | "cell_type": "code",
108 | "metadata": {
109 | "id": "dlauq-4FWGZM"
110 | },
111 | "source": [
112 | "import numpy as np\n",
113 | "import matplotlib.pylab as plt\n",
114 | "\n",
115 | "import tensorflow as tf\n",
116 | "import tensorflow_hub as hub\n",
117 | "\n",
118 | "from tqdm import tqdm\n",
119 | "\n",
120 | "print(\"\\u2022 Using TensorFlow Version:\", tf.__version__)\n",
121 | "print(\"\\u2022 Using TensorFlow Hub Version: \", hub.__version__)\n",
122 | "print('\\u2022 GPU Device Found.' if tf.test.is_gpu_available() else '\\u2022 GPU Device Not Found. Running on CPU')"
123 | ],
124 | "execution_count": null,
125 | "outputs": []
126 | },
127 | {
128 | "cell_type": "markdown",
129 | "metadata": {
130 | "id": "mmaHHH7Pvmth"
131 | },
132 | "source": [
133 | "## Select the Hub/TF2 Module to Use\n",
134 | "\n",
135 | "Hub modules for TF 1.x won't work here, please use one of the selections provided."
136 | ]
137 | },
138 | {
139 | "cell_type": "code",
140 | "metadata": {
141 | "id": "FlsEcKVeuCnf"
142 | },
143 | "source": [
144 | "module_selection = (\"mobilenet_v2\", 224, 1280) #@param [\"(\\\"mobilenet_v2\\\", 224, 1280)\", \"(\\\"inception_v3\\\", 299, 2048)\"] {type:\"raw\", allow-input: true}\n",
145 | "handle_base, pixels, FV_SIZE = module_selection\n",
146 | "MODULE_HANDLE =\"https://tfhub.dev/google/tf2-preview/{}/feature_vector/4\".format(handle_base)\n",
147 | "IMAGE_SIZE = (pixels, pixels)\n",
148 | "print(\"Using {} with input size {} and output dimension {}\".format(MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))"
149 | ],
150 | "execution_count": null,
151 | "outputs": []
152 | },
153 | {
154 | "cell_type": "markdown",
155 | "metadata": {
156 | "id": "sYUsgwCBv87A"
157 | },
158 | "source": [
159 | "## Data Preprocessing"
160 | ]
161 | },
162 | {
163 | "cell_type": "markdown",
164 | "metadata": {
165 | "id": "8nqVX3KYwGPh"
166 | },
167 | "source": [
168 | "Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the `rock_paper_scissors` dataset.\n",
169 | "\n",
170 | "This `tfds` package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see [loading image data](../load_data/images.ipynb)"
171 | ]
172 | },
173 | {
174 | "cell_type": "code",
175 | "metadata": {
176 | "id": "jGvpkDj4wBup"
177 | },
178 | "source": [
179 | "import tensorflow_datasets as tfds\n",
180 | "tfds.disable_progress_bar()"
181 | ],
182 | "execution_count": null,
183 | "outputs": []
184 | },
185 | {
186 | "cell_type": "markdown",
187 | "metadata": {
188 | "id": "YkF4Boe5wN7N"
189 | },
190 | "source": [
191 | "The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.\n",
192 | "\n",
193 | "Dividing the `train` split of this dataset into (train, validation, test) with 80%, 10%, 10% of the data respectively."
194 | ]
195 | },
196 | {
197 | "cell_type": "code",
198 | "metadata": {
199 | "id": "SQ9xK9F2wGD8"
200 | },
201 | "source": [
202 | "(train_examples, validation_examples, test_examples), info = tfds.load('rock_paper_scissors',\n",
203 | " with_info=True, \n",
204 | " as_supervised=True, \n",
205 | " split=['train[:80%]',\n",
206 | " 'train[80%:90%]',\n",
207 | " 'train[90%:]'])\n",
208 | "\n",
209 | "num_examples = info.splits['train'].num_examples\n",
210 | "num_classes = info.features['label'].num_classes"
211 | ],
212 | "execution_count": null,
213 | "outputs": []
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {
218 | "id": "pmXQYXNWwf19"
219 | },
220 | "source": [
221 | "### Format the Data\n",
222 | "\n",
223 | "Use the `tf.image` module to format the images for the task.\n",
224 | "\n",
225 | "Resize the images to a fixes input size, and rescale the input channels"
226 | ]
227 | },
228 | {
229 | "cell_type": "code",
230 | "metadata": {
231 | "id": "y7UyXblSwkUS"
232 | },
233 | "source": [
234 | "def format_image(image, label):\n",
235 | " image = tf.image.resize(image, IMAGE_SIZE) / 255.0\n",
236 | " return image, label"
237 | ],
238 | "execution_count": null,
239 | "outputs": []
240 | },
241 | {
242 | "cell_type": "markdown",
243 | "metadata": {
244 | "id": "1nrDR8CnwrVk"
245 | },
246 | "source": [
247 | "Now shuffle and batch the data\n"
248 | ]
249 | },
250 | {
251 | "cell_type": "code",
252 | "metadata": {
253 | "id": "zAEUG7vawxLm"
254 | },
255 | "source": [
256 | "BATCH_SIZE = 32 #@param {type:\"integer\"}"
257 | ],
258 | "execution_count": null,
259 | "outputs": []
260 | },
261 | {
262 | "cell_type": "code",
263 | "metadata": {
264 | "id": "fHEC9mbswxvM"
265 | },
266 | "source": [
267 | "train_batches = train_examples.shuffle(num_examples // 4).batch(BATCH_SIZE).map(format_image).prefetch(1)\n",
268 | "validation_batches = validation_examples.batch(BATCH_SIZE).map(format_image).prefetch(1)\n",
269 | "test_batches = test_examples.batch(1).map(format_image)"
270 | ],
271 | "execution_count": null,
272 | "outputs": []
273 | },
274 | {
275 | "cell_type": "markdown",
276 | "metadata": {
277 | "id": "ghQhZjgEw1cK"
278 | },
279 | "source": [
280 | "Inspect a batch"
281 | ]
282 | },
283 | {
284 | "cell_type": "code",
285 | "metadata": {
286 | "id": "gz0xsMCjwx54"
287 | },
288 | "source": [
289 | "for image_batch, label_batch in train_batches.take(1):\n",
290 | " pass\n",
291 | "\n",
292 | "image_batch.shape"
293 | ],
294 | "execution_count": null,
295 | "outputs": []
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {
300 | "id": "FS_gVStowW3G"
301 | },
302 | "source": [
303 | "## Defining the Model\n",
304 | "\n",
305 | "All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.\n",
306 | "\n",
307 | "For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy."
308 | ]
309 | },
310 | {
311 | "cell_type": "code",
312 | "metadata": {
313 | "cellView": "form",
314 | "id": "RaJW3XrPyFiF"
315 | },
316 | "source": [
317 | "do_fine_tuning = False #@param {type:\"boolean\"}"
318 | ],
319 | "execution_count": null,
320 | "outputs": []
321 | },
322 | {
323 | "cell_type": "code",
324 | "metadata": {
325 | "id": "GS3b2GZOBCkP"
326 | },
327 | "source": [
328 | "feature_extractor = hub.KerasLayer(MODULE_HANDLE,\n",
329 | " input_shape=IMAGE_SIZE + (3,), \n",
330 | " output_shape=[FV_SIZE],\n",
331 | " trainable=do_fine_tuning)"
332 | ],
333 | "execution_count": null,
334 | "outputs": []
335 | },
336 | {
337 | "cell_type": "code",
338 | "metadata": {
339 | "id": "2T_Km85XBCkP"
340 | },
341 | "source": [
342 | "print(\"Building model with\", MODULE_HANDLE)\n",
343 | "\n",
344 | "model = tf.keras.Sequential([\n",
345 | " feature_extractor,\n",
346 | " tf.keras.layers.Dense(num_classes, activation='softmax')\n",
347 | "])\n",
348 | "\n",
349 | "model.summary()"
350 | ],
351 | "execution_count": null,
352 | "outputs": []
353 | },
354 | {
355 | "cell_type": "code",
356 | "metadata": {
357 | "id": "FhTJQcjzBCkP"
358 | },
359 | "source": [
360 | "#@title (Optional) Unfreeze some layers\n",
361 | "NUM_LAYERS = 10 #@param {type:\"slider\", min:1, max:50, step:1}\n",
362 | " \n",
363 | "if do_fine_tuning:\n",
364 | " feature_extractor.trainable = True\n",
365 | " \n",
366 | " for layer in model.layers[-NUM_LAYERS:]:\n",
367 | " layer.trainable = True\n",
368 | "\n",
369 | "else:\n",
370 | " feature_extractor.trainable = False"
371 | ],
372 | "execution_count": null,
373 | "outputs": []
374 | },
375 | {
376 | "cell_type": "markdown",
377 | "metadata": {
378 | "id": "u2e5WupIw2N2"
379 | },
380 | "source": [
381 | "## Training the Model"
382 | ]
383 | },
384 | {
385 | "cell_type": "code",
386 | "metadata": {
387 | "id": "9f3yBUvkd_VJ"
388 | },
389 | "source": [
390 | "if do_fine_tuning:\n",
391 | " model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),\n",
392 | " loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n",
393 | " metrics=['accuracy'])\n",
394 | "else:\n",
395 | " model.compile(optimizer='adam',\n",
396 | " loss='sparse_categorical_crossentropy',\n",
397 | " metrics=['accuracy'])"
398 | ],
399 | "execution_count": null,
400 | "outputs": []
401 | },
402 | {
403 | "cell_type": "code",
404 | "metadata": {
405 | "id": "w_YKX2Qnfg6x"
406 | },
407 | "source": [
408 | "EPOCHS = 5\n",
409 | "\n",
410 | "hist = model.fit(train_batches,\n",
411 | " epochs=EPOCHS,\n",
412 | " validation_data=validation_batches)"
413 | ],
414 | "execution_count": null,
415 | "outputs": []
416 | },
417 | {
418 | "cell_type": "markdown",
419 | "metadata": {
420 | "id": "u_psFoTeLpHU"
421 | },
422 | "source": [
423 | "## Export the Model"
424 | ]
425 | },
426 | {
427 | "cell_type": "code",
428 | "metadata": {
429 | "id": "XaSb5nVzHcVv"
430 | },
431 | "source": [
432 | "RPS_SAVED_MODEL = \"rps_saved_model\""
433 | ],
434 | "execution_count": null,
435 | "outputs": []
436 | },
437 | {
438 | "cell_type": "markdown",
439 | "metadata": {
440 | "id": "fZqRAg1uz1Nu"
441 | },
442 | "source": [
443 | "Export the SavedModel"
444 | ]
445 | },
446 | {
447 | "cell_type": "code",
448 | "metadata": {
449 | "id": "yJMue5YgnwtN"
450 | },
451 | "source": [
452 | "tf.saved_model.save(model, RPS_SAVED_MODEL)"
453 | ],
454 | "execution_count": null,
455 | "outputs": []
456 | },
457 | {
458 | "cell_type": "code",
459 | "metadata": {
460 | "id": "SOQF4cOan0SY"
461 | },
462 | "source": [
463 | "%%bash -s $RPS_SAVED_MODEL\n",
464 | "saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default"
465 | ],
466 | "execution_count": null,
467 | "outputs": []
468 | },
469 | {
470 | "cell_type": "code",
471 | "metadata": {
472 | "id": "FY7QGBgBytwX"
473 | },
474 | "source": [
475 | "loaded = tf.saved_model.load(RPS_SAVED_MODEL)"
476 | ],
477 | "execution_count": null,
478 | "outputs": []
479 | },
480 | {
481 | "cell_type": "code",
482 | "metadata": {
483 | "id": "tIhPyMISz952"
484 | },
485 | "source": [
486 | "print(list(loaded.signatures.keys()))\n",
487 | "infer = loaded.signatures[\"serving_default\"]\n",
488 | "print(infer.structured_input_signature)\n",
489 | "print(infer.structured_outputs)"
490 | ],
491 | "execution_count": null,
492 | "outputs": []
493 | },
494 | {
495 | "cell_type": "markdown",
496 | "metadata": {
497 | "id": "XxLiLC8n0H16"
498 | },
499 | "source": [
500 | "## Convert Using TFLite's Converter"
501 | ]
502 | },
503 | {
504 | "cell_type": "code",
505 | "metadata": {
506 | "id": "WmSr2-yZoUhz"
507 | },
508 | "source": [
509 | "converter = tf.lite.TFLiteConverter.from_saved_model(RPS_SAVED_MODEL)\n",
510 | "converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]\n",
511 | "tflite_model = converter.convert()"
512 | ],
513 | "execution_count": null,
514 | "outputs": []
515 | },
516 | {
517 | "cell_type": "code",
518 | "metadata": {
519 | "id": "9j0yvfIUBCkX"
520 | },
521 | "source": [
522 | "tflite_model_file = 'converted_model.tflite'\n",
523 | "\n",
524 | "with open(tflite_model_file, \"wb\") as f:\n",
525 | " f.write(tflite_model)"
526 | ],
527 | "execution_count": null,
528 | "outputs": []
529 | },
530 | {
531 | "cell_type": "markdown",
532 | "metadata": {
533 | "id": "BbTF6nd1KG2o"
534 | },
535 | "source": [
536 | "## Test the TFLite Model Using the Python Interpreter"
537 | ]
538 | },
539 | {
540 | "cell_type": "code",
541 | "metadata": {
542 | "id": "dg2NkVTmLUdJ"
543 | },
544 | "source": [
545 | "# Load TFLite model and allocate tensors.\n",
546 | "with open(tflite_model_file, 'rb') as fid:\n",
547 | " tflite_model = fid.read()\n",
548 | " \n",
549 | "interpreter = tf.lite.Interpreter(model_content=tflite_model)\n",
550 | "interpreter.allocate_tensors()\n",
551 | "\n",
552 | "input_index = interpreter.get_input_details()[0][\"index\"]\n",
553 | "output_index = interpreter.get_output_details()[0][\"index\"]"
554 | ],
555 | "execution_count": null,
556 | "outputs": []
557 | },
558 | {
559 | "cell_type": "code",
560 | "metadata": {
561 | "id": "snJQVs9JNglv"
562 | },
563 | "source": [
564 | "# Gather results for the randomly sampled test images\n",
565 | "predictions = []\n",
566 | "\n",
567 | "test_labels, test_imgs = [], []\n",
568 | "for img, label in tqdm(test_batches.take(10)):\n",
569 | " interpreter.set_tensor(input_index, img)\n",
570 | " interpreter.invoke()\n",
571 | " predictions.append(interpreter.get_tensor(output_index))\n",
572 | " \n",
573 | " test_labels.append(label.numpy()[0])\n",
574 | " test_imgs.append(img)"
575 | ],
576 | "execution_count": null,
577 | "outputs": []
578 | },
579 | {
580 | "cell_type": "code",
581 | "metadata": {
582 | "cellView": "form",
583 | "id": "YMTWNqPpNiAI"
584 | },
585 | "source": [
586 | "#@title Utility functions for plotting\n",
587 | "# Utilities for plotting\n",
588 | "\n",
589 | "class_names = ['rock', 'paper', 'scissors']\n",
590 | "\n",
591 | "def plot_image(i, predictions_array, true_label, img):\n",
592 | " predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n",
593 | " plt.grid(False)\n",
594 | " plt.xticks([])\n",
595 | " plt.yticks([])\n",
596 | " \n",
597 | " img = np.squeeze(img)\n",
598 | " \n",
599 | " plt.imshow(img, cmap=plt.cm.binary)\n",
600 | " \n",
601 | " predicted_label = np.argmax(predictions_array)\n",
602 | " \n",
603 | " print(type(predicted_label), type(true_label))\n",
604 | " \n",
605 | " if predicted_label == true_label:\n",
606 | " color = 'green'\n",
607 | " else:\n",
608 | " color = 'red'\n",
609 | " \n",
610 | " plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n",
611 | " 100*np.max(predictions_array),\n",
612 | " class_names[true_label]), color=color)"
613 | ],
614 | "execution_count": null,
615 | "outputs": []
616 | },
617 | {
618 | "cell_type": "code",
619 | "metadata": {
620 | "id": "1-lbnicPNkZs"
621 | },
622 | "source": [
623 | "#@title Visualize the outputs { run: \"auto\" }\n",
624 | "index = 0 #@param {type:\"slider\", min:0, max:9, step:1}\n",
625 | "plt.figure(figsize=(6,3))\n",
626 | "plt.subplot(1,2,1)\n",
627 | "plot_image(index, predictions, test_labels, test_imgs)\n",
628 | "plt.show()"
629 | ],
630 | "execution_count": null,
631 | "outputs": []
632 | },
633 | {
634 | "cell_type": "markdown",
635 | "metadata": {
636 | "id": "iHLKgM8eBCkb"
637 | },
638 | "source": [
639 | "Create a file to save the labels."
640 | ]
641 | },
642 | {
643 | "cell_type": "code",
644 | "metadata": {
645 | "id": "XQtnbPnbBCkb"
646 | },
647 | "source": [
648 | "with open('labels.txt', 'w') as f:\n",
649 | " f.write('\\n'.join(class_names))"
650 | ],
651 | "execution_count": null,
652 | "outputs": []
653 | },
654 | {
655 | "cell_type": "markdown",
656 | "metadata": {
657 | "id": "PmZRieHmKLY5"
658 | },
659 | "source": [
660 | "If you are running this notebook in a Colab, you can run the cell below to download the model and labels to your local disk.\n",
661 | "\n",
662 | "**Note**: If the files do not download when you run the cell, try running the cell a second time. Your browser might prompt you to allow multiple files to be downloaded. "
663 | ]
664 | },
665 | {
666 | "cell_type": "code",
667 | "metadata": {
668 | "id": "0jJAxrQB2VFw"
669 | },
670 | "source": [
671 | "try:\n",
672 | " from google.colab import files\n",
673 | " files.download('converted_model.tflite')\n",
674 | " files.download('labels.txt')\n",
675 | "except:\n",
676 | " pass"
677 | ],
678 | "execution_count": null,
679 | "outputs": []
680 | },
681 | {
682 | "cell_type": "markdown",
683 | "metadata": {
684 | "id": "BDlmpjC6VnFZ"
685 | },
686 | "source": [
687 | "# Prepare the Test Images for Download (Optional)"
688 | ]
689 | },
690 | {
691 | "cell_type": "markdown",
692 | "metadata": {
693 | "id": "_1ja_WA0WZOH"
694 | },
695 | "source": [
696 | "This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples"
697 | ]
698 | },
699 | {
700 | "cell_type": "code",
701 | "metadata": {
702 | "id": "fzLKEBrfTREA"
703 | },
704 | "source": [
705 | "!mkdir -p test_images"
706 | ],
707 | "execution_count": null,
708 | "outputs": []
709 | },
710 | {
711 | "cell_type": "code",
712 | "metadata": {
713 | "id": "Qn7ukNQCSewb"
714 | },
715 | "source": [
716 | "from PIL import Image\n",
717 | "\n",
718 | "for index, (image, label) in enumerate(test_batches.take(50)):\n",
719 | " image = tf.cast(image * 255.0, tf.uint8)\n",
720 | " image = tf.squeeze(image).numpy()\n",
721 | " pil_image = Image.fromarray(image)\n",
722 | " pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))"
723 | ],
724 | "execution_count": null,
725 | "outputs": []
726 | },
727 | {
728 | "cell_type": "code",
729 | "metadata": {
730 | "id": "xVKKWUG8UMO5"
731 | },
732 | "source": [
733 | "!ls test_images"
734 | ],
735 | "execution_count": null,
736 | "outputs": []
737 | },
738 | {
739 | "cell_type": "code",
740 | "metadata": {
741 | "id": "l_w_-UdlS9Vi"
742 | },
743 | "source": [
744 | "!zip -qq rps_test_images.zip -r test_images/"
745 | ],
746 | "execution_count": null,
747 | "outputs": []
748 | },
749 | {
750 | "cell_type": "markdown",
751 | "metadata": {
752 | "id": "DctmWA30BCkd"
753 | },
754 | "source": [
755 | "If you are running this notebook in a Colab, you can run the cell below to download the Zip file with the images to your local disk. \n",
756 | "\n",
757 | "**Note**: If the Zip file does not download when you run the cell, try running the cell a second time."
758 | ]
759 | },
760 | {
761 | "cell_type": "code",
762 | "metadata": {
763 | "id": "Giva6EHwWm6Y"
764 | },
765 | "source": [
766 | "try:\n",
767 | " files.download('rps_test_images.zip')\n",
768 | "except:\n",
769 | " pass"
770 | ],
771 | "execution_count": null,
772 | "outputs": []
773 | }
774 | ]
775 | }
--------------------------------------------------------------------------------
/Week4 Quiz Answers.txt:
--------------------------------------------------------------------------------
1 | Week 4 Quiz
2 |
3 | Question 1
4 | Which Devices support TensorFlow Lite for Inference? (Check all that apply)
5 | Coral - Correct
6 | RISC - Incorrect
7 | Raspberry Pi - Correct
8 | Sparkfun Edge - Correct
9 |
10 | Question 2
11 | With a Raspberry Pi, how can you use TensorFlow?
12 | Inference Only
13 | It doesn’t work on Pi
14 | Training Only
15 | Inference and Training - Correct
16 |
17 | Question 3
18 | If you only want to do inference on a Pi, what’s the best way?
19 | Compile all of TensorFlow from Source and run it
20 | Do nothing, the Pi base image has TensorFlow in it
21 | Install the full TensorFlow with Pip install
22 | Install the standalone interpreter using pip - Correct
23 |
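A minimal sketch of the "standalone interpreter" route from Question 3, as used later in Week4_Assignment_Solution.py (the exact pip package and wheel can vary by platform and Python version, so treat the install line as an assumption):

    # Install only the TFLite runtime instead of the full TensorFlow package
    #   pip3 install tflite-runtime   (platform-dependent; an assumption, not the only way)
    from tflite_runtime.interpreter import Interpreter  # small, inference-only interpreter
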
24 | Question 4
25 | When using ImageNet on a Raspberry Pi for Image Classification, how many classes are supported?
26 | 800
27 | 100
28 | 500
29 | 1000 - Correct
30 |
31 | Question 5
32 | How do you initialize the standalone interpreter in Python?
33 | tf.lite.load(lite_model)
34 | tf.lite.Interpreter(directory_of_saved_model)
35 | tf.lite.Interpreter(directory_of_lite_Model) - Correct
36 | tf.lite.load(saved_model)
37 |
38 | Question 6
39 | How do you get the input tensors for a model with the standalone interpreter?
40 | Call get_input_tensors() after calling allocate_tensors() on the interpreter
41 | Call get_input_tensors() after initializing the interpreter
42 | Call get_input_details() after initializing the interpreter
43 | Call get_input_details() after calling allocate_tensors() on the interpreter - Correct
44 |
45 | Question 7
46 | How do you perform inference using the interpreter?
47 | Set the Input tensor with the set_tensor command and then call invoke() - Correct
48 | Call invoke(), and pass it both the input and output tensors
49 | Call invoke(), and pass it the input tensor
50 | Just call invoke(), TensorFlow can do the rest
51 |
52 | Question 8
53 | How do you read the results of inference using the interpreter?
54 | Call invoke(), pass it the input and output tensors, and then read the output tensor
55 | Call invoke(), and then call get_tensor() on the interpreter to read the output - Correct
56 | Call invoke(), and the output will be rendered automatically
57 | Call invoke(), pass it the input tensor, read the results
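
To tie Questions 5 through 8 together, here is a minimal Python sketch of the standalone-interpreter inference flow; it mirrors Week4_Assignment_Solution.py below, and the model path 'model.tflite' is a placeholder assumption:

    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path='model.tflite')  # Q5: initialize from a .tflite file
    interpreter.allocate_tensors()                         # allocate before querying tensors
    input_details = interpreter.get_input_details()        # Q6: read details after allocate_tensors()
    output_details = interpreter.get_output_details()

    # Q7: set the input tensor, then call invoke()
    input_data = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

    # Q8: after invoke(), read the output with get_tensor()
    predictions = interpreter.get_tensor(output_details[0]['index'])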
--------------------------------------------------------------------------------
/Week4_Assignment_Solution.py:
--------------------------------------------------------------------------------
1 | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | from tflite_runtime.interpreter import Interpreter
16 | from PIL import Image
17 | import numpy as np
18 | import argparse
19 |
20 |
21 | parser = argparse.ArgumentParser(description='Rock, paper and scissors')
22 | parser.add_argument('--filename', type=str, help='Specify the filename', required=True)
23 | parser.add_argument('--model_path', type=str, help='Specify the model path', required=True)
24 |
25 | args = parser.parse_args()
26 |
27 | filename = args.filename
28 | model_path = args.model_path
29 |
30 | labels = ['Rock', 'Paper', 'Scissors']
31 |
32 | # Load TFLite model and allocate tensors
33 | interpreter = Interpreter(model_path=model_path)
34 | interpreter.allocate_tensors()
35 |
36 | # Get input and output tensors.
37 | input_details = interpreter.get_input_details()
38 | output_details = interpreter.get_output_details()
39 |
40 | # Read image with Pillow
41 | img = Image.open(filename).convert('RGB')
42 |
43 | # Get input size
44 | input_shape = input_details[0]['shape']
45 | size = input_shape[:2] if len(input_shape) == 3 else input_shape[1:3]
46 |
47 | # Preprocess image
48 | img = img.resize(size)
49 | img = np.array(img, dtype=np.float32)
50 | img = img / 255.
51 |
52 | # Add a batch dimension
53 | input_data = np.expand_dims(img, axis=0)
54 |
55 | # Point the data to be used for testing and run the interpreter
56 | interpreter.set_tensor(input_details[0]['index'], input_data)
57 | interpreter.invoke()
58 |
59 | # Obtain results and print the predicted category
60 | predictions = interpreter.get_tensor(output_details[0]['index'])
61 | predicted_label = np.argmax(predictions)
62 | print(labels[predicted_label])
--------------------------------------------------------------------------------