├── Images_for_Readme
└── Image1.png
├── LICENSE
├── README.md
└── TensorFlow_with_Colab_tutorial.ipynb
/Images_for_Readme/Image1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nkap23/TensorFlow_with_Colab_tutorial/fe8f565a10cf4dd65bbd988fca01bc29b63d7fe9/Images_for_Readme/Image1.png
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 Nisarg Kapkar
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # TensorFlow_with_Colab_tutorial
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 | This tutorial will guide you through all the steps required to train an object detection model, from collecting images to testing the trained model!
23 |
24 | Link to [tutorial](https://medium.com/@nisargkapkar/tensorflow-2-object-detection-api-with-google-colab-b2af171e81cc?source=friends_link&sk=0bb205df0e1c29a2e78c28671ddf4494)!
25 |
26 |
27 | Upcoming in Part 2 of the tutorial:
28 |
29 | - Calculating mAP (mean average precision) using eval.py
30 | - Converting the saved model to a TFLite model
31 | - Common issues and fixes
32 |
33 |
--------------------------------------------------------------------------------
/TensorFlow_with_Colab_tutorial.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "TensorFlow_with_Colab_tutorial.ipynb",
7 | "provenance": [],
8 | "collapsed_sections": []
9 | },
10 | "kernelspec": {
11 | "name": "python3",
12 | "display_name": "Python 3"
13 | },
14 | "accelerator": "GPU"
15 | },
16 | "cells": [
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {
20 | "id": "KfeWYiT6aNDa",
21 | "colab_type": "text"
22 | },
23 | "source": [
24 | "**TensorFlow 2 Object Detection API with Google Colab**\n",
25 | "\n",
26 | "Author: Nisarg Kapkar\n",
27 | "\n",
28 | "Link to [Medium Article](https://medium.com/@nisargkapkar/tensorflow-2-object-detection-api-with-google-colab-b2af171e81cc?source=friends_link&sk=0bb205df0e1c29a2e78c28671ddf4494)! \n",
29 | "\n",
30 | "NOT: Use this NoteBook in association with the mentioned Medium article, the article contains detailed information (which is not mentioned in this NoteBook) for Step 1, Step 2 and Step 13!\n",
31 | "\n",
32 | "Link to [GitHub Repository](https://github.com/Nkap23/TensorFlow_with_Colab_tutorial)!"
33 | ]
34 | },
35 | {
36 | "cell_type": "markdown",
37 | "metadata": {
38 | "id": "RAGVzME9DIPn",
39 | "colab_type": "text"
40 | },
41 | "source": [
42 | "Step 1- Prerequisites(Gather/Label images,Create label_map...)\n",
43 | "\n",
44 | "Refer the mentioned [Medium article](https://medium.com/@nisargkapkar/tensorflow-2-object-detection-api-with-google-colab-b2af171e81cc?source=friends_link&sk=0bb205df0e1c29a2e78c28671ddf4494) for more details!"
45 | ]
46 | },
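{
"cell_type": "code",
"metadata": {},
"source": [
"#(Optional) Example label_map.pbtxt- a minimal sketch, not part of the original tutorial.\n",
"#The class names 'cat' and 'dog' and the number of items are placeholders; use the labels you chose in Step 1.\n",
"\n",
"label_map_example=\"\"\"item {\n",
"    id: 1\n",
"    name: 'cat'\n",
"}\n",
"\n",
"item {\n",
"    id: 2\n",
"    name: 'dog'\n",
"}\n",
"\"\"\"\n",
"print(label_map_example)\n",
"\n",
"#To save it into the annotations folder, you could uncomment the lines below (after mounting Drive in Step 4):\n",
"# with open('/content/gdrive/My Drive/TensorFlow/workspace/training_demo/annotations/label_map.pbtxt','w') as f:\n",
"#     f.write(label_map_example)"
],
"execution_count": null,
"outputs": []
},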
47 | {
48 | "cell_type": "markdown",
49 | "metadata": {
50 | "id": "NGr1XCdZMVGL",
51 | "colab_type": "text"
52 | },
53 | "source": [
54 | "Step 2- Set up the directory structure on Google Drive.\n",
55 | "\n",
56 | "Refer the mentioned [Medium article](https://medium.com/@nisargkapkar/tensorflow-2-object-detection-api-with-google-colab-b2af171e81cc?source=friends_link&sk=0bb205df0e1c29a2e78c28671ddf4494) for more details!"
57 | ]
58 | },
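{
"cell_type": "code",
"metadata": {},
"source": [
"#(Optional) Create the expected Drive folders from Colab- a minimal sketch, not part of the original tutorial.\n",
"#The folder names below are assumed from the paths used later in this notebook; the Medium article\n",
"#describes the full directory structure. Run this cell only after mounting Drive in Step 4.\n",
"\n",
"import os\n",
"\n",
"base='/content/gdrive/My Drive/TensorFlow'\n",
"folders=[\n",
"    'workspace/training_demo/annotations',\n",
"    'workspace/training_demo/images/train',\n",
"    'workspace/training_demo/images/test',\n",
"    'workspace/training_demo/models',\n",
"    'workspace/training_demo/exported-models',\n",
"    'scripts/preprocessing'\n",
"]\n",
"for folder in folders:\n",
"    path=os.path.join(base,folder)\n",
"    os.makedirs(path,exist_ok=True)\n",
"    print('Ready:',path)"
],
"execution_count": null,
"outputs": []
},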
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {
62 | "id": "O5RkAC6P7CKg",
63 | "colab_type": "text"
64 | },
65 | "source": [
66 | "Step 3- Select Hardware Accelerator.\n",
67 | "\n",
68 | "On Colab, go to Runtime->Change Runtime Type and select Hardware accelerator as GPU."
69 | ]
70 | },
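{
"cell_type": "code",
"metadata": {},
"source": [
"#(Optional) Verify that the GPU runtime is active- a quick sanity check, not part of the original tutorial.\n",
"\n",
"import tensorflow as tf\n",
"print('GPUs visible to TensorFlow:',tf.config.list_physical_devices('GPU'))\n",
"\n",
"#you can also inspect the GPU directly\n",
"!nvidia-smi"
],
"execution_count": null,
"outputs": []
},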
71 | {
72 | "cell_type": "markdown",
73 | "metadata": {
74 | "id": "BWBi1hxcP9io",
75 | "colab_type": "text"
76 | },
77 | "source": [
78 | "NOTE:\n",
79 | "If you have given different names to your files and folders, change the paths in cells below according to your files and folders!"
80 | ]
81 | },
82 | {
83 | "cell_type": "code",
84 | "metadata": {
85 | "id": "3Si3sOwi7e06",
86 | "colab_type": "code",
87 | "colab": {}
88 | },
89 | "source": [
90 | "#Step 4- Mount Google Drive.\n",
91 | "\n",
92 | "from google.colab import drive\n",
93 | "drive.mount('/content/gdrive')"
94 | ],
95 | "execution_count": null,
96 | "outputs": []
97 | },
98 | {
99 | "cell_type": "code",
100 | "metadata": {
101 | "id": "0dW4lm8371iZ",
102 | "colab_type": "code",
103 | "colab": {}
104 | },
105 | "source": [
106 | "#Step 5- Download TensorFlow Model Garden.\n",
107 | "\n",
108 | "#cd into the TensorFlow directory in your Google Drive\n",
109 | "%cd '/content/gdrive/My Drive/TensorFlow'\n",
110 | "\n",
111 | "#and clone the TensorFlow Model Garden repository\n",
112 | "!git clone https://github.com/tensorflow/models.git",
113 | "\n\n",
114 | "#using a older version of repo (21 Sept 2020)\n",
115 | "%cd '/content/gdrive/MyDrive/TensorFlow/models'\n",
116 | "!git checkout -f e04dafd04d69053d3733bb91d47d0d95bc2c8199"
117 | ],
118 | "execution_count": null,
119 | "outputs": []
120 | },
121 | {
122 | "cell_type": "code",
123 | "metadata": {
124 | "id": "So0uPn6C8j-Z",
125 | "colab_type": "code",
126 | "colab": {}
127 | },
128 | "source": [
129 | "#Step 6- Install some required libraries and tools.\n",
130 | "\n",
131 | "!apt-get install protobuf-compiler python-lxml python-pil\n",
132 | "!pip install Cython pandas tf-slim lvis"
133 | ],
134 | "execution_count": null,
135 | "outputs": []
136 | },
137 | {
138 | "cell_type": "code",
139 | "metadata": {
140 | "id": "0BI9kNtN9Hca",
141 | "colab_type": "code",
142 | "colab": {}
143 | },
144 | "source": [
145 | "#Step 7- Compile the Protobuf libraries.\n",
146 | "\n",
147 | "#cd into 'TensorFlow/models/research'\n",
148 | "%cd '/content/gdrive/My Drive/TensorFlow/models/research/'\n",
149 | "!protoc object_detection/protos/*.proto --python_out=."
150 | ],
151 | "execution_count": null,
152 | "outputs": []
153 | },
154 | {
155 | "cell_type": "code",
156 | "metadata": {
157 | "id": "wEAg3ksA9Pih",
158 | "colab_type": "code",
159 | "colab": {}
160 | },
161 | "source": [
162 | "#Step 8- Set the environment.\n",
163 | "\n",
164 | "import os\n",
165 | "import sys\n",
166 | "os.environ['PYTHONPATH']+=\":/content/gdrive/My Drive/TensorFlow/models\"\n",
167 | "sys.path.append(\"/content/gdrive/My Drive/TensorFlow/models/research\")"
168 | ],
169 | "execution_count": null,
170 | "outputs": []
171 | },
172 | {
173 | "cell_type": "code",
174 | "metadata": {
175 | "id": "UT9-acjE9d2K",
176 | "colab_type": "code",
177 | "colab": {}
178 | },
179 | "source": [
180 | "#Step 9- Build and Install setup.py.\n",
181 | "\n",
182 | "!python setup.py build\n",
183 | "!python setup.py install"
184 | ],
185 | "execution_count": null,
186 | "outputs": []
187 | },
188 | {
189 | "cell_type": "code",
190 | "metadata": {
191 | "id": "7M5qUba89u3S",
192 | "colab_type": "code",
193 | "colab": {}
194 | },
195 | "source": [
196 | "#Step 10- Test the installation.\n",
197 | "\n",
198 | "#cd into 'TensorFlow/models/research/object_detection/builders/'\n",
199 | "%cd '/content/gdrive/My Drive/TensorFlow/models/research/object_detection/builders/'\n",
200 | "!python model_builder_tf2_test.py\n",
201 | "from object_detection.utils import label_map_util\n",
202 | "from object_detection.utils import visualization_utils as viz_utils\n",
203 | "print('Done')"
204 | ],
205 | "execution_count": null,
206 | "outputs": []
207 | },
208 | {
209 | "cell_type": "markdown",
210 | "metadata": {
211 | "id": "vQ9iPmLFEQJi",
212 | "colab_type": "text"
213 | },
214 | "source": [
215 | "NOTE:\n",
216 | "\n",
217 | "You should have the images in test and train folder (with their corresponding XML files) and label_map.pbtxt file ready in respective directories.\n",
218 | "\n",
219 | "You should also have the generate_tfrecord.py in your preprocessing directory.\n",
220 | "\n",
221 | "If you don't have these files ready, go back to Step 1 and finish downloading required files."
222 | ]
223 | },
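{
"cell_type": "code",
"metadata": {},
"source": [
"#(Optional) Quick check that the files from Step 1 are in place- a small sketch, not part of the original tutorial.\n",
"#It assumes the default folder names used elsewhere in this notebook; adjust the paths if yours differ.\n",
"\n",
"import os\n",
"\n",
"demo='/content/gdrive/My Drive/TensorFlow/workspace/training_demo'\n",
"for split in ['train','test']:\n",
"    folder=os.path.join(demo,'images',split)\n",
"    count=len(os.listdir(folder)) if os.path.isdir(folder) else 0\n",
"    print('images/'+split,'->',count,'files (images + XML annotations)')\n",
"print('label_map.pbtxt found:',os.path.isfile(os.path.join(demo,'annotations/label_map.pbtxt')))\n",
"print('generate_tfrecord.py found:',os.path.isfile('/content/gdrive/My Drive/TensorFlow/scripts/preprocessing/generate_tfrecord.py'))"
],
"execution_count": null,
"outputs": []
},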
224 | {
225 | "cell_type": "code",
226 | "metadata": {
227 | "id": "h5C5-PCp-elV",
228 | "colab_type": "code",
229 | "colab": {}
230 | },
231 | "source": [
232 | "#Step 11- Generate TFrecords.\n",
233 | "\n",
234 | "#cd into preprocessing directory\n",
235 | "%cd '/content/gdrive/My Drive/TensorFlow/scripts/preprocessing'\n",
236 | "\n",
237 | "#run the cell to generate test.record and train.record\n",
238 | "!python generate_tfrecord.py -x '/content/gdrive/My Drive/TensorFlow/workspace/training_demo/images/train' -l '/content/gdrive/My Drive/TensorFlow/workspace/training_demo/annotations/label_map.pbtxt' -o '/content/gdrive/My Drive/TensorFlow/workspace/training_demo/annotations/train.record'\n",
239 | "!python generate_tfrecord.py -x '/content/gdrive/My Drive/TensorFlow/workspace/training_demo/images/test' -l '/content/gdrive/My Drive/TensorFlow/workspace/training_demo/annotations/label_map.pbtxt' -o '/content/gdrive/My Drive/TensorFlow/workspace/training_demo/annotations/test.record'\n",
240 | "\n",
241 | "# !python generate_tfrecord.py -x '[path_to_train_folder]' -l '[path_to_annotations_folder]/label_map.pbtxt' -o '[path_to_annotations_folder]/train.record'\n",
242 | "# !python generate_tfrecord.py -x '[path_to_test_folder]' -l '[path_to_annotations_folder]/label_map.pbtxt' -o '[path_to_annotations_folder]/test.record'\n"
243 | ],
244 | "execution_count": null,
245 | "outputs": []
246 | },
247 | {
248 | "cell_type": "markdown",
249 | "metadata": {
250 | "id": "vX3VCP9HJzTL",
251 | "colab_type": "text"
252 | },
253 | "source": [
254 | "NOTE:\n",
255 | "\n",
256 | "If you haven't downloaded any pre-trained model yet, go back to Step 1 and finish downloading any pre-trained model of your choice.\n",
257 | "\n",
258 | "We are almost ready to start our model training, just a few more steps before we start our model training!"
259 | ]
260 | },
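{
"cell_type": "code",
"metadata": {},
"source": [
"#(Optional) Download a pre-trained model from the TF2 Detection Model Zoo- a sketch, not part of the original notebook.\n",
"#It assumes the SSD ResNet50 V1 FPN 640x640 model used later in this notebook and a 'pre-trained-models'\n",
"#folder inside training_demo; adjust both to match whatever you set up in Step 1.\n",
"\n",
"import os\n",
"os.makedirs('/content/gdrive/My Drive/TensorFlow/workspace/training_demo/pre-trained-models',exist_ok=True)\n",
"%cd '/content/gdrive/My Drive/TensorFlow/workspace/training_demo/pre-trained-models'\n",
"\n",
"!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz\n",
"!tar -xzvf ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz"
],
"execution_count": null,
"outputs": []
},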
261 | {
262 | "cell_type": "markdown",
263 | "metadata": {
264 | "id": "Ql7XBP2VOhcV",
265 | "colab_type": "text"
266 | },
267 | "source": [
268 | "Step 12- Copying some files\n",
269 | "\n",
270 | "\n",
271 | "* Copy the \"model_main_tf2.py\" file from \"TensorFlow\\models\\research\\object_detection\" and paste it into training_demo. We will need this file for training the model.\n",
272 | "\n",
273 | "* Copy the \"exporter_main_v2.py\" file from \"TensorFlow\\models\\research\\object_detection\" and paste it into training_demo.\n",
274 | "We will need this file to export the trained model\n",
275 | "\n",
276 | "\n",
277 | "\n",
278 | "\n"
279 | ]
280 | },
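{
"cell_type": "code",
"metadata": {},
"source": [
"#(Optional) Do the two copies from Step 12 directly in Colab- a small sketch, not part of the original notebook.\n",
"#You can also copy the files manually through the Google Drive web interface, as described above.\n",
"\n",
"%cd '/content/gdrive/My Drive/TensorFlow'\n",
"!cp 'models/research/object_detection/model_main_tf2.py' 'workspace/training_demo/'\n",
"!cp 'models/research/object_detection/exporter_main_v2.py' 'workspace/training_demo/'"
],
"execution_count": null,
"outputs": []
},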
281 | {
282 | "cell_type": "markdown",
283 | "metadata": {
284 | "id": "5Q5vNzgCJNDa",
285 | "colab_type": "text"
286 | },
287 | "source": [
288 | "Step 13- Configure the pipeline file.\n",
289 | "\n",
290 | "Refer the mentioned [Medium article](https://medium.com/@nisargkapkar/tensorflow-2-object-detection-api-with-google-colab-b2af171e81cc?source=friends_link&sk=0bb205df0e1c29a2e78c28671ddf4494) for more details!"
291 | ]
292 | },
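{
"cell_type": "code",
"metadata": {},
"source": [
"#(Optional) Reminder of the pipeline.config fields that are typically edited in Step 13.\n",
"#The values and paths below are only examples/assumptions- the Medium article explains the exact edits.\n",
"\n",
"# num_classes: 2                                    #number of classes in your label_map.pbtxt\n",
"# batch_size: 4                                     #reduce this if you run out of GPU memory\n",
"# fine_tune_checkpoint: \"pre-trained-models/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0\"\n",
"# fine_tune_checkpoint_type: \"detection\"            #change from \"classification\" to \"detection\"\n",
"# label_map_path: \"annotations/label_map.pbtxt\"     #in both train_input_reader and eval_input_reader\n",
"# input_path: \"annotations/train.record\"            #under train_input_reader\n",
"# input_path: \"annotations/test.record\"             #under eval_input_reader"
],
"execution_count": null,
"outputs": []
},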
293 | {
294 | "cell_type": "code",
295 | "metadata": {
296 | "id": "7JISAAfK-inS",
297 | "colab_type": "code",
298 | "colab": {}
299 | },
300 | "source": [
301 | "#Step 14- Start Tensorboard.\n",
302 | "\n",
303 | "#cd into training_demo\n",
304 | "%cd '/content/gdrive/My Drive/TensorFlow/workspace/training_demo'\n",
305 | "\n",
306 | "#start the Tensorboard\n",
307 | "%load_ext tensorboard\n",
308 | "%tensorboard --logdir=models/my_ssd_resnet50_v1_fpn\n",
309 | "\n",
310 | "# %load_ext tensorboard\n",
311 | "# %tensorboard --logdir=models/[name_of_pre-trained-model_you_downloaded]"
312 | ],
313 | "execution_count": null,
314 | "outputs": []
315 | },
316 | {
317 | "cell_type": "code",
318 | "metadata": {
319 | "id": "e_hgJrWJuC_O",
320 | "colab_type": "code",
321 | "colab": {}
322 | },
323 | "source": [
324 | "#optional\n",
325 | "#code to check how much session time is remaining \n",
326 | "\n",
327 | "import time,psutil\n",
328 | "uptime=time.time()-psutil.boot_time()\n",
329 | "remaintime=(12*60*60)-uptime\n",
330 | "print(remaintime/(60*60))"
331 | ],
332 | "execution_count": null,
333 | "outputs": []
334 | },
335 | {
336 | "cell_type": "code",
337 | "metadata": {
338 | "id": "yFLP5SUS-vYa",
339 | "colab_type": "code",
340 | "colab": {}
341 | },
342 | "source": [
343 | "#Step 15- Train the model.\n",
344 | "\n",
345 | "#run the cell to start model training \n",
346 | "!python model_main_tf2.py --model_dir=models/my_ssd_resnet50_v1_fpn --pipeline_config_path=models/my_ssd_resnet50_v1_fpn/pipeline.config\n",
347 | "\n",
348 | "# !python model_main_tf2.py --model_dir=models/[name_of_pre-trained-model_you_downloaded] --pipeline_config_path=models/[name_of_pre-trained-model_you_downloaded]/pipeline.config"
349 | ],
350 | "execution_count": null,
351 | "outputs": []
352 | },
353 | {
354 | "cell_type": "markdown",
355 | "metadata": {
356 | "id": "PCmWC2Ae8DjW",
357 | "colab_type": "text"
358 | },
359 | "source": [
360 | "Congratulations! You have finished model training!"
361 | ]
362 | },
363 | {
364 | "cell_type": "code",
365 | "metadata": {
366 | "id": "zDSirOAQ-0sy",
367 | "colab_type": "code",
368 | "colab": {}
369 | },
370 | "source": [
371 | "#Step 16- Export the Trained Model.\n",
372 | "\n",
373 | "#run the cell to start model training \n",
374 | "!python exporter_main_v2.py --input_type image_tensor --pipeline_config_path ./models/my_ssd_resnet50_v1_fpn/pipeline.config --trained_checkpoint_dir ./models/my_ssd_resnet50_v1_fpn/ --output_directory ./exported-models/my_model\n",
375 | "\n",
376 | "# !python exporter_main_v2.py --input_type image_tensor --pipeline_config_path ./models/[name_of_pre-trained-model you downloaded]/pipeline.config --trained_checkpoint_dir ./models/[name_of_pre-trained-model_you_downloaded]/ --output_directory ./exported-models/my_model"
377 | ],
378 | "execution_count": null,
379 | "outputs": []
380 | },
381 | {
382 | "cell_type": "markdown",
383 | "metadata": {
384 | "id": "tAQWLOON4dZt",
385 | "colab_type": "text"
386 | },
387 | "source": [
388 | "We have finished training and exporting our model. It's time to test our model!"
389 | ]
390 | },
391 | {
392 | "cell_type": "code",
393 | "metadata": {
394 | "id": "nz2jiOgJ4mUK",
395 | "colab_type": "code",
396 | "colab": {}
397 | },
398 | "source": [
399 | "#Step 17- Test the Model.\n",
400 | "\n",
401 | "#Loading the saved_model\n",
402 | "import tensorflow as tf\n",
403 | "import time\n",
404 | "from object_detection.utils import label_map_util\n",
405 | "from object_detection.utils import visualization_utils as viz_utils\n",
406 | "\n",
407 | "PATH_TO_SAVED_MODEL=\"/content/gdrive/My Drive/TensorFlow/workspace/training_demo/exported-models/my_model/saved_model\"\n",
408 | "\n",
409 | "print('Loading model...', end='')\n",
410 | "\n",
411 | "# Load saved model and build the detection function\n",
412 | "detect_fn=tf.saved_model.load(PATH_TO_SAVED_MODEL)\n",
413 | "\n",
414 | "print('Done!')"
415 | ],
416 | "execution_count": null,
417 | "outputs": []
418 | },
419 | {
420 | "cell_type": "code",
421 | "metadata": {
422 | "id": "sX--5tHy4-S1",
423 | "colab_type": "code",
424 | "colab": {}
425 | },
426 | "source": [
427 | "#Step 18- Testing the Model.\n",
428 | "\n",
429 | "#Loading the label_map\n",
430 | "category_index=label_map_util.create_category_index_from_labelmap(\"/content/gdrive/My Drive/TensorFlow/workspace/training_demo/annotations/label_map.pbtxt\",use_display_name=True)\n",
431 | "\n",
432 | "#category_index=label_map_util.create_category_index_from_labelmap([path_to_label_map],use_display_name=True)"
433 | ],
434 | "execution_count": null,
435 | "outputs": []
436 | },
437 | {
438 | "cell_type": "code",
439 | "metadata": {
440 | "id": "UURD_H_c5grh",
441 | "colab_type": "code",
442 | "colab": {}
443 | },
444 | "source": [
445 | "#Step 19- Testing the Model.\n",
446 | "\n",
447 | "#Loading the image\n",
448 | "img=['/content/img1.jpg','/content/img2.jpg']\n",
449 | "print(img)\n",
450 | "\n",
451 | "#list containing paths of all the images"
452 | ],
453 | "execution_count": null,
454 | "outputs": []
455 | },
456 | {
457 | "cell_type": "code",
458 | "metadata": {
459 | "id": "tNLxnQNa54d_",
460 | "colab_type": "code",
461 | "colab": {}
462 | },
463 | "source": [
464 | "#Step 20- Running the Inference.\n",
465 | "\n",
466 | "import numpy as np\n",
467 | "from PIL import Image\n",
468 | "import matplotlib.pyplot as plt\n",
469 | "import warnings\n",
470 | "warnings.filterwarnings('ignore') # Suppress Matplotlib warnings\n",
471 | "\n",
472 | "def load_image_into_numpy_array(path):\n",
473 | " \"\"\"Load an image from file into a numpy array.\n",
474 | "\n",
475 | " Puts image into numpy array to feed into tensorflow graph.\n",
476 | " Note that by convention we put it into a numpy array with shape\n",
477 | " (height, width, channels), where channels=3 for RGB.\n",
478 | "\n",
479 | " Args:\n",
480 | " path: the file path to the image\n",
481 | "\n",
482 | " Returns:\n",
483 | " uint8 numpy array with shape (img_height, img_width, 3)\n",
484 | " \"\"\"\n",
485 | " return np.array(Image.open(path))\n",
486 | "\n",
487 | "for image_path in img:\n",
488 | "\n",
489 | " print('Running inference for {}... '.format(image_path), end='')\n",
490 | " image_np=load_image_into_numpy_array(image_path)\n",
491 | "\n",
492 | " # Things to try:\n",
493 | " # Flip horizontally\n",
494 | " # image_np = np.fliplr(image_np).copy()\n",
495 | " # Convert image to grayscale\n",
496 | " # image_np = np.tile(\n",
497 | " # np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)\n",
498 | "\n",
499 | " # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.\n",
500 | " input_tensor=tf.convert_to_tensor(image_np)\n",
501 | " # The model expects a batch of images, so add an axis with `tf.newaxis`.\n",
502 | " input_tensor=input_tensor[tf.newaxis, ...]\n",
503 | "\n",
504 | " # input_tensor = np.expand_dims(image_np, 0)\n",
505 | " detections=detect_fn(input_tensor)\n",
506 | "\n",
507 | " # All outputs are batches tensors.\n",
508 | " # Convert to numpy arrays, and take index [0] to remove the batch dimension.\n",
509 | " # We're only interested in the first num_detections.\n",
510 | " num_detections=int(detections.pop('num_detections'))\n",
511 | " detections={key:value[0,:num_detections].numpy()\n",
512 | " for key,value in detections.items()}\n",
513 | " detections['num_detections']=num_detections\n",
514 | "\n",
515 | " # detection_classes should be ints.\n",
516 | " detections['detection_classes']=detections['detection_classes'].astype(np.int64)\n",
517 | "\n",
518 | " image_np_with_detections=image_np.copy()\n",
519 | "\n",
520 | " viz_utils.visualize_boxes_and_labels_on_image_array(\n",
521 | " image_np_with_detections,\n",
522 | " detections['detection_boxes'],\n",
523 | " detections['detection_classes'],\n",
524 | " detections['detection_scores'],\n",
525 | " category_index,\n",
526 | " use_normalized_coordinates=True,\n",
527 | " max_boxes_to_draw=1, #max number of bounding boxes in the image\n",
528 | " min_score_thresh=.3, #min prediction threshold\n",
529 | " agnostic_mode=False)\n",
530 | " %matplotlib inline\n",
531 | " plt.figure()\n",
532 | " plt.imshow(image_np_with_detections)\n",
533 | " print('Done')\n",
534 | " plt.show()"
535 | ],
536 | "execution_count": null,
537 | "outputs": []
538 | },
539 | {
540 | "cell_type": "markdown",
541 | "metadata": {
542 | "id": "hAQv8465ZGsP",
543 | "colab_type": "text"
544 | },
545 | "source": [
546 | "Acknowledgements:\n",
547 | "\n",
548 | "Huge Thanks to [Lyudmil Vladimirov](http://pcwww.liv.ac.uk/~sglvladi/) for allowing me to use some of the content from their amazing [TensorFlow 2 Object Detection API](https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/index.html) for Local Machines!\n",
549 | "\n",
550 | "Link to their [GitHub Repository](https://github.com/sglvladi/TensorFlowObjectDetectionTutorial)."
551 | ]
552 | }
553 | ]
554 | }
555 |
--------------------------------------------------------------------------------