├── README.md
├── TFLite-Conversion.md
├── TFLite-Image-od.py
├── TFLite-PiCamera-od.py
├── TFLite-Video-od.py
├── doc
│   ├── 2020-11-15-230504_1920x1080_scrot.png
│   ├── Camera Interface.png
│   ├── Screenshot 2020-11-14 144537.png
│   ├── Screenshot 2020-11-16 104855.png
│   ├── directory.png
│   ├── final model.png
│   ├── folder.png
│   ├── model.tflite.png
│   ├── saved_model.png
│   └── tf vs tflite.png
├── install-prerequisites.sh
└── models
    ├── labels.txt
    └── model.tflite
/README.md:
--------------------------------------------------------------------------------
1 | # TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi
2 | [](https://github.com/tensorflow/tensorflow/releases/tag/v2.2.0)
3 | ### Learn how to Convert and Run TensorFlow Lite Object Detection Models on the Raspberry Pi
4 |
5 |
6 |
7 |
8 | ## Introduction
9 | This repository is a written tutorial covering two topics: TensorFlow Lite conversion and running TensorFlow Lite models on the Raspberry Pi. This document contains instructions for running on the Raspberry Pi. If you want to convert a custom TensorFlow 2 Object Detection model, please refer to the [conversion guide](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/TFLite-Conversion.md). These instructions are likely to change with time, so if you have questions, feel free to raise an issue. ***This guide was last tested and updated on 11/13/2020.***
10 |
11 | **I will soon make a YouTube tutorial, which will be posted [here](https://www.youtube.com/watch?v=2ofuUdCDppc), and an extremely important step [here](https://www.youtube.com/channel/UCT9t2Bug62RDUfSBcPt0Bzg?sub_confirmation=1)!**
12 |
13 | ## Why TensorFlow Lite?
14 | This guide is my 3rd in a series about the TensorFlow Object Detection API. Of my 2 previous guides, one covers TensorFlow Object Detection on the Raspberry Pi. If I already have a tutorial, why make another? TensorFlow Lite is a massive improvement over a standard TensorFlow installation. Not only is it much easier to install and use, but the performance is significantly better: it's optimized to run on mobile and other edge devices such as the Raspberry Pi. The numbers speak for themselves!
15 |
16 |
17 |
18 |
19 | **It took over 3 minutes to load a TensorFlow model and less than a second to load a TensorFlow Lite model. TensorFlow Lite loaded literally 5,693,400% faster (yes, I did the math).**
20 |
21 | ## Table of Contents
22 | 1. [Setting up the Raspberry Pi and Getting Updates](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/README.md#step-1-setting-up-the-raspberry-pi-and-getting-updates)
23 | 2. [Organizing our Workspace and Virtual Environment](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/README.md#step-2-organizing-our-workspace-and-virtual-environment)
24 | 3. [Installing the Prerequisites](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi#step-3-installing-the-prerequisites)
25 | 4. [Running Object Detection on Image, Video, or Pi Camera](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/README.md#step-4-running-object-detection-on-image-video-or-pi-camera)
26 |
27 | ## Step 1: Setting up the Raspberry Pi and Getting Updates
28 | Before we can get started, we must have access to the Raspberry Pi's desktop interface. This can be done with VNC Viewer or a standard monitor and HDMI. I made a more detailed video on this, which can be found below.
29 |
30 | [](https://www.youtube.com/watch?v=jVzMRlCNO3U)
31 |
32 | Once you have access to the desktop interface, either remote or physical, open up a terminal. Retrieve updates for the Raspberry Pi with
33 |
34 | ```
35 | sudo apt-get update
36 | sudo apt-get dist-upgrade
37 | ```
38 |
39 | Depending on how recently you set up or updated your Pi, this can be instantaneous or lengthy. After your Raspberry Pi is up-to-date, we should make sure the camera is enabled. To open the configuration tool, use
40 |
41 | ```
42 | sudo raspi-config
43 | ```
44 |
45 | Then navigate to Interfacing Options -> Camera and make sure it is enabled. Hit Finish and reboot if necessary.
46 |
47 |
48 |
49 |
50 |
51 | ## Step 2: Organizing our Workspace and Virtual Environment
52 |
53 | Next, you're going to want to clone this repository with
54 |
55 | ```
56 | git clone https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi.git
57 | ```
58 |
59 | This name is a bit long, so let's trim it down with
60 |
61 | ```
62 | mv TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi tensorflow
63 | ```
64 |
65 | We are now going to create a virtual environment to avoid version conflicts with previously installed packages on the Raspberry Pi. First, let's install virtualenv with
66 |
67 | ```
68 | sudo pip3 install virtualenv
69 | ```
70 |
71 | Now, we can create our ```tensorflow``` virtual environment with
72 |
73 | ```
74 | python3 -m venv tensorflow
75 | ```
76 |
77 | There should now be a ```bin``` folder inside of our ```tensorflow``` directory, so let's change directories with
78 |
79 | ```
80 | cd tensorflow
81 | ```
82 |
83 | We can then activate our virtual environment with
84 |
85 | ```
86 | source bin/activate
87 | ```
88 |
89 | **Note: Now that we have a virtual environment, every time you start a new terminal, you will no longer be in it. You can reactivate it manually, or issue ```echo "source tensorflow/bin/activate" >> ~/.bashrc```, which activates the virtual environment as soon as you open a new terminal. You can tell whether the virtual environment is active by its name showing up in parentheses next to the working directory.**
90 |
91 | When you issue ```ls```, your ```tensorflow``` directory should now look something like this
92 |
93 |
94 |
95 |
96 |
97 | ## Step 3: Installing the Prerequisites
98 | This step should be relatively simple. I have compressed all the commands into one shell script, which you can run with
99 | ```
100 | bash install-prerequisites.sh
101 | ```
102 | This might take a few minutes, but after everything has been installed, you should get this message
103 | ```
104 | Prerequisites Installed Successfully
105 | ```
106 | Now, it's best to test our installation of the tflite_runtime module. To do this, first start a Python shell with
107 | ```
108 | python
109 | ```
110 | From the Python terminal, enter the lines following the ```>>>``` prompts (your banner will look slightly different)
111 | ```
112 | Python 3.7.3 (default, Jul 25 2020, 13:03:44)
113 | [GCC 8.3.0] on linux
114 | Type "help", "copyright", "credits" or "license" for more information.
115 | >>> import tflite_runtime as tf
116 | >>> print(tf.__version__)
117 | ```
118 | If everything went according to plan, you should get
119 | ```
120 | 2.5.0
121 | ```
122 | **Note: The link for downloading the tflite_runtime module is bound to change based on your Python version and platform/architecture. With time, newer versions will also be released. I'll try my best to update this guide frequently, but the most recent instructions for the installation are located [here](https://www.tensorflow.org/lite/guide/python).**
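| 
| If you're not sure which wheel matches your Pi, the two things that matter are the Python version and the CPU architecture, and you can check both from Python itself with the standard library
| ```
| import platform
| import sys
| 
| # Match these against the tags in the wheel filename,
| # e.g. Python 3.7 -> cp37, and armv7l (32-bit OS) or aarch64 (64-bit OS)
| print(sys.version)
| print(platform.machine())
| ```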
123 |
124 | ## Step 4: Running Object Detection on Image, Video, or Pi Camera
125 | Now we're finally ready to run object detection! In this repository, I've included a sample model that I converted, as well as 3 object detection scripts utilizing OpenCV. If you want to convert a custom model or a pre-trained model, you can look at the [TensorFlow Lite Conversion Guide](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/TFLite-Conversion.md) that I wrote. A majority of the code is from [Edje Electronics' tutorial](https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi) with a few small adjustments.
126 | - ```TFLite-PiCamera-od.py```: This program uses the Pi Camera to perform object detection. It also counts and displays the number of objects detected in the frame. The usage is
127 | ```
128 | usage: TFLite-PiCamera-od.py [-h] [--model MODEL] [--labels LABELS]
129 | [--threshold THRESHOLD] [--resolution RESOLUTION]
130 |
131 | optional arguments:
132 | -h, --help show this help message and exit
133 | --model MODEL Provide the path to the TFLite file, default is
134 | models/model.tflite
135 | --labels LABELS Provide the path to the Labels, default is
136 | models/labels.txt
137 | --threshold THRESHOLD
138 | Minimum confidence threshold for displaying detected
139 | objects
140 | --resolution RESOLUTION
141 | Desired webcam resolution in WxH. If the webcam does
142 | not support the resolution entered, errors may occur.
143 | ```
144 | - ```TFLite-Image-od.py```: This program takes a single image as input. I haven't managed to get it to run inference on a whole directory yet; if you do, feel free to share it, as it might help others (a rough sketch of the idea is included after these usage listings). This script also counts the number of detections. The usage is
145 | ```
146 | usage: TFLite-Image-od.py [-h] [--model MODEL] [--labels LABELS]
147 |                           [--image IMAGE] [--threshold THRESHOLD]
149 |
150 | optional arguments:
151 | -h, --help show this help message and exit
152 | --model MODEL Provide the path to the TFLite file, default is
153 | models/model.tflite
154 | --labels LABELS Provide the path to the Labels, default is
155 | models/labels.txt
156 | --image IMAGE Name of the single image to perform detection on
157 | --threshold THRESHOLD
158 | Minimum confidence threshold for displaying detected
159 | objects
163 | ```
164 | - ```TFLite-Video-od.py```: This program is similar to the last two; however, it takes a video as input. The usage is
165 | ```
166 | usage: TFLite-Video-od.py [-h] [--model MODEL] [--labels LABELS]
167 |                           [--video VIDEO] [--threshold THRESHOLD]
169 |
170 | optional arguments:
171 | -h, --help show this help message and exit
172 | --model MODEL Provide the path to the TFLite file, default is
173 | models/model.tflite
174 | --labels LABELS Provide the path to the Labels, default is
175 | models/labels.txt
176 | --video VIDEO Name of the video to perform detection on
177 | --threshold THRESHOLD
178 | Minimum confidence threshold for displaying detected
179 | objects
183 | ```
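| 
| As mentioned above, here's a rough sketch of how inference over a whole directory could work: load the interpreter once and reuse it for every image, rather than reloading the model each time. The ```images/``` folder and the 0.5 threshold are assumptions, the output tensor ordering is the one the scripts in this repo assume, and the drawing code from ```TFLite-Image-od.py``` would slot into the loop.
| ```
| import glob
| 
| import cv2
| import numpy as np
| import tflite_runtime.interpreter as tflite
| 
| # Load the model once and reuse it for every image
| interpreter = tflite.Interpreter(model_path='models/model.tflite')
| interpreter.allocate_tensors()
| input_details = interpreter.get_input_details()
| output_details = interpreter.get_output_details()
| height = int(input_details[0]['shape'][1])
| width = int(input_details[0]['shape'][2])
| 
| for path in sorted(glob.glob('images/*.jpg') + glob.glob('images/*.png')):
|     image = cv2.imread(path)
|     rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
|     input_data = np.expand_dims(cv2.resize(rgb, (width, height)), axis=0)
|     if input_details[0]['dtype'] == np.float32:  # non-quantized model
|         input_data = (np.float32(input_data) - 127.5) / 127.5
|     interpreter.set_tensor(input_details[0]['index'], input_data)
|     interpreter.invoke()
|     # Same output ordering as the scripts above: boxes, classes, scores
|     scores = interpreter.get_tensor(output_details[2]['index'])[0]
|     print('{}: {} detections above 0.5'.format(path, int(np.sum(scores > 0.5))))
| ```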
184 |
185 | Now, to finally test it out, run
186 | ```
187 | python TFLite-PiCamera-od.py
188 | ```
189 | If everything works, you should get something like this
190 |
191 |
192 |
193 |
194 | Congratulations! This means we're successfully performing real-time object detection on the Raspberry Pi! Now that you've tried out the Pi Camera, why not one of the other scripts? Over the next few weeks, I'll continue to add to this repo and tinker with the programs to make them better than ever! If you find something cool, feel free to share it, as others can also learn! And if you have any errors, just raise an issue and I'll be happy to take a look. Great work, and until next time, bye!
195 |
--------------------------------------------------------------------------------
/TFLite-Conversion.md:
--------------------------------------------------------------------------------
1 | # Converting TensorFlow Models to TensorFlow Lite
2 | [](https://github.com/tensorflow/tensorflow/releases/tag/v2.2.0)
3 | ### This Guide Contains Everything you Need to Convert Custom and Pre-trained TensorFlow Models to TensorFlow Lite
4 | Following these instructions, you can convert either a custom model or a pre-trained TensorFlow model. If you want to train a custom TensorFlow object detection model, I've made a detailed [GitHub guide](https://github.com/armaanpriyadarshan/Training-a-Custom-TensorFlow-2.X-Object-Detector) and a YouTube video on the topic.
5 |
6 | [](https://www.youtube.com/watch?v=oqd54apcgGE)
7 |
8 | **The following steps for conversion are based on the directory structure and procedures in that guide, so if you haven't already taken a look at it, I recommend you do so.
9 | To move on, you should have already**
10 | - **Installed Anaconda**
11 | - **Setup the Directory Structure**
12 | - **Compiled Protos and Setup the TensorFlow Object Detection API**
13 | - **Gathered Training Data**
14 | - **Trained your Model (without exporting)**
15 |
16 | ## Steps
17 | 1. [Exporting the Model](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/TFLite-Conversion.md#exporting-the-model)
18 | 2. [Creating a New Environment and Installing TensorFlow Nightly](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/TFLite-Conversion.md#creating-a-new-environment-and-installing-tensorflow-nightly)
19 | 3. [Converting the Model to TensorFlow Lite](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/TFLite-Conversion.md#converting-the-model-to-tensorflow-lite)
20 | 4. [Preparing our Model for Use](https://github.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/blob/main/TFLite-Conversion.md#preparing-our-model-for-use)
21 |
22 | ### Exporting the Model
23 | Assuming you followed my previous guide, your directory structure should look something like this
24 |
25 |
26 |
27 |
28 | If you haven't already, make sure you have configured the training pipeline and trained the model. You should now have a training directory (if you followed my other guide, this is ```models\my_ssd_mobilenet_v2_fpnlite```) and a ```pipeline.config``` file (```models\my_ssd_mobilenet_v2_fpnlite\pipeline.config```). Open up a new Anaconda terminal and activate the virtual environment we made in the other tutorial with
29 |
30 | ```
31 | conda activate tensorflow
32 | ```
33 | Now, we can change directories with
34 |
35 | ```
36 | cd C:\TensorFlow\workspace\training_demo
37 | ```
38 | Now, unlike my other guide, we aren't using ```exporter_main_v2.py``` to export the model. For TensorFlow Lite Models, we have to use ```export_tflite_graph_tf2.py```. You can export the model with
39 | ```
40 | python export_tflite_graph_tf2.py --pipeline_config_path models\my_ssd_mobilenet_v2_fpnlite\pipeline.config --trained_checkpoint_dir models\my_ssd_mobilenet_v2_fpnlite --output_directory exported-models\my_tflite_model
41 | ```
42 | **Note: At the moment, TensorFlow Lite only supports models with the SSD architecture (excluding EfficientDet). Make sure that you have trained with an SSD training pipeline before you continue. You can take a look at the [TensorFlow Model Zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) or the [documentation](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md) for the most up-to-date information.**
43 |
44 | ### Creating a New Environment and Installing TensorFlow Nightly
45 | To avoid version conflicts, we'll first create a new Anaconda virtual environment to hold all the packages necessary for conversion. First, we must deactivate our current environment with
46 |
47 | ```
48 | conda deactivate
49 | ```
50 |
51 | Now issue this command to create a new environment for TFLite conversion.
52 |
53 | ```
54 | conda create -n tflite pip python=3.7
55 | ```
56 |
57 | We can now activate our environment with
58 |
59 | ```
60 | conda activate tflite
61 | ```
62 |
63 | **Note that whenever you open a new Anaconda terminal, you will not be in the virtual environment. So if you open a new prompt, make sure to use the command above to activate it.**
64 |
65 | Now we must install TensorFlow in this virtual environment. However, we will not just be installing standard TensorFlow; we are going to install tf-nightly. This package is a nightly updated build of TensorFlow, which means it contains the very latest features TensorFlow has to offer. There are CPU and GPU versions, but if you are only using it for conversion, I'd stick to the CPU version because the difference doesn't really matter here. We can install it by issuing
66 |
67 | ```
68 | pip install tf-nightly
69 | ```
70 | Now, to test our installation let's use a Python terminal.
71 | ```
72 | python
73 | ```
75 | Then import the module and check the version with
75 | ```
76 | Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
77 | Type "help", "copyright", "credits" or "license" for more information.
78 | >>> import tensorflow as tf
79 | >>> print(tf.__version__)
80 | ```
81 |
82 | **Note: You might get an error when importing with the newest version of Numpy. It looks something like this: ```RuntimeError: The current Numpy installation ('D:\\Apps\\anaconda3\\envs\\tflite\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://tinyurl.com/y3dm3h86```. You can fix this error by installing a previous version of Numpy with ```pip install numpy==1.19.3```.**
83 |
84 | If the installation was successful, you should get the version of tf-nightly that you installed.
85 | ```
86 | 2.5.0-dev20201111
87 | ```
88 |
89 | ### Converting the Model to TensorFlow Lite
90 | Now, you might have a question or two. If the program is called ```export_tflite_graph_tf2.py```, why is the exported inference graph a ```saved_model.pb``` file? Isn't this the same as standard TensorFlow?
91 |
92 |
93 |
94 |
95 | Well, in this step, we'll be converting the ```saved_model``` to a single ```model.tflite``` file for object detection, using tf-nightly. I recently added a sample converter program called ```convert-to-tflite.py``` to my other repository. This script takes a saved_model folder as input and converts the model to the .tflite format. It also quantizes the model. If you take a look at the code, there are various other features and options that are commented out. These are optional and might be a little buggy. For some more information, take a look at the [TensorFlow Lite converter](https://www.tensorflow.org/lite/convert/) documentation. The usage of this program is as follows
96 |
97 | ```
98 | usage: convert-to-tflite.py [-h] [--model MODEL] [--output OUTPUT]
99 |
100 | optional arguments:
101 | -h, --help show this help message and exit
102 | --model MODEL Folder that the saved model is located in
103 | --output OUTPUT Folder that the tflite model will be written to
104 | ```
105 |
106 | At the moment, I'd recommend not using the output argument and sticking to the default values, as it still has a few errors. Enough talking; to convert the model, run
107 | ```
108 | python convert-to-tflite.py
109 | ```
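| 
| For context, the heart of a converter script like this is the ```tf.lite.TFLiteConverter``` API. Below is a stripped-down sketch of the approach, not the exact contents of ```convert-to-tflite.py```; it assumes tf-nightly is installed and uses the default paths from this guide
| ```
| import tensorflow as tf
| 
| # Load the saved_model produced by export_tflite_graph_tf2.py
| saved_model_dir = 'exported-models/my_tflite_model/saved_model'  # assumed path
| converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
| 
| # Dynamic-range quantization: shrinks the model and speeds up CPU inference
| converter.optimizations = [tf.lite.Optimize.DEFAULT]
| 
| tflite_model = converter.convert()
| with open(saved_model_dir + '/model.tflite', 'wb') as f:
|     f.write(tflite_model)
| ```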
110 |
111 | You should now see a file called ```model.tflite``` in the ```exported-models\my_tflite_model\saved_model``` directory
112 |
113 |
114 |
115 |
116 |
117 | Now, there is something very important to check with this file: its size. **If your ```model.tflite``` file is around 1 KB, something has gone wrong with conversion**, and running object detection with it will produce various errors. As you can see in the image, my model is 3,549 KB, which is an appropriate size. If your file is significantly bigger, 121,000 KB for example, it will drastically impact performance while running. With a model that big, my framerate dropped all the way down to 0.07 FPS. If you have any questions about this, feel free to raise an issue and I will try my best to help you out.
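| 
| A quick way to check the size without opening a file browser is to print it from Python (the path is assumed from the step above)
| ```
| import os
| 
| path = 'exported-models/my_tflite_model/saved_model/model.tflite'  # assumed path
| size_kb = os.path.getsize(path) / 1024
| # ~1 KB means conversion failed; a few thousand KB is typical for a quantized SSD
| print('{:.0f} KB'.format(size_kb))
| ```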
118 |
119 | ### Preparing our Model for Use
120 | Now that we have our model, it's time to create a new labelmap. Unlike standard TensorFlow, TensorFlow Lite uses a .txt file instead of a .pbtxt file. Creating a new labelmap is actually much easier than it sounds. Let's take a look at an example. Below, I have provided the ```label_map.pbtxt``` that I used for my Pill Detection model.
121 | ```
122 | item {
123 | id: 1
124 | name: 'Acetaminophen 325 MG Oral Tablet'
125 | }
126 |
127 | item {
128 | id: 2
129 | name: 'Ibuprofen 200 MG Oral Tablet'
130 | }
131 | ```
132 | To create a new labelmap for TensorFlow Lite, all we have to do is write each of the item names on its own line, like so
133 | ```
134 | Acetaminophen 325 MG Oral Tablet
135 | Ibuprofen 200 MG Oral Tablet
136 | ```
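| 
| If your labelmap has more than a handful of classes, you can generate this file instead of typing it out. Here's a minimal sketch that pulls the names out of the .pbtxt with a regular expression; the ```annotations/label_map.pbtxt``` path is an assumption based on my training guide's directory structure, and it expects single-quoted names like the example above
| ```
| import re
| 
| # Extract every name: '...' entry from the labelmap, in order
| with open('annotations/label_map.pbtxt') as f:  # assumed path
|     names = re.findall(r"name:\s*'([^']+)'", f.read())
| 
| with open('labels.txt', 'w') as f:
|     f.write('\n'.join(names))
| ```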
137 | Either way, once the file is filled out, save it as ```labels.txt``` within the ```exported-models\my_tflite_model\saved_model``` directory. The directory should now look like this
138 |
139 |
140 |
141 |
142 |
143 | We're done! The model is now ready to be used. If you want to run it on the Raspberry Pi, you can transfer the model any way you prefer; [WinSCP](https://winscp.net/eng/index.php), an SFTP client, is my favorite method. Place the ```model.tflite``` file and the ```labels.txt``` in the ```tensorflow/models``` directory on the Raspberry Pi. Once you're done, it should look like this
144 |
145 |
146 |
147 |
148 |
149 | There you go, you're all set to run object detection on the Pi! Good luck!
150 |
--------------------------------------------------------------------------------
/TFLite-Image-od.py:
--------------------------------------------------------------------------------
1 | # This is Edje Electronics' code with some minor adjustments
2 | # Credit goes to his repo: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi
3 |
4 | import tflite_runtime.interpreter as tflite
5 | import os
6 | import argparse
7 | import cv2
8 | import numpy as np
9 | import sys
10 | import time
16 |
17 | parser = argparse.ArgumentParser()
18 | parser.add_argument('--model', help='Provide the path to the TFLite file, default is models/model.tflite',
19 | default='models/model.tflite')
20 | parser.add_argument('--labels', help='Provide the path to the Labels, default is models/labels.txt',
21 | default='models/labels.txt')
22 | parser.add_argument('--image', help='Name of the single image to perform detection on',
23 | default='test.png')
24 | parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
25 | default=0.5)
26 |
27 | args = parser.parse_args()
28 |
29 | # PROVIDE PATH TO MODEL DIRECTORY
30 | PATH_TO_MODEL_DIR = args.model
31 |
32 | # PROVIDE PATH TO LABEL MAP
33 | PATH_TO_LABELS = args.labels
34 |
35 | # PROVIDE PATH TO IMAGE
36 | IMAGE_PATH = args.image
37 |
38 | # PROVIDE THE MINIMUM CONFIDENCE THRESHOLD
39 | MIN_CONF_THRESH = float(args.threshold)
40 |
42 | print('Loading model...', end='')
43 | start_time = time.time()
44 |
45 | # LOAD TFLITE MODEL
46 | interpreter = tflite.Interpreter(model_path=PATH_TO_MODEL_DIR)
47 | # LOAD LABELS
48 | with open(PATH_TO_LABELS, 'r') as f:
49 | labels = [line.strip() for line in f.readlines()]
50 | end_time = time.time()
51 | elapsed_time = end_time - start_time
52 | print('Done! Took {} seconds'.format(elapsed_time))
53 |
54 | interpreter.allocate_tensors()
55 | input_details = interpreter.get_input_details()
56 | output_details = interpreter.get_output_details()
57 |
58 | height = input_details[0]['shape'][1]
59 | width = input_details[0]['shape'][2]
60 |
61 | floating_model = (input_details[0]['dtype'] == np.float32)
62 |
63 | input_mean = 127.5
64 | input_std = 127.5
65 |
66 | # Initialize frame rate calculation
67 | frame_rate_calc = 1
68 | freq = cv2.getTickFrequency()
69 |
70 | # Start timer (for calculating frame rate)
71 | current_count=0
72 | t1 = cv2.getTickCount()
73 | print('Running inference for {}... '.format(IMAGE_PATH), end='')
74 |
75 | # Acquire frame and resize to expected shape [1xHxWx3]
76 | image = cv2.imread(IMAGE_PATH)
77 | image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
78 | imH, imW, _ = image.shape
79 | image_resized = cv2.resize(image_rgb, (width, height))
80 | input_data = np.expand_dims(image_resized, axis=0)
81 |
82 | # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
83 | if floating_model:
84 | input_data = (np.float32(input_data) - input_mean) / input_std
85 |
86 | # Perform the actual detection by running the model with the image as input
87 | interpreter.set_tensor(input_details[0]['index'],input_data)
88 | interpreter.invoke()
89 |
90 | # Retrieve detection results
91 | boxes = interpreter.get_tensor(output_details[0]['index'])[0] # Bounding box coordinates of detected objects
92 | classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
93 | scores = interpreter.get_tensor(output_details[2]['index'])[0] # Confidence of detected objects
94 | #num = interpreter.get_tensor(output_details[3]['index'])[0] # Total number of detected objects (inaccurate and not needed)
95 |
96 | # Loop over all detections and draw detection box if confidence is above minimum threshold
97 | for i in range(len(scores)):
98 | if ((scores[i] > MIN_CONF_THRESH) and (scores[i] <= 1.0)):
99 |
100 | # Get bounding box coordinates and draw box
101 | # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
102 | ymin = int(max(1,(boxes[i][0] * imH)))
103 | xmin = int(max(1,(boxes[i][1] * imW)))
104 | ymax = int(min(imH,(boxes[i][2] * imH)))
105 | xmax = int(min(imW,(boxes[i][3] * imW)))
106 |
107 | cv2.rectangle(image, (xmin,ymin), (xmax,ymax), (10, 255, 0), 2)
108 |
109 | # Draw label
110 | object_name = labels[int(classes[i])] # Look up object name from "labels" array using class index
111 | label = '%s: %d%%' % (object_name, int(scores[i]*100)) # Example: 'person: 72%'
112 | labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2) # Get font size
113 | label_ymin = max(ymin, labelSize[1] + 10) # Make sure not to draw label too close to top of window
114 | cv2.rectangle(image, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED) # Draw white box to put label text in
115 | cv2.putText(image, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2) # Draw label text
116 | current_count+=1
117 | # Calculate framerate
118 | t2 = cv2.getTickCount()
119 | time1 = (t2-t1)/freq
120 | frame_rate_calc= 1/time1
121 | # Draw framerate in corner of frame
122 | cv2.putText(image,'FPS: {0:.2f}'.format(frame_rate_calc),(15,25),cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,55),2,cv2.LINE_AA)
123 | cv2.putText (image,'Total Detection Count : ' + str(current_count),(15,65),cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,55),2,cv2.LINE_AA)
124 | # All the results have been drawn on the frame, so it's time to display it.
125 | cv2.imshow('Object detector', image)
126 |
127 |
128 | # Press any key to quit
129 | cv2.waitKey(0)
130 |
131 | # Clean up
132 | cv2.destroyAllWindows()
133 | print("Done")
134 |
--------------------------------------------------------------------------------
/TFLite-PiCamera-od.py:
--------------------------------------------------------------------------------
1 | # This is Edje Electronics' code with some minor adjustments
2 | # Credit goes to his repo: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi
3 |
4 | import tflite_runtime.interpreter as tflite
5 | import os
6 | import argparse
7 | import cv2
8 | import numpy as np
9 | import sys
10 | import time
11 | from threading import Thread
13 |
14 | # Define VideoStream class to handle streaming of video from webcam in separate processing thread
15 | # Source - Adrian Rosebrock, PyImageSearch: https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/
16 | class VideoStream:
17 | """Camera object that controls video streaming from the Picamera"""
18 | def __init__(self,resolution=(640,480),framerate=30):
19 | # Initialize the PiCamera and the camera image stream
20 | self.stream = cv2.VideoCapture(0)
21 | ret = self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
22 | ret = self.stream.set(3,resolution[0])
23 | ret = self.stream.set(4,resolution[1])
24 |
25 | # Read first frame from the stream
26 | (self.grabbed, self.frame) = self.stream.read()
27 |
28 | # Variable to control when the camera is stopped
29 | self.stopped = False
30 |
31 | def start(self):
32 | # Start the thread that reads frames from the video stream
33 | Thread(target=self.update,args=()).start()
34 | return self
35 |
36 | def update(self):
37 | # Keep looping indefinitely until the thread is stopped
38 | while True:
39 | # If the camera is stopped, stop the thread
40 | if self.stopped:
41 | # Close camera resources
42 | self.stream.release()
43 | return
44 |
45 | # Otherwise, grab the next frame from the stream
46 | (self.grabbed, self.frame) = self.stream.read()
47 |
48 | def read(self):
49 | # Return the most recent frame
50 | return self.frame
51 |
52 | def stop(self):
53 | # Indicate that the camera and thread should be stopped
54 | self.stopped = True
55 |
56 | parser = argparse.ArgumentParser()
57 | parser.add_argument('--model', help='Provide the path to the TFLite file, default is models/model.tflite',
58 | default='models/model.tflite')
59 | parser.add_argument('--labels', help='Provide the path to the Labels, default is models/labels.txt',
60 | default='models/labels.txt')
61 | parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
62 | default=0.5)
63 | parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not support the resolution entered, errors may occur.',
64 | default='1280x720')
65 |
66 | args = parser.parse_args()
67 |
68 | # PROVIDE PATH TO MODEL DIRECTORY
69 | PATH_TO_MODEL_DIR = args.model
70 |
71 | # PROVIDE PATH TO LABEL MAP
72 | PATH_TO_LABELS = args.labels
73 |
74 | # PROVIDE THE MINIMUM CONFIDENCE THRESHOLD
75 | MIN_CONF_THRESH = float(args.threshold)
76 |
77 | resW, resH = args.resolution.split('x')
78 | imW, imH = int(resW), int(resH)
80 | print('Loading model...', end='')
81 | start_time = time.time()
82 |
83 | # LOAD TFLITE MODEL
84 | interpreter = tflite.Interpreter(model_path=PATH_TO_MODEL_DIR)
85 | # LOAD LABELS
86 | with open(PATH_TO_LABELS, 'r') as f:
87 | labels = [line.strip() for line in f.readlines()]
88 | end_time = time.time()
89 | elapsed_time = end_time - start_time
90 | print('Done! Took {} seconds'.format(elapsed_time))
91 |
92 | interpreter.allocate_tensors()
93 | input_details = interpreter.get_input_details()
94 | output_details = interpreter.get_output_details()
95 |
96 | height = input_details[0]['shape'][1]
97 | width = input_details[0]['shape'][2]
98 |
99 | floating_model = (input_details[0]['dtype'] == np.float32)
100 |
101 | input_mean = 127.5
102 | input_std = 127.5
103 |
104 | # Initialize frame rate calculation
105 | frame_rate_calc = 1
106 | freq = cv2.getTickFrequency()
107 | print('Running inference for PiCamera')
108 | # Initialize video stream
109 | videostream = VideoStream(resolution=(imW,imH),framerate=30).start()
110 | time.sleep(1)
111 |
112 | #for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
113 | while True:
114 | # Start timer (for calculating frame rate)
115 | current_count=0
116 | t1 = cv2.getTickCount()
117 |
118 | # Grab frame from video stream
119 | frame1 = videostream.read()
120 |
121 | # Acquire frame and resize to expected shape [1xHxWx3]
122 | frame = frame1.copy()
123 | frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
124 | frame_resized = cv2.resize(frame_rgb, (width, height))
125 | input_data = np.expand_dims(frame_resized, axis=0)
126 |
127 | # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
128 | if floating_model:
129 | input_data = (np.float32(input_data) - input_mean) / input_std
130 |
131 | # Perform the actual detection by running the model with the image as input
132 | interpreter.set_tensor(input_details[0]['index'],input_data)
133 | interpreter.invoke()
134 |
135 | # Retrieve detection results
136 | boxes = interpreter.get_tensor(output_details[0]['index'])[0] # Bounding box coordinates of detected objects
137 | classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
138 | scores = interpreter.get_tensor(output_details[2]['index'])[0] # Confidence of detected objects
139 | #num = interpreter.get_tensor(output_details[3]['index'])[0] # Total number of detected objects (inaccurate and not needed)
140 |
141 | # Loop over all detections and draw detection box if confidence is above minimum threshold
142 | for i in range(len(scores)):
143 | if ((scores[i] > MIN_CONF_THRESH) and (scores[i] <= 1.0)):
144 |
145 | # Get bounding box coordinates and draw box
146 | # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
147 | ymin = int(max(1,(boxes[i][0] * imH)))
148 | xmin = int(max(1,(boxes[i][1] * imW)))
149 | ymax = int(min(imH,(boxes[i][2] * imH)))
150 | xmax = int(min(imW,(boxes[i][3] * imW)))
151 |
152 | cv2.rectangle(frame, (xmin,ymin), (xmax,ymax), (10, 255, 0), 2)
153 |
154 | # Draw label
155 | object_name = labels[int(classes[i])] # Look up object name from "labels" array using class index
156 | label = '%s: %d%%' % (object_name, int(scores[i]*100)) # Example: 'person: 72%'
157 | labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2) # Get font size
158 | label_ymin = max(ymin, labelSize[1] + 10) # Make sure not to draw label too close to top of window
159 | cv2.rectangle(frame, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED) # Draw white box to put label text in
160 | cv2.putText(frame, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2) # Draw label text
161 | current_count+=1
162 | # Draw framerate in corner of frame
163 | cv2.putText(frame,'FPS: {0:.2f}'.format(frame_rate_calc),(15,25),cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,55),2,cv2.LINE_AA)
164 | cv2.putText (frame,'Total Detection Count : ' + str(current_count),(15,65),cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,55),2,cv2.LINE_AA)
165 | # All the results have been drawn on the frame, so it's time to display it.
166 | cv2.imshow('Object Detector', frame)
167 |
168 | # Calculate framerate
169 | t2 = cv2.getTickCount()
170 | time1 = (t2-t1)/freq
171 | frame_rate_calc= 1/time1
172 |
173 | # Press 'q' to quit
174 | if cv2.waitKey(1) == ord('q'):
175 | break
176 |
177 | # Clean up
178 | cv2.destroyAllWindows()
179 | videostream.stop()
180 | print("Done")
181 |
--------------------------------------------------------------------------------
/TFLite-Video-od.py:
--------------------------------------------------------------------------------
1 | # This is Edje Electronics' code with some minor adjustments
2 | # Credit goes to his repo: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi
3 |
4 | import tflite_runtime.interpreter as tflite
5 | import os
6 | import argparse
7 | import cv2
8 | import numpy as np
9 | import sys
10 | import time
13 |
14 | parser = argparse.ArgumentParser()
15 | parser.add_argument('--model', help='Provide the path to the TFLite file, default is models/model.tflite',
16 | default='models/model.tflite')
17 | parser.add_argument('--labels', help='Provide the path to the Labels, default is models/labels.txt',
18 | default='models/labels.txt')
19 | parser.add_argument('--video', help='Name of the video to perform detection on',
20 | default='test.mp4')
21 | parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
22 | default=0.5)
23 |
24 | args = parser.parse_args()
25 |
26 | # PROVIDE PATH TO MODEL DIRECTORY
27 | PATH_TO_MODEL_DIR = args.model
28 |
29 | # PROVIDE PATH TO LABEL MAP
30 | PATH_TO_LABELS = args.labels
31 |
32 | # PROVIDE PATH TO VIDEO DIRECTORY
33 | VIDEO_PATH = args.video
34 |
35 | # PROVIDE THE MINIMUM CONFIDENCE THRESHOLD
36 | MIN_CONF_THRESH = float(args.threshold)
37 |
39 | print('Loading model...', end='')
40 | start_time = time.time()
41 |
42 | # LOAD TFLITE MODEL
43 | interpreter = tflite.Interpreter(model_path=PATH_TO_MODEL_DIR)
44 | # LOAD LABELS
45 | with open(PATH_TO_LABELS, 'r') as f:
46 | labels = [line.strip() for line in f.readlines()]
47 | end_time = time.time()
48 | elapsed_time = end_time - start_time
49 | print('Done! Took {} seconds'.format(elapsed_time))
50 |
51 | interpreter.allocate_tensors()
52 | input_details = interpreter.get_input_details()
53 | output_details = interpreter.get_output_details()
54 |
55 | height = input_details[0]['shape'][1]
56 | width = input_details[0]['shape'][2]
57 |
58 | floating_model = (input_details[0]['dtype'] == np.float32)
59 |
60 | input_mean = 127.5
61 | input_std = 127.5
62 |
63 | # Initialize frame rate calculation
64 | frame_rate_calc = 1
65 | freq = cv2.getTickFrequency()
66 | print('Running inference for {}... '.format(VIDEO_PATH), end='')
67 | # Initialize video
68 | video = cv2.VideoCapture(VIDEO_PATH)
69 | imW = video.get(cv2.CAP_PROP_FRAME_WIDTH)
70 | imH = video.get(cv2.CAP_PROP_FRAME_HEIGHT)
71 | while(video.isOpened()):
72 | # Start timer (for calculating frame rate)
73 | current_count=0
74 | t1 = cv2.getTickCount()
75 |
76 |     # Grab frame from video stream
77 |     ret, frame1 = video.read()
| 
|     # If no frame was returned, the video has ended, so exit the loop
|     if not ret:
|         break
78 | 
79 |     # Acquire frame and resize to expected shape [1xHxWx3]
80 |     frame = frame1.copy()
81 | frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
82 | frame_resized = cv2.resize(frame_rgb, (width, height))
83 | input_data = np.expand_dims(frame_resized, axis=0)
84 |
85 | # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
86 | if floating_model:
87 | input_data = (np.float32(input_data) - input_mean) / input_std
88 |
89 | # Perform the actual detection by running the model with the image as input
90 | interpreter.set_tensor(input_details[0]['index'],input_data)
91 | interpreter.invoke()
92 |
93 | # Retrieve detection results
94 | boxes = interpreter.get_tensor(output_details[0]['index'])[0] # Bounding box coordinates of detected objects
95 | classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
96 | scores = interpreter.get_tensor(output_details[2]['index'])[0] # Confidence of detected objects
97 | #num = interpreter.get_tensor(output_details[3]['index'])[0] # Total number of detected objects (inaccurate and not needed)
98 |
99 | # Loop over all detections and draw detection box if confidence is above minimum threshold
100 | for i in range(len(scores)):
101 | if ((scores[i] > MIN_CONF_THRESH) and (scores[i] <= 1.0)):
102 |
103 | # Get bounding box coordinates and draw box
104 | # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
105 | ymin = int(max(1,(boxes[i][0] * imH)))
106 | xmin = int(max(1,(boxes[i][1] * imW)))
107 | ymax = int(min(imH,(boxes[i][2] * imH)))
108 | xmax = int(min(imW,(boxes[i][3] * imW)))
109 |
110 | cv2.rectangle(frame, (xmin,ymin), (xmax,ymax), (10, 255, 0), 2)
111 |
112 | # Draw label
113 | object_name = labels[int(classes[i])] # Look up object name from "labels" array using class index
114 | label = '%s: %d%%' % (object_name, int(scores[i]*100)) # Example: 'person: 72%'
115 | labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2) # Get font size
116 | label_ymin = max(ymin, labelSize[1] + 10) # Make sure not to draw label too close to top of window
117 | cv2.rectangle(frame, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED) # Draw white box to put label text in
118 | cv2.putText(frame, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2) # Draw label text
119 | current_count+=1
120 | # Draw framerate in corner of frame
121 | cv2.putText(frame,'FPS: {0:.2f}'.format(frame_rate_calc),(15,25),cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,55),2,cv2.LINE_AA)
122 | cv2.putText (frame,'Total Detection Count : ' + str(current_count),(15,65),cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,55),2,cv2.LINE_AA)
123 | # All the results have been drawn on the frame, so it's time to display it.
124 | cv2.imshow('Object Detector', frame)
125 |
126 | # Calculate framerate
127 | t2 = cv2.getTickCount()
128 | time1 = (t2-t1)/freq
129 | frame_rate_calc= 1/time1
130 |
131 | # Press 'q' to quit
132 | if cv2.waitKey(1) == ord('q'):
133 | break
134 |
135 | # Clean up
136 | cv2.destroyAllWindows()
137 | print("Done")
138 |
--------------------------------------------------------------------------------
/doc/2020-11-15-230504_1920x1080_scrot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/2020-11-15-230504_1920x1080_scrot.png
--------------------------------------------------------------------------------
/doc/Camera Interface.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/Camera Interface.png
--------------------------------------------------------------------------------
/doc/Screenshot 2020-11-14 144537.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/Screenshot 2020-11-14 144537.png
--------------------------------------------------------------------------------
/doc/Screenshot 2020-11-16 104855.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/Screenshot 2020-11-16 104855.png
--------------------------------------------------------------------------------
/doc/directory.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/directory.png
--------------------------------------------------------------------------------
/doc/final model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/final model.png
--------------------------------------------------------------------------------
/doc/folder.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/folder.png
--------------------------------------------------------------------------------
/doc/model.tflite.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/model.tflite.png
--------------------------------------------------------------------------------
/doc/saved_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/saved_model.png
--------------------------------------------------------------------------------
/doc/tf vs tflite.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/doc/tf vs tflite.png
--------------------------------------------------------------------------------
/install-prerequisites.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
| # Installs prerequisites for OpenCV
2 |
3 | sudo apt-get install -y libhdf5-dev libhdf5-serial-dev libhdf5-103
4 | sudo apt-get install -y libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5
5 | sudo apt-get install -y libatlas-base-dev
6 | sudo apt-get install -y libjasper-dev
7 |
8 | # Installs OpenCV pip package
9 |
10 | pip install opencv-python==4.1.0.25
11 |
12 | # Install the tflite_runtime
13 | pip install https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp37-cp37m-linux_armv7l.whl
14 | echo "Prerequisites Installed Successfully"
15 |
--------------------------------------------------------------------------------
/models/labels.txt:
--------------------------------------------------------------------------------
1 | person
2 | bicycle
3 | car
4 | motorcycle
5 | airplane
6 | bus
7 | train
8 | truck
9 | boat
10 | traffic light
11 | fire hydrant
12 | ???
13 | stop sign
14 | parking meter
15 | bench
16 | bird
17 | cat
18 | dog
19 | horse
20 | sheep
21 | cow
22 | elephant
23 | bear
24 | zebra
25 | giraffe
26 | ???
27 | backpack
28 | umbrella
29 | ???
30 | ???
31 | handbag
32 | tie
33 | suitcase
34 | frisbee
35 | skis
36 | snowboard
37 | sports ball
38 | kite
39 | baseball bat
40 | baseball glove
41 | skateboard
42 | surfboard
43 | tennis racket
44 | bottle
45 | ???
46 | wine glass
47 | cup
48 | fork
49 | knife
50 | spoon
51 | bowl
52 | banana
53 | apple
54 | sandwich
55 | orange
56 | broccoli
57 | carrot
58 | hot dog
59 | pizza
60 | donut
61 | cake
62 | chair
63 | couch
64 | potted plant
65 | bed
66 | ???
67 | dining table
68 | ???
69 | ???
70 | toilet
71 | ???
72 | tv
73 | laptop
74 | mouse
75 | remote
76 | keyboard
77 | cell phone
78 | microwave
79 | oven
80 | toaster
81 | sink
82 | refrigerator
83 | ???
84 | book
85 | clock
86 | vase
87 | scissors
88 | teddy bear
89 | hair drier
90 | toothbrush
91 |
--------------------------------------------------------------------------------
/models/model.tflite:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/armaanpriyadarshan/TensorFlow-2-Lite-Object-Detection-on-the-Raspberry-Pi/58b0e16e4b6cf432781876bdc1bc22e4576815ca/models/model.tflite
--------------------------------------------------------------------------------