├── LICENSE ├── Object_detection_picamera.py ├── Pet_detector.py ├── README.md └── doc ├── Install_TF_RPi.jpg ├── PetDetector_Video.PNG ├── Picamera_livingroom.png ├── YouTube_video.png ├── bashrc.png ├── camera_enabled.png ├── cards.png ├── directory.png ├── edited_script.png ├── kitchen.png ├── pet_detector_demo.png └── update.png /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /Object_detection_picamera.py: -------------------------------------------------------------------------------- 1 | ######## Picamera Object Detection Using TensorFlow Classifier ######### 2 | # 3 | # Author: Evan Juras 4 | # Date: 4/15/18 5 | # Description: 6 | # This program uses a TensorFlow classifier to perform object detection. 7 | # It loads the classifier and uses it to perform object detection on a Picamera feed.
8 | # It draws boxes and scores around the objects of interest in each frame from 9 | # the Picamera. It can also be used with a webcam by adding "--usbcam" 10 | # when executing this script from the terminal. 11 | 12 | ## Some of the code is copied from Google's example at 13 | ## https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb 14 | 15 | ## and some is copied from Dat Tran's example at 16 | ## https://github.com/datitran/object_detector_app/blob/master/object_detection_app.py 17 | 18 | ## but I changed it to make it more understandable to me. 19 | 20 | 21 | # Import packages 22 | import os 23 | import cv2 24 | import numpy as np 25 | from picamera.array import PiRGBArray 26 | from picamera import PiCamera 27 | import tensorflow as tf 28 | import argparse 29 | import sys 30 | 31 | # Set up camera constants 32 | IM_WIDTH = 1280 33 | IM_HEIGHT = 720 34 | #IM_WIDTH = 640    # Use smaller resolution for 35 | #IM_HEIGHT = 480   # slightly faster framerate 36 | 37 | # Select camera type (if user enters --usbcam when calling this script, 38 | # a USB webcam will be used) 39 | camera_type = 'picamera' 40 | parser = argparse.ArgumentParser() 41 | parser.add_argument('--usbcam', help='Use a USB webcam instead of picamera', 42 | action='store_true') 43 | args = parser.parse_args() 44 | if args.usbcam: 45 | camera_type = 'usb' 46 | 47 | # This is needed since the working directory is the object_detection folder. 48 | sys.path.append('..') 49 | 50 | # Import utilities 51 | from utils import label_map_util 52 | from utils import visualization_utils as vis_util 53 | 54 | # Name of the directory containing the object detection module we're using 55 | MODEL_NAME = 'ssdlite_mobilenet_v2_coco_2018_05_09' 56 | 57 | # Grab path to current working directory 58 | CWD_PATH = os.getcwd() 59 | 60 | # Path to frozen detection graph .pb file, which contains the model that is used 61 | # for object detection. 62 | PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,'frozen_inference_graph.pb') 63 | 64 | # Path to label map file 65 | PATH_TO_LABELS = os.path.join(CWD_PATH,'data','mscoco_label_map.pbtxt') 66 | 67 | # Number of classes the object detector can identify 68 | NUM_CLASSES = 90 69 | 70 | ## Load the label map. 71 | # Label maps map indices to category names, so that when the convolutional 72 | # network predicts `5`, we know that this corresponds to `airplane`. 73 | # Here we use internal utility functions, but anything that returns a 74 | # dictionary mapping integers to appropriate string labels would be fine. 75 | label_map = label_map_util.load_labelmap(PATH_TO_LABELS) 76 | categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) 77 | category_index = label_map_util.create_category_index(categories) 78 | 79 | # Load the TensorFlow model into memory. 80 | detection_graph = tf.Graph() 81 | with detection_graph.as_default(): 82 | od_graph_def = tf.GraphDef() 83 | with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: 84 | serialized_graph = fid.read() 85 | od_graph_def.ParseFromString(serialized_graph) 86 | tf.import_graph_def(od_graph_def, name='') 87 | 88 | sess = tf.Session(graph=detection_graph) 89 | 90 | 91 | # Define input and output tensors (i.e.
data) for the object detection classifier 92 | 93 | # Input tensor is the image 94 | image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') 95 | 96 | # Output tensors are the detection boxes, scores, and classes 97 | # Each box represents a part of the image where a particular object was detected 98 | detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0') 99 | 100 | # Each score represents the level of confidence for each of the objects. 101 | # The score is shown on the result image, together with the class label. 102 | detection_scores = detection_graph.get_tensor_by_name('detection_scores:0') 103 | detection_classes = detection_graph.get_tensor_by_name('detection_classes:0') 104 | 105 | # Number of objects detected 106 | num_detections = detection_graph.get_tensor_by_name('num_detections:0') 107 | 108 | # Initialize frame rate calculation 109 | frame_rate_calc = 1 110 | freq = cv2.getTickFrequency() 111 | font = cv2.FONT_HERSHEY_SIMPLEX 112 | 113 | # Initialize camera and perform object detection. 114 | # The camera has to be set up and used differently depending on whether it's a 115 | # Picamera or USB webcam. 116 | 117 | # I know this is ugly, but I basically copy+pasted the code for the object 118 | # detection loop twice, and made one work for Picamera and the other work 119 | # for USB. 120 | 121 | ### Picamera ### 122 | if camera_type == 'picamera': 123 | # Initialize Picamera and grab reference to the raw capture 124 | camera = PiCamera() 125 | camera.resolution = (IM_WIDTH,IM_HEIGHT) 126 | camera.framerate = 10 127 | rawCapture = PiRGBArray(camera, size=(IM_WIDTH,IM_HEIGHT)) 128 | rawCapture.truncate(0) 129 | 130 | for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True): 131 | 132 | t1 = cv2.getTickCount() 133 | 134 | # Acquire frame and expand frame dimensions to have shape: [1, None, None, 3], 135 | # i.e. a batch containing a single RGB image, which is the input format the model expects 136 | frame = np.copy(frame1.array) 137 | frame.setflags(write=1) 138 | frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) 139 | frame_expanded = np.expand_dims(frame_rgb, axis=0) 140 | 141 | # Perform the actual detection by running the model with the image as input 142 | (boxes, scores, classes, num) = sess.run( 143 | [detection_boxes, detection_scores, detection_classes, num_detections], 144 | feed_dict={image_tensor: frame_expanded}) 145 | 146 | # Draw the results of the detection (aka 'visualize the results') 147 | vis_util.visualize_boxes_and_labels_on_image_array( 148 | frame, 149 | np.squeeze(boxes), 150 | np.squeeze(classes).astype(np.int32), 151 | np.squeeze(scores), 152 | category_index, 153 | use_normalized_coordinates=True, 154 | line_thickness=8, 155 | min_score_thresh=0.40) 156 | 157 | cv2.putText(frame,"FPS: {0:.2f}".format(frame_rate_calc),(30,50),font,1,(255,255,0),2,cv2.LINE_AA) 158 | 159 | # All the results have been drawn on the frame, so it's time to display it.
160 | cv2.imshow('Object detector', frame) 161 | 162 | t2 = cv2.getTickCount() 163 | time1 = (t2-t1)/freq 164 | frame_rate_calc = 1/time1 165 | 166 | # Press 'q' to quit 167 | if cv2.waitKey(1) == ord('q'): 168 | break 169 | 170 | rawCapture.truncate(0) 171 | 172 | camera.close() 173 | 174 | ### USB webcam ### 175 | elif camera_type == 'usb': 176 | # Initialize USB webcam feed 177 | camera = cv2.VideoCapture(0) 178 | ret = camera.set(3,IM_WIDTH) 179 | ret = camera.set(4,IM_HEIGHT) 180 | 181 | while(True): 182 | 183 | t1 = cv2.getTickCount() 184 | 185 | # Acquire frame and expand frame dimensions to have shape: [1, None, None, 3], 186 | # i.e. a batch containing a single RGB image, which is the input format the model expects 187 | ret, frame = camera.read() 188 | frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) 189 | frame_expanded = np.expand_dims(frame_rgb, axis=0) 190 | 191 | # Perform the actual detection by running the model with the image as input 192 | (boxes, scores, classes, num) = sess.run( 193 | [detection_boxes, detection_scores, detection_classes, num_detections], 194 | feed_dict={image_tensor: frame_expanded}) 195 | 196 | # Draw the results of the detection (aka 'visualize the results') 197 | vis_util.visualize_boxes_and_labels_on_image_array( 198 | frame, 199 | np.squeeze(boxes), 200 | np.squeeze(classes).astype(np.int32), 201 | np.squeeze(scores), 202 | category_index, 203 | use_normalized_coordinates=True, 204 | line_thickness=8, 205 | min_score_thresh=0.85) 206 | 207 | cv2.putText(frame,"FPS: {0:.2f}".format(frame_rate_calc),(30,50),font,1,(255,255,0),2,cv2.LINE_AA) 208 | 209 | # All the results have been drawn on the frame, so it's time to display it. 210 | cv2.imshow('Object detector', frame) 211 | 212 | t2 = cv2.getTickCount() 213 | time1 = (t2-t1)/freq 214 | frame_rate_calc = 1/time1 215 | 216 | # Press 'q' to quit 217 | if cv2.waitKey(1) == ord('q'): 218 | break 219 | 220 | camera.release() 221 | 222 | cv2.destroyAllWindows() 223 | 224 | -------------------------------------------------------------------------------- /Pet_detector.py: -------------------------------------------------------------------------------- 1 | ######## Raspberry Pi Pet Detector Camera using TensorFlow Object Detection API ######### 2 | # 3 | # Author: Evan Juras 4 | # Date: 10/15/18 5 | # Description: 6 | # 7 | # This script implements a "pet detector" that alerts the user if a pet is 8 | # waiting to be let inside or outside. It takes video frames from a Picamera 9 | # or USB webcam, passes them through a TensorFlow object detection model, 10 | # determines if a cat or dog has been detected in the image, checks the location 11 | # of the cat or dog in the frame, and texts the user's phone if a cat or dog is 12 | # detected in the appropriate location. 13 | # 14 | # The framework is based on the Object_detection_picamera.py script located here: 15 | # https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/blob/master/Object_detection_picamera.py 16 | # 17 | # Sending a text requires setting up a Twilio account (free trials are available).
18 | # Here is a good tutorial for using Twilio: 19 | # https://www.twilio.com/docs/sms/quickstart/python 20 | 21 | 22 | # Import packages 23 | import os 24 | import cv2 25 | import numpy as np 26 | from picamera.array import PiRGBArray 27 | from picamera import PiCamera 28 | import tensorflow as tf 29 | import argparse 30 | import sys 31 | 32 | # Set up Twilio 33 | from twilio.rest import Client 34 | 35 | # Twilio SID, authentication token, my phone number, and the Twilio phone number 36 | # are stored as environment variables on my Pi so people can't see them 37 | account_sid = os.environ['TWILIO_ACCOUNT_SID'] 38 | auth_token = os.environ['TWILIO_AUTH_TOKEN'] 39 | my_number = os.environ['MY_DIGITS'] 40 | twilio_number = os.environ['TWILIO_DIGITS'] 41 | 42 | client = Client(account_sid,auth_token) 43 | 44 | # Set up camera constants 45 | IM_WIDTH = 1280 46 | IM_HEIGHT = 720 47 | 48 | # Select camera type (if user enters --usbcam when calling this script, 49 | # a USB webcam will be used) 50 | camera_type = 'picamera' 51 | parser = argparse.ArgumentParser() 52 | parser.add_argument('--usbcam', help='Use a USB webcam instead of picamera', 53 | action='store_true') 54 | args = parser.parse_args() 55 | if args.usbcam: 56 | camera_type = 'usb' 57 | 58 | #### Initialize TensorFlow model #### 59 | 60 | # This is needed since the working directory is the object_detection folder. 61 | sys.path.append('..') 62 | 63 | # Import utilities 64 | from utils import label_map_util 65 | from utils import visualization_utils as vis_util 66 | 67 | # Name of the directory containing the object detection module we're using 68 | MODEL_NAME = 'ssdlite_mobilenet_v2_coco_2018_05_09' 69 | 70 | # Grab path to current working directory 71 | CWD_PATH = os.getcwd() 72 | 73 | # Path to frozen detection graph .pb file, which contains the model that is used 74 | # for object detection. 75 | PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,'frozen_inference_graph.pb') 76 | 77 | # Path to label map file 78 | PATH_TO_LABELS = os.path.join(CWD_PATH,'data','mscoco_label_map.pbtxt') 79 | 80 | # Number of classes the object detector can identify 81 | NUM_CLASSES = 90 82 | 83 | ## Load the label map. 84 | # Label maps map indices to category names, so that when the convolutional 85 | # network predicts `5`, we know that this corresponds to `airplane`. 86 | # Here we use internal utility functions, but anything that returns a 87 | # dictionary mapping integers to appropriate string labels would be fine. 88 | label_map = label_map_util.load_labelmap(PATH_TO_LABELS) 89 | categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) 90 | category_index = label_map_util.create_category_index(categories) 91 | 92 | # Load the TensorFlow model into memory. 93 | detection_graph = tf.Graph() 94 | with detection_graph.as_default(): 95 | od_graph_def = tf.GraphDef() 96 | with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: 97 | serialized_graph = fid.read() 98 | od_graph_def.ParseFromString(serialized_graph) 99 | tf.import_graph_def(od_graph_def, name='') 100 | 101 | sess = tf.Session(graph=detection_graph) 102 | 103 | 104 | # Define input and output tensors (i.e.
data) for the object detection classifier 105 | 106 | # Input tensor is the image 107 | image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') 108 | 109 | # Output tensors are the detection boxes, scores, and classes 110 | # Each box represents a part of the image where a particular object was detected 111 | detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0') 112 | 113 | # Each score represents the level of confidence for each of the objects. 114 | # The score is shown on the result image, together with the class label. 115 | detection_scores = detection_graph.get_tensor_by_name('detection_scores:0') 116 | detection_classes = detection_graph.get_tensor_by_name('detection_classes:0') 117 | 118 | # Number of objects detected 119 | num_detections = detection_graph.get_tensor_by_name('num_detections:0') 120 | 121 | #### Initialize other parameters #### 122 | 123 | # Initialize frame rate calculation 124 | frame_rate_calc = 1 125 | freq = cv2.getTickFrequency() 126 | font = cv2.FONT_HERSHEY_SIMPLEX 127 | 128 | # Define inside box coordinates (top left and bottom right) 129 | TL_inside = (int(IM_WIDTH*0.1),int(IM_HEIGHT*0.35)) 130 | BR_inside = (int(IM_WIDTH*0.45),int(IM_HEIGHT-5)) 131 | 132 | # Define outside box coordinates (top left and bottom right) 133 | TL_outside = (int(IM_WIDTH*0.46),int(IM_HEIGHT*0.25)) 134 | BR_outside = (int(IM_WIDTH*0.8),int(IM_HEIGHT*.85)) 135 | 136 | # Initialize control variables used for pet detector 137 | detected_inside = False 138 | detected_outside = False 139 | 140 | inside_counter = 0 141 | outside_counter = 0 142 | 143 | pause = 0 144 | pause_counter = 0 145 | 146 | #### Pet detection function #### 147 | 148 | # This function contains the code to detect a pet, determine if it's 149 | # inside or outside, and send a text to the user's phone. 150 | def pet_detector(frame): 151 | 152 | # Use globals for the control variables so they retain their value after function exits 153 | global detected_inside, detected_outside 154 | global inside_counter, outside_counter 155 | global pause, pause_counter 156 | 157 | frame_expanded = np.expand_dims(frame, axis=0) 158 | 159 | # Perform the actual detection by running the model with the image as input 160 | (boxes, scores, classes, num) = sess.run( 161 | [detection_boxes, detection_scores, detection_classes, num_detections], 162 | feed_dict={image_tensor: frame_expanded}) 163 | 164 | # Draw the results of the detection (aka 'visualize the results') 165 | vis_util.visualize_boxes_and_labels_on_image_array( 166 | frame, 167 | np.squeeze(boxes), 168 | np.squeeze(classes).astype(np.int32), 169 | np.squeeze(scores), 170 | category_index, 171 | use_normalized_coordinates=True, 172 | line_thickness=8, 173 | min_score_thresh=0.40) 174 | 175 | # Draw boxes defining "outside" and "inside" locations. 176 | cv2.rectangle(frame,TL_outside,BR_outside,(255,20,20),3) 177 | cv2.putText(frame,"Outside box",(TL_outside[0]+10,TL_outside[1]-10),font,1,(255,20,255),3,cv2.LINE_AA) 178 | cv2.rectangle(frame,TL_inside,BR_inside,(20,20,255),3) 179 | cv2.putText(frame,"Inside box",(TL_inside[0]+10,TL_inside[1]-10),font,1,(20,255,255),3,cv2.LINE_AA) 180 | 181 | # Check the class of the top detected object by looking at classes[0][0]. 182 | # If the top detected object is a cat (17) or a dog (18) (or a teddy bear (88) for test purposes), 183 | # find its center coordinates by looking at the boxes[0][0] variable.
184 | # boxes[0][0] variable holds coordinates of detected objects as (ymin, xmin, ymax, xmax) 185 | if (((int(classes[0][0]) == 17) or (int(classes[0][0]) == 18) or (int(classes[0][0]) == 88)) and (pause == 0)): 186 | x = int(((boxes[0][0][1]+boxes[0][0][3])/2)*IM_WIDTH) 187 | y = int(((boxes[0][0][0]+boxes[0][0][2])/2)*IM_HEIGHT) 188 | 189 | # Draw a circle at center of object 190 | cv2.circle(frame,(x,y), 5, (75,13,180), -1) 191 | 192 | # If object is in inside box, increment inside counter variable 193 | if ((x > TL_inside[0]) and (x < BR_inside[0]) and (y > TL_inside[1]) and (y < BR_inside[1])): 194 | inside_counter = inside_counter + 1 195 | 196 | # If object is in outside box, increment outside counter variable 197 | if ((x > TL_outside[0]) and (x < BR_outside[0]) and (y > TL_outside[1]) and (y < BR_outside[1])): 198 | outside_counter = outside_counter + 1 199 | 200 | # If pet has been detected inside for more than 10 frames, set detected_inside flag 201 | # and send a text to the phone. 202 | if inside_counter > 10: 203 | detected_inside = True 204 | message = client.messages.create( 205 | body = 'Your pet wants outside!', 206 | from_=twilio_number, 207 | to=my_number 208 | ) 209 | inside_counter = 0 210 | outside_counter = 0 211 | # Pause pet detection by setting "pause" flag 212 | pause = 1 213 | 214 | # If pet has been detected outside for more than 10 frames, set detected_outside flag 215 | # and send a text to the phone. 216 | if outside_counter > 10: 217 | detected_outside = True 218 | message = client.messages.create( 219 | body = 'Your pet wants inside!', 220 | from_=twilio_number, 221 | to=my_number 222 | ) 223 | inside_counter = 0 224 | outside_counter = 0 225 | # Pause pet detection by setting "pause" flag 226 | pause = 1 227 | 228 | # If pause flag is set, draw message on screen. 229 | if pause == 1: 230 | if detected_inside == True: 231 | cv2.putText(frame,'Pet wants outside!',(int(IM_WIDTH*.1),int(IM_HEIGHT*.5)),font,3,(0,0,0),7,cv2.LINE_AA) 232 | cv2.putText(frame,'Pet wants outside!',(int(IM_WIDTH*.1),int(IM_HEIGHT*.5)),font,3,(95,176,23),5,cv2.LINE_AA) 233 | 234 | if detected_outside == True: 235 | cv2.putText(frame,'Pet wants inside!',(int(IM_WIDTH*.1),int(IM_HEIGHT*.5)),font,3,(0,0,0),7,cv2.LINE_AA) 236 | cv2.putText(frame,'Pet wants inside!',(int(IM_WIDTH*.1),int(IM_HEIGHT*.5)),font,3,(95,176,23),5,cv2.LINE_AA) 237 | 238 | # Increment pause counter until it reaches 30 (for a framerate of 1.5 FPS, this is about 20 seconds), 239 | # then unpause the application (set pause flag to 0). 240 | pause_counter = pause_counter + 1 241 | if pause_counter > 30: 242 | pause = 0 243 | pause_counter = 0 244 | detected_inside = False 245 | detected_outside = False 246 | 247 | # Draw counter info 248 | cv2.putText(frame,'Detection counter: ' + str(max(inside_counter,outside_counter)),(10,100),font,0.5,(255,255,0),1,cv2.LINE_AA) 249 | cv2.putText(frame,'Pause counter: ' + str(pause_counter),(10,150),font,0.5,(255,255,0),1,cv2.LINE_AA) 250 | 251 | return frame 252 | 253 | #### Initialize camera and perform object detection #### 254 | 255 | # The camera has to be set up and used differently depending on whether it's a 256 | # Picamera or USB webcam.
257 | 258 | ### Picamera ### 259 | if camera_type == 'picamera': 260 | # Initialize Picamera and grab reference to the raw capture 261 | camera = PiCamera() 262 | camera.resolution = (IM_WIDTH,IM_HEIGHT) 263 | camera.framerate = 10 264 | rawCapture = PiRGBArray(camera, size=(IM_WIDTH,IM_HEIGHT)) 265 | rawCapture.truncate(0) 266 | 267 | # Continuously capture frames and perform object detection on them 268 | for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True): 269 | 270 | t1 = cv2.getTickCount() 271 | 272 | # Acquire the frame; the pet_detector() function expands it to shape [1, None, None, 3] 273 | # (a batch of one image) before running the model 274 | frame = frame1.array 275 | frame.setflags(write=1) 276 | 277 | # Pass frame into pet detection function 278 | frame = pet_detector(frame) 279 | 280 | # Draw FPS 281 | cv2.putText(frame,"FPS: {0:.2f}".format(frame_rate_calc),(30,50),font,1,(255,255,0),2,cv2.LINE_AA) 282 | 283 | # All the results have been drawn on the frame, so it's time to display it. 284 | cv2.imshow('Object detector', frame) 285 | 286 | # FPS calculation 287 | t2 = cv2.getTickCount() 288 | time1 = (t2-t1)/freq 289 | frame_rate_calc = 1/time1 290 | 291 | # Press 'q' to quit 292 | if cv2.waitKey(1) == ord('q'): 293 | break 294 | 295 | rawCapture.truncate(0) 296 | 297 | camera.close() 298 | 299 | ### USB webcam ### 300 | 301 | elif camera_type == 'usb': 302 | # Initialize USB webcam feed 303 | camera = cv2.VideoCapture(0) 304 | ret = camera.set(3,IM_WIDTH) 305 | ret = camera.set(4,IM_HEIGHT) 306 | 307 | # Continuously capture frames and perform object detection on them 308 | while(True): 309 | 310 | t1 = cv2.getTickCount() 311 | 312 | # Acquire the frame; the pet_detector() function expands it to shape [1, None, None, 3] 313 | # (a batch of one image) before running the model 314 | ret, frame = camera.read() 315 | 316 | # Pass frame into pet detection function 317 | frame = pet_detector(frame) 318 | 319 | # Draw FPS 320 | cv2.putText(frame,"FPS: {0:.2f}".format(frame_rate_calc),(30,50),font,1,(255,255,0),2,cv2.LINE_AA) 321 | 322 | # All the results have been drawn on the frame, so it's time to display it. 323 | cv2.imshow('Object detector', frame) 324 | 325 | # FPS calculation 326 | t2 = cv2.getTickCount() 327 | time1 = (t2-t1)/freq 328 | frame_rate_calc = 1/time1 329 | 330 | # Press 'q' to quit 331 | if cv2.waitKey(1) == ord('q'): 332 | break 333 | 334 | camera.release() 335 | 336 | cv2.destroyAllWindows() 337 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Tutorial to set up TensorFlow Object Detection API on the Raspberry Pi 2 | 3 | 4 |

5 | 6 |

7 | 8 | *Update 10/13/19:* Setting up the TensorFlow Object Detection API on the Pi is much easier now! Two major updates: 1) TensorFlow can be installed simply using "pip3 install tensorflow". 2) The protobuf compiler (protoc) can be installed using "sudo apt-get install protobuf-compiler". I have updated Step 2 and Step 4 to reflect these changes. 9 | 10 | Bonus: I made a Pet Detector program (Pet_detector.py) that sends me a text when it detects that my cat wants to be let outside! It runs on the Raspberry Pi and uses the TensorFlow Object Detection API. You can use the code as an example for your own object detection applications. More info is available at [the bottom of this readme](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi#bonus-pet-detector). 11 | 12 | ## Introduction 13 | This guide provides step-by-step instructions for how to set up TensorFlow’s Object Detection API on the Raspberry Pi. By following the steps in this guide, you will be able to use your Raspberry Pi to perform object detection on live video feeds from a Picamera or USB webcam. Combine this guide with my tutorial on how to train your own neural network to identify specific objects, and you can use your Pi for unique detection applications such as: 14 | 15 | * Detecting if bunnies are in your garden eating your precious vegetables 16 | * Telling you if there are any parking spaces available in front of your apartment building 17 | * [Beehive bee counter](http://matpalm.com/blog/counting_bees/) 18 | * [Counting cards at the blackjack table](https://hackaday.io/project/27639-rainman-20-blackjack-robot) 19 | * And anything else you can think of! 20 | 21 | Here's a YouTube video I made that walks through this guide! 22 | 23 | [![Link to my YouTube video!](https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/master/doc/YouTube_video.png)](https://www.youtube.com/watch?v=npZ-8Nj1YwY) 24 | 25 | The guide walks through the following steps: 26 | 1. [Update the Raspberry Pi](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi#1-update-the-raspberry-pi) 27 | 2. [Install TensorFlow](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi#2-install-tensorflow) 28 | 3. [Install OpenCV](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi#3-install-opencv) 29 | 4. [Compile and install Protobuf](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi#4-compile-and-install-protobuf) 30 | 5. [Set up TensorFlow directory structure and the PYTHONPATH variable](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi#5-set-up-tensorflow-directory-structure-and-pythonpath-variable) 31 | 6. [Detect objects!](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi#6-detect-objects) 32 | 7. [Bonus: Pet detector!](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi#bonus-pet-detector) 33 | 34 | The repository also includes Object_detection_picamera.py, a Python script that loads an object detection model in TensorFlow and uses it to detect objects in a Picamera video feed. The guide was written for TensorFlow v1.8.0 on a Raspberry Pi Model 3B running Raspbian Stretch v9. It will likely work for newer versions of TensorFlow. 35 | 36 | ## Steps 37 | ### 1. Update the Raspberry Pi 38 | First, the Raspberry Pi needs to be fully updated.
Open a terminal and issue: 39 | ``` 40 | sudo apt-get update 41 | sudo apt-get dist-upgrade 42 | ``` 43 | Depending on how long it’s been since you’ve updated your Pi, the upgrade could take anywhere between a minute and an hour. 44 | 45 |

46 | 47 |

47 | 48 | ### 2. Install TensorFlow 49 | *Update 10/13/19: Changed instructions to just use "pip3 install tensorflow" rather than getting it from lhelontra's repository. The old instructions have been moved to this guide's appendix.* 50 | 51 | 52 | Next, we’ll install TensorFlow. The download is rather large (over 100MB), so it may take a while. Issue the following command: 53 | 54 | ``` 55 | pip3 install tensorflow 56 | ``` 57 | 58 | TensorFlow also needs the LibAtlas package. Install it by issuing the following command. (If this command doesn't work, issue "sudo apt-get update" and then try again). 59 | ``` 60 | sudo apt-get install libatlas-base-dev 61 | ``` 62 | While we’re at it, let’s install other dependencies that will be used by the TensorFlow Object Detection API. These are listed on the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) in TensorFlow’s Object Detection GitHub repository. Issue: 63 | ``` 64 | sudo pip3 install pillow lxml jupyter matplotlib cython 65 | sudo apt-get install python-tk 66 | ``` 67 | Alright, that’s everything we need for TensorFlow! Next up: OpenCV. 68 | 69 | ### 3. Install OpenCV 70 | TensorFlow’s object detection examples typically use matplotlib to display images, but I prefer to use OpenCV because it’s easier to work with and less error-prone. The object detection scripts in this guide’s GitHub repository use OpenCV. So, we need to install OpenCV. 71 | 72 | To get OpenCV working on the Raspberry Pi, there are quite a few dependencies that need to be installed through apt-get. If any of the following commands don’t work, issue “sudo apt-get update” and then try again. Issue: 73 | ``` 74 | sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev 75 | sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev 76 | sudo apt-get install libxvidcore-dev libx264-dev 77 | sudo apt-get install qt4-dev-tools libatlas-base-dev 78 | ``` 79 | Now that we’ve got all those installed, we can install OpenCV. Issue: 80 | ``` 81 | sudo pip3 install opencv-python 82 | ``` 83 | Alright, now OpenCV is installed! 84 | 85 | ### 4. Compile and Install Protobuf 86 | The TensorFlow object detection API uses Protobuf, a package that implements Google’s Protocol Buffer data format. You used to need to compile this from source, but now it's an easy install! I moved the old instructions for compiling and installing it from source to the appendix of this guide. 87 | 88 | ```sudo apt-get install protobuf-compiler``` 89 | 90 | Run `protoc --version` once that's done to verify it is installed. You should get a response of `libprotoc 3.6.1` or similar. 91 | 92 | ### 5. Set up TensorFlow Directory Structure and PYTHONPATH Variable 93 | Now that we’ve installed all the packages, we need to set up the TensorFlow directory. Move back to your home directory, then make a directory called “tensorflow1”, and cd into it. 94 | ``` 95 | mkdir tensorflow1 96 | cd tensorflow1 97 | ``` 98 | Download the tensorflow repository from GitHub by issuing: 99 | ``` 100 | git clone --depth 1 https://github.com/tensorflow/models.git 101 | ``` 102 | Next, we need to modify the PYTHONPATH environment variable to point at some directories inside the TensorFlow repository we just downloaded. We want PYTHONPATH to be set every time we open a terminal, so we have to modify the .bashrc file.
Open it by issuing: 103 | ``` 104 | sudo nano ~/.bashrc 105 | ``` 106 | Move to the end of the file, and on the last line, add: 107 | ``` 108 | export PYTHONPATH=$PYTHONPATH:/home/pi/tensorflow1/models/research:/home/pi/tensorflow1/models/research/slim 109 | ``` 110 | 111 |
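If you want to confirm the export took effect, here's a minimal sanity-check sketch (it assumes the /home/pi/tensorflow1 paths used above; adjust if your directory differs). Run it with python3 in a freshly opened terminal:

```python
# check_pythonpath.py - minimal sanity check that .bashrc exported the paths.
# The expected paths below assume the directory structure from this guide.
import os

expected = ['/home/pi/tensorflow1/models/research',
            '/home/pi/tensorflow1/models/research/slim']
current = os.environ.get('PYTHONPATH', '').split(':')

for path in expected:
    print(path, '->', 'OK' if path in current else 'MISSING')
```

If either path prints MISSING, double-check the export line in ~/.bashrc and re-open the terminal.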

112 | 113 |

113 | 114 | 115 | Then, save and exit the file. This makes it so the “export PYTHONPATH” command is called every time you open a new terminal, so the PYTHONPATH variable will always be set appropriately. Close and then re-open the terminal. 116 | 117 | Now, we need to use Protoc to compile the Protocol Buffer (.proto) files used by the Object Detection API. The .proto files are located in /research/object_detection/protos, but we need to execute the command from the /research directory. Issue: 118 | ``` 119 | cd /home/pi/tensorflow1/models/research 120 | protoc object_detection/protos/*.proto --python_out=. 121 | ``` 122 | This command converts all the "name".proto files to "name_pb2".py files. Next, move into the object_detection directory: 123 | ``` 124 | cd /home/pi/tensorflow1/models/research/object_detection 125 | ``` 126 | Now, we’ll download the SSDLite model from the [TensorFlow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md). The model zoo is Google’s collection of pre-trained object detection models that have various levels of speed and accuracy. The Raspberry Pi has a weak processor, so we need to use a model that takes less processing power. A lighter model runs faster, but the tradeoff is lower accuracy. For this tutorial, we’ll use SSDLite-MobileNet, which is the fastest model available. 127 | 128 | Google is continuously releasing models with improved speed and performance, so check back at the model zoo often to see if there are any better models. 129 | 130 | Download the SSDLite-MobileNet model and unpack it by issuing: 131 | ``` 132 | wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz 133 | tar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz 134 | ``` 135 | Now the model is in the object_detection directory and ready to be used. 136 | 137 | ### 6. Detect Objects! 138 | Okay, now everything is set up for performing object detection on the Pi! The Python script in this repository, Object_detection_picamera.py, detects objects in live feeds from a Picamera or USB webcam. Basically, the script sets paths to the model and label map, loads the model into memory, initializes the Picamera, and then begins performing object detection on each video frame from the Picamera. 139 | 140 | If you’re using a Picamera, make sure it is enabled in the Raspberry Pi configuration menu. 141 | 142 |
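Before running the detection script, you can optionally verify the Picamera itself is working with a short preview test. This is just a sketch using the picamera package that comes preinstalled on Raspbian:

```python
# camera_test.py - quick check that the Picamera is enabled and accessible.
from picamera import PiCamera
import time

camera = PiCamera()
camera.resolution = (1280, 720)   # same resolution the detection script uses
camera.start_preview()            # shows a preview overlay on the Pi's display
time.sleep(5)                     # keep the preview up for 5 seconds
camera.stop_preview()
camera.close()
```

If this errors out, re-check that the camera is enabled in the configuration menu and that the ribbon cable is seated properly.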

143 | 144 |

145 | 146 | Download the Object_detection_picamera.py file into the object_detection directory by issuing: 147 | ``` 148 | wget https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/master/Object_detection_picamera.py 149 | ``` 150 | Run the script by issuing: 151 | ``` 152 | python3 Object_detection_picamera.py 153 | ``` 154 | The script defaults to using an attached Picamera. If you have a USB webcam instead, add --usbcam to the end of the command: 155 | ``` 156 | python3 Object_detection_picamera.py --usbcam 157 | ``` 158 | Once the script initializes (which can take up to 30 seconds), you will see a window showing a live view from your camera. Common objects inside the view will be identified and have a rectangle drawn around them. 159 | 160 |
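To make what the script does more concrete, here is a stripped-down sketch of a single detection pass. It assumes the model-loading section of Object_detection_picamera.py has already run (so sess, image_tensor, the output tensors, and category_index all exist), and 'test.jpg' is a placeholder filename:

```python
# Single-frame detection sketch, reusing the session and tensors defined
# earlier in Object_detection_picamera.py (TensorFlow 1.x graph mode).
import cv2
import numpy as np

image = cv2.imread('test.jpg')                       # placeholder test image
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # OpenCV loads BGR; model expects RGB
image_expanded = np.expand_dims(image_rgb, axis=0)   # add batch dimension: [1, H, W, 3]

(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_expanded})

# Print each detection above the 40% confidence threshold the script uses
for i in range(int(num[0])):
    if scores[0][i] > 0.40:
        print(category_index[int(classes[0][i])]['name'], scores[0][i])
```

The live-video loop in the script is just this same pass repeated on every camera frame, plus drawing and FPS bookkeeping.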

161 | 162 |

162 | 163 | 164 | With the SSDLite model, the Raspberry Pi 3 performs fairly well, achieving a frame rate higher than 1 FPS. This is quick enough for many basic real-time detection applications. 165 | 166 | You can also use a model you trained yourself [(here's a guide that shows you how to train your own model)](https://www.youtube.com/watch?v=Rgpfk6eYxJA) by adding the frozen inference graph into the object_detection directory and changing the model path in the script. You can test this out using my playing card detector model (trained on TensorFlow v1.5 using transfer learning from the ssd_mobilenet_v2 model) located at [this dropbox link](https://www.dropbox.com/s/27avwicywbq68tx/card_model.zip?dl=0). Once you’ve downloaded and extracted the model, or if you have your own model, place the model folder into the object_detection directory. Place the label_map.pbtxt file into the object_detection/data directory. 167 | 168 |
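Before editing the script, you can sanity-check that the model unpacked correctly with a quick load test. This is only a sketch: 'card_model' below is an example folder name, so substitute your own:

```python
# model_check.py - verify a frozen graph file loads (TensorFlow 1.x).
# 'card_model' is an example name; replace it with your model folder.
import os
import tensorflow as tf

PATH_TO_CKPT = os.path.join(os.getcwd(), 'card_model', 'frozen_inference_graph.pb')

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        graph_def.ParseFromString(fid.read())
    tf.import_graph_def(graph_def, name='')

print('Successfully loaded', PATH_TO_CKPT)
```

A bad path or a corrupt download fails here immediately, which is easier to debug than an error buried in the full detection script.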

169 | 170 |

170 | 171 | 172 | Then, open the Object_detection_picamera.py script in a text editor. Go to the line where MODEL_NAME is set and change the string to match the name of the new model folder. Next, on the line where PATH_TO_LABELS is set, change the filename to match your new label map. Finally, change the NUM_CLASSES variable to the number of classes your model can identify. 173 | 174 |
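For example, if you were using the playing card detector model mentioned above, the edited lines might look something like this (the folder name and class count are illustrative placeholders; use the values that match your model):

```python
# Example edits in Object_detection_picamera.py for a custom model.
# 'card_model' and the class count are placeholders for your own values.
MODEL_NAME = 'card_model'
PATH_TO_LABELS = os.path.join(CWD_PATH,'data','label_map.pbtxt')
NUM_CLASSES = 6   # set this to however many classes your model detects
```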

175 | 176 |

176 | 177 | 178 | Now, when you run the script, it will use your model rather than the SSDLite-MobileNet model. If you’re using my model, it will detect and identify any playing cards dealt in front of the camera. 179 | 180 | **Note: If you plan to run this on the Pi for extended periods of time (greater than 5 minutes), make sure to have a heatsink installed on the Pi's main CPU! All the processing causes the CPU to run hot. Without a heatsink, it will shut down due to high temperature.** 181 | 182 |

183 | 184 |

184 | 185 | 186 | Thanks for following along with this guide; I hope you found it useful. Good luck with your object detection applications on the Raspberry Pi! 187 | 188 | ## Bonus: Pet Detector! 189 | 190 |

191 | 192 |

192 | 193 | 194 | ### Description 195 | The Pet_detector.py script is an example application that uses the Object Detection API to alert users when a certain object is detected. I have two indoor-outdoor pets at my parents' home: a cat and a dog. They frequently stand at the door and wait patiently to be let inside or outside. This pet detector uses the TensorFlow MobileNet-SSD model to detect when they are near the door. It defines two regions in the image, an "inside" region and an "outside" region. If the pet is detected in either region for more than 10 frames, the script uses Twilio to send my phone a text message. 196 | 197 | Here's a YouTube video demonstrating the pet detector and explaining how it works! 198 | 199 | [![Link to my YouTube video!](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/blob/master/doc/PetDetector_Video.PNG)](https://youtu.be/gGqVNuYol6o) 200 | 201 | ### Usage 202 | Run the pet detector by downloading Pet_detector.py to your /object_detection directory and issuing: 203 | ``` 204 | python3 Pet_detector.py 205 | ``` 206 | 207 | Using the Pet_detector.py program requires having a Twilio account set up [(see tutorial here)](https://www.twilio.com/docs/sms/quickstart/python). It also uses four environment variables that have to be set before running the program: TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, MY_DIGITS, and TWILIO_DIGITS. These can be set using the "export" command, as shown below. More information on setting environment variables for Twilio is given [here](https://www.twilio.com/blog/2017/01/how-to-set-environment-variables.html). 208 | ``` 209 | export TWILIO_ACCOUNT_SID=[sid_value] 210 | export TWILIO_AUTH_TOKEN=[auth_token] 211 | export MY_DIGITS=[your cell phone number] 212 | export TWILIO_DIGITS=[phone number of the Twilio account] 213 | ``` 214 | The sid_value, auth_token, and phone number of the Twilio account values are all provided when a Twilio account is set up. 215 | 216 | If you don't want to bother with setting up Twilio so the pet detector can send you texts, you can just comment out the lines in the code that use the Twilio library. The detector will still display a message on the screen when your pet wants inside or outside. 217 | 218 | Also, you can move the locations of the "inside" and "outside" boxes by adjusting the TL_inside, BR_inside, TL_outside, and BR_outside variables. 219 | 220 | ## Appendix 221 | 222 | ### Old instructions for installing TensorFlow 223 | These instructions show how to install TensorFlow using lhelontra's repository. They were replaced in my 10/13/19 update of this guide. I am keeping them here, because these are the instructions used in my [video](https://www.youtube.com/watch?v=npZ-8Nj1YwY). 224 | 225 | In the /home/pi directory, create a folder called ‘tf’, which will be used to hold all the installation files for TensorFlow and Protobuf, and cd into it: 226 | ``` 227 | mkdir tf 228 | cd tf 229 | ``` 230 | A pre-built, Raspberry Pi-compatible wheel file for installing the latest version of TensorFlow is available in the [“TensorFlow for ARM” GitHub repository](https://github.com/lhelontra/tensorflow-on-arm/releases). GitHub user lhelontra updates the repository with pre-compiled installation packages each time a new version of TensorFlow is released. Thanks lhelontra!
Download the wheel file by issuing: 231 | ``` 232 | wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.8.0/tensorflow-1.8.0-cp35-none-linux_armv7l.whl 233 | ``` 234 | At the time this tutorial was written, the most recent version of TensorFlow was version 1.8.0. If a more recent version is available on the repository, you can download it rather than version 1.8.0. 235 | 236 | Alternatively, if the owner of the GitHub repository stops releasing new builds, or if you want some experience compiling Python packages from source code, you can check out my video guide: [How to Install TensorFlow from Source on the Raspberry Pi](https://youtu.be/WqCnW_2XDw8), which shows you how to build and install TensorFlow from source on the Raspberry Pi. 237 | 238 | [![Link to TensorFlow installation video!](https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/master/doc/Install_TF_RPi.jpg)](https://www.youtube.com/watch?v=WqCnW_2XDw8) 239 | 240 | Now that we’ve got the file, install TensorFlow by issuing: 241 | ``` 242 | sudo pip3 install /home/pi/tf/tensorflow-1.8.0-cp35-none-linux_armv7l.whl 243 | ``` 244 | TensorFlow also needs the LibAtlas package. Install it by issuing (if this command doesn't work, issue "sudo apt-get update" and then try again): 245 | ``` 246 | sudo apt-get install libatlas-base-dev 247 | ``` 248 | While we’re at it, let’s install other dependencies that will be used by the TensorFlow Object Detection API. These are listed on the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) in TensorFlow’s Object Detection GitHub repository. Issue: 249 | ``` 250 | sudo pip3 install pillow lxml jupyter matplotlib cython 251 | sudo apt-get install python-tk 252 | ``` 253 | TensorFlow is now installed and ready to go! 254 | 255 | ### Old instructions for compiling and installing Protobuf from source 256 | These are the old instructions from Step 4 showing how to compile and install Protobuf from source. These were replaced in the 10/13/19 update of this guide. 257 | 258 | The TensorFlow object detection API uses Protobuf, a package that implements Google’s Protocol Buffer data format. Unfortunately, there’s currently no easy way to install Protobuf on the Raspberry Pi. We have to compile it from source ourselves and then install it. Fortunately, a [guide](http://osdevlab.blogspot.com/2016/03/how-to-install-google-protocol-buffers.html) has already been written on how to compile and install Protobuf on the Pi. Thanks OSDevLab for writing the guide! 259 | 260 | First, get the packages needed to compile Protobuf from source. Issue: 261 | ``` 262 | sudo apt-get install autoconf automake libtool curl 263 | ``` 264 | Then download the protobuf release from its GitHub repository by issuing: 265 | ``` 266 | wget https://github.com/google/protobuf/releases/download/v3.5.1/protobuf-all-3.5.1.tar.gz 267 | ``` 268 | If a more recent version of protobuf is available, download that instead. Unpack the file and cd into the folder: 269 | ``` 270 | tar -zxvf protobuf-all-3.5.1.tar.gz 271 | cd protobuf-3.5.1 272 | ``` 273 | Configure the build by issuing the following command (it takes about 2 minutes): 274 | ``` 275 | ./configure 276 | ``` 277 | Build the package by issuing: 278 | ``` 279 | make 280 | ``` 281 | The build process took 61 minutes on my Raspberry Pi. 
When it’s finished, issue: 282 | ``` 283 | make check 284 | ``` 285 | This process takes even longer, clocking in at 107 minutes on my Pi. According to other guides I’ve seen, this command may exit out with errors, but Protobuf will still work. If you see errors, you can ignore them for now. Now that it’s built, install it by issuing: 286 | ``` 287 | sudo make install 288 | ``` 289 | Then move into the python directory and export the library path: 290 | ``` 291 | cd python 292 | export LD_LIBRARY_PATH=../src/.libs 293 | ``` 294 | Next, issue: 295 | ``` 296 | python3 setup.py build --cpp_implementation 297 | python3 setup.py test --cpp_implementation 298 | sudo python3 setup.py install --cpp_implementation 299 | ``` 300 | Then issue the following path commands: 301 | ``` 302 | export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp 303 | export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION=3 304 | ``` 305 | Finally, issue: 306 | ``` 307 | sudo ldconfig 308 | ``` 309 | That’s it! Now Protobuf is installed on the Pi. Verify it’s installed correctly by issuing the command below and making sure it puts out the default help text. 310 | ``` 311 | protoc 312 | ``` 313 | For some reason, the Raspberry Pi needs to be restarted after this process, or TensorFlow will not work. Go ahead and reboot the Pi by issuing: 314 | ``` 315 | sudo reboot now 316 | ``` 317 | 318 | Protobuf should now be installed! 319 | -------------------------------------------------------------------------------- /doc/Install_TF_RPi.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/Install_TF_RPi.jpg -------------------------------------------------------------------------------- /doc/PetDetector_Video.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/PetDetector_Video.PNG -------------------------------------------------------------------------------- /doc/Picamera_livingroom.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/Picamera_livingroom.png -------------------------------------------------------------------------------- /doc/YouTube_video.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/YouTube_video.png -------------------------------------------------------------------------------- /doc/bashrc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/bashrc.png -------------------------------------------------------------------------------- /doc/camera_enabled.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/camera_enabled.png 
-------------------------------------------------------------------------------- /doc/cards.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/cards.png -------------------------------------------------------------------------------- /doc/directory.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/directory.png -------------------------------------------------------------------------------- /doc/edited_script.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/edited_script.png -------------------------------------------------------------------------------- /doc/kitchen.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/kitchen.png -------------------------------------------------------------------------------- /doc/pet_detector_demo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/pet_detector_demo.png -------------------------------------------------------------------------------- /doc/update.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/90086152f1d80c2224afcbaf4a549a4b2555fc48/doc/update.png --------------------------------------------------------------------------------