10 |
11 | # Robotics Level 4
12 |
13 | This repo is an extension of the previous [level](https://github.com/jiteshsaini/robotics-level-3). The robot's code is organised in various folders inside the 'earthrover' directory; the folder names briefly indicate the purpose of the code inside them. This repo focuses on the advanced capabilities embedded into the robot through pre-trained Machine Learning models provided by "tensorflow.org" or created with Google's online tool, Teachable Machine. The following projects demonstrate how TensorFlow Lite and such ML models can be integrated on a Raspberry Pi computer (a minimal model-loading sketch follows the list below). You can read more about each project in its individual README.md file.
14 |
15 | - Gesture Controls
16 | - Image Classification
17 | - Object Detection
18 | - Object Tracking
19 | - Human Following
20 |
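Before diving into the individual projects, here is a minimal sketch of what running one of these models with the TensorFlow Lite interpreter looks like on a Raspberry Pi. It uses the quantised MobileNet classification model shipped in 'all_models'; the image path is a placeholder, since the actual projects feed PiCamera frames through their own wrapper code.

```python
# Minimal TensorFlow Lite inference sketch (illustrative, not the project scripts).
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="all_models/mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]

# "test.jpg" is a placeholder; the projects use PiCamera frames instead.
img = Image.open("test.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img), 0))
interpreter.invoke()

scores = np.squeeze(interpreter.get_tensor(out["index"]))
print("top class index:", int(np.argmax(scores)), "score:", int(scores.max()))
```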
21 | ## Download the code and configure your Raspberry Pi
22 |
23 | I have created a bash script that installs all the packages / libraries required to run this code on your Raspberry Pi. The script also downloads the code of this repo along with the ML models onto your device automatically. Follow the instructions at the link given below to configure your Raspberry Pi:
24 |
25 | https://helloworld.co.in/earthrover
26 |
27 |
28 | ## Object Detection
29 |
30 | The code for this project is placed in the 'object_detection' directory inside the 'earthrover' directory.
31 | The ML model used in this project is placed inside the 'all_models' directory.
32 |
33 | The robot can spy on a particular object and provide an alarm on a remote Web Control panel whenever the selected object appears in the frame.
34 |
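In essence, the alarm is a per-frame check of the detector's output against the selected label. A simplified sketch is shown below; the function and variable names are illustrative, not the actual code in 'object_detection'.

```python
# Illustrative sketch of the alarm check (names are hypothetical, not the repo's API).
selected_object = "cat"   # object of interest chosen on the web control panel

def object_spotted(detections, labels, threshold=0.5):
    """detections: iterable of (class_id, score) pairs from the detector."""
    return any(labels.get(class_id) == selected_object and score >= threshold
               for class_id, score in detections)

# Example: labels maps class ids to names, detections come from one camera frame.
print(object_spotted([(16, 0.8), (0, 0.6)], {0: "person", 16: "cat"}))  # True
```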
35 | ## Object Tracking
36 | The code for this project is placed in the 'object_tracking' directory inside the 'earthrover' directory.
37 | The ML model used in this project is placed inside the 'all_models' directory.
38 |
39 | The robot is made to track a ball and follow it. You can see the robot's camera view in a browser while it is tracking the ball.
40 |
41 | ## Human Following
42 | The code for this project is placed in the 'human_following' directory inside the 'earthrover' directory.
43 | The ML model used in this project is placed inside the 'all_models' directory.
44 |
45 | The robot is made to follow a human. It is a good human follower :)
46 |
47 | ## Image Classification
48 |
49 | The code for this project is placed in the 'image_classification' directory inside the 'earthrover' directory.
50 | The ML model used in this project is placed inside the 'all_models' directory.
51 |
52 | The robot's camera view is streamed over LAN with overlays of image classification output. Also, if an object is recognised, the robot speaks out its name.
53 |
54 | ## Gesture Control
55 |
56 | The code for this project is placed in a folder named 'tm' inside the 'earthrover' directory.
57 | The model used in this project is trained with Google's Teachable Machine online tool.
58 | The model files are present in the same directory. Presently the model is trained to recognise hand gestures. You can train your own model using Teachable Machine and replace the model files to customise the project.
59 |
--------------------------------------------------------------------------------
/all_models/coco_labels.txt:
--------------------------------------------------------------------------------
1 | 0 person
2 | 1 bicycle
3 | 2 car
4 | 3 motorcycle
5 | 4 airplane
6 | 5 bus
7 | 6 train
8 | 7 truck
9 | 8 boat
10 | 9 traffic light
11 | 10 fire hydrant
12 | 12 stop sign
13 | 13 parking meter
14 | 14 bench
15 | 15 bird
16 | 16 cat
17 | 17 dog
18 | 18 horse
19 | 19 sheep
20 | 20 cow
21 | 21 elephant
22 | 22 bear
23 | 23 zebra
24 | 24 giraffe
25 | 26 backpack
26 | 27 umbrella
27 | 30 handbag
28 | 31 tie
29 | 32 suitcase
30 | 33 frisbee
31 | 34 skis
32 | 35 snowboard
33 | 36 sports ball
34 | 37 kite
35 | 38 baseball bat
36 | 39 baseball glove
37 | 40 skateboard
38 | 41 surfboard
39 | 42 tennis racket
40 | 43 bottle
41 | 45 wine glass
42 | 46 cup
43 | 47 fork
44 | 48 knife
45 | 49 spoon
46 | 50 bowl
47 | 51 banana
48 | 52 apple
49 | 53 sandwich
50 | 54 orange
51 | 55 broccoli
52 | 56 carrot
53 | 57 hot dog
54 | 58 pizza
55 | 59 donut
56 | 60 cake
57 | 61 chair
58 | 62 couch
59 | 63 pot_plant
60 | 64 bed
61 | 66 dining table
62 | 69 toilet
63 | 71 tv
64 | 72 laptop
65 | 73 mouse
66 | 74 remote
67 | 75 keyboard
68 | 76 cell phone
69 | 77 microwave
70 | 78 oven
71 | 79 toaster
72 | 80 sink
73 | 81 refrigerator
74 | 83 book
75 | 84 clock
76 | 85 vase
77 | 86 scissors
78 | 87 teddy bear
79 | 88 hair drier
80 | 89 toothbrush
--------------------------------------------------------------------------------
/all_models/mobilenet_ssd_v2_coco_quant_postprocess.tflite:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jiteshsaini/robotics-level-4/34e70e32c6733c3500619d5d73bd2f6ceb77abe8/all_models/mobilenet_ssd_v2_coco_quant_postprocess.tflite
--------------------------------------------------------------------------------
/all_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jiteshsaini/robotics-level-4/34e70e32c6733c3500619d5d73bd2f6ceb77abe8/all_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
--------------------------------------------------------------------------------
/all_models/mobilenet_v1_1.0_224_quant.tflite:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jiteshsaini/robotics-level-4/34e70e32c6733c3500619d5d73bd2f6ceb77abe8/all_models/mobilenet_v1_1.0_224_quant.tflite
--------------------------------------------------------------------------------
/earthrover/accelerometer/acc.js:
--------------------------------------------------------------------------------
1 | var xx,yy,zz;
2 |
3 | if (window.DeviceMotionEvent != undefined) {
4 | window.ondevicemotion = function(e) {
5 | sleep(250);
6 | xx=e.accelerationIncludingGravity.x.toFixed(2);
7 | yy=e.accelerationIncludingGravity.y.toFixed(2);
8 | zz=e.accelerationIncludingGravity.z.toFixed(2);
9 |
10 | document.getElementById("accelerationX").innerHTML = xx;
11 | document.getElementById("accelerationY").innerHTML = yy;
12 | document.getElementById("accelerationZ").innerHTML = zz;
13 |
14 | if(document.getElementById('acc').value == 'ON')
15 | send_acc_data(xx,yy,zz);
16 | }
17 | }
18 |
19 | function acc_toggle()
20 | {
21 | var id='acc';
22 | //alert('acc toggle');
23 | button_caption=document.getElementById(id).value;
24 |
25 | if(button_caption=="OFF"){
26 | document.getElementById(id).value="ON";
27 | document.getElementById(id).style.backgroundColor="#66ff66";
28 |
29 | }
30 | if(button_caption=="ON"){
31 | document.getElementById(id).value="OFF";
32 | document.getElementById(id).style.backgroundColor="gray";
33 | }
34 | }
35 |
36 | function sleep(milliseconds) {
37 | var start = new Date().getTime();
38 | for (var i = 0; i < 1e7; i++) {
39 | if ((new Date().getTime() - start) > milliseconds){
40 | break;
41 | }
42 | }
43 | }
44 |
45 | function send_acc_data(xx,yy,zz)
46 | {
47 |
48 | $.post("ajax_acc.php",
49 | {
50 | acc_x: xx,
51 | acc_y: yy,
52 | acc_z:zz
53 | }
54 | );
55 | }
56 |
--------------------------------------------------------------------------------
/earthrover/accelerometer/ajax_acc.php:
--------------------------------------------------------------------------------
1 | <?php
2 | // right(), left(), forward(), back() and stop() are the robot's motor helper
3 | // functions, defined in the project's shared PHP code.
4 | $x = $_POST['acc_x'];
5 | $y = $_POST['acc_y'];
6 | $z = $_POST['acc_z'];
7 |
8 | if($y>4){
9 | right();
10 | }
11 | elseif($y<-4){
12 | left();
13 | }
14 | else{
15 |
16 | if ($z>5){
17 | forward();
18 | }
19 | elseif ($z<-5){
20 | back();
21 | }
22 | else{
23 | stop();
24 | }
25 | }
26 |
27 | ?>
28 |
--------------------------------------------------------------------------------
/earthrover/accelerometer/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Accelerometer Control
5 |
6 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
Earthrover control through smart phone's Accelerometer
55 | Interface 12 V, 100 RPM DC motors with the Raspberry Pi using an L293D-based motor driver board.
56 | Attach the Pi Camera to the Raspberry Pi. Don't forget to enable it in the preferences.
57 | Read more about the hardware connections here
58 | and here
59 |
60 |
61 |
62 |
63 |
64 |
65 |
Basic Robot
66 |
67 |
Direction and Speed Controls
68 |
69 |
70 | Code Location: earthrover/control_panel
71 |
72 | The direction buttons send commands to GPIO pins 8 & 11 for motor 1 and GPIO pins 14 & 15 for motor 2.
73 | The speed slider sends a value between 0-100 (in increments of 10) to the server. This value is used to generate PWM on GPIO pins 20 & 21 simultaneously, controlling the speed of both motors.
74 |
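A rough sketch of that speed control with the standard RPi.GPIO PWM API follows; the slider value is assumed to arrive as an integer between 0 and 100.

```python
# Sketch: map the control-panel slider value (0-100) to a PWM duty cycle on both motor pins.
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(20, GPIO.OUT)
GPIO.setup(21, GPIO.OUT)

pwm20 = GPIO.PWM(20, 100)   # 100 Hz PWM on GPIO 20
pwm21 = GPIO.PWM(21, 100)   # 100 Hz PWM on GPIO 21
pwm20.start(0)
pwm21.start(0)

def set_speed(slider_value):
    """slider_value: 0-100 from the speed slider, used directly as the duty cycle."""
    pwm20.ChangeDutyCycle(slider_value)
    pwm21.ChangeDutyCycle(slider_value)

set_speed(60)   # e.g. 60 % duty cycle on both motors
```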
91 | When you press the camera 'ON' button, a python script 'earthrover/camera_lights/cam_server.py' is launched in the background and starts streaming the camera video.
92 | The Light buttons toggle the state of GPIO pins 17, 18 & 27. You can connect simple LEDs directly to these GPIO pins, or 12 V high-brightness LEDs through a transistor switching circuit.
93 |
103 | The robot can speak out written text via a text-to-speech engine called 'espeak'.
104 | It can also play pre-recorded mp3 files via the built-in 'omxplayer'.
105 |
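A minimal way to drive both from Python is to shell out to the command-line tools; the sentence and file name below are placeholders.

```python
# Sketch: speak text with espeak and play an mp3 with omxplayer via the command line.
import subprocess

def speak(text):
    subprocess.run(["espeak", text])

def play_mp3(path):
    subprocess.run(["omxplayer", path])

speak("Hello, I am Earthrover")      # placeholder sentence
play_mp3("sounds/horn.mp3")          # placeholder mp3 path
```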
113 | Code Location: earthrover/range_sensor
114 |
115 | The toggle button launches the python script 'earthrover/range_sensor/range_sensor.py' in background.
116 | The measured distance value is shown on the Control Panel.
117 |
118 |
119 |
120 | In the above picture, the distance measured by the sensor is 65.6 cm.
121 | If the distance falls below 30 cm, the robot is programmed to move back automatically.
122 |
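That behaviour reduces to a threshold check in the sensor loop. Below is a sketch under the assumption that util.py's motor helpers are available, with the actual sensor read stubbed out; the real logic lives in 'earthrover/range_sensor/range_sensor.py'.

```python
# Sketch of the auto-reverse check; read_distance_cm() is a stub standing in for the
# real ultrasonic measurement done in range_sensor.py.
import time
import util as ut          # earthrover/util.py provides init_gpio(), back(), stop(), ...

ut.init_gpio()
SAFE_DISTANCE_CM = 30

def read_distance_cm():
    """Placeholder for the real sensor reading."""
    return 100.0

while True:
    if read_distance_cm() < SAFE_DISTANCE_CM:
        ut.back()          # reverse automatically when an obstacle is too close
    else:
        ut.stop()
    time.sleep(0.1)
```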
133 | JavaScript code running in the browser can access the hardware (accelerometer, microphone, orientation, etc.) of your mobile phone / laptop.
134 | However, to access these sensors, the webpage containing the JavaScript code must originate from a server with 'https' enabled. That means the Apache web server running on the Raspberry Pi must have https enabled.
135 |
136 | Each of the buttons below opens its respective https page. If you see a "NET::ERR_CERT_INVALID" error in Chrome and there is no "proceed to website" option, type "thisisunsafe" directly in Chrome on the same page; you should then be able to see the page. Refer to this blog
137 |
138 |
149 | Open the earthrover control panel using a mobile browser and press the 'Accelerometer' icon. The webpage with the relevant JavaScript code will appear and start capturing and sending your phone's accelerometer data to the server (Raspberry Pi).
150 | You can now control the robot by tilting the phone.
151 |
160 | Open the earthrover control panel using a mobile/laptop browser (Chrome) and press the 'Voice Control' icon. The webpage with the relevant JavaScript code will appear, which takes voice input, converts
161 | it to text and sends it to the server (Raspberry Pi).
162 | You can now control the robot by speaking the valid commands. The list of valid commands is described in the link below.
163 |
174 | Press the 'Compass' icon. The instructions for use are given on the webpage that appears.
175 | This is just an example to demonstrate how you can use the mobile phone's orientation sensor to control the robot's direction precisely.
176 |
177 |
191 | See various Pre-trained Models by Google Coral team: Pre-trained Models
192 |
193 |
194 | This section contains projects that involve deploying a custom or pre-trained model on the Raspberry Pi to achieve advanced functionalities.
195 |
196 |
197 |
198 |
199 |
Gesture Controls
200 |
201 | Code Location: earthrover/tm
202 | ML Model details: Custom Model created using Teachable Machine tool to recognise hand gestures.
203 | Inference: On your Laptop's browser using tensorflow.js
204 | Hardware Acceleration: Not implemented, since inference is taking place on browser.
205 |
206 | Using a laptop with a webcam, open the Chrome browser. Load the earthrover control panel and press the 'Gesture Controls' button. A page with the relevant functionality will appear.
207 | Press the start button; the webcam will turn on and start looking for hand gestures. If a gesture is recognised, the command corresponding to that gesture is sent to the server (Raspberry Pi) to actuate the GPIO pins.
208 |
209 | You may notice that this button has a different color from the rest of the buttons in this section, because this is the only case where inference happens in the browser. In the other cases, inference takes place on the Raspberry Pi.
210 |
219 | ML Model details: Pre-trained Image Classification Model by coral.ai
220 | Inference: On Raspberry Pi using Tensorflow Lite
221 | Hardware Acceleration: Not Implemented
222 |
223 | On the control panel, press the 'Image Classification' button. When this button is pressed, a python script 'image_recog_cv2.py' is launched in the background.
224 | The camera view with the results overlay can be accessed by clicking the button.
225 | Try showing different objects to the camera. You will see the results in the browser, and the robot will speak out the object's name.
226 | To stop the background script, press the 'Image Classification' button once again. This frees up the camera for other tasks.
227 |
236 | ML Model details: Pre-trained Object Detection Model by coral.ai
237 | Inference: On Raspberry Pi using Tensorflow Lite
238 | Hardware Acceleration: Improve inferencing speed by 10x by attaching USB Coral Accelerator and setting the variable 'edgetpu' to '1' in python file 'earthrover/util.py'
239 |
240 | On the control panel, press the 'Object Detection' button. When this button is pressed, a python script 'object_detection_web2.py' is launched in the background.
241 | A button will appear. Click it to see the Web UI through which you can set the object of interest.
242 | To stop the background script, press the 'Object Detection' button once again. This frees up the camera for other tasks.
243 |
252 | ML Model details: Pre-trained Object Detection Model by coral.ai
253 | Inference: On Raspberry Pi using Tensorflow Lite
254 | Hardware Acceleration: Improve inferencing speed by 10x by attaching USB Coral Accelerator and setting the variable 'edgetpu' to '1' in python file 'earthrover/util.py'
255 |
256 | On the control panel, press the 'Object Tracking' button. When this button is pressed, a python script 'object_tracking.py' is launched in the background.
257 | A button will appear. Click it to see the robot's camera view while it tracks an object.
258 | To stop the background script, press the 'Object Tracking' button once again. This frees up the camera for other tasks.
259 |
267 | ML Model details: Pre-trained Object Detection Model by coral.ai
268 | Inference: On Raspberry Pi using Tensorflow Lite
269 | Hardware Acceleration: Improve inferencing speed by 10x by attaching USB Coral Accelerator and setting the variable 'edgetpu' to '1' in python file 'earthrover/util.py'
270 |
271 | On the control panel, press the 'Human Following' button. When this button is pressed, a python script 'human_follower.py' is launched in the background.
272 | A button will appear. Click it to see the robot's camera view while it tracks a person.
273 | To stop the background script, press the 'Human Following' button once again. This frees up the camera for other tasks.
274 |
15 |
16 | ## Model files
17 | The ML model used in this project is placed in the 'all_models' directory inside the parent directory.
18 |
19 | ## Overview of the Project
20 | The robot detects an object using the 'MobileNet SSD v2 (COCO)' Machine Learning model and the TensorFlow Lite interpreter. The robot follows the object and manoeuvres itself to bring the object to the center of the frame. While the robot is tracking / following the object, the working of the tracking algorithm and the robot's view can be watched in a browser. The robot's view with the information overlay is generated using OpenCV. The various overlays on a frame are shown in the picture below.
21 |
22 |
23 |
24 |
25 |
26 |
27 | When the object is present in the frame, information such as the bounding box, the center of the object, the deviation of the object from the center of the frame, and the robot's direction and speed is updated as shown in the picture below. In this example, the X and Y values denote the deviation of the center of the object (the red dot) from the center of the frame. Since the horizontal deviation, i.e. the value of 'X', is above the tolerance value, the code generates the 'Move Left' command.
28 |
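In code terms, the decision described above compares the object's center against the frame center (0.5, 0.5 in normalised coordinates) and a tolerance band. A condensed sketch of that logic is given below; the full version, including motion timing, is in 'object_tracking.py' further down.

```python
# Condensed sketch of the decision described above (see object_tracking.py for the full version).
def choose_command(bbox, tolerance=0.1):
    """bbox: (x_min, y_min, x_max, y_max) in normalised [0, 1] coordinates."""
    x_min, y_min, x_max, y_max = bbox
    x_dev = 0.5 - (x_min + x_max) / 2     # positive => object is left of center
    y_dev = 0.5 - (y_min + y_max) / 2     # positive => object is above center / far away

    if abs(x_dev) < tolerance and abs(y_dev) < tolerance:
        return "Stop"
    if abs(x_dev) > abs(y_dev):
        return "Move Left" if x_dev >= tolerance else "Move Right"
    return "Move Forward" if y_dev >= tolerance else "Move Backward"

print(choose_command((0.1, 0.3, 0.4, 0.7)))   # example frame: prints "Move Left"
```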
29 |
30 |
31 |
32 |
33 |
34 | Python's micro web framework Flask is used for streaming the camera frames (the robot's view) over the LAN.
35 |
--------------------------------------------------------------------------------
/earthrover/object_tracking/common.py:
--------------------------------------------------------------------------------
1 | """
2 | This file has utility functions which are used in 'object_tracking.py' file
3 |
4 | This code is based on Google-Coral Object Detection example code available at:
5 | https://github.com/google-coral/examples-camera/tree/master/opencv
6 |
7 | """
8 | import numpy as np
9 | from PIL import Image
10 | import tflite_runtime.interpreter as tflite
11 | import platform
12 |
13 |
14 | EDGETPU_SHARED_LIB = {
15 | 'Linux': 'libedgetpu.so.1',
16 | 'Darwin': 'libedgetpu.1.dylib',
17 | 'Windows': 'edgetpu.dll'
18 | }[platform.system()]
19 |
20 | def make_interpreter_0(model_file):
21 | model_file, *device = model_file.split('@')
22 | return tflite.Interpreter(model_path=model_file)
23 |
24 | def make_interpreter_1(model_file):
25 | model_file, *device = model_file.split('@')
26 | return tflite.Interpreter(
27 | model_path=model_file,
28 | experimental_delegates=[
29 | tflite.load_delegate(EDGETPU_SHARED_LIB,
30 | {'device': device[0]} if device else {})
31 | ])
32 |
33 | def set_input(interpreter, image, resample=Image.NEAREST):
34 | """Copies data to input tensor."""
35 | image = image.resize((input_image_size(interpreter)[0:2]), resample)
36 | input_tensor(interpreter)[:, :] = image
37 |
38 | def input_image_size(interpreter):
39 | """Returns input image size as (width, height, channels) tuple."""
40 | _, height, width, channels = interpreter.get_input_details()[0]['shape']
41 | return width, height, channels
42 |
43 | def input_tensor(interpreter):
44 | """Returns input tensor view as numpy array of shape (height, width, 3)."""
45 | tensor_index = interpreter.get_input_details()[0]['index']
46 | return interpreter.tensor(tensor_index)()[0]
47 |
48 | def output_tensor(interpreter, i):
49 | """Returns dequantized output tensor if quantized before."""
50 | output_details = interpreter.get_output_details()[i]
51 | output_data = np.squeeze(interpreter.tensor(output_details['index'])())
52 | if 'quantization' not in output_details:
53 | return output_data
54 | scale, zero_point = output_details['quantization']
55 | if scale == 0:
56 | return output_data - zero_point
57 | return scale * (output_data - zero_point)
58 |
59 | import time
60 | def time_elapsed(start_time,event):
61 | time_now=time.time()
62 | duration = (time_now - start_time)*1000
63 | duration=round(duration,2)
64 | print (">>> ", duration, " ms (" ,event, ")")
65 |
66 | import os
67 | def load_model(model_dir,model, lbl, edgetpu):
68 |
69 | print('Loading from directory: {} '.format(model_dir))
70 | print('Loading Model: {} '.format(model))
71 | print('Loading Labels: {} '.format(lbl))
72 |
73 | model_path=os.path.join(model_dir,model)
74 | labels_path=os.path.join(model_dir,lbl)
75 |
76 | if(edgetpu==0):
77 | interpreter = make_interpreter_0(model_path)
78 | else:
79 | interpreter = make_interpreter_1(model_path)
80 |
81 | interpreter.allocate_tensors()
82 |
83 | labels = load_labels(labels_path)
84 |
85 | return interpreter, labels
86 |
87 | import re
88 | def load_labels(path):
89 | p = re.compile(r'\s*(\d+)(.+)')
90 | with open(path, 'r', encoding='utf-8') as f:
91 | lines = (p.match(line).groups() for line in f.readlines())
92 | return {int(num): text.strip() for num, text in lines}
93 |
94 | #----------------------------------------------------------------------
95 | import collections
96 | Object = collections.namedtuple('Object', ['id', 'score', 'bbox'])
97 |
98 | class BBox(collections.namedtuple('BBox', ['xmin', 'ymin', 'xmax', 'ymax'])):
99 | """Bounding box.
100 | Represents a rectangle which sides are either vertical or horizontal, parallel
101 | to the x or y axis.
102 | """
103 | __slots__ = ()
104 |
105 | def get_output(interpreter, score_threshold, top_k, image_scale=1.0):
106 | """Returns list of detected objects."""
107 | boxes = output_tensor(interpreter, 0)
108 | class_ids = output_tensor(interpreter, 1)
109 | scores = output_tensor(interpreter, 2)
110 | count = int(output_tensor(interpreter, 3))
111 |
112 | def make(i):
113 | ymin, xmin, ymax, xmax = boxes[i]
114 | return Object(
115 | id=int(class_ids[i]),
116 | score=scores[i],
117 | bbox=BBox(xmin=np.maximum(0.0, xmin),
118 | ymin=np.maximum(0.0, ymin),
119 | xmax=np.minimum(1.0, xmax),
120 | ymax=np.minimum(1.0, ymax)))
121 |
122 | return [make(i) for i in range(top_k) if scores[i] >= score_threshold]
123 | #--------------------------------------------------------------------
124 |
125 |
126 |
--------------------------------------------------------------------------------
/earthrover/object_tracking/master.py:
--------------------------------------------------------------------------------
1 | #Project: Earthrover
2 | #Created by: Jitesh Saini
3 |
4 | import time,os
5 | import sys
6 |
7 | local_path=os.path.dirname(os.path.realpath(__file__))
8 |
9 | print ("local_path: ", local_path)
10 |
11 | status = sys.argv[1]
12 |
13 | file_name="object_tracking.py"
14 |
15 | if (status=="1"):
16 | print ("starting Object Tracking script")
17 | cmd= "sudo python3 " + local_path + "/" + file_name + " &"
18 | print ("cmd: ", cmd)
19 | os.system(cmd)
20 | time.sleep(1)
21 |
22 |
23 | if (status=="0"):
24 | cmd= "sudo pkill -f " + file_name
25 | os.system(cmd)
26 |
--------------------------------------------------------------------------------
/earthrover/object_tracking/object_tracking.py:
--------------------------------------------------------------------------------
1 | """
2 | Project: AI Robot - Object Tracking
3 | Author: Jitesh Saini
4 | Github: https://github.com/jiteshsaini
5 | website: https://helloworld.co.in
6 |
7 | - The robot uses PiCamera to capture frames.
8 | - An object within the frame is detected using a Machine Learning model & the TensorFlow Lite interpreter.
9 | - Using OpenCV, the frame is overlaid with information such as bounding boxes, the center coordinates of the object, the deviation of the object from the center of the frame, etc.
10 | - The frame with overlays is streamed over LAN using FLASK, which can be accessed in a browser by typing the IP address of the RPi followed by the port (2204 as per this code)
11 | - Google Coral USB Accelerator should be used to accelerate the inferencing process.
12 |
13 | When Coral USB Accelerator is connected, amend line 14 of util.py as:-
14 | edgetpu = 1
15 |
16 | When Coral USB Accelerator is not connected, amend line 14 of util.py as:-
17 | edgetpu = 0
18 |
19 | The code moves the robot in order to bring center of the object closer to center of the frame.
20 | """
21 |
22 | import common as cm
23 | import cv2
24 | import numpy as np
25 | from PIL import Image
26 | import time
27 | from threading import Thread
28 |
29 | import sys
30 | sys.path.insert(0, '/var/www/html/earthrover')
31 | import util as ut
32 | ut.init_gpio()
33 |
34 | cap = cv2.VideoCapture(0)
35 | threshold=0.2
36 | top_k=5 #number of objects to be shown as detected
37 |
38 | model_dir = '/var/www/html/all_models'
39 | model = 'mobilenet_ssd_v2_coco_quant_postprocess.tflite'
40 | model_edgetpu = 'mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite'
41 | lbl = 'coco_labels.txt'
42 |
43 | tolerance=0.1
44 | x_deviation=0
45 | y_deviation=0
46 | arr_track_data=[0,0,0,0,0,0]
47 |
48 | arr_valid_objects=['apple', 'sports ball', 'frisbee', 'orange', 'mouse', 'vase', 'banana' ]
49 |
50 | #---------Flask----------------------------------------
51 | from flask import Flask, Response
52 | from flask import render_template
53 |
54 | app = Flask(__name__)
55 |
56 | @app.route('/')
57 | def index():
58 | #return "Default Message"
59 | return render_template("index.html")
60 |
61 | @app.route('/video_feed')
62 | def video_feed():
63 | #global cap
64 | return Response(main(),
65 | mimetype='multipart/x-mixed-replace; boundary=frame')
66 |
67 | #-----------------------------------------------------------
68 |
69 |
70 | #-----initialise motor speed-----------------------------------
71 |
72 | import RPi.GPIO as GPIO
73 | GPIO.setmode(GPIO.BCM) # choose BCM numbering scheme
74 |
75 | GPIO.setup(20, GPIO.OUT)# set GPIO 20 as output pin
76 | GPIO.setup(21, GPIO.OUT)# set GPIO 21 as output pin
77 |
78 | pin20 = GPIO.PWM(20, 100) # create object pin20 for PWM on port 20 at 100 Hertz
79 | pin21 = GPIO.PWM(21, 100) # create object pin21 for PWM on port 21 at 100 Hertz
80 |
81 | #set speed to maximum value
82 | val=100
83 | pin20.start(val) # start PWM on pin20 at 100 percent duty cycle (full speed)
84 | pin21.start(val) # start PWM on pin21 at 100 percent duty cycle (full speed)
85 |
86 | print("speed set to: ", val)
87 | #---------------------------------------------------------------
88 |
89 |
90 | def track_object(objs,labels):
91 |
92 | #global delay
93 | global x_deviation, y_deviation, tolerance, arr_track_data
94 |
95 |
96 | if(len(objs)==0):
97 | print("no objects to track")
98 | ut.stop()
99 | ut.red_light("OFF")
100 | arr_track_data=[0,0,0,0,0,0]
101 | return
102 |
103 |
104 | #ut.head_lights("OFF")
105 | k=0
106 | flag=0
107 | for obj in objs:
108 | lbl=labels.get(obj.id, obj.id)
109 | k = arr_valid_objects.count(lbl)
110 | if (k>0):
111 | x_min, y_min, x_max, y_max = list(obj.bbox)
112 | flag=1
113 | break
114 |
115 | #print(x_min, y_min, x_max, y_max)
116 | if(flag==0):
117 | print("selected object not present")
118 | return
119 |
120 | x_diff=x_max-x_min
121 | y_diff=y_max-y_min
122 | print("x_diff: ",round(x_diff,5))
123 | print("y_diff: ",round(y_diff,5))
124 |
125 |
126 | obj_x_center=x_min+(x_diff/2)
127 | obj_x_center=round(obj_x_center,3)
128 |
129 | obj_y_center=y_min+(y_diff/2)
130 | obj_y_center=round(obj_y_center,3)
131 |
132 |
133 | #print("[",obj_x_center, obj_y_center,"]")
134 |
135 | x_deviation=round(0.5-obj_x_center,3)
136 | y_deviation=round(0.5-obj_y_center,3)
137 |
138 | print("{",x_deviation,y_deviation,"}")
139 |
140 | #move_robot()
141 | thread = Thread(target = move_robot)
142 | thread.start()
143 | #thread.join()
144 |
145 | #print(cmd)
146 |
147 | arr_track_data[0]=obj_x_center
148 | arr_track_data[1]=obj_y_center
149 | arr_track_data[2]=x_deviation
150 | arr_track_data[3]=y_deviation
151 |
152 |
153 | #this function is executed within a thread
154 | def move_robot():
155 | global x_deviation, y_deviation, tolerance, arr_track_data
156 |
157 | print("moving robot .............!!!!!!!!!!!!!!")
158 | print(x_deviation, y_deviation, tolerance, arr_track_data)
159 |
160 | if(abs(x_deviation)<tolerance and abs(y_deviation)<tolerance):
161 | cmd="Stop"
162 | delay1=0
163 | ut.stop()
164 | ut.red_light("ON")
165 |
166 | else:
167 | ut.red_light("OFF")
168 | if(abs(x_deviation)>abs(y_deviation)):
169 | if(x_deviation>=tolerance):
170 | cmd="Move Left"
171 | delay1=get_delay(x_deviation,'l')
172 |
173 | ut.left()
174 | time.sleep(delay1)
175 | ut.stop()
176 |
177 | if(x_deviation<=-1*tolerance):
178 | cmd="Move Right"
179 | delay1=get_delay(x_deviation,'r')
180 |
181 | ut.right()
182 | time.sleep(delay1)
183 | ut.stop()
184 | else:
185 |
186 | if(y_deviation>=tolerance):
187 | cmd="Move Forward"
188 | delay1=get_delay(y_deviation,'f')
189 |
190 | ut.forward()
191 | time.sleep(delay1)
192 | ut.stop()
193 |
194 | if(y_deviation<=-1*tolerance):
195 | cmd="Move Backward"
196 | delay1=get_delay(y_deviation,'b')
197 |
198 | ut.back()
199 | time.sleep(delay1)
200 | ut.stop()
201 |
202 |
203 | arr_track_data[4]=cmd
204 | arr_track_data[5]=delay1
205 |
206 | #based on the deviation of the object from the center of the frame, a delay value is returned by this function
207 | #which decides how long the motion command is to be given to the motors.
208 | def get_delay(deviation,direction):
209 | deviation=abs(deviation)
210 | if (direction=='f' or direction=='b'):
211 | if(deviation>=0.3):
212 | d=0.1
213 | elif(deviation>=0.2 and deviation<0.30):
214 | d=0.075
215 | elif(deviation>=0.15 and deviation<0.2):
216 | d=0.045
217 | else:
218 | d=0.035
219 | else:
220 | if(deviation>=0.4):
221 | d=0.080
222 | elif(deviation>=0.35 and deviation<0.40):
223 | d=0.070
224 | elif(deviation>=0.30 and deviation<0.35):
225 | d=0.060
226 | elif(deviation>=0.25 and deviation<0.30):
227 | d=0.050
228 | elif(deviation>=0.20 and deviation<0.25):
229 | d=0.040
230 | else:
231 | d=0.030
232 |
233 | return d
234 |
235 |
236 | def main():
237 |
238 | from util import edgetpu
239 |
240 | if (edgetpu==1):
241 | mdl = model_edgetpu
242 | else:
243 | mdl = model
244 |
245 | interpreter, labels =cm.load_model(model_dir,mdl,lbl,edgetpu)
246 |
247 | fps=1
248 | arr_dur=[0,0,0]
249 | #while cap.isOpened():
250 | while True:
251 | start_time=time.time()
252 |
253 | #----------------Capture Camera Frame-----------------
254 | start_t0=time.time()
255 | ret, frame = cap.read()
256 | if not ret:
257 | break
258 |
259 | cv2_im = frame
260 | cv2_im = cv2.flip(cv2_im, 0)
261 | cv2_im = cv2.flip(cv2_im, 1)
262 |
263 | cv2_im_rgb = cv2.cvtColor(cv2_im, cv2.COLOR_BGR2RGB)
264 | pil_im = Image.fromarray(cv2_im_rgb)
265 |
266 | arr_dur[0]=time.time() - start_t0
267 | #cm.time_elapsed(start_t0,"camera capture")
268 | #----------------------------------------------------
269 |
270 | #-------------------Inference---------------------------------
271 | start_t1=time.time()
272 | cm.set_input(interpreter, pil_im)
273 | interpreter.invoke()
274 | objs = cm.get_output(interpreter, score_threshold=threshold, top_k=top_k)
275 |
276 | arr_dur[1]=time.time() - start_t1
277 | #cm.time_elapsed(start_t1,"inference")
278 | #----------------------------------------------------
279 |
280 | #-----------------other------------------------------------
281 | start_t2=time.time()
282 | track_object(objs,labels)#tracking <<<<<<<
283 |
284 | if cv2.waitKey(1) & 0xFF == ord('q'):
285 | break
286 |
287 |
288 | cv2_im = draw_overlays(cv2_im, objs, labels, arr_dur, arr_track_data)
289 | # cv2.imshow('Object Tracking - TensorFlow Lite', cv2_im)
290 |
291 | ret, jpeg = cv2.imencode('.jpg', cv2_im)
292 | pic = jpeg.tobytes()
293 |
294 | #Flask streaming
295 | yield (b'--frame\r\n'
296 | b'Content-Type: image/jpeg\r\n\r\n' + pic + b'\r\n\r\n')
297 |
298 | arr_dur[2]=time.time() - start_t2
299 | #cm.time_elapsed(start_t2,"other")
300 | #cm.time_elapsed(start_time,"overall")
301 |
302 | #print("arr_dur:",arr_dur)
303 | fps = round(1.0 / (time.time() - start_time),1)
304 | print("*********FPS: ",fps,"************")
305 |
306 | cap.release()
307 | cv2.destroyAllWindows()
308 |
309 | def draw_overlays(cv2_im, objs, labels, arr_dur, arr_track_data):
310 | height, width, channels = cv2_im.shape
311 | font=cv2.FONT_HERSHEY_SIMPLEX
312 |
313 | global tolerance
314 |
315 | #draw black rectangle on top
316 | cv2_im = cv2.rectangle(cv2_im, (0,0), (width, 24), (0,0,0), -1)
317 |
318 |
319 | #write processing durations
320 | cam=round(arr_dur[0]*1000,0)
321 | inference=round(arr_dur[1]*1000,0)
322 | other=round(arr_dur[2]*1000,0)
323 | text_dur = 'Camera: {}ms Inference: {}ms other: {}ms'.format(cam,inference,other)
324 | cv2_im = cv2.putText(cv2_im, text_dur, (int(width/4)-30, 16),font, 0.4, (255, 255, 255), 1)
325 |
326 | #write FPS
327 | total_duration=cam+inference+other
328 | fps=round(1000/total_duration,1)
329 | text1 = 'FPS: {}'.format(fps)
330 | cv2_im = cv2.putText(cv2_im, text1, (10, 20),font, 0.7, (150, 150, 255), 2)
331 |
332 |
333 | #draw black rectangle at bottom
334 | cv2_im = cv2.rectangle(cv2_im, (0,height-24), (width, height), (0,0,0), -1)
335 |
336 | #write deviations and tolerance
337 | str_tol='Tol : {}'.format(tolerance)
338 | cv2_im = cv2.putText(cv2_im, str_tol, (10, height-8),font, 0.55, (150, 150, 255), 2)
339 |
340 |
341 | x_dev=arr_track_data[2]
342 | str_x='X: {}'.format(x_dev)
343 | if(abs(x_dev)
2 |
3 |