├── LICENSE
├── README.md
├── aiy_model_output.py
├── join_videos.sh
├── picam_record.py
├── server.py
├── setup.py
├── static
│   ├── drawAiyVision.js
│   ├── favicon.ico
│   ├── index.html
│   ├── socket-test.html
│   └── uv4l.js
├── tests
│   ├── socket_client.py
│   └── uv4l_socket_tester.py
├── uv4l-socket.sh
└── video_maker.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2018 webrtcHacks
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
23 |
24 | Notice:
25 |
26 | Portions of server.py are taken from https://github.com/google/aiyprojects-raspbian:
27 |
28 | Copyright 2017 Google Inc.
29 |
30 | Licensed under the Apache License, Version 2.0 (the "License");
31 | you may not use this file except in compliance with the License.
32 | You may obtain a copy of the License at
33 |
34 | http://www.apache.org/licenses/LICENSE-2.0
35 |
36 | Unless required by applicable law or agreed to in writing, software
37 | distributed under the License is distributed on an "AS IS" BASIS,
38 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
39 | See the License for the specific language governing permissions and
40 | limitations under the License.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # AIY Vision Kit Web Service
2 |
3 | Display the video feed and annotations of the [AIY Vision Kit](https://aiyprojects.withgoogle.com/vision) face detection or object detection models from
4 | the PiCamera to any web page using [WebRTC](https://webrtc.org) and [UV4L](http://www.linux-projects.org/uv4l/).
5 |
6 | See the [Part 2: Building a AIY Vision Kit Web Server with UV4L](https://webrtchacks.com/?p=2824&) webrtcHacks post for more details.
7 |
8 | 
9 |
10 |
11 | # Architecture
12 |
13 | 
14 |
15 | ## Installation
16 |
17 | 1. [Buy](http://www.microcenter.com/site/content/google_aiy.aspx) an AIY Vision Kit
18 | 1. Follow the Vision Kit [Assembly Guide](https://aiyprojects.withgoogle.com/vision#assembly-guide-1-get-the-vision-kit-sd-image) to build it
19 | 1. Install UV4L (see the next section)
20 | 1. Install git if you don't have it: `sudo apt-get install git`
21 | 1. Clone the repo: `git clone https://github.com/webrtcHacks/aiy_vision_web_server.git`
22 | 1. Go to the directory: `cd aiy_vision_web_server/`
23 | 1. Install Python dependencies: `sudo python3 setup.py install`
24 | 1. Turn the default Joy Detection demo off: `sudo systemctl stop joy_detection_demo.service`
25 | 1. Run the server: `python3 server.py`
26 | 1. Point your web browser to [http://raspberrypi.local:5000](http://raspberrypi.local:5000), substituting your own hostname or IP address if you changed it
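
To verify the server is up, the `/ping` test route defined in `server.py` should answer with `pong` (the hostname shown is the Raspbian default; substitute your own):

```
curl http://raspberrypi.local:5000/ping
# pong
```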
27 |
28 | ### UV4L Installation
29 |
30 | #### Raspberry Pi Zero
31 | ```
32 | curl http://www.linux-projects.org/listing/uv4l_repo/lpkey.asc | sudo apt-key add -
33 | echo "deb http://www.linux-projects.org/listing/uv4l_repo/raspbian/stretch stretch main" | sudo tee -a /etc/apt/sources.list
34 | sudo apt-get update
35 | sudo apt-get install -y uv4l uv4l-raspicam uv4l-raspicam-extras uv4l-webrtc-armv6 uv4l-raspidisp uv4l-raspidisp-extras
36 | ```
37 |
38 | #### Raspberry Pi 2 and 3
39 | ```
40 | curl http://www.linux-projects.org/listing/uv4l_repo/lpkey.asc | sudo apt-key add -
41 | echo "deb http://www.linux-projects.org/listing/uv4l_repo/raspbian/stretch stretch main" | sudo tee -a /etc/apt/sources.list
42 | sudo apt-get update
43 | sudo apt-get install -y uv4l uv4l-raspicam uv4l-raspicam-extras uv4l-webrtc uv4l-raspidisp uv4l-raspidisp-extras
44 | ```
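
Once UV4L is installed, `server.py` pushes inference results to the raspidisp data channel through the Unix socket at `/tmp/uv4l-raspidisp.socket` (path taken from `server.py`). A quick sanity check, including the ownership fix that `server.py` suggests if it cannot access the socket:

```
ls -l /tmp/uv4l-raspidisp.socket
sudo chown pi: /tmp/uv4l-raspidisp.socket
```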
45 |
46 | ## Command Line Options
47 |
48 | The following options are available when running `python3 server.py`:
49 |
50 | Verbose switch | Short switch | Default | Description
51 | ---|---|---|---
52 | --model MODEL | -m MODEL | face | Sets the model to use: `face`, `object`, or `class`
53 | --cam-mode CAM_MODE | -c CAM_MODE | 5 | Sets the [Pi Camera Mode](https://www.raspberrypi.org/documentation/raspbian/applications/camera.md)
54 | --framerate FRAMERATE | -f FRAMERATE | 15 | Sets the camera frame rate
55 | --hres HRES | -hr HRES | 1640 | Sets the horizontal resolution
56 | --vres VRES | -vr VRES | 922 | Sets the vertical resolution
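
For example, a sample invocation combining a few of these flags (behavior as defined by the argparse options in `server.py`, which also provides `--stats`/`-s` and `--record`/`-r` switches not listed above):

```
python3 server.py --model object --framerate 10 --stats
```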
--------------------------------------------------------------------------------
/aiy_model_output.py:
--------------------------------------------------------------------------------
1 | # Function to standardize inference output of AIY models
2 | import json # Format API output
3 |
4 | # AIY requirements
5 | from aiy.vision.models import object_detection, face_detection, image_classification
6 |
7 | # return the appropriate model
8 | def model_selector(argument):
9 | options = {
10 | "object": object_detection.model(),
11 | "face": face_detection.model(),
12 | "class": image_classification.model()
13 | }
14 | return options.get(argument, "nothing")
15 |
16 |
17 | # helper class to convert inference output to JSON
18 | class ApiObject(object):
19 | def __init__(self):
20 | self.name = "webrtcHacks AIY Vision Server REST API"
21 | self.version = "0.2.1"
22 | self.numObjects = 0
23 | self.objects = []
24 |
25 | def to_json(self):
26 | return json.dumps(self.__dict__)
27 |
28 |
29 | def process_inference(model, result, params):
30 |
31 | output = ApiObject()
32 |
33 | # handler for the AIY Vision object detection model
34 | if model == "object":
35 | output.threshold = 0.3
36 | objects = object_detection.get_objects(result, output.threshold)
37 |
38 | for obj in objects:
39 | # print(object)
40 | item = {
41 | 'name': 'object',
42 | 'class_name': obj._LABELS[obj.kind],
43 | 'score': obj.score,
44 | 'x': obj.bounding_box[0] / params['width'],
45 | 'y': obj.bounding_box[1] / params['height'],
46 | 'width': obj.bounding_box[2] / params['width'],
47 | 'height': obj.bounding_box[3] / params['height']
48 | }
49 |
50 | output.numObjects += 1
51 | output.objects.append(item)
52 |
53 | # handler for the AIY Vision face detection model
54 | elif model == "face":
55 | faces = face_detection.get_faces(result)
56 |
57 | for face in faces:
58 | # print(face)
59 | item = {
60 | 'name': 'face',
61 | 'score': face.face_score,
62 | 'joy': face.joy_score,
63 | 'x': face.bounding_box[0] / params['width'],
64 | 'y': face.bounding_box[1] / params['height'],
65 | 'width': face.bounding_box[2] / params['width'],
66 | 'height': face.bounding_box[3] / params['height']
67 | }
68 |
69 | output.numObjects += 1
70 | output.objects.append(item)
71 |
72 | elif model == "class":
73 | output.threshold = 0.3
74 | classes = image_classification.get_classes(result)
75 |
76 | s = ""
77 |
78 | for (obj, prob) in classes:
79 | if prob > output.threshold:
80 | s += '%s=%1.2f\t|\t' % (obj, prob)
81 |
82 | item = {
83 | 'name': 'class',
84 | 'class_name': obj,
85 | 'score': prob
86 | }
87 |
88 | output.numObjects += 1
89 | output.objects.append(item)
90 |
91 | # print('%s\r' % s)
92 |
93 | return output
94 |
--------------------------------------------------------------------------------
/join_videos.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | framerate=${2:-15}
3 | ffmpeg -framerate $framerate -n -i "concat:./recordings/$1_before.h264|./recordings/$1_after.h264" ./recordings/$1.mp4
--------------------------------------------------------------------------------
/picam_record.py:
--------------------------------------------------------------------------------
1 | # modified from: https://picamera.readthedocs.io/en/release-1.2/recipes2.html#splitting-to-from-a-circular-stream
2 | # Output needs to be entered into ffmpeg -
3 | # example: ffmpeg -framerate 15 -i "concat:1535935485_before.h264|1535935485_after.h264" 1535935485.mp4
4 |
5 | import io
6 | import os
7 | from time import time
8 |
9 | from picamera import PiCameraCircularIO
10 |
11 | is_recording = False
12 |
13 |
14 | def init(before_detection=5, timeout=5, max_length=30):
15 | global record_time_before_detection, no_detection_timeout, max_recording_length
16 | record_time_before_detection = before_detection
17 | no_detection_timeout = timeout
18 | max_recording_length = max_length
19 |
20 |
21 | # Start recording
22 | def start(cam):
23 |
24 | # setup globals
25 | global camera, stream
26 | camera = cam
27 |
28 | stream = PiCameraCircularIO(camera, seconds=record_time_before_detection)
29 | camera.start_recording(stream, format='h264')
30 |
31 | camera.wait_recording(5) # make sure recording is loaded
32 | print("recording initialized")
33 |
34 |
35 | def write_video(stream, file):
36 | # Write the entire content of the circular buffer to disk. No need to
37 | # lock the stream here as we're definitely not writing to it
38 | # simultaneously
39 | with io.open(file, 'wb') as output:
40 | for frame in stream.frames:
41 | if frame.header:
42 | stream.seek(frame.position)
43 | break
44 | while True:
45 | buf = stream.read1()
46 | if not buf:
47 | break
48 | output.write(buf)
49 |
50 | print("wrote %s" % file)
51 | # Wipe the circular stream once we're done
52 | stream.seek(0)
53 | stream.truncate()
54 |
55 |
56 | before_file = "error.h264"
57 | after_file = "error.h264"
58 |
59 |
60 | def detection(detected):
61 | # Recording
62 | global is_recording, recording_start_time, last_detection_time, before_file, after_file
63 |
64 |
65 | now = time()
66 |
67 | if detected:
68 | if not is_recording:
69 | print("Detection started")
70 | is_recording = True
71 | recording_start_time = int(now)
72 | before_file = (os.path.join('./recordings', '%d_before.h264' % recording_start_time))
73 | after_file = (os.path.join('./recordings', '%d_after.h264' % recording_start_time))
74 | # start recording frames after the initial detection
75 | camera.split_recording(after_file)
76 | # Write the buffered video from before the detection (record_time_before_detection seconds)
77 | write_video(stream, before_file)
78 |
79 | # write to disk if max recording length exceeded
80 | elif int(now) - recording_start_time > max_recording_length - record_time_before_detection:
81 | print("Max recording length reached. Writing %s" % after_file)
82 | # Split here: write to after_file, and start capturing to the stream again
83 | camera.split_recording(stream)
84 | is_recording = False
85 |
86 | last_detection_time = now
87 | else:
88 | if is_recording and int(now)-last_detection_time > no_detection_timeout:
89 | print("No more detections, writing %s" % after_file)
90 | # Split here: write to after_file, and start capturing to the stream again
91 | camera.split_recording(stream)
92 | is_recording = False
93 |
--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2018 Chad Wallace Hart
2 | # Attribution notice:
3 | # Large portions of this code are from https://github.com/google/aiyprojects-raspbian
4 | # Copyright 2017 Google Inc.
5 | # http://www.apache.org/licenses/LICENSE-2.0
6 | #
7 | # Walkthrough and function details: https://webrtchacks.com/aiy-vision-kit-uv4l-web-server/
8 | # Source repo: https://github.com/webrtcHacks/aiy_vision_web_server
9 |
10 | from threading import Thread, Event # Multi-threading
11 | import queue # Multi-threading
12 | from time import time, sleep
13 | from datetime import datetime # Timing & stats output
14 | import socket # uv4l communication
15 | import os # help with connecting to the socket file
16 | import argparse # Commandline arguments
17 | import random # Used for performance testing
18 |
19 | from picamera import PiCamera # PiCam hardware
20 | from flask import Flask, Response # Web server
21 |
22 | from aiy_model_output import model_selector, process_inference
23 | import picam_record as record
24 |
25 | # AIY requirements
26 | from aiy.leds import Leds
27 | from aiy.leds import PrivacyLed
28 | from aiy.vision.inference import CameraInference, ImageInference
29 |
30 |
31 | socket_connected = False
32 | q = queue.Queue(maxsize=1) # we'll use this for inter-process communication
33 | # ToDo: remove these
34 | # capture_width = 1640 # The max horizontal resolution of PiCam v2
35 | # capture_height = 922 # Max vertical resolution on PiCam v2 with a 16:9 ratio
36 | time_log = []
37 |
38 |
39 | # Control connection to the linux socket and send messages to it
40 | def socket_data(run_event, send_rate=1/30):
41 | socket_path = '/tmp/uv4l-raspidisp.socket'
42 |
43 | # wait for a connection
44 | def wait_to_connect():
45 | global socket_connected
46 |
47 | print('socket waiting for connection...')
48 | while run_event.is_set():
49 | try:
50 | socket_connected = False
51 | connection, client_address = s.accept()
52 | print('socket connected')
53 | socket_connected = True
54 | send_data(connection)
55 |
56 | except socket.timeout:
57 | continue
58 |
59 | except socket.error as err:
60 | print("socket error: %s" % err)
61 | break
62 |
63 | print("closing socket")
64 | s.close()
65 | socket_connected = False
66 |
67 | # continually send data as it comes in from the q
68 | def send_data(connection):
69 | while run_event.is_set():
70 | try:
71 | if q.qsize() > 0:
72 | message = q.get()
73 | connection.send(str(message).encode())
74 |
75 | sleep(send_rate)
76 | except socket.error as send_err:
77 | print("connected socket error: %s" % send_err)
78 | return
79 |
80 | try:
81 | # Create the socket file if it does not exist
82 | if not os.path.exists(socket_path):
83 | f = open(socket_path, 'w')
84 | f.close()
85 |
86 | os.unlink(socket_path)
87 | s = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
88 | s.bind(socket_path)
89 | s.listen(1)
90 | s.settimeout(1)
91 | wait_to_connect()
92 | except socket.error as sock_err:
93 | print("socket error: %s" % sock_err)
94 | return
95 | except OSError:
96 | if os.path.exists(socket_path):
97 | print("Error accessing %s\nTry running 'sudo chown pi: %s'" % (socket_path, socket_path))
98 | os._exit(0)
99 | return
100 | else:
101 | print("Socket file not found. Did you configure uv4l-raspidisp to use %s?" % socket_path)
102 | raise
103 |
104 |
105 | # AIY Vision setup and inference
106 | def run_inference(run_event, model="face", framerate=15, cam_mode=5, hres=1640, vres=922, stats=False, recording=False):
107 | # See the Raspicam documentation for mode and framerate limits:
108 | # https://picamera.readthedocs.io/en/release-1.13/fov.html#sensor-modes
109 | # Default to the highest resolution possible at 16:9 aspect ratio
110 |
111 | global socket_connected, time_log
112 |
113 | leds = Leds()
114 |
115 | with PiCamera() as camera, PrivacyLed(leds):
116 | camera.sensor_mode = cam_mode
117 | camera.resolution = (hres, vres)
118 | camera.framerate = framerate
119 | camera.video_stabilization = True
120 | camera.start_preview() # fullscreen=True)
121 |
122 | tf_model = model_selector(model)
123 |
124 | # this should not trigger with the default of "face", but guards against an invalid --model value
125 | if tf_model == "nothing":
126 | print("No tensorflow model or invalid model specified - exiting..")
127 | camera.stop_preview()
128 | os._exit(0)
129 | return
130 |
131 | if recording:
132 | record.start(camera)
133 |
134 | try:
135 | with CameraInference(tf_model) as inference:
136 | print("%s model loaded" % model)
137 |
138 | last_time = time() # measure inference time
139 |
140 | for result in inference.run():
141 |
142 | # exit on shutdown
143 | if not run_event.is_set():
144 | camera.stop_preview()
145 | return
146 |
147 | output = process_inference(model, result, {'height':vres , 'width': hres})
148 |
149 | now = time()
150 | output.timeStamp = now
151 | output.inferenceTime = (now - last_time)
152 | last_time = now
153 |
154 | # Process detection
155 | # No need to do anything else if there are no objects
156 | if output.numObjects > 0:
157 |
158 | # API Output
159 | output_json = output.to_json()
160 | print(output_json)
161 |
162 | # Send the json object if there is a socket connection
163 | if socket_connected is True:
164 | q.put(output_json)
165 |
166 | if recording:
167 | record.detection(output.numObjects > 0)
168 |
169 | # Additional data to measure inference time
170 | if stats:
171 | time_log.append(output.inferenceTime)
172 | time_log = time_log[-10:] # just keep the last 10 times
173 | print("Avg inference time: %s" % (sum(time_log)/len(time_log)))
174 | finally:
175 | camera.stop_preview()
176 | if recording:
177 | camera.stop_recording()
178 |
179 | # Web server setup
180 | app = Flask(__name__)
181 |
182 |
183 | def flask_server():
184 | app.run(debug=False, host='0.0.0.0', threaded=True) # use_reloader=False
185 |
186 |
187 | @app.route('/')
188 | def index():
189 | return Response(open('static/index.html').read(), mimetype="text/html")
190 |
191 | # Note: This won't be able to play the files without conversion.
192 | # Running ffmpeg while running inference & streaming will be too intensive for the Pi Zeros
193 | # Look to make this a user controlled process or do it in the browser
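# (video_maker.py and join_videos.sh in this repo can be run separately to do that ffmpeg conversion offline)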
194 | @app.route('/recordings')
195 | def recordings():
196 | # before_list = glob(os.getcwd() + '/recordings/*_before.h264')
197 | # after_list = glob(os.getcwd() + '/recordings/*_after.h264')
198 |
199 | # print(before_list)
200 | # print(after_list)
201 |
202 | files = [f for f in os.listdir('./recordings') if f.endswith(".h264")]
203 | files.sort()
204 | print(files)
205 |
206 | html_table = "<table><tr><th>Before Detection</th><th>After Detection</th></tr>"
207 | # for i in range(len(before_list)):
208 | #     html_table = html_table + "<tr><td>" + before_list[i] + "</td><td>" + after_list[i] + "</td></tr>"
209 |
210 | html_table = html_table + "</table>"
211 | print(html_table)
212 | page = "<h1>List of Recordings</h1><p>The list goes here</p>%s" % html_table
213 | print(page)
214 | return Response(page)
215 |
216 |
217 | # test route to verify the flask is working
218 | @app.route('/ping')
219 | def ping():
220 | return Response("pong")
221 |
222 |
223 | @app.route('/socket-test')
224 | def socket_test():
225 | return Response(open('static/socket-test.html').read(), mimetype="text/html")
226 |
227 | '''
228 | def socket_tester(rate):
229 | output = ApiObject()
230 | last_time = False
231 | count = 0
232 |
233 | while True:
234 | if socket_connected is True:
235 | count += 1
236 | current_time = time()
237 | output.time = (datetime.utcnow()-datetime.utcfromtimestamp(0)).total_seconds()*1000
238 | output.data = ''.join(random.choice("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789") for _ in range(1000))
239 | output.count = count
240 | q.put(output.to_json())
241 | print("count, timestamp, delta: %s %s %s" % (output.count, current_time, current_time - last_time))
242 | last_time = current_time
243 |
244 | sleep(rate)
245 | '''
246 |
247 |
248 | # Main control logic to parse args and spawn threads
249 | def main(webserver):
250 |
251 | # Command line parameters to help with testing and optimization
252 | parser = argparse.ArgumentParser()
253 | parser.add_argument(
254 | '--model',
255 | '-m',
256 | dest='model',
257 | default='face',
258 | help='Sets the model to use: "face", "object", or "class"')
259 | parser.add_argument(
260 | '--cam-mode',
261 | '-c',
262 | type=int,
263 | dest='cam_mode',
264 | default=5,
265 | help='Sets the camera mode. Default is 5')
266 | parser.add_argument(
267 | '--framerate',
268 | '-f',
269 | type=int,
270 | dest='framerate',
271 | default=15,
272 | help='Sets the camera framerate. Default is 15')
273 | parser.add_argument(
274 | '--hres',
275 | '-hr',
276 | type=int,
277 | dest='hres',
278 | default=1640,
279 | help='Sets the horizontal resolution')
280 | parser.add_argument(
281 | '--vres',
282 | '-vr',
283 | type=int,
284 | dest='vres',
285 | default=922,
286 | help='Sets the vertical resolution')
287 | parser.add_argument(
288 | '--stats',
289 | '-s',
290 | action='store_true',
291 | help='Show average inference timing statistics')
292 | parser.add_argument(
293 | '--perftest',
294 | '-t',
295 | dest='perftest',
296 | action='store_true',
297 | help='Start socket performance test')
298 | parser.add_argument(
299 | '--record',
300 | '-r',
301 | dest='record',
302 | action='store_true',
303 | help='Record')
304 | # ToDo: Add recorder parameters
305 | parser.epilog = 'For more info see the github repo: https://github.com/webrtcHacks/aiy_vision_web_server/' \
306 | ' or the webrtcHacks blog post: https://webrtchacks.com/?p=2824'
307 | args = parser.parse_args()
308 |
309 | is_running = Event()
310 | is_running.set()
311 |
312 | '''
313 | if args.perftest:
314 | print("Socket performance test mode")
315 |
316 | # run this independent of a flask connection so we can test it with the uv4l console
317 | socket_thread = Thread(target=socket_data, args=(is_running, 1/1000,))
318 | socket_thread.start()
319 |
320 | socket_test_thread = Thread(target=socket_tester,
321 | args=(0.250,))
322 | socket_test_thread.start()
323 |
324 | else:
325 | '''
326 |
327 | if args.record:
328 | record.init(before_detection=5, timeout=5, max_length=30)
329 |
330 | # run this independent of a flask connection so we can test it with the uv4l console
331 | socket_thread = Thread(target=socket_data, args=(is_running, 1 / args.framerate,))
332 | socket_thread.start()
333 |
334 | # thread for running AIY Tensorflow inference
335 | detection_thread = Thread(target=run_inference,
336 | args=(is_running, args.model, args.framerate, args.cam_mode,
337 | args.hres, args.vres, args.stats, args.record))
338 | detection_thread.start()
339 |
340 | # run Flask in the main thread
341 | webserver.run(debug=False, host='0.0.0.0')
342 |
343 | # close threads when flask is done
344 | print("exiting...")
345 | is_running.clear()
346 | '''
347 | if args.perftest:
348 | socket_test_thread.join(0)
349 | else:
350 | detection_thread.join(0)
351 | '''
352 | socket_thread.join(0)
353 |
354 |
355 | if __name__ == '__main__':
356 | main(app)
357 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import find_packages
2 | from setuptools import setup
3 | import subprocess
4 |
5 | socket_path = '/tmp/uv4l-raspidisp.socket'
6 |
7 | REQUIRED_PACKAGES = ['Flask', 'argparse', 'picamera']
8 |
9 | setup(
10 | name='aiy_vision_web_server',
11 | version='0.1',
12 | install_requires=REQUIRED_PACKAGES,
13 | include_package_data=True,
14 | packages=[p for p in find_packages()],
15 | description='AIY Vision Web Server',
16 | )
17 |
18 | # Change the owner on the UV4L socket file
19 |
20 | try:
21 | print("Disabling uv4l-raspidisp keyboard to socket and releasing %s" % socket_path)
22 | subprocess.Popen("./uv4l-socket.sh %s" % socket_path, shell=True)
23 | except:
24 | raise
25 |
--------------------------------------------------------------------------------
/static/drawAiyVision.js:
--------------------------------------------------------------------------------
1 | /**
2 | * Created by chadwallacehart on 1/27/18.
3 | * Setup and draw boxes on a canvas
4 | * Written for webrtcHacks - https://webrtchacks.com
5 | */
6 |
7 | /*exported processAiyData*/
8 |
9 |
10 | //Video element selector
11 | let v = document.getElementById("remoteVideo");
12 |
13 | //for starting events
14 | let isPlaying = false,
15 | gotMetadata = false;
16 |
17 | //Canvas setup
18 |
19 | //create a canvas for drawing object boundaries
20 | let drawCanvas = document.createElement('canvas');
21 | document.body.appendChild(drawCanvas);
22 | let drawCtx = drawCanvas.getContext("2d");
23 |
24 | let lastSighting = null;
25 |
26 | //Convert RGB color integer values to hex
27 | function toHex(n) {
28 | if (n < 256) {
29 | return Math.abs(n)
30 | .toString(16).padStart(2, '0'); //pad to two hex digits so single-digit values still form a valid color
31 | }
32 | return 0;
33 | }
34 |
35 | //draw boxes and labels on each detected object
36 | function drawBox(x, y, width, height, label, color) {
37 |
38 | if (color)
39 | drawCtx.strokeStyle = drawCtx.fillStyle = '#' + toHex(color.r) + toHex(color.g) + toHex(color.b);
40 |
41 | let cx = x * drawCanvas.width;
42 | let cy = y * drawCanvas.height;
43 | let cWidth = (width * drawCanvas.width) - x;
44 | let cHeight = (height * drawCanvas.height) - y;
45 |
46 | drawCtx.fillText(label, cx + 5, cy - 10);
47 | drawCtx.strokeRect(cx, cy, cWidth, cHeight);
48 |
49 | }
50 |
51 |
52 | //Main function to export
53 | function processAiyData(result) {
54 | console.log(result);
55 |
56 | lastSighting = Date.now();
57 |
58 | //clear the previous drawings
59 | drawCtx.clearRect(0, 0, drawCanvas.width, drawCanvas.height);
60 |
61 | result.objects.forEach((item, itemNum) => {
62 |
63 | let label;
64 |
65 | switch (item.name) {
66 | case "face": {
67 | label = "Face: " + Math.round(item.score * 100) + "%" + " Joy: " + Math.round(item.joy * 100) + "%";
68 | let color = {
69 | r: Math.round(item.joy * 255),
70 | g: 70,
71 | b: Math.round((1 - item.joy) * 255)
72 | };
73 |
74 | drawBox(item.x, item.y, item.width, item.height, label, color);
75 | break;
76 | }
77 | case "object": {
78 | label = item.class_name + " - " + Math.round(item.score * 100) + "%";
79 | drawBox(item.x, item.y, item.width, item.height, label);
80 | break;
81 | }
82 | case "class": {
83 | label = item.class_name + " - " + Math.round(item.score * 100) + "%";
84 | drawCtx.fillText(label, 20, 20 * (itemNum + 1));
85 | break;
86 | }
87 | default: {
88 | console.log("I don't know what that AIY Vision server response was");
89 | }
90 | }
91 | });
92 |
93 | }
94 |
95 |
96 | //Start object detection
97 | function setupCanvas() {
98 |
99 | console.log("Ready to draw");
100 |
101 | //Set canvas sizes based on input video
102 | drawCanvas.width = v.videoWidth;
103 | drawCanvas.height = v.videoHeight;
104 |
105 | //Some styles for the drawCanvas
106 | drawCtx.lineWidth = 8;
107 | drawCtx.strokeStyle = "cyan";
108 | drawCtx.font = "20px Verdana";
109 | drawCtx.fillStyle = "cyan";
110 |
111 | //if no updates in the last second then clear the canvas
112 | setInterval(() => {
113 | if (Date.now() - lastSighting > 1000)
114 | drawCtx.clearRect(0, 0, drawCanvas.width, drawCanvas.height);
115 | }, 500)
116 |
117 | }
118 |
119 | //Starting events
120 |
121 | //check if metadata is ready - we need the video size
122 | v.onloadedmetadata = () => {
123 | console.log("video metadata ready");
124 | gotMetadata = true;
125 | if (isPlaying)
126 | setupCanvas();
127 | };
128 |
129 | //see if the video has started playing
130 | v.onplaying = () => {
131 | console.log("video playing");
132 | isPlaying = true;
133 | if (gotMetadata) {
134 | setupCanvas();
135 | }
136 | };
137 |
138 |
--------------------------------------------------------------------------------
/static/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/webrtcHacks/aiy_vision_web_server/12f23890eec1c7f17c41b8dd07442ec5d9eb469d/static/favicon.ico
--------------------------------------------------------------------------------
/static/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | AIY Vision Detection View and Annotation
6 |
7 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
--------------------------------------------------------------------------------
/static/socket-test.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | AIY Vision Detection View and Annotation
6 |
7 |
20 |
21 |
22 |
23 |
36 |
37 |
38 |
--------------------------------------------------------------------------------
/static/uv4l.js:
--------------------------------------------------------------------------------
1 | /**
2 | * Created by chadwallacehart on 1/27/18.
3 | * Adaption of uv4l WebRTC samples to receive only
4 | */
5 |
6 | /*global processAiyData:false*/
7 |
8 | const uv4lPort = 9080; //This is determined by the uv4l configuration. 9080 is default set by uv4l-raspidisp-extras
9 | const protocol = location.protocol === "https:" ? "wss:" : "ws:";
10 | const signalling_server_address = location.hostname;
11 | let ws = new WebSocket(protocol + '//' + signalling_server_address + ':' + uv4lPort + '/stream/webrtc');
12 |
13 | //Global vars
14 | let remotePc = false;
15 | let pc,
16 | dataChannel;
17 | let iceCandidates = [];
18 | const remoteVideo = document.querySelector('#remoteVideo');
19 |
20 |
21 | ////////////////////////////////////////
22 | /*** Peer Connection Event Handlers ***/
23 |
24 | function gotRemoteStream(e) {
25 | if (remoteVideo.srcObject !== e.streams[0]) {
26 | remoteVideo.srcObject = e.streams[0];
27 | console.log('Received remote stream');
28 | }
29 | }
30 |
31 | function gotDataChannel(event) {
32 | console.log("Data Channel opened");
33 | let receiveChannel = event.channel;
34 | receiveChannel.addEventListener('message', event => processAiyData(JSON.parse(event.data)));
35 | receiveChannel.addEventListener('error', err => console.error("DataChannel Error:", err));
36 | receiveChannel.addEventListener('close', () => console.log("The DataChannel is closed"));
37 | }
38 |
39 | ////////////////////////////////////
40 | /*** Call signaling to start call and
41 | * handle ICE candidates ***/
42 |
43 | function startCall() {
44 |
45 | const pcConfig = {
46 | iceServers: [{
47 | urls: ["stun:" + signalling_server_address + ":3478"]
48 | }]
49 | };
50 |
51 | //Setup our peerConnection object
52 | pc = new RTCPeerConnection(pcConfig);
53 |
54 | pc.addEventListener('track', gotRemoteStream);
55 | pc.addEventListener('datachannel', gotDataChannel);
56 |
57 | //Send the call command
58 | let req = {
59 | what: "call",
60 | options: {
61 | force_hw_vcodec: true,
62 | vformat: 55,
63 | trickle_ice: true
64 | }
65 | };
66 |
67 | ws.send(JSON.stringify(req));
68 | console.log("Initiating call request" + JSON.stringify(req));
69 |
70 | }
71 |
72 | //Process incoming ICE candidates
73 | // uv4l sends them in a format that adapter.js doesn't like, so regenerate
74 | function addIceCandidate(candidate) {
75 |
76 | function onAddIceCandidateSuccess() {
77 | console.log("Successfully added ICE candidate")
78 | }
79 |
80 | function onAddIceCandidateError(err) {
81 | console.error("Failed to add candidate: " + err)
82 | }
83 |
84 | let generatedCandidate = new RTCIceCandidate({
85 | sdpMLineIndex: candidate.sdpMLineIndex,
86 | candidate: candidate.candidate,
87 | sdpMid: candidate.sdpMid
88 | });
89 | //console.log("Created ICE candidate: " + JSON.stringify(generatedCandidate));
90 |
91 | //Hold on to them in case the remote PeerConnection isn't ready
92 | iceCandidates.push(generatedCandidate);
93 |
94 | //Add the generated candidates when the remote PeerConnection is ready
95 | if (remotePc) {
96 | iceCandidates.forEach((candidate) =>
97 | pc.addIceCandidate(candidate)
98 | .then(onAddIceCandidateSuccess, onAddIceCandidateError)
99 | );
100 | console.log("Added " + iceCandidates.length + " remote candidate(s)");
101 | iceCandidates = [];
102 | }
103 | }
104 |
105 | //Handle Offer/Answer exchange
106 | function offerAnswer(remoteSdp) {
107 |
108 | //Start the answer by setting the remote SDP
109 | pc.setRemoteDescription(new RTCSessionDescription(remoteSdp))
110 | .then(() => {
111 | remotePc = true;
112 | console.log("setRemoteDescription complete")
113 | },
114 | (err) => console.error("Failed to setRemoteDescription: " + err));
115 |
116 |
117 | //Create the local SDP
118 | pc.createAnswer()
119 | .then(
120 | (localSdp) => {
121 | pc.setLocalDescription(localSdp)
122 | .then(() => {
123 | console.log("setLocalDescription complete");
124 |
125 | //send the answer
126 | let req = {
127 | what: "answer",
128 | data: JSON.stringify(localSdp)
129 | };
130 | ws.send(JSON.stringify(req));
131 | console.log("Sent local SDP: " + JSON.stringify(localSdp));
132 |
133 | },
134 | (err) => console.error("setLocalDescription error:" + err));
135 | },
136 | (err) =>
137 | console.log('Failed to create session description: ' + err.toString())
138 | );
139 |
140 | }
141 |
142 |
143 | ////////////////////////////////////
144 | /*** Handle WebSocket messages ***/
145 |
146 | function websocketEvents() {
147 |
148 | /*** Signaling logic ***/
149 | ws.onopen = () => {
150 | console.log("websocket open");
151 |
152 | startCall();
153 | };
154 |
155 | ws.onmessage = (event) => {
156 | let message = JSON.parse(event.data);
157 | console.log("Incoming message:" + JSON.stringify(message));
158 |
159 | if (!message.what) {
160 | console.error("Websocket message not defined");
161 | return;
162 | }
163 |
164 | switch (message.what) {
165 |
166 | case "offer":
167 | offerAnswer(JSON.parse(message.data));
168 | break;
169 |
170 | case "iceCandidate":
171 | if (!message.data) {
172 | console.log("Ice Gathering Complete");
173 | } else
174 | addIceCandidate(JSON.parse(message.data));
175 | break;
176 |
177 | //ToDo: Ask about this - I can't get this message to show
178 | case "iceCandidates":
179 | let candidates = JSON.parse(message.data);
180 | candidates.forEach((candidate) =>
181 | addIceCandidate(JSON.parse(candidate)));
182 | break;
183 |
184 | default:
185 | console.warn("Unhandled websocket message: " + JSON.stringify(message))
186 | }
187 | };
188 |
189 | ws.onerror = (err) => {
190 | console.error("Websocket error: " + err.toString());
191 | };
192 |
193 | ws.onclose = () => {
194 | console.log("Websocket closed.");
195 | };
196 |
197 |
198 | }
199 |
200 | ////////////////////////////////
201 | /*** General control logic ***/
202 |
203 | //Exit gracefully
204 | window.onbeforeunload = () => {
205 | remoteVideo.src = '';
206 |
207 | if (pc) {
208 | pc.close();
209 | pc = null;
210 | }
211 |
212 | if (ws) {
213 | ws.send({log: 'closing browser'});
214 | ws.send(JSON.stringify({what: "hangup"}));
215 | ws.close();
216 | ws = null;
217 | }
218 | };
219 |
220 | //////////////////////////
221 | /*** video handling ***/
222 |
223 | document.addEventListener("DOMContentLoaded", () => {
224 |
225 | websocketEvents();
226 |
227 | remoteVideo.addEventListener('loadedmetadata', function () {
228 | console.log(`Remote video videoWidth: ${this.videoWidth}px, videoHeight: ${this.videoHeight}px`);
229 | });
230 |
231 | remoteVideo.addEventListener('resize', () => {
232 | console.log(`Remote video size changed to ${remoteVideo.videoWidth}x${remoteVideo.videoHeight}`);
233 | });
234 |
235 | });
--------------------------------------------------------------------------------
/tests/socket_client.py:
--------------------------------------------------------------------------------
1 | import socket
2 | import os
3 | from time import time, sleep
4 |
5 | socket_path = '/tmp/uv4l-raspidisp.socket'
6 |
7 |
8 | def print_socket():
9 |
10 | # wait for a connection
11 | def wait_to_connect():
12 |
13 | print('connecting...')
14 | while True:
15 | try:
16 | connection, client_address = s.accept()
17 | print('socket connected')
18 |
19 | data = connection.recv(64)
20 | if len(data) > 0:
21 | print(data)
22 | break
23 |
24 | sleep(0.001)
25 |
26 | except socket.timeout:
27 | print("timeout..")
28 | continue
29 |
30 | except socket.error as err:
31 | print("socket error: %s" % err)
32 | raise
33 |
34 | print("closing socket")
35 | s.close()
36 |
37 | try:
38 | s = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
39 | s.connect(socket_path)
40 |
41 | while True:
42 | data = s.recv(64)
43 | print(repr(data))
44 | sleep(0.001)
45 |
46 |
47 | except OSError:
48 | raise
49 | except socket.error as sock_err:
50 | print("socket error: %s" % sock_err)
51 | return
52 | except:
53 | print("closing..")
54 | s.close()
55 | raise
56 |
57 |
58 | if __name__ == '__main__':
59 | print_socket()
60 |
--------------------------------------------------------------------------------
/tests/uv4l_socket_tester.py:
--------------------------------------------------------------------------------
1 | import socket
2 | import queue
3 | import os
4 | import json
5 | import random
6 | from datetime import datetime
7 | from time import time, sleep
8 | from threading import Thread
9 |
10 | socket_connected = False
11 | end = False
12 | q = queue.Queue(maxsize=1) # we'll use this for inter-process communication
13 |
14 |
15 | # Control connection to the linux socket and send messages to it
16 | def socket_data(send_rate=1/30):
17 | socket_path = '/tmp/uv4l-raspidisp.socket'
18 |
19 | # wait for a connection
20 | def wait_for_client():
21 | global socket_connected, end
22 |
23 | print('socket waiting for connection...')
24 | while True:
25 | try:
26 | socket_connected = False
27 | connection, client_address = s.accept()
28 | print('socket connected')
29 | socket_connected = True
30 | send_data(connection)
31 |
32 | if end:
33 | return
34 |
35 | except socket.timeout:
36 | continue
37 |
38 | except socket.error as err:
39 | print("socket error: %s" % err)
40 | break
41 |
42 | except KeyboardInterrupt:
43 | return
44 |
45 | print("closing socket")
46 | s.close()
47 | socket_connected = False
48 |
49 | # continually send data as it comes in from the q
50 | def send_data(connection):
51 | while True:
52 | try:
53 | if q.qsize() > 0:
54 | message = q.get()
55 | connection.send(str(message).encode())
56 | if end:
57 | return
58 | sleep(send_rate)
59 |
60 | except socket.error as send_err:
61 | print("connected socket error: %s" % send_err)
62 | return
63 | except KeyboardInterrupt:
64 | return
65 |
66 | try:
67 | # Create the socket file if it does not exist
68 | if not os.path.exists(socket_path):
69 | f = open(socket_path, 'w')
70 | f.close()
71 |
72 | os.remove(socket_path)
73 | s = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
74 | s.bind(socket_path)
75 | s.listen(1)
76 | s.settimeout(1)
77 | wait_for_client()
78 | except OSError:
79 | if os.path.exists(socket_path):
80 | print("Error accessing %s\nTry running 'sudo chown pi: %s'" % (socket_path, socket_path))
81 | os._exit(0)
82 | return
83 | else:
84 | pass
85 | except socket.error as sock_err:
86 | print("socket error: %s" % sock_err)
87 | return
88 |
89 |
90 | # helper class to convert inference output to JSON
91 | class ApiObject(object):
92 | def __init__(self):
93 | self.name = "socket test"
94 | self.version = "0.0.1"
95 | self.numObjects = 0
96 |
97 | def to_json(self):
98 | return json.dumps(self.__dict__)
99 |
100 |
101 | def socket_tester(rate):
102 | output = ApiObject()
103 | last_time = False
104 | count = 0
105 |
106 | try:
107 | while True:
108 | if socket_connected is True:
109 | count += 1
110 | current_time = time()
111 | output.time = (datetime.utcnow()-datetime.utcfromtimestamp(0)).total_seconds()*1000
112 | output.data = ''.join(random.choice("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789") for _ in range(1000))
113 | output.count = count
114 | q.put(output.to_json())
115 | print("count, timestamp, delta: %s %s %s" % (output.count, current_time, current_time - last_time))
116 | last_time = current_time
117 |
118 | sleep(rate)
119 | except KeyboardInterrupt:
120 | return
121 |
122 |
123 | def main():
124 | global end
125 | socket_thread = Thread(target=socket_data, args=(1 / 1000,))
126 | socket_thread.start()
127 |
128 | socket_tester(1/15)
129 |
130 | # ToDo: figure out why this does not close cleanly
131 | end = True
132 | print("Exiting..")
133 | socket_thread.join(0)
134 |
135 |
136 | if __name__ == '__main__':
137 | main()
--------------------------------------------------------------------------------
/uv4l-socket.sh:
--------------------------------------------------------------------------------
1 | sudo systemctl stop raspidisp_server.service
2 | sudo systemctl disable raspidisp_server.service
3 | sudo chown pi: $1
--------------------------------------------------------------------------------
/video_maker.py:
--------------------------------------------------------------------------------
1 | import os
2 | from glob import glob
3 | import re
4 | import subprocess
5 | import argparse
6 |
7 | def make_videos(files_to_process, framerate=15):
8 |
9 | for id in files_to_process:
10 | print("creating %s.mp4" % id)
11 | subprocess.call('./join_videos.sh %s %s' % (id, framerate), shell=True)  # pass the framerate through to ffmpeg
12 |
13 |
14 | def main():
15 | parser = argparse.ArgumentParser()
16 | parser.add_argument(
17 | '--view',
18 | '-v',
19 | dest='view',
20 | default=False,
21 | action='store_true',
22 | help="View-only mode - do not process any files")
23 |
24 | parser.add_argument(
25 | '--framerate',
26 | '-f',
27 | type=int,
28 | dest='framerate',
29 | default=15,
30 | help='Sets the camera framerate. Default is 15')
31 |
32 |
33 |
34 | args = parser.parse_args()
35 |
36 | path = "./recordings"
37 | mp4_files = glob(os.path.join(path, "[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*.mp4"))
38 | h264_files = glob(os.path.join(path, "[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*_*.h264"))
39 |
40 | h264_ids = []
41 | to_process = []
42 |
43 | for file in mp4_files:
44 | m = re.search(r"([0-9]{10,})", file)
45 | h264_ids.append(m.group(1))
46 |
47 | for file in h264_files:
48 | m = re.search(r"([0-9]{10,})_(before|after)(\.h264)", file)
49 | vid = m.group(1)
50 | if vid not in h264_ids:
51 | to_process.append(vid)
52 |
53 | to_process = list(set(to_process)) # dedupe
54 |
55 | if args.view:
56 | print("Existing videos")
57 | for file in mp4_files:
58 | print(file)
59 |
60 | print("Recordings without video")
61 | for file in to_process:
62 | print(file)
63 | else:
64 | make_videos(to_process, args.framerate)
65 |
66 |
67 | if __name__ == '__main__':
68 | main()
69 |
--------------------------------------------------------------------------------