├── LICENSE
├── README.md
├── blinds.py
├── calibrate.py
├── haarcascade_frontalface_default.xml
├── main.py
└── mypivideostream.py

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2017 etanzapinsky

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# hardware_demo


# voyeur

This is the code for [this](https://www.youtube.com/watch?v=YKtbO6iW9-Y) video by Siraj Raval on YouTube.
Voyeur is the code used to control a robotic Venetian blind. It consists of a Raspberry Pi 3, a camera, and a screen to display the camera output.
It uses OpenCV to control a servo that closes the Venetian blind when it detects a face and opens it when it doesn't.

The goal of the work is to question how we interact with technology. Is looking at our screens similar to looking through windows? What permission do we have to look through these windows? Who is controlling what we see through these screens? What happens if the screens react to our presence?

_NOTE: The project is still in development and this code/instructions are for a very early prototype._

## Materials
* [Raspberry Pi Model 3](https://www.adafruit.com/products/3055)
* [Pi Cobbler and Breakout Cable](https://www.adafruit.com/products/2029)
* [Breadboard](https://www.adafruit.com/product/239)
* [5V Power Supply](https://www.adafruit.com/products/1995)
* [Pibow Case](https://www.adafruit.com/products/2083)
* [Raspberry Pi Camera](https://www.adafruit.com/products/3099)
* [Raspberry Pi Camera Case](https://www.adafruit.com/products/3253)
* [Continuous Rotation Servo](https://www.adafruit.com/products/154)
* [Jumper Wires](https://www.adafruit.com/products/758)

## Construction

### Setup/Install Required Software
This installs OpenCV from a prebuilt binary package. The version is a bit outdated and only has Python 2 bindings, but I've found that an outdated version is much easier than compiling OpenCV from source. PyImageSearch has a tutorial if you'd like to try to [install OpenCV and the Python 3 bindings from source](http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/).
```
sudo apt-get install python-opencv
```

Install the pigpio library to interface with the GPIO pins.
```
sudo apt-get install pigpio
```

Start the pigpio daemon so the library can communicate with the GPIO pins.
```
sudo pigpiod
```

### Setup Hardware
1. Connect and configure the camera module on the Raspberry Pi. [Tutorial](https://www.raspberrypi.org/learning/getting-started-with-picamera/worksheet/)
2. Connect the Pi cobbler and breakout cable to the GPIO pins on the Raspberry Pi on one end and the breadboard on the other.
3. Using the breadboard, connect the servo to the Raspberry Pi GPIO pins. Since we are powering only a single servo, we can connect it directly to the Raspberry Pi. The servo is controlled via Pulse Width Modulation (PWM) signals driven in software by the `pigpio` library. Connect the red cable on the servo to the 5V pin on the Pi, the brown cable to the GND pin, and the yellow cable to pin 18 (the pin we'll use to send the PWM signal). [Reference](https://learn.adafruit.com/adafruits-raspberry-pi-lesson-8-using-a-servo-motor/hardware).
4. Attach the servo to the blind opening/closing mechanism. I strung a piece of wire through the top of the servo to twist the blinds open/closed when the servo turns.

### Putting it all together
Now, with all the requisite software and hardware in place, you can use the code in this repo to control the blinds! Clone this repo and you should be ready to go.

Be forewarned: the code here is still very rough. Before optimizing, I wanted to get this working end-to-end, so the blind control code has no safeguards to prevent the blinds from getting damaged if they're not in the expected state. For example, the main code assumes the blinds start open; if they don't, the code's idea of the blinds' state won't match reality.

Another thing I'm not proud of right now is shelling out to the `pigs` command line tool to send the PWM signals to the servo. In my haste to get the prototype working, I had some trouble working with the pigpio Python bindings directly, but was able to use the command line tool without problems.
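For reference, the `pigs` invocation the blinds code shells out to boils down to building an argv of the form `pigs s <gpio> <pulsewidth>`, where the pulse width is in microseconds (1500 = 1.5ms = stop for this continuous-rotation servo). A minimal sketch of that plumbing — the helper names `pigs_servo_args` and `stop_servo` are illustrative, not part of the repo:

```
from subprocess import call

SERVO_PIN = 18      # GPIO pin carrying the PWM signal, as wired above
NEUTRAL_US = 1500   # 1.5 ms pulse width: continuous-rotation servo stops

def pigs_servo_args(pin, pulse_us):
    # `pigs s <gpio> <pulsewidth>` sets the servo pulse width in microseconds
    return ["pigs", "s", str(pin), str(pulse_us)]

def stop_servo():
    # reset the servo to neutral, as the blinds code does after each move
    call(pigs_servo_args(SERVO_PIN, NEUTRAL_US))
```

If the pigpio Python bindings cooperate, the same pulse can be sent in-process with `pigpio.pi().set_servo_pulsewidth(18, 1500)`, which avoids spawning a process per command.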

## Resources
PWM software control on the Pi
* [pigpio library/resources](http://abyz.co.uk/rpi/pigpio/index.html)
* [pigpio servo control command](http://abyz.co.uk/rpi/pigpio/pigs.html#S/SERVO)

PyImageSearch Python OpenCV Tutorials
* [PyImageSearch face detector in 5 minutes](http://www.pyimagesearch.com/2015/05/11/creating-a-face-detection-api-with-python-and-opencv-in-just-5-minutes/)
* [PyImageSearch increase Pi camera FPS](http://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/)

# Credits

Credits go to [Etan](https://github.com/etanzapinsky) and Siraj.
--------------------------------------------------------------------------------

/blinds.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python2

import time

from subprocess import call

# use gpio pin 18 to send pwm pulses to the servo
PIN = 18

# pwm pulse width in microseconds -- a 1500us (1.5ms) pulse doesn't move the servo
# from product page: https://www.adafruit.com/products/154
# Position "90" (1.5ms pulse) is stop, "180" (2ms pulse) is full speed forward,
# "0" (1ms pulse) is full speed backwards.
NEUTRAL = 1500
# +200us on the base pulse to turn the blinds down slowly
DOWN = 200
# -200us on the base pulse to turn the blinds up slowly
UP = -200

# time in seconds to let the servo run per move
SERVO_RUNTIME = 4

class Blinds:
    def __init__(self):
        self.opened = True  # assume the blinds start open
        self.closed = False
        self.moving = False

    def run_servo(self, direction):
        '''
        Used to encapsulate turning the servo.

        Right now this is a hack that shells out to the `pigs` command line tool,
        which works exactly as expected. I don't know how to control the servo
        correctly yet with the Python bindings of the pigpio library.
        Once I learn how to do that, I'll change the underlying implementation
        of this method.
        '''
        call(["pigs", "s", "{}".format(PIN), "{}".format(direction)])

    def open(self):
        # set that we're no longer closed
        self.closed = False

        # currently already open or in the process of moving
        if self.opened or self.moving:
            return

        # debugging output
        print('open')

        # set that we're moving
        self.moving = True

        # send pwm signal to open the blinds
        self.run_servo(NEUTRAL + UP)

        # sleep to let the servo move
        time.sleep(SERVO_RUNTIME)

        # reset servo to neutral to stop moving
        self.run_servo(NEUTRAL)

        # set that we've finished moving
        self.moving = False

        # set that we're open
        self.opened = True

    def close(self):
        # set that we're no longer open
        self.opened = False

        # currently already closed or in the process of moving
        if self.closed or self.moving:
            return

        # debugging output
        print('close')

        # set that we're moving
        self.moving = True

        # send pwm signal to close the blinds
        self.run_servo(NEUTRAL + DOWN)

        # sleep to let the servo move
        time.sleep(SERVO_RUNTIME)

        # reset servo to neutral to stop moving
        self.run_servo(NEUTRAL)

        # set that we've finished moving
        self.moving = False

        # set that we're closed
        self.closed = True
--------------------------------------------------------------------------------

/calibrate.py:
--------------------------------------------------------------------------------
import time

from blinds import Blinds, NEUTRAL, UP, DOWN

# janky way to calibrate the blinds to be open/closed by the right amount:
# edit this file to change UP/DOWN to move the blinds in the desired direction,
# save, and then run it
def main():
    blinds = Blinds()
    blinds.run_servo(NEUTRAL
                     + UP)
    time.sleep(1)
    blinds.run_servo(NEUTRAL)


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------

/main.py:
--------------------------------------------------------------------------------
import cv2

from mypivideostream import PiVideoStream
from blinds import Blinds

if __name__ == '__main__':
    cv2.namedWindow("Frame", cv2.WINDOW_NORMAL)

    # start and get contents of stream
    vs = PiVideoStream().start()

    # create blinds object
    blinds = Blinds()

    while True:
        image = vs.read()
        if image is None:
            continue

        cv2.imshow("Frame", image)
        # give the GUI a chance to draw the frame; press 'q' to quit
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

        # get matching rects
        rects = vs.read_rects()

        # there is a face in the frame so there are bounding rectangles that match
        if len(rects) > 0:
            blinds.close()
        else:
            blinds.open()

    vs.stop()
    cv2.destroyAllWindows()
--------------------------------------------------------------------------------

/mypivideostream.py:
--------------------------------------------------------------------------------
import os
import cv2

from picamera.array import PiRGBArray
from picamera import PiCamera
from threading import Thread

# path to training data for HAAR face classifier
FACE_DETECTOR_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                                  'haarcascade_frontalface_default.xml')

# http://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
class PiVideoStream:
    def __init__(self, resolution=(640, 480), framerate=32, save_image_interval=1):
        '''
        @param save_image_interval: interval in seconds to save an image
                                    (currently unused)
        '''
        # initialize the camera and stream
        self.camera = PiCamera()
        self.camera.resolution = resolution
        self.camera.framerate = framerate
        self.rawCapture = PiRGBArray(self.camera,
                                     size=resolution)
        self.stream = self.camera.capture_continuous(self.rawCapture,
                                                     format="bgr", use_video_port=True)

        # initialize the frame and the variable used to indicate
        # if the thread should be stopped
        self.frame = None
        self.rects = []  # list of matching faces
        self.stopped = False

    def start(self):
        # start the thread to read frames from the video stream
        t = Thread(target=self.update, args=())
        t.daemon = True
        t.start()
        return self

    def update(self):
        # load the face cascade detector once rather than on every frame
        # Using data trained from here:
        # http://www.pyimagesearch.com/2015/05/11/creating-a-face-detection-api-with-python-and-opencv-in-just-5-minutes/
        detector = cv2.CascadeClassifier(FACE_DETECTOR_PATH)

        # keep looping infinitely until the thread is stopped
        for f in self.stream:
            # grab the frame from the stream and clear the stream in
            # preparation for the next frame
            self.frame = f.array
            self.rawCapture.truncate(0)

            # convert the image to grayscale and detect faces in it
            image = cv2.cvtColor(self.frame, cv2.COLOR_BGR2GRAY)
            rects = detector.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5,
                                              minSize=(30, 30), flags=cv2.cv.CV_HAAR_SCALE_IMAGE)

            # construct a list of bounding boxes from the detection
            self.rects = [(int(x), int(y), int(x + w), int(y + h)) for (x, y, w, h) in rects]

            # if the thread indicator variable is set, stop the thread
            # and release camera resources
            if self.stopped:
                self.stream.close()
                self.rawCapture.close()
                self.camera.close()
                return

    def read(self):
        # return the frame most recently read
        return self.frame

    def read_rects(self):
        # return the matching rectangles most recently read after processing
        return self.rects

    def stop(self):
        # indicate that the thread should be stopped
        self.stopped = True
--------------------------------------------------------------------------------