├── .github └── dependabot.yml ├── .gitignore ├── 1.jpg ├── 2.jpg ├── 3.jpg ├── LICENSE ├── README.md ├── app.py ├── base_camera.py ├── camera.py ├── camera_opencv.py ├── camera_pi.py ├── conf.json ├── install.sh ├── pushbullet.sh ├── requirements.txt ├── requirements_pi.txt ├── saved_imgs └── saved_imgs.txt └── templates └── index.html /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | # To get started with Dependabot version updates, you'll need to specify which 2 | # package ecosystems to update and where the package manifests are located. 3 | # Please see the documentation for all configuration options: 4 | # https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates 5 | 6 | version: 2 7 | updates: 8 | - package-ecosystem: "pip" # this repo uses pip (requirements.txt) 9 | directory: "/" # Location of package manifests 10 | schedule: 11 | interval: "weekly" 12 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | MANIFEST 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | .pytest_cache/ 49 | 50 | # Translations 51 | *.mo 52 | *.pot 53 | 54 | # Django stuff: 55 | *.log 56 | local_settings.py 57 | db.sqlite3 58 | 59 | # Flask stuff: 60 | instance/ 61 | .webassets-cache 62 | 63 | # Scrapy stuff: 64 | .scrapy 65 | 66 | # Sphinx documentation 67 | docs/_build/ 68 | 69 | # PyBuilder 70 | target/ 71 | 72 | # Jupyter Notebook 73 | .ipynb_checkpoints 74 | 75 | # pyenv 76 | .python-version 77 | 78 | # celery beat schedule file 79 | celerybeat-schedule 80 | 81 | # SageMath parsed files 82 | *.sage.py 83 | 84 | # Environments 85 | .env 86 | .venv 87 | env/ 88 | venv/ 89 | ENV/ 90 | env.bak/ 91 | venv.bak/ 92 | 93 | # Spyder project settings 94 | .spyderproject 95 | .spyproject 96 | 97 | # Rope project settings 98 | .ropeproject 99 | 100 | # mkdocs documentation 101 | /site 102 | 103 | # mypy 104 | .mypy_cache/ 105 | -------------------------------------------------------------------------------- /1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kalfasyan/Home_Surveillance_with_Python/35e6875384a6690a34330eb882dc657d0adcddfd/1.jpg -------------------------------------------------------------------------------- /2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kalfasyan/Home_Surveillance_with_Python/35e6875384a6690a34330eb882dc657d0adcddfd/2.jpg -------------------------------------------------------------------------------- /3.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/kalfasyan/Home_Surveillance_with_Python/35e6875384a6690a34330eb882dc657d0adcddfd/3.jpg -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2014 Miguel Grinberg 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | 23 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Motion Detection, Alerts, Streaming 2 | With this repo you can: 3 | 1. Watch your webcam/camera/picamera feed on localhost [(0.0.0.0:5000)](http://0.0.0.0:5000), served with Flask. 4 | 2. Perform motion detection on the camera feed, using OpenCV. 5 | 3. 
Send alerts / push notifications to your phone, desktop or any other device where Pushbullet is installed, via the Pushbullet API. 6 | 4. Save the images that triggered the alerts to your disk (marking the exact image region of the movement). 7 | 8 | 9 | ## Requirements 10 | I strongly advise you to create a separate virtual environment to avoid Python dependency hell. 11 | Check out "Step 8" from [this nice blog post](https://www.pyimagesearch.com/2015/06/22/install-opencv-3-0-and-python-2-7-on-ubuntu/). 12 | 13 | Get an access token from [Pushbullet](https://www.pushbullet.com/#settings/account). 14 | 15 | Make sure you have 'curl' installed: 16 | ```sudo apt install curl``` 17 | 18 | Python 3 is required. 19 | 20 | ## Installation 21 | Run the install.sh script: 22 | ```./install.sh``` 23 | 24 | ## Usage 25 | ```CAMERA=opencv python3 app.py -c conf.json``` 26 | 27 | Then open this [address (http://0.0.0.0:5000/)](http://0.0.0.0:5000/) in your browser. 28 | 29 | If you run it on a Raspberry Pi (with the camera module enabled and the picamera package installed), uncomment line 13 of app.py: 30 | ```#from camera_pi import Camera``` 31 | and then run: 32 | ```python3 app.py -c conf.json``` 33 | 34 | ### Thanks to: 35 | [miguelgrinberg](https://github.com/miguelgrinberg/flask-video-streaming) - for the Flask streaming part 36 | [Adrian Rosebrock](https://www.pyimagesearch.com/2015/06/01/home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv/) - for the motion detection part 37 | [pushbullet](https://docs.pushbullet.com) - for the alerts part 38 | 39 | #### Troubleshooting 40 | You might have to install these libraries if you get errors complaining about them: 41 | ``` 42 | sudo apt install libhdf5-dev 43 | sudo apt install libhdf5-serial-dev 44 | sudo apt install libqt4-test 45 | sudo apt install libqtgui4 46 | ``` 47 | -------------------------------------------------------------------------------- /app.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | from importlib import import_module 3 | import os 4 | from flask import Flask, render_template, Response 5 | 6 | # import camera driver 7 | if os.environ.get('CAMERA'): 8 | Camera = import_module('camera_' + os.environ['CAMERA']).Camera 9 | else: 10 | from camera import Camera 11 | 12 | # Raspberry Pi camera module (requires picamera package) 13 | #from camera_pi import Camera 14 | 15 | app = Flask(__name__) 16 | 17 | 18 | @app.route('/') 19 | def index(): 20 | """Video streaming home page.""" 21 | return render_template('index.html') 22 | 23 | 24 | def gen(camera): 25 | """Video streaming generator function.""" 26 | while True: 27 | frame = camera.get_frame() 28 | yield (b'--frame\r\n' 29 | b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n') 30 | 31 | 32 | @app.route('/video_feed') 33 | def video_feed(): 34 | """Video streaming route. Put this in the src attribute of an img tag.""" 35 | return Response(gen(Camera()), 36 | mimetype='multipart/x-mixed-replace; boundary=frame') 37 | 38 | 39 | if __name__ == '__main__': 40 | app.run(host='0.0.0.0', threaded=True) 41 | -------------------------------------------------------------------------------- /base_camera.py: -------------------------------------------------------------------------------- 1 | import time 2 | import threading 3 | try: 4 | from greenlet import getcurrent as get_ident 5 | except ImportError: 6 | try: 7 | from thread import get_ident 8 | except ImportError: 9 | from _thread import get_ident 10 | 11 | 12 | class CameraEvent(object): 13 | """An Event-like class that signals all active clients when a new frame is 14 | available. 
15 | """ 16 | def __init__(self): 17 | self.events = {} 18 | 19 | def wait(self): 20 | """Invoked from each client's thread to wait for the next frame.""" 21 | ident = get_ident() 22 | if ident not in self.events: 23 | # this is a new client 24 | # add an entry for it in the self.events dict 25 | # each entry has two elements, a threading.Event() and a timestamp 26 | self.events[ident] = [threading.Event(), time.time()] 27 | return self.events[ident][0].wait() 28 | 29 | def set(self): 30 | """Invoked by the camera thread when a new frame is available.""" 31 | now = time.time() 32 | remove = None 33 | for ident, event in self.events.items(): 34 | if not event[0].is_set(): 35 | # if this client's event is not set, then set it 36 | # also update the last set timestamp to now 37 | event[0].set() 38 | event[1] = now 39 | else: 40 | # if the client's event is already set, it means the client 41 | # did not process a previous frame 42 | # if the event stays set for more than 5 seconds, then assume 43 | # the client is gone and remove it 44 | if now - event[1] > 5: 45 | remove = ident 46 | if remove: 47 | del self.events[remove] 48 | 49 | def clear(self): 50 | """Invoked from each client's thread after a frame was processed.""" 51 | self.events[get_ident()][0].clear() 52 | 53 | 54 | class BaseCamera(object): 55 | thread = None # background thread that reads frames from camera 56 | frame = None # current frame is stored here by background thread 57 | last_access = 0 # time of last client access to the camera 58 | event = CameraEvent() 59 | 60 | def __init__(self): 61 | """Start the background camera thread if it isn't running yet.""" 62 | if BaseCamera.thread is None: 63 | BaseCamera.last_access = time.time() 64 | 65 | # start background frame thread 66 | BaseCamera.thread = threading.Thread(target=self._thread) 67 | BaseCamera.thread.start() 68 | 69 | # wait until frames are available 70 | while self.get_frame() is None: 71 | time.sleep(0) 72 | 73 | def get_frame(self): 74
| """Return the current camera frame.""" 75 | BaseCamera.last_access = time.time() 76 | 77 | # wait for a signal from the camera thread 78 | BaseCamera.event.wait() 79 | BaseCamera.event.clear() 80 | 81 | return BaseCamera.frame 82 | 83 | @staticmethod 84 | def frames(): 85 | """Generator that returns frames from the camera.""" 86 | raise RuntimeError('Must be implemented by subclasses.') 87 | 88 | @classmethod 89 | def _thread(cls): 90 | """Camera background thread.""" 91 | print('Starting camera thread.') 92 | frames_iterator = cls.frames() 93 | for frame in frames_iterator: 94 | BaseCamera.frame = frame 95 | BaseCamera.event.set() # send signal to clients 96 | time.sleep(0) 97 | 98 | # if no client has asked for a frame within the inactivity 99 | # timeout (set very high here to keep the camera running), stop the thread 100 | if time.time() - BaseCamera.last_access > 10*60*500: 101 | frames_iterator.close() 102 | print('Stopping camera thread due to inactivity.') 103 | break 104 | BaseCamera.thread = None 105 | -------------------------------------------------------------------------------- /camera.py: -------------------------------------------------------------------------------- 1 | import time 2 | from base_camera import BaseCamera 3 | 4 | 5 | class Camera(BaseCamera): 6 | """An emulated camera implementation that streams a repeated sequence of 7 | files 1.jpg, 2.jpg and 3.jpg at a rate of one frame per second.""" 8 | imgs = [open(f + '.jpg', 'rb').read() for f in ['1', '2', '3']] 9 | 10 | @staticmethod 11 | def frames(): 12 | while True: 13 | time.sleep(1) 14 | yield Camera.imgs[int(time.time()) % 3] 15 | -------------------------------------------------------------------------------- /camera_opencv.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import warnings 3 | import datetime 4 | import imutils 5 | import json 6 | import time 7 | import cv2 8 | import numpy as np 9 | import os 10 | import io 11 | import time 12 | from
base_camera import BaseCamera 13 | import scipy.misc 14 | 15 | 16 | class Camera(BaseCamera): 17 | video_source = 0 18 | 19 | @staticmethod 20 | def set_video_source(source): 21 | Camera.video_source = source 22 | 23 | @staticmethod 24 | def frames(): 25 | camera = cv2.VideoCapture(Camera.video_source) 26 | if not camera.isOpened(): 27 | raise RuntimeError('Could not start camera.') 28 | 29 | # construct the argument parser and parse the arguments 30 | ap = argparse.ArgumentParser() 31 | ap.add_argument("-c", "--conf", required=True, 32 | help="path to the JSON configuration file") 33 | args = vars(ap.parse_args()) 34 | 35 | # filter warnings and load the JSON 36 | # configuration 37 | warnings.filterwarnings("ignore") 38 | conf = json.load(open(args["conf"])) 39 | # let camera warm up 40 | print("[INFO] warming up...") 41 | time.sleep(conf["camera_warmup_time"]) 42 | 43 | # allow the camera to warmup, then initialize the average frame, last 44 | # uploaded timestamp, and frame motion counter 45 | avg = None 46 | lastUploaded = datetime.datetime.now() 47 | motionCounter = 0 48 | imgCounter = 0 49 | 50 | # capture frames from the camera 51 | while True: 52 | # read the next frame and initialize 53 | # the timestamp and occupied/unoccupied text 54 | ret, frame = camera.read() 55 | 56 | # encode as a jpeg image and return it 57 | yield cv2.imencode('.jpg', frame)[1].tobytes() 58 | 59 | timestamp = datetime.datetime.now() 60 | text = "No motion detected.." 
61 | 62 | # resize the frame, convert it to RGB, 63 | # and make a grayscale copy and blur it 64 | frame = cv2.cvtColor(imutils.resize(frame, width=500), cv2.COLOR_BGR2RGB) 65 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) 66 | gray = cv2.GaussianBlur(gray, (21, 21), 0) 67 | 68 | # if the average frame is None, initialize it 69 | if avg is None: 70 | print("[INFO] starting background model...") 71 | avg = gray.copy().astype("float") 72 | continue 73 | 74 | # accumulate the weighted average between the current frame and 75 | # previous frames, then compute the difference between the current 76 | # frame and running average 77 | cv2.accumulateWeighted(gray, avg, 0.5) 78 | frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg)) 79 | 80 | # threshold the delta image, dilate the thresholded image to fill 81 | # in holes, then find contours on thresholded image 82 | thresh = cv2.threshold(frameDelta, conf["delta_thresh"], 255, 83 | cv2.THRESH_BINARY)[1] 84 | thresh = cv2.dilate(thresh, None, iterations=2) 85 | cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, 86 | cv2.CHAIN_APPROX_SIMPLE) 87 | cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # OpenCV 2/4 return 2 values, OpenCV 3 returns 3 88 | 89 | # loop over the contours 90 | for c in cnts: 91 | # if the contour is too small, ignore it 92 | if cv2.contourArea(c) < conf["min_area"]: 93 | continue 94 | 95 | # compute the bounding box for the contour, draw it on the frame, 96 | # and update the text 97 | (x, y, w, h) = cv2.boundingRect(c) 98 | cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2) 99 | text = "Motion Detected!" 
100 | 101 | # draw the text and timestamp on the frame 102 | ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p") 103 | cv2.putText(frame, "Room Status: {}".format(text), (10, 20), 104 | cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2) 105 | cv2.putText(frame, ts, (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 106 | 0.35, (0, 0, 255), 1) 107 | 108 | 109 | # check to see if the room is occupied 110 | if text == "Motion Detected!": 111 | # check to see if enough time has passed between uploads 112 | if (timestamp - lastUploaded).total_seconds() >= conf["min_upload_seconds"]: 113 | # increment the motion counter 114 | motionCounter += 1 115 | 116 | # check to see if the number of frames with consistent motion is 117 | # high enough 118 | if motionCounter >= conf["min_motion_frames"]: 119 | # update the last uploaded timestamp and reset the motion 120 | # counter 121 | print("[INFO] Motion detected!") 122 | os.system('./pushbullet.sh "Alert Motion Detected"') 123 | 124 | cv2.imwrite('./saved_imgs/outfile'+str(imgCounter)+'.jpg', cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))  # scipy.misc.imsave was removed from SciPy 125 | imgCounter += 1 126 | 127 | lastUploaded = timestamp 128 | motionCounter = 0 129 | # otherwise, the room is not occupied 130 | else: 131 | motionCounter = 0 -------------------------------------------------------------------------------- /camera_pi.py: -------------------------------------------------------------------------------- 1 | # USAGE 2 | # python3 app.py -c conf.json (with the camera_pi import enabled in app.py) 3 | 4 | import io 5 | import time 6 | import argparse 7 | import warnings 8 | import datetime 9 | import imutils 10 | import json 11 | import time 12 | import cv2 13 | import numpy as np 14 | import os 15 | import scipy.misc 16 | import picamera 17 | from base_camera import BaseCamera 18 | from picamera.array import PiRGBArray 19 | from picamera import PiCamera 20 | 21 | class Camera(BaseCamera): 22 | @staticmethod 23 | def frames(): 24 | 25 | with picamera.PiCamera() as camera: 26 | camera.vflip = True 27 | # construct the argument parser and
parse the arguments 28 | ap = argparse.ArgumentParser() 29 | ap.add_argument("-c", "--conf", required=True, 30 | help="path to the JSON configuration file") 31 | args = vars(ap.parse_args()) 32 | 33 | # filter warnings and load the JSON 34 | # configuration 35 | warnings.filterwarnings("ignore") 36 | conf = json.load(open(args["conf"])) 37 | # let camera warm up 38 | print("[INFO] warming up...") 39 | time.sleep(conf["camera_warmup_time"]) 40 | 41 | # initialize the camera and grab a reference to the raw camera capture 42 | camera.resolution = tuple(conf["resolution"]) 43 | camera.framerate = conf["fps"] 44 | rawCapture = io.BytesIO() 45 | 46 | # allow the camera to warmup, then initialize the average frame, last 47 | # uploaded timestamp, and frame motion counter 48 | avg = None 49 | lastUploaded = datetime.datetime.now() 50 | motionCounter = 0 51 | imgCounter = 0 52 | 53 | # capture frames from the camera 54 | for _ in camera.capture_continuous(rawCapture, format="jpeg", use_video_port=True): 55 | # grab the raw NumPy array representing the image and initialize 56 | # the timestamp and occupied/unoccupied text 57 | rawCapture.seek(0) 58 | yield rawCapture.read() 59 | 60 | data = np.frombuffer(rawCapture.getvalue(), dtype=np.uint8)  # np.fromstring is deprecated 61 | # "Decode" the image from the array, preserving colour 62 | frame = cv2.imdecode(data, 1) 63 | 64 | rawCapture.seek(0) 65 | rawCapture.truncate(0) 66 | 67 | timestamp = datetime.datetime.now() 68 | text = "No motion detected.." 
69 | 70 | # resize the frame, convert it to RGB, 71 | # and make a grayscale copy and blur it 72 | frame = cv2.cvtColor(imutils.resize(frame, width=500), cv2.COLOR_BGR2RGB) 73 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) 74 | gray = cv2.GaussianBlur(gray, (21, 21), 0) 75 | 76 | # if the average frame is None, initialize it 77 | if avg is None: 78 | print("[INFO] starting background model...") 79 | avg = gray.copy().astype("float") 80 | rawCapture.truncate(0) 81 | continue 82 | 83 | # accumulate the weighted average between the current frame and 84 | # previous frames, then compute the difference between the current 85 | # frame and running average 86 | cv2.accumulateWeighted(gray, avg, 0.5) 87 | frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg)) 88 | 89 | # threshold the delta image, dilate the thresholded image to fill 90 | # in holes, then find contours on thresholded image 91 | thresh = cv2.threshold(frameDelta, conf["delta_thresh"], 255, 92 | cv2.THRESH_BINARY)[1] 93 | thresh = cv2.dilate(thresh, None, iterations=2) 94 | cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, 95 | cv2.CHAIN_APPROX_SIMPLE) 96 | cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # OpenCV 2/4 return 2 values, OpenCV 3 returns 3 97 | 98 | # loop over the contours 99 | for c in cnts: 100 | # if the contour is too small, ignore it 101 | if cv2.contourArea(c) < conf["min_area"]: 102 | continue 103 | 104 | # compute the bounding box for the contour, draw it on the frame, 105 | # and update the text 106 | (x, y, w, h) = cv2.boundingRect(c) 107 | cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2) 108 | text = "Motion Detected!" 
109 | 110 | # draw the text and timestamp on the frame 111 | ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p") 112 | cv2.putText(frame, "Room Status: {}".format(text), (10, 20), 113 | cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2) 114 | cv2.putText(frame, ts, (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 115 | 0.35, (0, 0, 255), 1) 116 | 117 | 118 | # check to see if the room is occupied 119 | if text == "Motion Detected!": 120 | # check to see if enough time has passed between uploads 121 | if (timestamp - lastUploaded).total_seconds() >= conf["min_upload_seconds"]: 122 | # increment the motion counter 123 | motionCounter += 1 124 | 125 | # check to see if the number of frames with consistent motion is 126 | # high enough 127 | if motionCounter >= conf["min_motion_frames"]: 128 | # update the last uploaded timestamp and reset the motion 129 | # counter 130 | print("[INFO] Motion detected!") 131 | os.system('./pushbullet.sh "Alert Motion Detected"') 132 | 133 | cv2.imwrite('./saved_imgs/outfile'+str(imgCounter)+'.jpg', cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))  # scipy.misc.imsave was removed from SciPy 134 | imgCounter += 1 135 | 136 | lastUploaded = timestamp 137 | motionCounter = 0 138 | # otherwise, the room is not occupied 139 | else: 140 | motionCounter = 0 -------------------------------------------------------------------------------- /conf.json: -------------------------------------------------------------------------------- 1 | { 2 | "show_video": false, 3 | "min_upload_seconds": 3.0, 4 | "min_motion_frames": 8, 5 | "camera_warmup_time": 2.5, 6 | "delta_thresh": 5, 7 | "resolution": [640, 480], 8 | "fps": 16, 9 | "min_area": 5000 10 | } 11 | -------------------------------------------------------------------------------- /install.sh: -------------------------------------------------------------------------------- 1 | echo 'Make sure you have installed curl (sudo apt install curl) and pip3' 2 | echo 'It is also recommended to use a separate virtual environment.' 
3 | echo 'Enter your Pushbullet API key (or create one here: https://www.pushbullet.com/#settings/account)' 4 | read PUSHBULLET_API 5 | echo 'Saving it to .bashrc.' 6 | echo 'export PUSHBULLET_API='"'$PUSHBULLET_API'" >> ~/.bashrc 7 | 8 | read -r -p "Are you using a Raspberry Pi? [y/N] " response 9 | if [[ "$response" =~ ^([yY][eE][sS]|[yY])+$ ]] 10 | then 11 | pip3 install -r requirements_pi.txt 12 | else 13 | pip3 install -r requirements.txt 14 | fi 15 | 16 | echo "Making the alert script executable." 17 | chmod +x pushbullet.sh 18 | 19 | echo "Sourcing .bashrc.." 20 | source ~/.bashrc 21 | 22 | echo 'Sending a test alert!' 23 | MSG="${1:-Test alert from install.sh}" 24 | curl -u "$PUSHBULLET_API": https://api.pushbullet.com/v2/pushes -d type=note -d title="Alert" -d body="$MSG" 25 | -------------------------------------------------------------------------------- /pushbullet.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Set the environment variable in your .bashrc file like so: 4 | # export PUSHBULLET_API="MY_PUSHBULLET_API_KEY" 5 | MSG="$1" 6 | 7 | curl -u "$PUSHBULLET_API": https://api.pushbullet.com/v2/pushes -d type=note -d title="Alert" -d body="$MSG" 8 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | backcall==0.1.0 2 | certifi==2023.7.22 3 | chardet==3.0.4 4 | decorator==4.3.0 5 | dropbox==9.0.0 6 | idna==3.7 7 | imutils==0.4.6 8 | ipython==8.10.0 9 | ipython-genutils==0.2.0 10 | jedi==0.12.1 11 | numpy==1.22.0 12 | parso==0.3.1 13 | pexpect==4.6.0 14 | pickleshare==0.7.4 15 | prompt-toolkit==1.0.15 16 | ptyprocess==0.6.0 17 | Pygments==2.15.0 18 | requests>=2.20.0 19 | simplegeneric==0.8.1 20 | six==1.11.0 21 | traitlets==4.3.2 22 | urllib3>=1.24.2 23 | wcwidth==0.1.7 24 | flask==2.3.2 25 | Pillow>=6.2.2 26 | scipy==1.10.0 27 | opencv-contrib-python==4.2.0.32 28 | 
-------------------------------------------------------------------------------- /requirements_pi.txt: -------------------------------------------------------------------------------- 1 | backcall==0.1.0 2 | certifi==2023.7.22 3 | chardet==3.0.4 4 | decorator==4.3.0 5 | dropbox==9.0.0 6 | idna==3.7 7 | imutils==0.4.6 8 | ipython==8.10.0 9 | ipython-genutils==0.2.0 10 | jedi==0.12.1 11 | numpy==1.22.0 12 | parso==0.3.1 13 | pexpect==4.6.0 14 | picamera==1.13 15 | pickleshare==0.7.4 16 | prompt-toolkit==1.0.15 17 | ptyprocess==0.6.0 18 | Pygments==2.15.0 19 | requests>=2.20.0 20 | simplegeneric==0.8.1 21 | six==1.11.0 22 | traitlets==4.3.2 23 | urllib3>=1.24.2 24 | wcwidth==0.1.7 25 | flask==2.3.2 26 | Pillow>=6.2.2 27 | scipy==1.10.0 28 | opencv-contrib-python==4.2.0.32 29 | -------------------------------------------------------------------------------- /saved_imgs/saved_imgs.txt: -------------------------------------------------------------------------------- 1 | The images that show what triggered the alerts will be stored here 2 | -------------------------------------------------------------------------------- /templates/index.html: -------------------------------------------------------------------------------- 1 | 2 |
3 | <html><head><title>Video Streaming</title></head><body><h1>Video Streaming</h1><img src="{{ url_for('video_feed') }}"></body></html>
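Both camera modules implement the same running-average motion detector: accumulate a weighted background, diff the current frame against it, threshold by `delta_thresh`, then compare the changed region against `min_area`. The snippet below is a NumPy-only sketch of that logic, not code from this repo: `detect_motion` is a hypothetical helper, and the changed-pixel count is a crude stand-in for the contour-area test that `cv2.contourArea` performs in camera_opencv.py / camera_pi.py.

```python
import numpy as np

def detect_motion(frames, alpha=0.5, delta_thresh=5, min_area=5000):
    """Toy running-average detector over grayscale frames.

    alpha mirrors the 0.5 passed to cv2.accumulateWeighted; delta_thresh
    and min_area take the same values as in conf.json.
    """
    avg = None
    flags = []
    for gray in frames:
        gray = gray.astype("float")
        if avg is None:
            # the first frame initialises the background model
            avg = gray.copy()
            flags.append(False)
            continue
        # running average: avg = alpha * gray + (1 - alpha) * avg
        avg = alpha * gray + (1 - alpha) * avg
        # per-pixel difference against the background, then threshold
        delta = np.abs(gray - avg)
        changed = delta > delta_thresh
        # crude area check: count of changed pixels instead of contour area
        flags.append(int(changed.sum()) >= min_area)
    return flags

# a static 640x480 scene, then a frame with a 100x100 bright "intruder"
static = np.zeros((480, 640), dtype=np.uint8)
moving = static.copy()
moving[100:200, 100:200] = 255
print(detect_motion([static, static, moving]))  # [False, False, True]
```

With conf.json's defaults, a region must change by more than `delta_thresh` grey levels over at least `min_area` pixels before a frame counts as motion, which is why small flickers and sensor noise are ignored.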