├── README.md └── examples ├── analog-gauge-reader ├── LICENSE ├── README.md ├── analog_gauge_reader.py └── images │ ├── gauge-1-calibration.jpg │ ├── gauge-1-lines.jpg │ ├── gauge-1.jpg │ ├── gauge-2.jpg │ └── screen-prompt.jpg └── motion-heatmap ├── LICENSE ├── README.md ├── images └── diff-overlay.jpg └── motion-heatmap.py /README.md: -------------------------------------------------------------------------------- 1 | # DISCONTINUATION OF PROJECT # 2 | This project will no longer be maintained by Intel. 3 | Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project. 4 | Intel no longer accepts patches to this project. 5 | # python-cv-samples 6 | Python computer vision samples 7 | -------------------------------------------------------------------------------- /examples/analog-gauge-reader/LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | Copyright © 2014-2017 Intel Corporation 3 | 4 | Permission is hereby granted, free of charge, to any person obtaining 5 | a copy of this software and associated documentation files (the 6 | "Software"), to deal in the Software without restriction, including 7 | without limitation the rights to use, copy, modify, merge, publish, 8 | distribute, sublicense, and/or sell copies of the Software, and to 9 | permit persons to whom the Software is furnished to do so, subject to 10 | the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be 13 | included in all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 17 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 18 | NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE 19 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 20 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 21 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 22 | -------------------------------------------------------------------------------- /examples/analog-gauge-reader/README.md: -------------------------------------------------------------------------------- 1 | # Analog Gauge Reader 2 | 3 | This sample application takes an image or video frame of an analog gauge and reads the value using functions from the OpenCV\* computer vision library. 4 | It consists of two parts: calibration and measurement. During calibration, the application calibrates an image 5 | of a gauge (provided by the user) by prompting the user to enter the range of gauge values in degrees. It then uses these 6 | calibrated values in the measurement stage to convert the angle of the dial into a meaningful value. 7 | 8 | ## What you’ll learn 9 | * Circle detection 10 | * Line detection 11 | 12 | ## Gather your materials 13 | * Python\* 2.7 (the script uses Python 2 constructs such as `print` statements and `raw_input`) 14 | * OpenCV version 3.3.0 or greater 15 | * An image of a gauge (or you can use the sample one provided) 16 | 17 | ## Setup 18 | 1. Take a picture of a gauge or use the gauge-1.jpg provided. If you name it something other than gauge-1.jpg, make sure to 19 | change that in the `main()` function. 20 | 2. Run the application with `python analog_gauge_reader.py` and enter the requested values, using the output file gauge-#-calibration.jpg to determine the values. Here's an example of what the calibration image looks like: 21 | ![](images/gauge-1-calibration.jpg) 22 | 23 | For the calibration image above, you would enter the following values: 24 | ![](images/screen-prompt.jpg) 25 | 26 | 3. By default, the application reads the value of the gauge in the image you used for calibration. For the provided image, it gives a result of 16.4 psi. 
Not bad. 27 | 28 | Original image: 29 | ![](images/gauge-1.jpg) 30 | 31 | Found line (not normally an output, just to show more of what's going on): 32 | ![](images/gauge-1-lines.jpg) 33 | 34 | gauge-2.jpg is provided for the user to try. 35 | 36 | ## Get the Code 37 | Code is included in this folder of the repository in the .py file. 38 | 39 | ## How it works 40 | The main functions used from the OpenCV\* library are `HoughCircles` (to detect the outline of the gauge and its center point) and `HoughLinesP` (to detect the dial). 41 | 42 | Basic filtering is done as follows: 43 | For circles (this happens in `calibrate_gauge()`): 44 | * only keep circles from HoughCircles whose radius is within a reasonable range of the image height (this assumes the gauge takes up most of the view) 45 | * average the resulting circles and use the average for the center point and radius 46 | For lines (this happens in `get_current_value()`): 47 | * apply a threshold using `cv2.threshold` with `cv2.THRESH_BINARY_INV`; a threshold of 175 and a maxValue of 255 work fine 48 | * remove all lines outside a given radius 49 | * check if a line is within an acceptable range of the radius 50 | * use the first acceptable line as the dial 51 | 52 | ### Optimization tips: 53 | If you're struggling to get your gauge to work, here are some tips: 54 | * Good lighting is key. Make sure there are no shadows if possible. 55 | * Gauges with very thin or small dials may not work well; thicker dials work better. 56 | * `diff1LowerBound, diff1UpperBound, diff2LowerBound, diff2UpperBound` determine how lines that don't represent the dial are filtered out. You may need to adjust these if no lines are found. 57 | 58 | 59 | There is a considerable amount of trigonometry involved in creating the calibration image: mainly sine and cosine to plot the calibration image lines, and arctangent to get the angle of the dial. 
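As a hedged sketch of that angle math (not the sample's exact code: `np.arctan2` stands in for the sample's `arctan` plus per-quadrant corrections, and the calibration numbers used below are hypothetical), converting a detected dial line into a reading looks roughly like this:

```python
import numpy as np

def angle_to_value(x, y, x_tip, y_tip,
                   min_angle, max_angle, min_value, max_value):
    # (x, y) is the gauge center; (x_tip, y_tip) is the far end of the dial line.
    # Image y grows downward, so flip it to get a standard cartesian y.
    x_rel = x_tip - x
    y_rel = y - y_tip
    # arctan2 handles all four quadrants in one call: 0 deg = +x axis, CCW positive.
    deg = np.rad2deg(np.arctan2(y_rel, x_rel))
    # Convert to the gauge convention: 0/360 at the bottom (-y axis), clockwise.
    gauge_deg = (270.0 - deg) % 360.0
    # Linearly map the calibrated angle range onto the calibrated value range.
    return (gauge_deg - min_angle) * (max_value - min_value) / (max_angle - min_angle) + min_value
```

For a center at (100, 100), a dial pointing straight up, and a hypothetical 45-315 degree sweep reading 0-100, this returns the midpoint of the scale, 50.0.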
By default, the angles run clockwise with 0/360 on the +x axis (imagining a cartesian grid centered on the gauge). The addition (i+9) in the calculation of p_text[i][j] rotates the labels by 90 degrees so that 0/360 ends up on the -y axis, i.e. at the bottom of the gauge. So this 60 | implementation assumes the gauge is aligned in the image, but it can be adjusted by changing the value of 9 to something else. 61 | 62 | IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments. 63 | -------------------------------------------------------------------------------- /examples/analog-gauge-reader/analog_gauge_reader.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Copyright (c) 2017 Intel Corporation. 3 | Licensed under the MIT license. See LICENSE file in the project root for full license information. 
4 | ''' 5 | 6 | import cv2 7 | import numpy as np 8 | #import paho.mqtt.client as mqtt 9 | import time 10 | 11 | def avg_circles(circles, b): 12 | avg_x=0 13 | avg_y=0 14 | avg_r=0 15 | for i in range(b): 16 | #optional - average for multiple circles (can happen when a gauge is at a slight angle) 17 | avg_x = avg_x + circles[0][i][0] 18 | avg_y = avg_y + circles[0][i][1] 19 | avg_r = avg_r + circles[0][i][2] 20 | avg_x = int(avg_x/(b)) 21 | avg_y = int(avg_y/(b)) 22 | avg_r = int(avg_r/(b)) 23 | return avg_x, avg_y, avg_r 24 | 25 | def dist_2_pts(x1, y1, x2, y2): 26 | #print np.sqrt((x2-x1)^2+(y2-y1)^2) 27 | return np.sqrt((x2 - x1)**2 + (y2 - y1)**2) 28 | 29 | def calibrate_gauge(gauge_number, file_type): 30 | ''' 31 | This function should be run using a test image in order to calibrate the range available to the dial as well as the 32 | units. It works by first finding the center point and radius of the gauge. Then it draws lines at hard coded intervals 33 | (separation) in degrees. It then prompts the user to enter position in degrees of the lowest possible value of the gauge, 34 | as well as the starting value (which is probably zero in most cases but it won't assume that). It will then ask for the 35 | position in degrees of the largest possible value of the gauge. Finally, it will ask for the units. This assumes that 36 | the gauge is linear (as most probably are). 37 | It will return the min value with angle in degrees (as a tuple), the max value with angle in degrees (as a tuple), 38 | and the units (as a string). 
39 | ''' 40 | 41 | img = cv2.imread('gauge-%s.%s' %(gauge_number, file_type)) 42 | height, width = img.shape[:2] 43 | gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #convert to gray 44 | #gray = cv2.GaussianBlur(gray, (5, 5), 0) 45 | # gray = cv2.medianBlur(gray, 5) 46 | 47 | #for testing, output gray image 48 | #cv2.imwrite('gauge-%s-bw.%s' %(gauge_number, file_type),gray) 49 | 50 | #detect circles 51 | #restricting the search from 35-48% of the possible radii gives fairly good results across different samples. Remember that 52 | #these are pixel values which correspond to the possible radii search range. 53 | circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 20, np.array([]), 100, 50, int(height*0.35), int(height*0.48)) 54 | # average the found circles; this proved more accurate than trying to tune HoughCircles parameters to get just the right one 55 | a, b, c = circles.shape 56 | x,y,r = avg_circles(circles, b) 57 | 58 | #draw center and circle 59 | cv2.circle(img, (x, y), r, (0, 0, 255), 3, cv2.LINE_AA) # draw circle 60 | cv2.circle(img, (x, y), 2, (0, 255, 0), 3, cv2.LINE_AA) # draw center of circle 61 | 62 | #for testing, output circles on image 63 | #cv2.imwrite('gauge-%s-circles.%s' % (gauge_number, file_type), img) 64 | 65 | 66 | #for calibration, plot lines from center going out at every 10 degrees and add marker 67 | #for i from 0 to 36 (every 10 deg) 68 | 69 | ''' 70 | goes through the motion of a circle and sets x and y values based on the set separation spacing. Also adds a text label to each 71 | line. These lines and text labels serve as the reference points for the angle values the user will enter during calibration. 72 | NOTE: by default this approach sets 0/360 to be the +x axis (if the image has a cartesian grid in the middle), the addition 73 | (i+9) in the text offset rotates the labels by 90 degrees so 0/360 is at the bottom (-y in cartesian). So this assumes the 74 | gauge is aligned in the image, but it can be adjusted by changing the value of 9 to something else. 
75 | ''' 76 | separation = 10.0 #in degrees 77 | interval = int(360 / separation) 78 | p1 = np.zeros((interval,2)) #set empty arrays 79 | p2 = np.zeros((interval,2)) 80 | p_text = np.zeros((interval,2)) 81 | for i in range(0,interval): 82 | for j in range(0,2): 83 | if (j%2==0): 84 | p1[i][j] = x + 0.9 * r * np.cos(separation * i * 3.14 / 180) #point for lines 85 | else: 86 | p1[i][j] = y + 0.9 * r * np.sin(separation * i * 3.14 / 180) 87 | text_offset_x = 10 88 | text_offset_y = 5 89 | for i in range(0, interval): 90 | for j in range(0, 2): 91 | if (j % 2 == 0): 92 | p2[i][j] = x + r * np.cos(separation * i * 3.14 / 180) 93 | p_text[i][j] = x - text_offset_x + 1.2 * r * np.cos((separation) * (i+9) * 3.14 / 180) #point for text labels, i+9 rotates the labels by 90 degrees 94 | else: 95 | p2[i][j] = y + r * np.sin(separation * i * 3.14 / 180) 96 | p_text[i][j] = y + text_offset_y + 1.2* r * np.sin((separation) * (i+9) * 3.14 / 180) # point for text labels, i+9 rotates the labels by 90 degrees 97 | 98 | #add the lines and labels to the image 99 | for i in range(0,interval): 100 | cv2.line(img, (int(p1[i][0]), int(p1[i][1])), (int(p2[i][0]), int(p2[i][1])),(0, 255, 0), 2) 101 | cv2.putText(img, '%s' %(int(i*separation)), (int(p_text[i][0]), int(p_text[i][1])), cv2.FONT_HERSHEY_SIMPLEX, 0.3,(0,0,0),1,cv2.LINE_AA) 102 | 103 | cv2.imwrite('gauge-%s-calibration.%s' % (gauge_number, file_type), img) 104 | 105 | #get user input on min, max, values, and units 106 | print 'gauge number: %s' %gauge_number 107 | min_angle = raw_input('Min angle (lowest possible angle of dial) - in degrees: ') #the lowest possible angle 108 | max_angle = raw_input('Max angle (highest possible angle) - in degrees: ') #highest possible angle 109 | min_value = raw_input('Min value: ') #usually zero 110 | max_value = raw_input('Max value: ') #maximum reading of the gauge 111 | units = raw_input('Enter units: ') 112 | 113 | #for testing purposes: hardcode and comment out raw_inputs above 114 | # 
min_angle = 45 115 | # max_angle = 320 116 | # min_value = 0 117 | # max_value = 200 118 | # units = "PSI" 119 | 120 | return min_angle, max_angle, min_value, max_value, units, x, y, r 121 | 122 | def get_current_value(img, min_angle, max_angle, min_value, max_value, x, y, r, gauge_number, file_type): 123 | 124 | #for testing purposes 125 | #img = cv2.imread('gauge-%s.%s' % (gauge_number, file_type)) 126 | 127 | gray2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) 128 | 129 | # Set threshold and maxValue 130 | thresh = 175 131 | maxValue = 255 132 | 133 | # for testing purposes, found cv2.THRESH_BINARY_INV to perform the best 134 | # th, dst1 = cv2.threshold(gray2, thresh, maxValue, cv2.THRESH_BINARY) 135 | # th, dst2 = cv2.threshold(gray2, thresh, maxValue, cv2.THRESH_BINARY_INV) 136 | # th, dst3 = cv2.threshold(gray2, thresh, maxValue, cv2.THRESH_TRUNC) 137 | # th, dst4 = cv2.threshold(gray2, thresh, maxValue, cv2.THRESH_TOZERO) 138 | # th, dst5 = cv2.threshold(gray2, thresh, maxValue, cv2.THRESH_TOZERO_INV) 139 | # cv2.imwrite('gauge-%s-dst1.%s' % (gauge_number, file_type), dst1) 140 | # cv2.imwrite('gauge-%s-dst2.%s' % (gauge_number, file_type), dst2) 141 | # cv2.imwrite('gauge-%s-dst3.%s' % (gauge_number, file_type), dst3) 142 | # cv2.imwrite('gauge-%s-dst4.%s' % (gauge_number, file_type), dst4) 143 | # cv2.imwrite('gauge-%s-dst5.%s' % (gauge_number, file_type), dst5) 144 | 145 | # apply thresholding, which helps for finding lines 146 | th, dst2 = cv2.threshold(gray2, thresh, maxValue, cv2.THRESH_BINARY_INV) 147 | 148 | # we found Hough lines generally perform better without Canny / blurring, though there were a couple of exceptions where it only worked with Canny / blurring 149 | #dst2 = cv2.medianBlur(dst2, 5) 150 | #dst2 = cv2.Canny(dst2, 50, 150) 151 | #dst2 = cv2.GaussianBlur(dst2, (5, 5), 0) 152 | 153 | # for testing, show image after thresholding 154 | cv2.imwrite('gauge-%s-tempdst2.%s' % (gauge_number, file_type), dst2) 155 | 156 | # find lines 157 | 
minLineLength = 10 158 | maxLineGap = 0 159 | lines = cv2.HoughLinesP(image=dst2, rho=3, theta=np.pi / 180, threshold=100, minLineLength=minLineLength, maxLineGap=maxLineGap) # rho is set to 3 to detect more lines; it's easier to get more and then filter them out later 160 | 161 | #for testing purposes, show all found lines 162 | # for i in range(0, len(lines)): 163 | # for x1, y1, x2, y2 in lines[i]: 164 | # cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2) 165 | # cv2.imwrite('gauge-%s-lines-test.%s' %(gauge_number, file_type), img) 166 | 167 | # remove all lines outside a given radius 168 | final_line_list = [] 169 | #print "radius: %s" %r 170 | 171 | diff1LowerBound = 0.15 #diff1LowerBound and diff1UpperBound determine how close the line should be to the center 172 | diff1UpperBound = 0.25 173 | diff2LowerBound = 0.5 #diff2LowerBound and diff2UpperBound determine how close the other point of the line should be to the outside of the gauge 174 | diff2UpperBound = 1.0 175 | for i in range(0, len(lines)): 176 | for x1, y1, x2, y2 in lines[i]: 177 | diff1 = dist_2_pts(x, y, x1, y1) # x, y is center of circle 178 | diff2 = dist_2_pts(x, y, x2, y2) # x, y is center of circle 179 | #set diff1 to be the smaller (closest to the center) of the two; makes the math easier 180 | if (diff1 > diff2): 181 | temp = diff1 182 | diff1 = diff2 183 | diff2 = temp 184 | # check if line is within an acceptable range 185 | if (((diff1<diff1UpperBound*r) and (diff1>diff1LowerBound*r) and (diff2<diff2UpperBound*r)) and (diff2>diff2LowerBound*r)): 186 | line_length = dist_2_pts(x1, y1, x2, y2) 187 | # add to final list 188 | final_line_list.append([x1, y1, x2, y2]) 189 | 190 | #testing only, show all lines after filtering 191 | # for i in range(0,len(final_line_list)): 192 | # x1 = final_line_list[i][0] 193 | # y1 = final_line_list[i][1] 194 | # x2 = final_line_list[i][2] 195 | # y2 = final_line_list[i][3] 196 | # cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2) 197 | 198 | # assumes the first line is the best one 199 | x1 = final_line_list[0][0] 200 | y1 = 
final_line_list[0][1] 201 | x2 = final_line_list[0][2] 202 | y2 = final_line_list[0][3] 203 | cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2) 204 | 205 | #for testing purposes, show the line overlaid on the original image 206 | #cv2.imwrite('gauge-1-test.jpg', img) 207 | cv2.imwrite('gauge-%s-lines-2.%s' % (gauge_number, file_type), img) 208 | 209 | #find the farthest point from the center to be what is used to determine the angle 210 | dist_pt_0 = dist_2_pts(x, y, x1, y1) 211 | dist_pt_1 = dist_2_pts(x, y, x2, y2) 212 | if (dist_pt_0 > dist_pt_1): 213 | x_angle = x1 - x 214 | y_angle = y - y1 215 | else: 216 | x_angle = x2 - x 217 | y_angle = y - y2 218 | # take the arc tan of y/x to find the angle 219 | res = np.arctan(np.divide(float(y_angle), float(x_angle))) 220 | #np.rad2deg(res) #converts to degrees 221 | 222 | # print x_angle 223 | # print y_angle 224 | # print res 225 | # print np.rad2deg(res) 226 | 227 | #these were determined by trial and error 228 | res = np.rad2deg(res) 229 | if x_angle > 0 and y_angle > 0: #in quadrant I 230 | final_angle = 270 - res 231 | if x_angle < 0 and y_angle > 0: #in quadrant II 232 | final_angle = 90 - res 233 | if x_angle < 0 and y_angle < 0: #in quadrant III 234 | final_angle = 90 - res 235 | if x_angle > 0 and y_angle < 0: #in quadrant IV 236 | final_angle = 270 - res 237 | 238 | #print final_angle 239 | 240 | old_min = float(min_angle) 241 | old_max = float(max_angle) 242 | 243 | new_min = float(min_value) 244 | new_max = float(max_value) 245 | 246 | old_value = final_angle 247 | 248 | old_range = (old_max - old_min) 249 | new_range = (new_max - new_min) 250 | new_value = (((old_value - old_min) * new_range) / old_range) + new_min 251 | 252 | return new_value 253 | 254 | def main(): 255 | gauge_number = 1 256 | file_type='jpg' 257 | # name the calibration image of your gauge 'gauge-#.jpg', for example 'gauge-5.jpg'. 
It's written this way so you can easily try multiple images 258 | min_angle, max_angle, min_value, max_value, units, x, y, r = calibrate_gauge(gauge_number, file_type) 259 | 260 | #feed an image (or frame) to get the current value, based on the calibration, by default uses same image as calibration 261 | img = cv2.imread('gauge-%s.%s' % (gauge_number, file_type)) 262 | val = get_current_value(img, min_angle, max_angle, min_value, max_value, x, y, r, gauge_number, file_type) 263 | print "Current reading: %s %s" %(val, units) 264 | 265 | if __name__=='__main__': 266 | main() 267 | 268 | -------------------------------------------------------------------------------- /examples/analog-gauge-reader/images/gauge-1-calibration.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/intel-iot-devkit/python-cv-samples/c998b59d041ccea44ef86abe88d755f20d495139/examples/analog-gauge-reader/images/gauge-1-calibration.jpg -------------------------------------------------------------------------------- /examples/analog-gauge-reader/images/gauge-1-lines.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/intel-iot-devkit/python-cv-samples/c998b59d041ccea44ef86abe88d755f20d495139/examples/analog-gauge-reader/images/gauge-1-lines.jpg -------------------------------------------------------------------------------- /examples/analog-gauge-reader/images/gauge-1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/intel-iot-devkit/python-cv-samples/c998b59d041ccea44ef86abe88d755f20d495139/examples/analog-gauge-reader/images/gauge-1.jpg -------------------------------------------------------------------------------- /examples/analog-gauge-reader/images/gauge-2.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/intel-iot-devkit/python-cv-samples/c998b59d041ccea44ef86abe88d755f20d495139/examples/analog-gauge-reader/images/gauge-2.jpg -------------------------------------------------------------------------------- /examples/analog-gauge-reader/images/screen-prompt.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/intel-iot-devkit/python-cv-samples/c998b59d041ccea44ef86abe88d755f20d495139/examples/analog-gauge-reader/images/screen-prompt.jpg -------------------------------------------------------------------------------- /examples/motion-heatmap/LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | Copyright © 2014-2017 Intel Corporation 3 | 4 | Permission is hereby granted, free of charge, to any person obtaining 5 | a copy of this software and associated documentation files (the 6 | "Software"), to deal in the Software without restriction, including 7 | without limitation the rights to use, copy, modify, merge, publish, 8 | distribute, sublicense, and/or sell copies of the Software, and to 9 | permit persons to whom the Software is furnished to do so, subject to 10 | the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be 13 | included in all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 17 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 18 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE 19 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 20 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 21 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
22 | -------------------------------------------------------------------------------- /examples/motion-heatmap/README.md: -------------------------------------------------------------------------------- 1 | # Motion Heatmap 2 | 3 | This sample application is useful for seeing movement patterns over time. For example, it could be used to see the usage of entrances to a factory floor over time, or patterns of shoppers in a store. 4 | 5 | ## What you’ll learn 6 | * Background subtraction 7 | * Application of a threshold 8 | * Accumulation of changed pixels over time 9 | * Application of a color/heat map 10 | 11 | ## Gather your materials 12 | * Python\* 2.7 or greater 13 | * OpenCV version 3.3.0 or greater 14 | * The vtest.avi video from https://github.com/opencv/opencv/blob/master/samples/data/vtest.avi 15 | 16 | ## Setup 17 | 1. You need the OpenCV extra modules installed for the MOG background subtractor. This tutorial was tested on Windows\*, and the easiest way to install them was using: 18 | ``` 19 | pip install opencv-contrib-python 20 | ``` 21 | 2. Download the vtest.avi video from https://github.com/opencv/opencv/blob/master/samples/data/vtest.avi and put it in the same folder as the Python script. 22 | 3. Run the Python script. You should see a diff-overlay.jpg when it's done. 23 | 24 | ![](images/diff-overlay.jpg) 25 | 26 | ## Get the Code 27 | Code is included in this folder of the repository in the .py file. 
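The movement-pattern idea above can be sketched with NumPy alone (a hedged sketch: tiny synthetic frames stand in for the video, and a plain absolute frame difference stands in for the MOG background subtractor):

```python
import numpy as np

def accumulate_motion(frames, thresh=2, max_value=2):
    # frames: a sequence of equal-sized uint8 grayscale images.
    # A plain absolute frame difference stands in for the MOG subtractor.
    accum = np.zeros_like(frames[0], dtype=np.uint16)
    prev = frames[0].astype(np.int16)
    for frame in frames[1:]:
        cur = frame.astype(np.int16)
        diff = np.abs(cur - prev).astype(np.uint8)
        # binary threshold: pixels that changed by more than thresh contribute max_value
        mask = np.where(diff > thresh, max_value, 0).astype(np.uint16)
        accum += mask  # the accumulation image only ever grows
        prev = cur
    # cap at 255 so the result can be treated as an 8-bit image for color mapping
    return np.minimum(accum, 255).astype(np.uint8)
```

Pixels that change in many frames accumulate larger values, so they read as "hot" once a color map is applied.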
28 | 29 | ## How it works 30 | The main APIs used in OpenCV are: 31 | * MOG background subtractor (cv2.bgsegm.createBackgroundSubtractorMOG()) - https://docs.opencv.org/3.0-beta/modules/video/doc/motion_analysis_and_object_tracking.html?highlight=createbackgroundsubtractormog#createbackgroundsubtractormog 32 | Note: the docs are out of date, and the proper way to initialize is 33 | ``` 34 | cv2.bgsegm.createBackgroundSubtractorMOG() 35 | ``` 36 | * cv2.threshold() - https://docs.opencv.org/3.3.1/d7/d4d/tutorial_py_thresholding.html 37 | * cv2.add() - https://docs.opencv.org/3.2.0/d0/d86/tutorial_py_image_arithmetics.html 38 | * cv2.applyColorMap() - https://docs.opencv.org/3.0-beta/modules/imgproc/doc/colormaps.html 39 | * cv2.addWeighted() - https://docs.opencv.org/3.2.0/d0/d86/tutorial_py_image_arithmetics.html 40 | 41 | The application takes each frame and first applies background subtraction using the cv2.bgsegm.createBackgroundSubtractorMOG() object to create a mask. A threshold is then applied to the mask to remove small amounts of movement, and also to set the accumulation value for each iteration. The result of the threshold is added to an accumulation image (one that starts out all zeros and is added to on each iteration, never decremented), which is what records the motion. At the very end, a color map is applied to the accumulated image so it's easier to see the motion. This colored image is then combined with a copy of the first frame using cv2.addWeighted to accomplish the overlay. 42 | 43 | IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. 
No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments. 44 | -------------------------------------------------------------------------------- /examples/motion-heatmap/images/diff-overlay.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/intel-iot-devkit/python-cv-samples/c998b59d041ccea44ef86abe88d755f20d495139/examples/motion-heatmap/images/diff-overlay.jpg -------------------------------------------------------------------------------- /examples/motion-heatmap/motion-heatmap.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Copyright (c) 2017 Intel Corporation. 3 | Licensed under the MIT license. See LICENSE file in the project root for full license information. 4 | ''' 5 | 6 | import numpy as np 7 | import cv2 8 | import copy 9 | 10 | def main(): 11 | cap = cv2.VideoCapture('vtest.avi') 12 | # pip install opencv-contrib-python 13 | fgbg = cv2.bgsegm.createBackgroundSubtractorMOG() 14 | 15 | # number of frames is a variable for development purposes, you can change the for loop to a while(cap.isOpened()) instead to go through the whole video 16 | num_frames = 350 17 | 18 | first_iteration_indicator = 1 19 | for i in range(0, num_frames): 20 | ''' 21 | There are some important reasons this if statement exists: 22 | -in the first run there is no previous frame, so this accounts for that 23 | -the first frame is saved to be used for the overlay after the accumulation has occurred 24 | -the height and width of the video are used to create an empty image for accumulation (accum_image) 25 | ''' 26 | if (first_iteration_indicator == 1): 27 | ret, frame = cap.read() 28 | first_frame = copy.deepcopy(frame) 29 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) 30 | height, width = gray.shape[:2] 31 | accum_image = np.zeros((height, width), np.uint8) 32 | 
first_iteration_indicator = 0 33 | else: 34 | ret, frame = cap.read() # read a frame 35 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # convert to grayscale 36 | 37 | fgmask = fgbg.apply(gray) # remove the background 38 | 39 | # for testing purposes, show the result of the background subtraction 40 | # cv2.imshow('diff-bkgnd-frame', fgmask) 41 | 42 | # apply a binary threshold only keeping pixels above thresh and setting the result to maxValue. If you want 43 | # motion to be picked up more, increase the value of maxValue. To pick up the least amount of motion over time, set maxValue = 1 44 | thresh = 2 45 | maxValue = 2 46 | ret, th1 = cv2.threshold(fgmask, thresh, maxValue, cv2.THRESH_BINARY) 47 | # for testing purposes, show the threshold image 48 | # cv2.imwrite('diff-th1.jpg', th1) 49 | 50 | # add to the accumulated image 51 | accum_image = cv2.add(accum_image, th1) 52 | # for testing purposes, show the accumulated image 53 | # cv2.imwrite('diff-accum.jpg', accum_image) 54 | 55 | # for testing purposes, control frame by frame 56 | # raw_input("press any key to continue") 57 | 58 | # for testing purposes, show the current frame 59 | # cv2.imshow('frame', gray) 60 | 61 | if cv2.waitKey(1) & 0xFF == ord('q'): 62 | break 63 | 64 | # apply a color map 65 | # COLORMAP_PINK also works well, COLORMAP_BONE is acceptable if the background is dark 66 | color_image = cv2.applyColorMap(accum_image, cv2.COLORMAP_HOT) 67 | # for testing purposes, show the colorMap image 68 | # cv2.imwrite('diff-color.jpg', color_image) 69 | 70 | # overlay the color mapped image onto the first frame 71 | result_overlay = cv2.addWeighted(first_frame, 0.7, color_image, 0.7, 0) 72 | 73 | # save the final overlay image 74 | cv2.imwrite('diff-overlay.jpg', result_overlay) 75 | 76 | # cleanup 77 | cap.release() 78 | cv2.destroyAllWindows() 79 | 80 | if __name__=='__main__': 81 | main() --------------------------------------------------------------------------------