├── README.md
├── VideoHealthMonitoring.sln
└── VideoHealthMonitoring
    ├── VideoHealthMonitoring.pyproj
    ├── __pycache__
    │   ├── csk.cpython-36.pyc
    │   ├── csk_facedetection.cpython-36.pyc
    │   ├── rPPG_Extracter.cpython-36.pyc
    │   ├── rPPG_lukas_Extracter.cpython-36.pyc
    │   ├── rPPG_preprocessing.cpython-36.pyc
    │   └── rPPG_processing_realtime.cpython-36.pyc
    ├── csk.py
    ├── csk_facedetection.py
    ├── haarcascade_frontalface_default.xml
    ├── mat_exporter.py
    ├── placeholder.png
    ├── rPPG_Extracter.py
    ├── rPPG_Extracter_denseflow.py
    ├── rPPG_GUI.py
    ├── rPPG_lukas_Extracter.py
    ├── rPPG_preprocessing.py
    ├── rPPG_processing_realtime.py
    └── util
        ├── Border.png
        ├── RSD_Python.pyproj
        ├── __init__.py
        ├── __pycache__
        │   ├── FuncUtil.cpython-36.pyc
        │   ├── __init__.cpython-36.pyc
        │   ├── desktop.ini
        │   ├── func_util.cpython-36.pyc
        │   ├── opencv_util.cpython-36.pyc
        │   ├── pyqtgraph_util.cpython-36.pyc
        │   ├── qt_util.cpython-36.pyc
        │   └── style.cpython-36.pyc
        ├── desktop.ini
        ├── func_util.py
        ├── lines.html
        ├── opencv_util.py
        ├── pyqtgraph_util.py
        ├── qt_util.py
        └── style.py
/README.md:
--------------------------------------------------------------------------------
1 | Updated version: https://github.com/MartinChristiaan/PythonVideoPulserateV2
2 |
3 | # Python Video Pulserate using the chrominance method
4 | Python implementation of a pulse-rate monitor that uses rPPG extracted from video with the chrominance method.
5 | It uses OpenCV for face detection. For skin selection, two methods are available: skin classification using an HSV color range, and forehead estimation. Thanks to the state-of-the-art chrominance method, the system is fairly motion robust. Furthermore, this framework features a GUI that depicts the measured raw rPPG signal and the resulting Fourier spectrum.
6 |
7 | # Dependencies
8 | * Python 3
9 | * NumPy
10 | * Matplotlib
11 | * OpenCV
12 | * SciPy
13 |
14 | For GUI:
15 | * PyQt5
16 | * pyqtgraph
17 |
18 |
19 | # GUI
20 |
21 | Execute rPPG_GUI.py to launch the GUI. Edit the source variable in order to select the desired input. The GUI uses pyqtgraph to display measurement data in real time. The video source as well as the skin selection method can be set within the Python script. Exact user instructions can be found within the Python file.
22 |
23 | # .Mat Extraction
24 |
25 | The raw rPPG signal from an offline recording can also be extracted to a .mat file so it can be processed with MATLAB, which is useful for more detailed analysis. Run mat_exporter.py for this functionality; a minimal loading sketch follows below.
26 |
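For reference, a minimal sketch for reading the exported file back in Python (the keys match the mat_dict written by mat_exporter.py; the file name depends on the matname variable set there, "mixed_motion" by default):

```python
import scipy.io as sio

mat = sio.loadmat("mixed_motion.mat")   # savemat appends .mat to the chosen name
rPPG = mat["rPPG"]                      # 3 x N array of raw color traces
pulse_ref = mat["ref_pulse_rate"]       # reference pulse rate in BPM
print(rPPG.shape, pulse_ref.shape)
```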
27 | # Other files
28 | rPPG_preprocessing.py contains some of the functions used for face tracking and skin classification.
29 | rPPG_processing_realtime.py contains the signal processing (normalization, detrending, bandpass filtering and the chrominance method) used to improve the signal.
30 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring.sln:
--------------------------------------------------------------------------------
1 |
2 | Microsoft Visual Studio Solution File, Format Version 12.00
3 | # Visual Studio 15
4 | VisualStudioVersion = 15.0.28307.136
5 | MinimumVisualStudioVersion = 10.0.40219.1
6 | Project("{888888A0-9F3D-457C-B088-3A5042F75D52}") = "VideoHealthMonitoring", "VideoHealthMonitoring\VideoHealthMonitoring.pyproj", "{AFA8CCF9-1320-46BE-AC4E-7C7D44C40595}"
7 | EndProject
8 | Global
9 | GlobalSection(SolutionConfigurationPlatforms) = preSolution
10 | Debug|Any CPU = Debug|Any CPU
11 | Release|Any CPU = Release|Any CPU
12 | EndGlobalSection
13 | GlobalSection(ProjectConfigurationPlatforms) = postSolution
14 | {AFA8CCF9-1320-46BE-AC4E-7C7D44C40595}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
15 | {AFA8CCF9-1320-46BE-AC4E-7C7D44C40595}.Release|Any CPU.ActiveCfg = Release|Any CPU
16 | EndGlobalSection
17 | GlobalSection(SolutionProperties) = preSolution
18 | HideSolutionNode = FALSE
19 | EndGlobalSection
20 | GlobalSection(ExtensibilityGlobals) = postSolution
21 | SolutionGuid = {31BF9B8B-F80F-47D3-BFB9-E8520D808241}
22 | EndGlobalSection
23 | EndGlobal
24 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/VideoHealthMonitoring.pyproj:
--------------------------------------------------------------------------------
[Visual Studio Python project file; the XML markup was stripped during extraction.
The surviving values appear to be: Configuration = Debug, SchemaVersion = 2.0,
ProjectGuid = afa8ccf9-1320-46be-ac4e-7c7d44c40595, StartupFile = rPPG_GUI.py,
Name/RootNamespace = VideoHealthMonitoring, plus Code items for the .py files.]
--------------------------------------------------------------------------------
/VideoHealthMonitoring/__pycache__/csk.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/__pycache__/csk.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/__pycache__/csk_facedetection.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/__pycache__/csk_facedetection.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/__pycache__/rPPG_Extracter.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/__pycache__/rPPG_Extracter.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/__pycache__/rPPG_lukas_Extracter.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/__pycache__/rPPG_lukas_Extracter.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/__pycache__/rPPG_preprocessing.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/__pycache__/rPPG_preprocessing.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/__pycache__/rPPG_processing_realtime.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/__pycache__/rPPG_processing_realtime.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/csk.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | class CSK:
4 | def __init__(self):
5 | self.eta = 0.075
6 | self.sigma = 0.2
7 | self.lmbda = 0.01
8 |
9 | def init(self,frame,x1,y1,width,height):
10 | # Save position and size of bbox
11 | self.x1 = x1
12 | self.y1 = y1
13 | self.width = width if width%2==0 else width-1
14 | self.height = height if height%2==0 else height-1
15 |
16 | # Crop & Window
17 | self.x = self.crop(frame,x1,y1,self.width,self.height)
18 |
19 | # Generate regression target
20 | self.y = self.target(self.width,self.height)
21 | self.prev = np.unravel_index(np.argmax(self.y, axis=None), self.y.shape) # Maximum position
22 |
23 | # Training
24 | self.alphaf = self.training(self.x,self.y,self.sigma,self.lmbda)
25 |
26 | def update(self,frame):
27 | # Crop at the previous position (doubled size)
28 | z = self.crop(frame,self.x1,self.y1,self.width,self.height)
29 |
30 | # Detection
31 |         responses = self.detection(self.alphaf,self.x,z,self.sigma)
32 | curr = np.unravel_index(np.argmax(responses, axis=None), responses.shape)
33 | dy = curr[0]-self.prev[0]
34 | dx = curr[1]-self.prev[1]
35 |
36 | # New position (left top corner)
37 | self.x1 = self.x1 - dx
38 | self.y1 = self.y1 - dy
39 |
40 | # Training
41 | xtemp = self.eta*self.crop(frame,self.x1,self.y1,self.width,self.height) + (1-self.eta)*self.x
42 | self.x = self.crop(frame,self.x1,self.y1,self.width,self.height)
43 |
44 |         self.alphaf = self.eta*self.training(self.x,self.y,self.sigma,self.lmbda) + (1-self.eta)*self.alphaf # linearly interpolated
45 | self.x = xtemp
46 |
47 | return self.x1, self.y1
48 |
49 |
50 | def dgk(self, x1, x2, sigma):
51 | c = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(x1)*np.conj(np.fft.fft2(x2))))
52 |         d = np.dot(np.conj(x1.flatten('F')),x1.flatten('F')) + np.dot(np.conj(x2.flatten('F')),x2.flatten('F')) - 2*c # 'F' order replaces the legacy flatten(1)
53 | k = np.exp(-1/sigma**2*np.abs(d)/np.size(x1))
54 | return k
55 |
56 | def training(self, x, y, sigma, lmbda):
57 | k = self.dgk(x, x, sigma)
58 |
59 | # y2 = np.zeros_like(k)
60 | # for i in range(3):
61 | # y2[:,:,i] = y
62 | alphaf = np.fft.fft2(y)/(np.fft.fft2(k)+lmbda)
63 | return alphaf
64 |
65 | def detection(self, alphaf, x, z, sigma):
66 | k = self.dgk(x, z, sigma)
67 | responses = np.real(np.fft.ifft2(alphaf*np.fft.fft2(k)))
68 | return responses
69 |
70 | def window(self,img):
71 | height = img.shape[0]
72 | width = img.shape[1]
73 |
74 | j = np.arange(0,width)
75 | i = np.arange(0,height)
76 | J, I = np.meshgrid(j,i)
77 | window = np.sin(np.pi*J/width)*np.sin(np.pi*I/height)
78 | windowed = window*((img/255)-0.5)
79 |
80 | return windowed
81 |
82 | def crop(self,img,x1,y1,width,height):
83 | pad_y = [0,0]
84 | pad_x = [0,0]
85 |
86 | if (y1-height/2) < 0:
87 | y_up = 0
88 | pad_y[0] = int(-(y1-height/2))
89 | else:
90 | y_up = int(y1-height/2)
91 |
92 | if (y1+3*height/2) > img.shape[0]:
93 | y_down = img.shape[0]
94 | pad_y[1] = int((y1+3*height/2) - img.shape[0])
95 | else:
96 | y_down = int(y1+3*height/2)
97 |
98 | if (x1-width/2) < 0:
99 | x_left = 0
100 | pad_x[0] = int(-(x1-width/2))
101 | else:
102 | x_left = int(x1-width/2)
103 |
104 | if (x1+3*width/2) > img.shape[1]:
105 | x_right = img.shape[1]
106 | pad_x[1] = int((x1+3*width/2) - img.shape[1])
107 | else:
108 | x_right = int(x1+3*width/2)
109 |
110 | cropped = img[y_up:y_down,x_left:x_right]
111 | padded = np.pad(cropped,(pad_y,pad_x),'edge')
112 | windowed = self.window(padded)
113 | return windowed
114 |
115 | def target(self,width,height):
116 | double_height = 2 * height
117 | double_width = 2 * width
118 | s = np.sqrt(double_height*double_width)/16
119 |
120 | j = np.arange(0,double_width)
121 | i = np.arange(0,double_height)
122 | J, I = np.meshgrid(j,i)
123 | y = np.exp(-((J-width)**2+(I-height)**2)/s**2)
124 |
125 | return y
126 |
127 |
--------------------------------------------------------------------------------
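A minimal usage sketch for the tracker above, assuming a webcam and a hand-picked initial box (in the repo the box normally comes from the Haar face detector via csk_facedetection.py):

```python
import cv2
import csk

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

tracker = csk.CSK()
tracker.init(gray, 100, 100, 80, 80)    # x1, y1, width, height of the target

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x1, y1 = tracker.update(gray)       # estimated new top-left corner
    cv2.rectangle(frame, (int(x1), int(y1)), (int(x1) + 80, int(y1) + 80), (0, 0, 255), 2)
    cv2.imshow("CSK", frame)
    if cv2.waitKey(1) == 27:            # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```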
/VideoHealthMonitoring/csk_facedetection.py:
--------------------------------------------------------------------------------
1 | import csk
2 | import numpy as np
3 | # (scipy.misc imread/imsave have been removed from SciPy and are not used in this module)
4 | import cv2 # (Optional) OpenCV for drawing bounding boxes
5 | from rPPG_preprocessing import *
6 |
7 | class CSKFaceDetector():
8 |
9 | def __init__(self):
10 | self.face_rect = []
11 | self.tracker = csk.CSK() # CSK instance
12 | self.init = True
13 | def track_face(self,frame,gray):
14 | if self.init:
15 | frame_cropped,gray_cropped,self.face_rect = crop_to_face(frame,gray,[0,0,0,0])
16 | self.tracker.init(gray,self.face_rect[0],self.face_rect[1],self.face_rect[2],self.face_rect[3])
17 | self.init = False
18 | return frame_cropped,gray_cropped,self.face_rect
19 | else:
20 | self.face_rect[0], self.face_rect[1] = self.tracker.update(gray) # update CSK tracker and output estimated position
21 | return crop_frame(frame,self.face_rect),crop_frame(gray,self.face_rect),self.face_rect
22 |
23 |
24 |
25 | # 1st frame's groundtruth information
26 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/mat_exporter.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import argparse
3 | import cv2
4 | import time
5 | import os
6 | import matplotlib.pyplot as plt
7 | import scipy.io as sio
8 | from util.opencv_util import *
9 | from rPPG_Extracter import *
10 | import csv
11 | from rPPG_lukas_Extracter import *
12 |
13 | matname = "mixed_motion"
14 | data_path = "C:\\Users\\marti\\Downloads\\Data\\mixed_motion"
15 | vl = VideoLoader(data_path + "\\bmp\\" )
16 | fs = 20
17 |
18 | def is_number(s):
19 | try:
20 | float(s)
21 | return True
22 | except ValueError:
23 | return False
24 |
25 |
26 | with open(data_path + '\\reference.csv', newline='') as csvfile:
27 | data = list(csv.reader(csvfile))
28 | pulse_ref = np.array([float(row[1]) for row in data if is_number(row[1])])
29 |
30 | rPPG_extracter = rPPG_Extracter()
31 | i=0
32 | timestamps = []
33 | while True:
34 | frame,should_stop,timestamp = vl.load_frame()
35 |     if should_stop:
36 |         break
37 |     timestamps.append(timestamp) # only keep timestamps of frames that were actually loaded
38 | # Forehead Only:
39 | #rPPG_extracter.measure_rPPG(frame,False,[.35,.65,.05,.15])
40 | # Skin Classifier:
41 | rPPG_extracter.measure_rPPG(frame,True,[])
42 |
43 |
44 | # Use this if you want lukas Kanade
45 | #rPPG_extracter.crop_to_face_and_safe(frame)
46 | #rPPG_extracter.track_Local_motion_lukas()
47 | #rPPG_extracter.calc_ppg(frame)
48 | i=i+1 # Progress
49 | print(i)
50 |
51 | rPPG = np.transpose(rPPG_extracter.rPPG)
52 |
53 |
54 |
55 | mat_dict = {"rPPG" : rPPG,"ref_pulse_rate" : pulse_ref}
56 | sio.savemat(matname,mat_dict)
57 |
58 | t = np.arange(rPPG.shape[1])/fs
59 |
60 | plt.figure()
61 | plt.plot(t[1:],1/np.diff(np.array(timestamps)),'-r') # instantaneous FPS from timestamp differences
62 | plt.xlabel("Time (sec)")
63 | plt.ylabel("FPS")
64 |
65 |
66 | plt.figure()
67 | plt.plot(t,rPPG[0],'-r')
68 | plt.plot(t,rPPG[1],'-g')
69 | plt.plot(t,rPPG[2],'-b')
70 | plt.xlabel("Time (sec)")
71 | plt.ylabel("Amplitude ")
72 |
73 | plt.figure()
74 | plt.plot(t,pulse_ref[0:t.shape[0]],'-r')
75 | plt.xlabel("Time (sec)")
76 | plt.ylabel("Pulse_rate (BPM) ")
77 | plt.grid(1)
78 | plt.show()
79 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/placeholder.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/placeholder.png
--------------------------------------------------------------------------------
/VideoHealthMonitoring/rPPG_Extracter.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import argparse
3 | import cv2
4 | import time
5 | import os
6 | import matplotlib.pyplot as plt
7 | import scipy.io as sio
8 | from util.opencv_util import *
9 | from rPPG_preprocessing import *
10 | from csk_facedetection import CSKFaceDetector
11 | import math
12 |
13 | class rPPG_Extracter():
14 | def __init__(self):
15 | self.prev_face = [0,0,0,0]
16 | self.skin_prev = []
17 | self.rPPG = []
18 | self.frame_cropped = []
19 | self.sub_roi_rect = []
20 | self.csk_tracker = CSKFaceDetector()
21 |
22 | def calc_ppg(self,num_pixels,frame):
23 |
24 |         b_avg = np.sum(frame[:,:,0])/num_pixels # OpenCV frames are BGR: channel 0 is blue
25 |         g_avg = np.sum(frame[:,:,1])/num_pixels
26 |         r_avg = np.sum(frame[:,:,2])/num_pixels # channel 2 is red
27 |         ppg = [r_avg,g_avg,b_avg]
28 | for i,col in enumerate(ppg):
29 | if math.isnan(col):
30 | ppg[i] = 0
31 | return ppg
32 |
33 | def process_frame(self, frame, sub_roi, use_classifier,use_csk=False):
34 | gray_frame = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
35 |
36 | if use_csk:
37 | frame_cropped,gray_frame,self.prev_face = self.csk_tracker.track_face(frame,gray_frame)
38 | else:
39 | frame_cropped,gray_frame,self.prev_face = crop_to_face(frame,gray_frame,self.prev_face)
40 |
41 | if len(sub_roi) > 0:
42 | sub_roi_rect = get_subroi_rect(frame_cropped,sub_roi)
43 | frame_cropped = crop_frame(frame_cropped,sub_roi_rect)
44 | gray_frame = crop_frame(gray_frame,sub_roi_rect)
45 | self.sub_roi_rect = sub_roi_rect
46 |
47 |         num_pixels = frame_cropped.shape[0] * frame_cropped.shape[1] # pixel count of the region that is actually averaged
48 | if use_classifier:
49 | frame_cropped,num_pixels = apply_skin_classifier(frame_cropped)
50 | return frame_cropped, gray_frame,num_pixels
51 |
52 | def measure_rPPG(self,frame,use_classifier = False,sub_roi = []):
53 | frame_cropped, gray_frame,num_pixels = self.process_frame(frame, sub_roi, use_classifier)
54 | self.rPPG.append(self.calc_ppg(num_pixels,frame_cropped))
55 | self.frame_cropped = frame_cropped
56 |
57 |
58 |
--------------------------------------------------------------------------------
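A minimal sketch of driving the extracter directly, mirroring how rPPG_GUI.py and mat_exporter.py use it (assumes a webcam; run it from the VideoHealthMonitoring directory so haarcascade_frontalface_default.xml is found):

```python
import cv2
import numpy as np
from rPPG_Extracter import rPPG_Extracter

cap = cv2.VideoCapture(0)
extracter = rPPG_Extracter()
for _ in range(200):                                  # roughly 10 s at 20 fps
    ok, frame = cap.read()
    if not ok:
        break
    # use_classifier=True applies the HSV skin mask;
    # sub_roi=[.35,.65,.05,.15] would use the forehead box instead
    extracter.measure_rPPG(frame, use_classifier=True, sub_roi=[])
cap.release()

rPPG = np.transpose(extracter.rPPG)                   # 3 x N, one row per color channel
print(rPPG.shape)
```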
/VideoHealthMonitoring/rPPG_Extracter_denseflow.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import argparse
3 | import cv2
4 | import time
5 | import os
6 | import matplotlib.pyplot as plt
7 | import scipy.io as sio
8 | from util.opencv_util import *
9 | from rPPG_preprocessing import *
10 | from csk_facedetection import CSKFaceDetector
11 | import math
12 | class rPPG_Extracter():
13 | def __init__(self):
14 | self.prev_face = [0,0,0,0]
15 | self.skin_prev = []
16 | self.rPPG = []
17 | self.cropped_gray_frames = []
18 | self.frame_cropped = []
19 | self.flow_frames = 8
20 | self.x_track = []
21 | self.y_track = []
22 | self.dx = []
23 | self.dy = []
24 | self.error = []
25 | self.sub_roi_rect = []
26 | self.csk_tracker = CSKFaceDetector()
27 |
28 |
29 | def calc_ppg(self,num_pixels,frame):
30 |
31 |         b_avg = np.sum(frame[:,:,0])/num_pixels # OpenCV frames are BGR: channel 0 is blue
32 |         g_avg = np.sum(frame[:,:,1])/num_pixels
33 |         r_avg = np.sum(frame[:,:,2])/num_pixels # channel 2 is red
34 |         ppg = [r_avg,g_avg,b_avg]
35 | for i,col in enumerate(ppg):
36 | if math.isnan(col):
37 | ppg[i] = 0
38 | return ppg
39 |
40 | def process_frame(self, frame, sub_roi, use_classifier,use_csk=False):
41 | gray_frame = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
42 |
43 | if use_csk:
44 | frame_cropped,gray_frame,self.prev_face = self.csk_tracker.track_face(frame,gray_frame)
45 | else:
46 | frame_cropped,gray_frame,self.prev_face = crop_to_face(frame,gray_frame,self.prev_face)
47 |
48 | if len(sub_roi) > 0:
49 | sub_roi_rect = get_subroi_rect(frame_cropped,sub_roi)
50 | frame_cropped = crop_frame(frame_cropped,sub_roi_rect)
51 | gray_frame = crop_frame(gray_frame,sub_roi_rect)
52 | self.sub_roi_rect = sub_roi_rect
53 |
54 |         num_pixels = frame_cropped.shape[0] * frame_cropped.shape[1] # pixel count of the region that is actually averaged
55 | if use_classifier:
56 | frame_cropped,num_pixels = apply_skin_classifier(frame_cropped)
57 | return frame_cropped, gray_frame,num_pixels
58 |
59 | def measure_rPPG(self,frame,use_classifier = False,sub_roi = []):
60 | frame_cropped, gray_frame,num_pixels = self.process_frame(frame, sub_roi, use_classifier)
61 | self.rPPG.append(self.calc_ppg(num_pixels,frame_cropped))
62 | self.frame_cropped = frame_cropped
63 |
64 | def measure_rPPG_delta_saturated(self,frame):
65 | #frame_cropped, gray_frame,num_pixels = self.process_frame(frame, [], True)
66 | #ppg_val = self.calc_ppg(num_pixels,frame_cropped)
67 |
68 | #if np.sum(len(self.rPPG) > 1 and np.array(ppg_val) - np.array(self.rPPG[-1])) > 30:
69 | # print("Delta TOO LARGE!")
70 | # print("Suggested" + str(self.prev_face) + " Overwritted : " + str(self.back_up_face))
71 | # frame_cropped = crop_frame(frame,self.back_up_face)
72 | # self.prev_face = self.back_up_face
73 | # frame_cropped,num_pixels = apply_skin_classifier(frame_cropped)
74 | # ppg_val = self.calc_ppg(num_pixels,frame_cropped)
75 | #else:
76 | # self.rPPG.append(ppg_val)
77 |
78 | #self.back_up_face = self.prev_face
79 | frame,num_pixels = apply_skin_classifier(frame)
80 | ppg_val = self.calc_ppg(num_pixels,frame)
81 | self.rPPG.append(ppg_val)
82 | self.frame_cropped = frame
83 |
84 |
85 |     def find_flow_pixels(self):
86 |         base_flow = np.zeros(self.cropped_gray_frames[-1].shape[:2] + (2,), dtype=np.float32) # accumulated dense flow field
87 |         for frame_id in range(1,min(self.flow_frames,len(self.cropped_gray_frames))):
88 |             base_flow += cv2.calcOpticalFlowFarneback(self.cropped_gray_frames[frame_id-1],self.cropped_gray_frames[frame_id], None, 0.5, 3, 15, 3, 5, 1.2, 0)
89 |         return base_flow
90 |
91 | def crop_to_face_and_safe(self,frame,use_classifier = False,sub_roi = []):
92 | frame_cropped, gray_frame,_ = self.process_frame(frame, sub_roi, use_classifier)
93 | self.cropped_gray_frames.append(gray_frame)
94 | self.frame_cropped = frame_cropped
95 |
96 | def track_Local_motion(self):
97 | try :
98 | h, w = self.cropped_gray_frames[-1].shape[:2]
99 | step = 16
100 | if len(self.x_track) == 0:
101 | self.y_track, self.x_track = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1)
102 |
103 | num_frames = len(self.cropped_gray_frames)
104 |
105 | if num_frames > 1:
106 | flow = cv2.calcOpticalFlowFarneback(self.cropped_gray_frames[-2],self.cropped_gray_frames[-1], None, 0.5, 3, 15, 3, 5, 1.2, 0)
107 | self.x_track = self.x_track.clip(0,w-1)
108 | self.y_track = self.y_track.clip(0,h-1)
109 | fx, fy = flow[self.y_track.astype(int),self.x_track.astype(int)].T#.astype(int)
110 | self.dx.append(fx)
111 | self.dy.append(fy)
112 | self.x_track+=fx
113 | self.y_track+=fy
114 | #print(self.x_track.shape)
115 |
116 | orig_pos_y,orig_pos_x = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1).astype(int)
117 | estimated_pos_x = np.copy(self.x_track)
118 |
119 | estimated_pos_y = np.copy(self.y_track)
120 |             for fr in range(1,np.min([15,len(self.dx)])+1): # index from the most recent flow backwards
121 |                 estimated_pos_x += self.dx[-fr]
122 |                 estimated_pos_y += self.dy[-fr]
123 |
124 | self.error = np.sqrt((estimated_pos_x - orig_pos_x)**2 + (estimated_pos_y - orig_pos_y)**2 )
125 | except Exception :
126 | print("Flow Failed")
127 |
158 |
159 |
160 |
161 |
162 |
163 | # cleanup the camera and close any open windows
164 |
165 |
166 |
167 |
168 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/rPPG_GUI.py:
--------------------------------------------------------------------------------
1 | # Author : Martin van Leeuwen
2 | # Video pulse rate monitor using rPPG with the chrominance method.
3 |
4 | ############################## User Settings #############################################
5 |
6 | class Settings():
7 | def __init__(self):
8 | self.use_classifier = True # Toggles skin classifier
9 |         self.use_flow = False # (Mixed_motion only) Toggles PPG detection
10 |                               # with Lucas-Kanade optical flow
11 |         self.show_cropped = True # Shows the processed frame in the application instead of the regular one.
12 |         self.sub_roi = [] # If forehead estimation should be used instead of the skin classifier,
13 |                           # set to [.35,.65,.05,.15]
14 | self.use_resampling = True # Set to true with webcam
15 |
16 | # In the source either put the data path to the image sequence/video or "webcam"
17 | #source = "C:\\Users\\marti\\Downloads\\Data\\13kmh.mp4" # stationary\\bmp\\"
18 | #source = "C:\\Users\\marti\\Downloads\\Data\\stationary\\bmp\\"
19 | source = "webcam"
20 | fs = 20 # Please change to the capture rate of the footage.
21 |
22 | ############################## APP #######################################################
23 |
24 | from PyQt5 import QtGui
25 | from PyQt5 import QtCore
26 | from PyQt5.QtCore import Qt
27 | from PyQt5.QtWidgets import *
28 | from util.qt_util import *
29 | from util.pyqtgraph_util import *
30 | import numpy as np
31 | from util.func_util import *
32 | import matplotlib.cm as cm
33 | from util.style import style
34 | from util.opencv_util import *
35 | from rPPG_Extracter import *
36 | from rPPG_lukas_Extracter import *
37 | from rPPG_processing_realtime import extract_pulse
38 |
39 | ## Creates The App
40 |
41 | fftlength = 300
42 |
43 | f = np.linspace(0,fs/2,fftlength//2 + 1) * 60 # the sample count must be an integer
44 | settings = Settings()
45 |
46 |
47 | def create_video_player():
48 | frame = cv2.imread("placeholder.png")
49 | vb = pg.GraphicsView()
50 | frame,_ = pg.makeARGB(frame,None,None,None,False)
51 | img = pg.ImageItem(frame,axisOrder = 'row-major')
52 | img.show()
53 | vb.addItem(img)
54 | return img, vb
55 |
56 |
57 | app,w = create_basic_app()
58 | img, vb = create_video_player()
59 |
60 | layout = QHBoxLayout()
61 | control_layout = QVBoxLayout()
62 | layout.addLayout(control_layout)
63 | layout.addWidget(vb)
64 |
65 | fig = create_fig()
66 | fig.setTitle('rPPG')
67 | addLabels(fig,'time','intensity','sec','-')
68 | plt_r = plot(fig,np.arange(0,5),np.arange(0,5),[255,0,0])
69 | plt_g = plot(fig,np.arange(0,5),np.arange(0,5),[0,255,0])
70 | plt_b = plot(fig,np.arange(0,5),np.arange(0,5),[0,0,255])
71 |
72 | fig_bpm = create_fig()
73 | fig_bpm.setTitle('Frequency')
74 | fig_bpm.setXRange(0,300)
75 | addLabels(fig_bpm,'Frequency','intensity','BPM','-')
76 | plt_bpm = plot(fig_bpm,np.arange(0,5),np.arange(0,5),[255,0,0])
77 |
78 | layout.addWidget(fig)
79 | layout.addWidget(fig_bpm)
80 | timestamps = []
81 | time_start = [0]
82 |
83 | def update(load_frame,rPPG_extracter,rPPG_extracter_lukas,settings : Settings):
84 | bpm = 0
85 | frame,should_stop,timestamp = load_frame() #frame_from_camera()
86 | dt = time.time()-time_start[0]
87 | fps = 1/(dt)
88 |
89 | time_start[0] = time.time()
90 | if len(timestamps) == 0:
91 | timestamps.append(0)
92 | else:
93 | timestamps.append(timestamps[-1] + dt)
94 |
95 | #print("Update")
96 | if should_stop:
97 | return
98 | rPPG = []
99 | if settings.use_flow:
100 | rPPG_extracter = rPPG_extracter_lukas
101 | rPPG_extracter.crop_to_face_and_safe(frame)
102 | rPPG_extracter.track_Local_motion_lukas()
103 | rPPG_extracter.calc_ppg(frame)
104 | points = rPPG_extracter.points
105 |             frame = cv2.circle(frame,(int(points[0,0,0]),int(points[0,0,1])),5,(0,0,255),-1) # cv2 expects integer pixel coordinates
106 |
107 | rPPG = np.transpose(rPPG_extracter_lukas.rPPG)
108 | else:
109 | rPPG_extracter.measure_rPPG(frame,settings.use_classifier,settings.sub_roi)
110 | rPPG = np.transpose(rPPG_extracter.rPPG)
111 |
112 | # Extract Pulse
113 | if rPPG.shape[1] > 10:
114 | if settings.use_resampling :
115 | t = np.arange(0,timestamps[-1],1/fs)
116 |
117 | rPPG_resampled= np.zeros((3,t.shape[0]))
118 | for col in [0,1,2]:
119 | rPPG_resampled[col] = np.interp(t,timestamps,rPPG[col])
120 | rPPG = rPPG_resampled
121 | num_frames = rPPG.shape[1]
122 | start = max([num_frames-100,0])
123 | t = np.arange(num_frames)/fs
124 | pulse = extract_pulse(rPPG,fftlength,fs)
125 | plt_bpm.setData(f,pulse)
126 | plt_r.setData(t[start:num_frames],rPPG[0,start:num_frames])
127 | plt_g.setData(t[start:num_frames],rPPG[1,start:num_frames])
128 | plt_b.setData(t[start:num_frames],rPPG[2,start:num_frames])
129 | bpm = f[np.argmax(pulse)]
130 |         fig_bpm.setTitle('Frequency : PR = ' + '{0:.1f}'.format(bpm) + ' BPM' )
131 |
132 |
133 |
134 | #print(fps)
135 | face = rPPG_extracter.prev_face
136 | if not settings.use_flow:
137 | if settings.show_cropped:
138 | frame = rPPG_extracter.frame_cropped
139 | if len(settings.sub_roi) > 0:
140 | sr = rPPG_extracter.sub_roi_rect
141 | draw_rect(frame,sr)
142 | else:
143 | try :
144 | draw_rect(frame,face)
145 | if len(settings.sub_roi) > 0:
146 | sr = rPPG_extracter.sub_roi_rect
147 | sr[0] += face[0]
148 | sr[1] += face[1]
149 | draw_rect(frame,sr)
150 | except Exception:
151 | print("no face")
152 | # write_text(frame,"bpm : " + '{0:.2f}'.format(bpm),(0,100))
153 | write_text(frame,"fps : " + '{0:.2f}'.format(fps),(0,50))
154 | frame,_ = pg.makeARGB(frame,None,None,None,False)
155 | #cv2.imshow("images", np.hstack([frame]))
156 | img.setImage(frame)
157 |
158 | timer = QtCore.QTimer()
159 | timer.start(10)
160 |
161 | def setup_update_loop(load_frame,timer,settings):
162 | # = "C:\\Users\\marti\\Downloads\\Data\\translation"
163 | try: timer.timeout.disconnect()
164 | except Exception: pass
165 | rPPG_extracter = rPPG_Extracter()
166 | rPPG_extracter_lukas = rPPG_Lukas_Extracter()
167 | update_fun = lambda : update(load_frame,rPPG_extracter,rPPG_extracter_lukas,settings)
168 | timer.timeout.connect(update_fun)
169 |
170 | def setup_loaded_image_sequence(data_path,timer,settings):
171 | vl = VideoLoader(data_path)
172 |
173 | setup_update_loop(vl.load_frame,timer,settings)
174 |
175 | def setup_webcam(timer,settings):
176 |     global camera; camera = cv2.VideoCapture(0) # module-level handle so the release() at the end of the script can reach it
177 | #camera.set(3, 1280)
178 | #camera.set(4, 720)
179 | settings.use_resampling = True
180 | def frame_from_camera():
181 | _,frame = camera.read()
182 | return frame,False,camera.get(cv2.CAP_PROP_POS_MSEC)
183 |
184 | setup_update_loop(frame_from_camera,timer,settings)
185 |
186 | def setup_video(data_path,timer,settings):
187 | vi_cap = cv2.VideoCapture(data_path)
188 | #settings.use_resampling = True
189 | def frame_from_video():
190 | _,frame = vi_cap.read()
191 | rows,cols = frame.shape[:2]
192 |
193 | #M = cv2.getRotationMatrix2D((cols/2,rows/2),270,1)
194 | #frame = cv2.warpAffine(frame,M,(cols,rows))
195 | frame = cv2.resize(frame,None,fx=0.5, fy=0.5, interpolation = cv2.INTER_CUBIC)
196 | return frame,False,0
197 |
198 | setup_update_loop(frame_from_video,timer,settings)
199 |
200 | #settings = Settings()
201 | if source.endswith('.mp4'):
202 | setup_video(source,timer,settings)
203 | elif source == 'webcam':
204 | setup_webcam(timer,settings)
205 | else :
206 | setup_loaded_image_sequence(source,timer,settings)
207 |
208 |
209 | w.setLayout(layout)
210 | execute_app(app,w)
211 | try: camera.release() # camera exists only when the webcam source was used
212 | except NameError: pass
213 | cv2.destroyAllWindows()
214 |
215 |
216 |
217 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/rPPG_lukas_Extracter.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import argparse
3 | import cv2
4 | import time
5 | import os
6 | import matplotlib.pyplot as plt
7 | import scipy.io as sio
8 | from util.opencv_util import *
9 | from rPPG_preprocessing import *
10 | from csk_facedetection import CSKFaceDetector
11 | import math
12 |
13 |
14 | class rPPG_Lukas_Extracter():
15 | def __init__(self):
16 | self.prev_face = [0,0,0,0]
17 | self.skin_prev = []
18 | self.rPPG = []
19 | self.cropped_gray_frames = []
20 | self.frame_cropped = []
21 | self.flow_frames = 8
22 | self.points = []
23 | self.lk_params = dict( winSize = (15,15),maxLevel = 2,criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
24 |
25 |
26 | def crop_to_face_and_safe(self,frame):
27 | self.cropped_gray_frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
28 | self.frame_cropped = frame
29 |
30 | def calc_ppg(self,frame):
31 | try:
32 | x_region = np.zeros(0,dtype = int)
33 | y_region = np.zeros(0,dtype = int)
34 |
35 | for point_id in range(self.points.shape[0]):
36 | x= int(self.points[point_id,0,1])
37 | y = int(self.points[point_id,0,0])
38 | margin = 7
39 | x_region= np.concatenate((x_region,np.arange(x-margin,x+margin,dtype = int)))
40 | y_region = np.concatenate((y_region,np.arange(y-margin,y+margin,dtype = int)))
41 |
42 | X,Y = np.meshgrid(x_region,y_region)
43 | tracked_section = frame[X,Y,:]
44 |             #print(tracked_section.shape) # debug output
45 | npix = tracked_section.shape[0] * tracked_section.shape[1]
46 |             b_avg = np.sum(tracked_section[:,:,0])/npix # OpenCV frames are BGR: channel 0 is blue
47 |             g_avg = np.sum(tracked_section[:,:,1])/npix
48 |             r_avg = np.sum(tracked_section[:,:,2])/npix # channel 2 is red
49 |             ppg = [r_avg,g_avg,b_avg]
50 | for i,col in enumerate(ppg):
51 | if math.isnan(col):
52 | ppg[i] = 0
53 | self.rPPG.append(ppg)
54 | except Exception:
55 |             self.rPPG.append(self.rPPG[-1] if self.rPPG else [0,0,0]) # repeat the last sample (or zeros) when tracking fails
56 |
57 |
58 | def track_Local_motion_lukas(self):
59 | h, w = self.cropped_gray_frames[-1].shape[:2]
60 | if len(self.points) == 0:
61 | self.points = np.zeros((1,1,2),dtype = np.float32)
62 | self.points[0,0,0] = 380
63 | self.points[0,0,1] = 280
64 | num_frames = len(self.cropped_gray_frames)
65 |
66 | if num_frames > 1:
67 | self.points, st, err = cv2.calcOpticalFlowPyrLK(self.cropped_gray_frames[-2], self.cropped_gray_frames[-1], self.points, None, **self.lk_params)
68 |
69 | #else:
70 | # feature_params = dict( maxCorners = 100,
71 | # qualityLevel = 0.3,
72 | # minDistance = 7,
73 | # blockSize = 7 )
74 | # self.points = cv2.goodFeaturesToTrack(self.cropped_gray_frames[-1], mask = None, **feature_params)
75 |
76 |
77 |
78 |
79 |
80 |
81 |
82 | # cleanup the camera and close any open windows
83 |
84 |
85 |
86 |
87 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/rPPG_preprocessing.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import argparse
3 | import cv2
4 | import time
5 | import os
6 | import matplotlib.pyplot as plt
7 | import scipy.io as sio
8 | from util.opencv_util import *
9 |
10 |
11 | lower = np.array([0, 48, 80], dtype = "uint8")
12 | upper = np.array([20, 255, 255], dtype = "uint8")
13 |
14 |
15 | def apply_skin_classifier(frame):
16 | converted = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
17 | skinMask = cv2.inRange(converted, lower, upper)
18 | kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
19 | skinMask = cv2.erode(skinMask, kernel, iterations = 2)
20 | skinMask = cv2.dilate(skinMask, kernel, iterations = 2)
21 | skinMask = cv2.GaussianBlur(skinMask, (3, 3), 0)
22 |     num_skin_pixels = np.count_nonzero(skinMask) # number of pixels kept by the mask
23 | skin = cv2.bitwise_and(frame, frame, mask = skinMask)
24 | return skin,num_skin_pixels
25 |
26 | def get_subroi_rect(frame_cropped,roi):
27 |     h,w = frame_cropped.shape[:2] # numpy shape is (rows, cols) = (height, width)
28 | min_x = int(roi[0] * w)
29 | max_x = int(roi[1] * w)
30 | min_y = int(roi[2] * h)
31 | max_y = int(roi[3] * h)
32 | return [min_x,min_y,max_x-min_x,max_y-min_y]
33 |
34 |
35 | def crop_frame(frame,rect):
36 | x = rect[0]
37 | y = rect[1]
38 | w = rect[2]
39 | h = rect[3]
40 | return frame[y:y+h,x:x+w]
41 |
42 |
43 | def crop_to_face(frame,gray,prev_face):
44 | faces = face_cascade.detectMultiScale(gray, 1.3, 5)
45 | if len(faces) == 0:
46 | if prev_face[0] == 0:
47 | return frame,gray,prev_face
48 | else:
49 | return crop_frame(frame,prev_face),crop_frame(gray,prev_face),prev_face
50 | else:
51 | if prev_face[0] == 0:
52 | face_rect = faces[0]
53 | return crop_frame(frame,faces[0]),crop_frame(gray,faces[0]),faces[0]
54 | else:
55 | face_rect = faces[0]
56 |
57 | delta = (face_rect[0] - prev_face[0])**2 + (face_rect[1] - prev_face[1])**2
58 | if delta > 40:
59 | face_rect[2] = prev_face[2]
60 | face_rect[3] = prev_face[3]
61 | return crop_frame(frame,face_rect),crop_frame(gray,face_rect),face_rect
62 | else:
63 | return crop_frame(frame,prev_face),crop_frame(gray,prev_face),prev_face
64 |
65 | #def crop_to_face_PI(frame,gray,prev_face,e_sum):
66 | # faces = face_cascade.detectMultiScale(gray, 1.3, 5)
67 | # kp = 1
68 | # ki = 0
69 | # if len(faces) == 0:
70 | # if len(prev_face) == 0:
71 | # return frame,gray,None
72 | # else:
73 | # return crop_frame(frame,prev_face),crop_frame(gray,prev_face),prev_face
74 | # else:
75 | # if len(prev_face) == 0:
76 | # return crop_frame(frame,faces[0]),crop_frame(gray,faces[0]),faces[0]
77 | # else:
78 | # face_rect = faces[0]
79 | # ey = face_rect[0] - prev_face[0]
80 | # ex = face_rect[1] - prev_face[1]
81 |
82 | # e_sum = [e_sum[0] + ex, e_sum[1] + ey]
83 | # print(e_sum)
84 |
85 | # face_rect[0] = prev_face[0] + e_sum[0] * ki + ey * kp
86 | # face_rect[1] = prev_face[1] + e_sum[1] * ki + ex * kp
87 |
88 | # face_rect[2] = prev_face[2]
89 | # face_rect[3] = prev_face[3]
90 | # return crop_frame(frame,face_rect),crop_frame(gray,face_rect),face_rect
91 |
92 |
--------------------------------------------------------------------------------
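A minimal sketch of running the skin classifier above on a single image (placeholder.png ships with the repo; any BGR image with skin in it works):

```python
import cv2
from rPPG_preprocessing import apply_skin_classifier

frame = cv2.imread("placeholder.png")            # any BGR image
skin, num_skin_pixels = apply_skin_classifier(frame)
print("skin pixels kept by the mask:", num_skin_pixels)
cv2.imshow("skin", skin)
cv2.waitKey(0)
cv2.destroyAllWindows()
```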
/VideoHealthMonitoring/rPPG_processing_realtime.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import scipy.io as sio
3 | from scipy import signal
4 | import matplotlib.pyplot as plt
5 | # Params
6 | R = 0
7 | G = 1
8 | B = 2
9 |
10 |
11 | def extract_pulse(rPPG,fftlength,fs_video):
12 |
13 | if(rPPG.shape[1] < fftlength):
14 | return np.zeros(int(fftlength/2)+1)
15 | fft_roi = range(int(fftlength/2+1)) # We only care about this part of the fft because it is symmetric anyway
16 | bpf_div= 60 * fs_video / 2
17 | b_BPF40220,a_BPF40220 = signal.butter(10, ([40/bpf_div, 220/bpf_div]), 'bandpass')
18 |
19 | col_c = np.zeros((3,fftlength))
20 | skin_vec = [1,0.66667,0.5]
21 | for col in [R,G,B]:
22 | col_stride = rPPG[col,-fftlength:]# select last samples
23 | y_ACDC = signal.detrend(col_stride/np.mean(col_stride))
24 | col_c[col] = y_ACDC * skin_vec[col]
25 | X_chrom = col_c[R]-col_c[G]
26 | Y_chrom = col_c[R] + col_c[G] - 2* col_c[B]
27 | Xf = signal.filtfilt(b_BPF40220,a_BPF40220,X_chrom) # Applies band pass filter
28 | Yf = signal.filtfilt(b_BPF40220,a_BPF40220,Y_chrom)
29 | Nx = np.std(Xf)
30 | Ny = np.std(Yf)
31 | alpha_CHROM = Nx/Ny
32 | x_stride_method = Xf- alpha_CHROM*Yf
33 | STFT = np.fft.fft(x_stride_method,fftlength)[fft_roi]
34 | normalized_amplitude = np.abs(STFT)/np.max(np.abs(STFT))
35 | return normalized_amplitude
36 |
37 |
--------------------------------------------------------------------------------
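A small self-check sketch for extract_pulse: synthesize three color traces modulated by a 72 BPM pulse (the channel gains are arbitrary test values, chosen so the chrominance combination X - alpha*Y does not cancel the pulse) and confirm the spectral peak lands on 72 BPM. fs and fftlength mirror the defaults in rPPG_GUI.py.

```python
import numpy as np
from rPPG_processing_realtime import extract_pulse

fs = 20                                        # capture rate, as in rPPG_GUI.py
fftlength = 300
t = np.arange(fftlength) / fs
p = 0.01 * np.sin(2 * np.pi * (72 / 60) * t)   # 72 BPM pulse waveform

rng = np.random.default_rng(0)
rPPG = np.vstack([1.0 + 1.0 * p,               # synthetic R, G, B traces
                  1.0 + 0.75 * p,
                  1.0 + 2.0 * p]) + 0.001 * rng.standard_normal((3, fftlength))

spectrum = extract_pulse(rPPG, fftlength, fs)
f = np.linspace(0, fs / 2, fftlength // 2 + 1) * 60   # frequency axis in BPM
print("spectral peak at", f[np.argmax(spectrum)], "BPM")  # expect 72.0
```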
/VideoHealthMonitoring/util/Border.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/Border.png
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/RSD_Python.pyproj:
--------------------------------------------------------------------------------
[Visual Studio Python project file for the util subproject; the XML markup was
stripped during extraction. The surviving values appear to be: Configuration =
Debug, SchemaVersion = 2.0, ProjectGuid = f2867339-ef21-4d6c-ba2e-ac0241a15d7c,
StartupFile = main.py, Name/RootNamespace = RSD_Python, plus Code items.]
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/__init__.py
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__pycache__/FuncUtil.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/__pycache__/FuncUtil.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__pycache__/desktop.ini:
--------------------------------------------------------------------------------
1 | [.ShellClassInfo]
2 | InfoTip=This folder is shared online.
3 | IconFile=C:\Program Files\Google\Drive\googledrivesync.exe
4 | IconIndex=16
5 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__pycache__/func_util.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/__pycache__/func_util.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__pycache__/opencv_util.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/__pycache__/opencv_util.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__pycache__/pyqtgraph_util.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/__pycache__/pyqtgraph_util.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__pycache__/qt_util.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/__pycache__/qt_util.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/__pycache__/style.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MartinChristiaan/PythonVideoPulserate/8f9d72bd008502785b1b5c29a8180b00ef9f2d47/VideoHealthMonitoring/util/__pycache__/style.cpython-36.pyc
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/desktop.ini:
--------------------------------------------------------------------------------
1 | [.ShellClassInfo]
2 | InfoTip=This folder is shared online.
3 | IconFile=C:\Program Files\Google\Drive\googledrivesync.exe
4 | IconIndex=16
5 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/func_util.py:
--------------------------------------------------------------------------------
1 |
2 | def mfilter(pred,arr):
3 | result = []
4 | for e in arr:
5 | if(pred(e)):
6 | result.append(e)
7 | return result
8 | def pipe(arg, *funcs):
9 | for f in funcs:
10 | arg = f(arg)
11 | return arg
12 | def iter(fun,arr):
13 | for k in arr:
14 | fun(k)
15 |
16 | def iteri(fun,arr):
17 | for i in range(len(arr)):
18 | fun(i,arr[i])
19 |
20 | def init(fun,cnt):
21 | list = []
22 | for i in range(cnt):
23 | list.append(fun())
24 | return list
25 |
26 | def funs_map(funs):
27 | r = []
28 | for fun in funs:
29 | r.append(fun())
30 | return r
31 |
32 |
39 | def initi(fun,cnt):
40 | list = []
41 | for i in range(cnt):
42 | list.append(fun(i))
43 | return list
44 |
45 | def initi_tuple(fun,cnt):
46 | a_list = []
47 | b_list = []
48 | for i in range(cnt):
49 | a,b = fun(i)
50 | a_list.append(a)
51 | b_list.append(b)
52 | return a_list,b_list
53 |
54 | def initi_3tuple(fun,cnt):
55 | a_list = []
56 | b_list = []
57 | c_list = []
58 | for i in range(cnt):
59 |         a,b,c = fun(i)
60 | a_list.append(a)
61 | b_list.append(b)
62 | c_list.append(c)
63 | return a_list,b_list,c_list
64 |
65 | def mapi_3tuple(fun,arr):
66 | a_list = []
67 | b_list = []
68 | c_list = []
69 | for i in range(len(arr)):
70 | a,b,c = fun(i,arr[i])
71 | a_list.append(a)
72 | b_list.append(b)
73 | c_list.append(c)
74 | return a_list,b_list,c_list
75 |
76 | def map2i_3tuple(fun,arr1,arr2):
77 | a_list = []
78 | b_list = []
79 | c_list = []
80 | for i in range(len(arr1)):
81 | a,b,c = fun(i,arr1[i],arr2[i])
82 | a_list.append(a)
83 | b_list.append(b)
84 | c_list.append(c)
85 | return a_list,b_list,c_list
86 |
87 | def mapi(fun,arr):
88 | a_list = []
89 | for i in range(len(arr)):
90 | a = fun(i,arr[i])
91 | a_list.append(a)
92 | return a_list
93 |
94 |
95 |
96 | def mmap(fun,arr):
97 | return list(map(fun,arr))
--------------------------------------------------------------------------------
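A minimal usage sketch for the helpers above (run from the VideoHealthMonitoring directory so the util package resolves):

```python
from util.func_util import pipe, mapi, mmap

total = pipe([1, 2, 3],
             lambda xs: mmap(lambda x: x * x, xs),   # [1, 4, 9]
             sum)                                    # 14
indexed = mapi(lambda i, x: (i, x), ["a", "b"])      # [(0, 'a'), (1, 'b')]
print(total, indexed)
```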
/VideoHealthMonitoring/util/lines.html:
--------------------------------------------------------------------------------
[Bokeh-generated HTML plot page titled "Bokeh Plot"; the markup and embedded
scripts were stripped during extraction, so only the page title survives.]
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/opencv_util.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import os
3 | import time
4 | class VideoLoader():
5 | def __init__(self,folder):
6 | self.frame = 0
7 | self.video_folder = folder
8 |
9 | def load_frame(self):
10 | self.frame+=1
11 | frame_path = self.video_folder + str(self.frame) + ".bmp"
12 | exists = os.path.isfile(frame_path)
13 | if exists:
14 | return cv2.imread(frame_path),False,self.frame/20
15 | else:
16 | print(frame_path + " Does not exist")
17 | return None,True,0
18 |
19 |
20 | cascPath = "haarcascade_frontalface_default.xml"
21 | face_cascade = cv2.CascadeClassifier(cascPath)
22 |
23 | def write_text(img,text,location):
24 | font = cv2.FONT_HERSHEY_SIMPLEX
25 | fontScale = 1
26 | fontColor = (255,255,255)
27 | lineType = 2
28 | cv2.putText(img,text,location,font,fontScale,fontColor,lineType)
29 |
30 | def write_fps(skin, start_time):
31 |     return 1/(time.time() - start_time) # frames per second since start_time; render with write_text if desired
32 | import numpy as np # used by the drawing helpers below
33 |
34 | def draw_tracking_dots(img,x_track,y_track,offset,error, step=16):
35 |
36 |
37 | lines = np.vstack([x_track + offset[0], y_track + offset[1],error]).T.reshape(-1, 3)
38 | lines = np.int32(lines + 0.5)
39 | for (x1, y1,e) in lines:
40 | col = (0,255,0)
41 | if e > 10:
42 | col = (0,0,255)
43 |
44 | cv2.circle(img, (x1, y1), 1, col, -1)
45 |
46 |
47 | def draw_flow(img, flow,offset, step=16):
48 | h, w = flow.shape[:2]
49 | y, x = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1).astype(int)
50 | fx, fy = flow[y,x].T
51 | y += offset[1]
52 | x += offset[0]
53 |
54 | lines = np.vstack([x, y, x+fx, y+fy]).T.reshape(-1, 2, 2)
55 | lines = np.int32(lines + 0.5)
56 | cv2.polylines(img, lines, 0, (0, 255, 0))
57 | for (x1, y1), (_x2, _y2) in lines:
58 | cv2.circle(img, (x1, y1), 1, (0, 255, 0), -1)
59 |
60 |
61 | def draw_rect(frame,rect):
62 | rects = [rect]
63 | for (x, y, w, h) in rects:
64 | cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255), 2)
65 |
66 | def make_border(frame,rect):
67 | border = cv2.imread('util/Border.png',cv2.IMREAD_UNCHANGED)
68 |
69 | rows,cols,channels = border.shape
70 |
71 | x = 0 #rect[0]
72 | y = 0#rect[1]
73 | w = cols #rect[2]
74 | h = rows#rect[3]
75 | bg_part = frame[y:y+h,x:x+w]
76 | border_rgb = border[:,:,0:3]
77 |
78 | alpha= border[:,:,3]
79 | alpha = alpha[:,:,np.newaxis].astype(np.float32) / 255.0
80 | alpha = np.concatenate((alpha,alpha,alpha), axis=2)
81 | border_rgb = border_rgb.astype(np.float32) * alpha
82 | bg_part=bg_part.astype(np.float32) * (1-alpha)
83 | merged = bg_part + border_rgb
84 | frame[y:y+h,x:x+w] = merged.astype(np.uint8)
85 |
86 |
--------------------------------------------------------------------------------
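A minimal sketch of iterating an image sequence with VideoLoader (the folder path is hypothetical; frames are expected to be named 1.bmp, 2.bmp, ..., and the trailing separator matters because load_frame concatenates strings; the returned timestamp assumes 20 fps footage):

```python
from util.opencv_util import VideoLoader

vl = VideoLoader("Data/stationary/bmp/")      # hypothetical path to numbered .bmp frames
while True:
    frame, should_stop, timestamp = vl.load_frame()
    if should_stop:
        break
    print(timestamp, frame.shape)
```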
/VideoHealthMonitoring/util/pyqtgraph_util.py:
--------------------------------------------------------------------------------
1 | import pyqtgraph as pg
2 | import numpy as np
3 | import matplotlib.cm as cm
4 | def getcolors(ncolors):
5 | colors= cm.rainbow(np.linspace(0, 1, ncolors))
6 | return colors[:,:3]*255
7 |
8 |
9 |
10 |
11 | def plot(fig,x,y,color):
12 | plot = fig.plot(x,y,pen = (color[0],color[1],color[2]))
13 | return plot
14 |
15 | def create_fig():
16 | fig = pg.PlotWidget()
17 | fig.showGrid(x=True, y=True)
18 | return fig
19 |
20 |
21 | def addLabels(fig,xlabel,ylabel,xunit ='-',yunit = '-'):
22 |     fig.setLabel('left', ylabel, units=yunit)
23 |     fig.setLabel('bottom', xlabel, units=xunit)
24 |
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/qt_util.py:
--------------------------------------------------------------------------------
1 | from PyQt5 import QtGui # (the example applies equally well to PySide)
2 | from PyQt5 import QtCore # (the example applies equally well to PySide)
3 | from PyQt5.QtCore import Qt
4 | from PyQt5.QtWidgets import *
5 | from util.style import style
6 | def getSliderEvent(slider : QSlider):
7 | return slider.valueChanged
8 |
9 | def getSliderValue(slider : QSlider,r):
10 | t = slider.value()/100
11 | return (1-t) * r[0] + t * r[1]
12 |
13 | def create_button(layout,text,onclick):
14 | btn = QPushButton(text)
15 | btn.clicked.connect(onclick)
16 | layout.addWidget(btn)
17 |
18 | def create_button_btnsettingsinclick(layout,text,settings,onclick):
19 | btn = QPushButton(text)
20 | btn.clicked.connect(lambda : onclick(btn,settings))
21 | layout.addWidget(btn)
22 |
23 |
24 |
25 | def setSliderRange(slider : QSlider,range):
26 | slider.setMinimum(range[0])
27 | slider.setMaximum(range[-1])
28 |
29 | #slider.setTickInterval(1/len(range))
30 |
31 | def setup_default_slider(layout):
32 | slider = QSlider(Qt.Horizontal)
33 | slider.setFocusPolicy(Qt.StrongFocus)
34 | slider.setTickPosition(QSlider.TicksBothSides)
35 | slider.setRange(0,100)
36 | slider.setSingleStep(1)
37 | layout.addWidget(slider)
38 | return slider
39 |
40 | def create_basic_app():
41 |     app = QApplication([]) # PyQt5 moved QApplication to QtWidgets (star-imported above)
42 | app.setStyle(QStyleFactory.create('Windows'))
43 | app.setStyleSheet(style)
44 |     w = QWidget()
45 | p = w.palette()
46 | p.setColor(w.backgroundRole(), Qt.black)
47 | w.setPalette(p)
48 | return app,w
49 |
50 | def execute_app(app,w):
51 | w.resize(1920,800 )
52 | w.show()
53 | app.exec_()
--------------------------------------------------------------------------------
/VideoHealthMonitoring/util/style.py:
--------------------------------------------------------------------------------
1 | style = """
2 | QSlider::groove:horizontal {
3 | border: 1px solid #bbb;
4 | background: white;
5 | height: 10px;
6 | border-radius: 4px;
7 | }
8 |
9 | QSlider::sub-page:horizontal {
10 | background: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1,
11 | stop: 0 #66e, stop: 1 #bbf);
12 | background: qlineargradient(x1: 0, y1: 0.2, x2: 1, y2: 1,
13 | stop: 0 #bbf, stop: 1 #55f);
14 | border: 1px solid #777;
15 | height: 10px;
16 | border-radius: 4px;
17 | }
18 |
19 | QSlider::add-page:horizontal {
20 | background: #fff;
21 | border: 1px solid #777;
22 | height: 10px;
23 | border-radius: 4px;
24 | }
25 |
26 |
27 | QSlider::handle:horizontal {
28 | background: qlineargradient(x1:0, y1:0, x2:1, y2:1,
29 | stop:0 #eee, stop:1 #ccc);
30 | border: 1px solid #777;
31 | width: 13px;
32 | margin-top: -2px;
33 | margin-bottom: -2px;
34 | border-radius: 4px;
35 | }
36 |
37 | QSlider::handle:horizontal:hover {
38 | background: qlineargradient(x1:0, y1:0, x2:1, y2:1,
39 | stop:0 #fff, stop:1 #ddd);
40 | border: 1px solid #444;
41 | border-radius: 4px;
42 | }
43 |
44 | QSlider::sub-page:horizontal:disabled {
45 | background: #bbb;
46 | border-color: #999;
47 | }
48 |
49 | QSlider::add-page:horizontal:disabled {
50 | background: #eee;
51 | border-color: #999;
52 | }
53 |
54 | QSlider::handle:horizontal:disabled {
55 | background: #eee;
56 | border: 1px solid #aaa;
57 | border-radius: 4px;
58 | }
59 | QLabel {color : white; }
60 |
61 | QPushButton {
62 | background: #eee;
63 | color: black;
64 | height: 25px;
65 |
66 | border-radius: 5px;
67 | }
68 |
69 |
70 | """
--------------------------------------------------------------------------------