├── LICENSE
├── README.md
├── README.txt
├── app.py
├── haarcascade_frontalface_alt.xml
├── heartbeat.js
├── images
│   ├── 1st.png
│   └── 2nd.png
├── img
│   ├── doctor.svg
│   ├── doctor_1.svg
│   ├── doctors.svg
│   ├── doctors_1.svg
│   ├── med.svg
│   ├── quest.svg
│   └── quest2.svg
├── index.html
├── index.js
├── static
│   ├── heartbeat.js
│   └── index.js
├── style.css
└── template
    └── index.html
/README.md:
--------------------------------------------------------------------------------
1 | # Blood Pressure Estimation using Deep Learning
2 |
3 |
4 |
5 |
6 | ## Project Overview
7 | This repository contains a research project for estimating blood pressure using deep learning techniques and rPPG signal extraction.
8 |
9 | ## Important Notes
10 | - This is a personal research project, and results may vary depending on several factors.
11 | - The accuracy of blood pressure estimation depends on multiple parameters, including webcam quality, lighting conditions, distance from the camera, skin tone, and environmental conditions.
12 | - As this is a research project, the model is not guaranteed to reach 85-100% accuracy in blood pressure estimation.
13 | - The trained model file (model_5.h5) is not included in this public repository. For research collaboration or access to the trained model, please contact me directly.
14 | - The training notebook (MSc_DL_Model.ipynb) is not included, but the dataset is provided so you can train your own model (a minimal training sketch follows the installation steps below).
15 | - For professional consultation or implementation support, feel free to reach out via email.
16 | - The code will be refactored and improved in the future when time permits.
17 | - If you find this project helpful, please consider giving it a star! Your support is appreciated.
18 |
19 | ## Folder Description
20 | - MSc_DL_Model.ipynb : The notebook where I built and trained the DL models and performed data visualization and manipulation (not included in this repository)
21 | - app.py : The Flask application that connects the DL model to the website (see the example request sketch below)
22 | - model_5.h5 : The trained model, compiled with the Adam optimizer (not included in this repository)
23 | - heartbeat.js and index.js : The implementation for extracting the rPPG signal in the browser
24 | - index.html : The website
25 |
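The browser code (`heartbeat.js` / `index.js`) posts a standardized 180-sample rPPG window as a JSON string to the Flask endpoint `/enes`, which replies with a plain-text `systolic/diastolic` string. Below is a minimal client sketch of that exchange; it assumes `app.py` is already running on `http://127.0.0.1:5000` and uses the third-party `requests` package, which is not part of this repository.

```python
import json
import requests  # assumption: installed separately (pip install requests)

# Hypothetical 180-sample rPPG window; in the real app this comes from heartbeat.js,
# which standardizes the samples (zero mean, unit variance) before sending them.
signal = [0.0] * 180

# app.py reads request.form['data'] and json-decodes it, so the payload is a form field
response = requests.post(
    "http://127.0.0.1:5000/enes",
    data={"data": json.dumps(signal)},
)
print(response.text)  # e.g. "120/80" (systolic/diastolic), or an error message
```
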
26 | ## Installation & Execution Steps
27 | 1. Begin by downloading the dataset from the provided link: [Dataset Link](https://drive.google.com/file/d/19q0Q4wqKwfp9ZWzy_PsPxItMVGXXnTcG/view). Place the downloaded file in the same directory as `app.py`, so that both files live in the same folder.
28 |
29 | 2. Run the `app.py` file before launching the `index.html` file. In your terminal, execute the command: `python app.py`. Then, open the generated link by clicking on it. (It's important to mention that this webpage is not used for blood pressure estimation.)
30 |
31 | 3. If you're using Visual Studio Code, right-click on the `index.html` file and choose the "Open with Live Server" option. This will open the HTML file, allowing you to estimate your blood pressure by clicking the 'Let's predict your BP' button.
32 |
33 | 4. While the `app.py` is running, you can use the `index.html` file to measure your blood pressure.
34 | - Ensure that you have the browser's Inspect tool open (preferably in Chrome). In the console of the Inspect tool, you'll see logs indicating that the website is running and successfully tracking your face.
35 | - After approximately 10 seconds, once the webcam has detected your face and the detection has stayed stable, your predicted blood pressure will appear in a pop-up menu.
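
If you want to train your own model from the dataset, the architecture hard-coded in `app.py` can be reused directly. The following is a minimal training sketch, not the original notebook: it assumes the dataset file `rPPG-BP-UKL_rppg_7s.h5` sits next to the script and that you want to save the result as `model_5.h5`.

```python
import h5py
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split

# Load rPPG windows and systolic/diastolic labels from the dataset
f = h5py.File("rPPG-BP-UKL_rppg_7s.h5", "r")
df_rppg = pd.DataFrame(np.array(f["rppg"]))
df_label = pd.DataFrame(np.array(f["label"])).T.rename(columns={0: "Sys", 1: "Dia"})
df = pd.concat([df_rppg.T, df_label.reindex(df_rppg.T.index)], axis=1)

X = df.iloc[:, :180].values      # first 180 samples of each rPPG window
y = df[["Sys", "Dia"]].values    # systolic and diastolic targets
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Same fully connected architecture as defined in app.py
model = Sequential()
model.add(Dense(96, input_dim=180, activation="relu"))
for units in (84, 72, 60, 48, 36, 24, 12):
    model.add(Dense(units, activation="relu"))
model.add(Dense(2, activation="relu"))
model.compile(loss="mean_squared_error", optimizer="adam", metrics=["mae"])

# Epochs and batch size mirror the commented-out call in app.py
model.fit(x_train, y_train, epochs=500, batch_size=64, validation_data=(x_test, y_test))
model.save("model_5.h5")
```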
36 |
37 | ## Contact & Support
38 | If you require any assistance, guidance, or are interested in research collaboration, custom implementations, or consulting services related to this project, feel free to reach out via email at enesbasbugeng@gmail.com.
39 |
40 | Best regards,
41 | Enes
42 |
--------------------------------------------------------------------------------
/README.txt:
--------------------------------------------------------------------------------
1 |
2 | Folder description:
3 |
4 | -- MSc_DL_Model.ipynb : The notebook where I built and trained the DL models and performed data
5 | visualisation and manipulation
6 | -- app.py : The Flask application that connects the DL model to the website.
7 | -- model_5.h5 : The trained model, compiled with the Adam optimizer
8 | -- heartbeat.js and index.js : The implementation for extracting the rPPG signal
9 | -- index.html : The website
10 |
11 |
12 | In order to execute this programme, please follow the steps below:
13 |
14 | 1- Please download the dataset first. Then, put that file into the Code folder; it has to be in the same directory as app.py.
15 | I had to remove the dataset from the submission because of the upload limit (max 50 MB) on QM+; the dataset is a 55 MB file.
16 | LINK (https://drive.google.com/file/d/19MDJylakwdTKNS55zMpTtZX8PbYWNTkX/view?usp=sharing)
17 |
18 | 2- The app.py file has to be run before opening the index.html file. In a terminal, run: python app.py
19 | Then, open the given link by clicking on it. (This web page is not the one used to estimate BP.)
20 |
21 | 3- I used Visual Studio Code, where you can open .html files by right-clicking the file and choosing the Open with Live Server option. This opens the html page in the browser, and you can then estimate your BP by clicking the button that says 'Let's predict your BP'.
22 |
23 | 4- While app.py is running, you are free to use the index.html file to measure your BP.
24 | 4.1- While trying to measure your BP, I suggest making sure that you have also opened
25 | the browser's Inspect tool (mine was Chrome). In its console, you can see the
26 | logs saying the website is running and tracking your face properly.
27 | 4.2- After around 10 seconds, once the webcam has detected your face and it has
28 | stayed steady, you can see your predicted BP in the pop-up menu.
29 |
30 |
31 |
32 | Please let me know if you need any kind of help or guidance.
33 | Email: enesbasbugeng@gmail.com
34 |
35 | Regards,
36 |
37 | Samil Enes Basbug
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
1 | from flask import Flask, render_template
2 | import pandas as pd
3 | import numpy as np
4 | import h5py
5 | from flask import request
6 | import json
7 | # DL packages
8 | from keras.models import Sequential, load_model
9 | from keras.layers import Dense
10 | # from keras.layers.advanced_activations import LeakyReLU
11 | from sklearn.model_selection import train_test_split
12 | import tensorflow as tf
13 | import pickle
14 | from flask_cors import CORS
15 |
16 |
17 | # NOTE: the pretrained weights (model_5.h5) and the dataset (rPPG-BP-UKL_rppg_7s.h5)
18 | # are expected to be in the same directory as this script.
19 |
20 |
21 |
22 |
23 |
24 |
25 | app = Flask(__name__, template_folder='template')
26 | CORS(app)  # allow the Live Server page (served from a different origin) to call this API
27 |
28 | @app.route('/', methods=['GET'])
29 | def index():
30 | return render_template('index.html')
31 | @app.route('/enes', methods=['POST'])
32 | def enes():
33 | try:
34 | # getting the JSON request from the browser
35 | jsonn = request.form['data']
36 | signal_e = json.loads(jsonn)  # 180 standardized rPPG samples
37 | # print(json)
38 | # call render_pdf function with new value
39 | filename = "rPPG-BP-UKL_rppg_7s.h5"
40 |
41 | f = h5py.File(filename, "r")
42 | datasetNames = [n for n in f.keys()]
43 |
44 | # DATA
45 |
46 | df_rppg = pd.DataFrame(np.array(f['rppg']))
47 | df_id = pd.DataFrame(np.array(f['subject_idx']))
48 | df_label = pd.DataFrame(np.array(f['label']))
49 | df_label = df_label.T.rename(columns={0: "Sys", 1: "Dia"})
50 | df = pd.concat([df_rppg.T, df_label.reindex(df_rppg.T.index)], axis=1)
51 | X = df.iloc[:,:-2]
52 | y = df.iloc[:,875:] # BOTH Systolic and Diastolic
53 | y = y.values
54 | X_new = df.iloc[:,:180]
55 |
56 | x_train, x_test, y_train, y_test_3 = train_test_split(X_new , y, test_size=0.2, random_state=0)
57 |
58 | model7 = Sequential()
59 | model7.add(Dense(96, input_dim = 180, activation='relu'))
60 | model7.add(Dense(84, activation='relu'))
61 | model7.add(Dense(72, activation='relu'))
62 | model7.add(Dense(60, activation='relu'))
63 | model7.add(Dense(48, activation='relu'))
64 | model7.add(Dense(36, activation='relu'))
65 | model7.add(Dense(24, activation='relu'))
66 | model7.add(Dense(12, activation='relu'))
67 | model7.add(Dense(2, activation='relu'))
68 |
69 | model7.compile(loss='mean_squared_error', optimizer='adam', metrics=[tf.keras.metrics.MeanAbsoluteError()])
70 | # model7.fit(x_train, y_train, epochs=500, batch_size=64)
71 |
72 | model7 = load_model('model_5.h5')  # load the pretrained weights (replaces the freshly built, untrained model7 above)
73 |
74 |
75 | # d = [-0.19395743310451508,-0.13397778570652008,-0.06642476469278336,-0.00400609290227294,0.04940028488636017,0.1056060791015625,0.17126771807670593,0.25075361132621765,0.3304878771305084,0.385836124420166,0.3902944028377533,0.3280150294303894,0.19947871565818787,0.02291148714721203,-0.16587123274803162,-0.3201744556427002,-0.4043360650539398,-0.4104035198688507,-0.36546677350997925,-0.3044014275074005,-0.24522922933101654,-0.18511442840099335,-0.11113037168979645,-0.027769535779953003,0.06156091392040253,0.14375369250774384,0.2204512059688568,0.28527283668518066,0.3213392496109009,0.3095156252384186,0.23773613572120667,0.1294040083885193,0.00886403676122427,-0.1044655591249466,-0.2065752148628235,-0.2890903651714325,-0.33529770374298096,-0.3375154435634613,-0.3064318299293518,-0.26670417189598083,-0.224334254860878,-0.16030514240264893,-0.037550561130046844,0.13837218284606934,0.33715665340423584,0.505218505859375,0.6209288239479065,0.6616799831390381,0.6141523718833923,0.46817365288734436,0.24724945425987244,-0.000021440908312797546,-0.219168022274971,-0.375104695558548,-0.4593260884284973,-0.4823637902736664,-0.4545128047466278,-0.3858606219291687,-0.275950163602829,-0.149588480591774,-0.02973109669983387,0.0444100946187973,0.07027456909418106,0.050793107599020004,0.031174587085843086,0.04778067767620087,0.13081292808055878,0.23930446803569794,0.3136889934539795,0.29787367582321167,0.19707173109054565,0.04643579199910164,-0.10031749308109283,-0.20072254538536072,-0.23266632854938507,-0.20702007412910461,-0.1528751701116562,-0.09664773941040039,-0.04982328042387962,-0.018707968294620514,-0.0064563569612801075,-0.01043056882917881,-0.01623682491481304,-0.008313526399433613,0.0188070647418499,0.05872868001461029,0.09683024138212204,0.12068737298250198,0.12985946238040924,0.12695743143558502,0.11933128535747528,0.09914843738079071,0.06371089816093445,0.007436198648065329,-0.05992235988378525,-0.12099021673202515,-0.15762588381767273,-0.1657705008983612,-0.15945737063884735,-0.14917117357254028,-0.13249628245830536,-0.1018509566783905,-0.05918792635202408,-0.017232712358236313,0.020971041172742844,0.057372190058231354,0.10261250287294388,0.15965959429740906,0.23144583404064178,0.30801400542259216,0.3716323971748352,0.40726399421691895,0.38340258598327637,0.2870757579803467,0.10093791037797928,-0.10274682939052582,-0.2685244679450989,-0.34004607796669006,-0.3625195026397705,-0.37544527649879456,-0.4074031710624695,-0.4400412142276764,-0.44137129187583923,-0.39309513568878174,-0.28794974088668823,-0.14049796760082245,0.03278353065252304,0.20502431690692902,0.3552732467651367,0.4628975987434387,0.5185611844062805,0.5143548250198364,0.4504460096359253,0.32919228076934814,0.1631823182106018,-0.020226500928401947,-0.17662878334522247,-0.26940417289733887,-0.29052549600601196,-0.2698424458503723,-0.23697537183761597,-0.20644645392894745,-0.1698295921087265,-0.12296345084905624,-0.0725134015083313,-0.023371445015072823,0.022308936342597008,0.0754413902759552,0.12982065975666046,0.1778223067522049,0.20719653367996216,0.2210918813943863,0.22519740462303162,0.2093498855829239,0.16233161091804504,0.08167611062526703,-0.0065857539884746075,-0.08127100765705109,-0.1386258453130722,-0.18994520604610443,-0.2347329556941986,-0.2547788918018341,-0.22998972237110138,-0.16674168407917023,-0.09261948615312576,-0.039033591747283936,-0.011466680094599724,0.0019848591182380915,0.01615481823682785,0.03708353638648987,0.06880908459424973,0.11225734651088715,0.1596776843070984,0.18958507478237152,0.18372695147991
18,0.13547749817371368,0.05332048609852791,-0.04175598546862602,-0.121566042304039,-0.15176013112068176]
76 |
77 | pred_BP = model7.predict([signal_e])  # output shape (1, 2): [systolic, diastolic]
78 |
79 | # print(singal)
80 |
81 | BP = str(int(pred_BP[0][0])) + '/' + str(int(pred_BP[0][1]))  # 'systolic/diastolic'
82 | # '0' meaning no error
83 | # print(json.dumps(a))
84 | return BP
85 |
86 | except Exception as error:
87 | return str(error)
88 |
89 |
90 |
91 |
92 |
93 | if __name__ == '__main__':
94 | app.run(debug=True)
95 |
--------------------------------------------------------------------------------
/heartbeat.js:
--------------------------------------------------------------------------------
1 | const RESCAN_INTERVAL = 1000;
2 | const DEFAULT_FPS = 30;
3 | const LOW_BPM = 42;
4 | const HIGH_BPM = 240;
5 | const REL_MIN_FACE_SIZE = 0.4;
6 | const SEC_PER_MIN = 60;
7 | const MSEC_PER_SEC = 1000;
8 | const MAX_CORNERS = 10;
9 | const MIN_CORNERS = 5;
10 | const QUALITY_LEVEL = 0.01;
11 | const MIN_DISTANCE = 10;
12 | let isPostData = false;     // set once the signal has been posted to the backend
13 | let frequencyCounter = 0;   // counts rPPG updates before the signal is posted
14 | // Simple rPPG implementation in JavaScript
15 | // - Code could be improved given better documentation available for opencv.js
16 | export class Heartbeat {
17 | constructor(webcamId, canvasId, classifierPath, targetFps, windowSize, rppgInterval) {
18 | this.webcamId = webcamId;
19 | this.canvasId = canvasId;
20 | this.classifierPath = classifierPath;
21 | this.streaming = false;
22 | this.faceValid = false;
23 | this.targetFps = targetFps;
24 | this.windowSize = windowSize;
25 | this.rppgInterval = rppgInterval;
26 | }
27 | // Start the video stream
28 | async startStreaming() {
29 | try {
30 | this.stream = await navigator.mediaDevices.getUserMedia({
31 | video: {
32 | facingMode: 'user',
33 | width: { exact: this.webcamVideoElement.width },
34 | height: { exact: this.webcamVideoElement.height }
35 | },
36 | audio: false
37 | });
38 | } catch (e) {
39 | console.log(e);
40 | }
41 | if (!this.stream) {
42 | throw new Error('Could not obtain video from webcam.');
43 | }
44 | // Set srcObject to the obtained stream
45 | this.webcamVideoElement.srcObject = this.stream;
46 | // Start the webcam video stream
47 | this.webcamVideoElement.play();
48 | this.streaming = true;
49 | return new Promise(resolve => {
50 | // Add event listener to make sure the webcam has been fully initialized.
51 | this.webcamVideoElement.oncanplay = () => {
52 | resolve();
53 | };
54 | });
55 | }
56 | // Create file from url
57 | async createFileFromUrl(path, url) {
58 | let request = new XMLHttpRequest();
59 | request.open('GET', url, true);
60 | request.responseType = 'arraybuffer';
61 | request.send();
62 | return new Promise(resolve => {
63 | request.onload = () => {
64 | if (request.readyState === 4) {
65 | if (request.status === 200) {
66 | let data = new Uint8Array(request.response);
67 | cv.FS_createDataFile('/', path, data, true, false, false);
68 | resolve();
69 | } else {
70 | console.log('Failed to load ' + url + ' status: ' + request.status);
71 | }
72 | }
73 | };
74 | });
75 | }
76 | // Initialise the demo
77 | async init() {
78 | this.webcamVideoElement = document.getElementById(this.webcamId);
79 | try {
80 | await this.startStreaming();
81 | this.webcamVideoElement.width = this.webcamVideoElement.videoWidth;
82 | this.webcamVideoElement.height = this.webcamVideoElement.videoHeight;
83 | this.frameRGB = new cv.Mat(this.webcamVideoElement.height, this.webcamVideoElement.width, cv.CV_8UC4);
84 | this.lastFrameGray = new cv.Mat(this.webcamVideoElement.height, this.webcamVideoElement.width, cv.CV_8UC1);
85 | this.frameGray = new cv.Mat(this.webcamVideoElement.height, this.webcamVideoElement.width, cv.CV_8UC1);
86 | this.overlayMask = new cv.Mat(this.webcamVideoElement.height, this.webcamVideoElement.width, cv.CV_8UC1);
87 | this.cap = new cv.VideoCapture(this.webcamVideoElement);
88 | // Set variables
89 | this.signal = []; // 120 x 3 raw rgb values
90 | this.timestamps = []; // 120 x 1 timestamps
91 | this.rescan = []; // 120 x 1 rescan bool
92 | this.face = new cv.Rect(); // Position of the face
93 | // Load face detector
94 | this.classifier = new cv.CascadeClassifier();
95 | let faceCascadeFile = "haarcascade_frontalface_alt.xml";
96 | if (!this.classifier.load(faceCascadeFile)) {
97 | await this.createFileFromUrl(faceCascadeFile, this.classifierPath);
98 | this.classifier.load(faceCascadeFile)
99 | }
100 | this.scanTimer = setInterval(this.processFrame.bind(this),
101 | MSEC_PER_SEC / this.targetFps);
102 | this.rppgTimer = setInterval(this.rppg.bind(this), this.rppgInterval);
103 | } catch (e) {
104 | console.log(e);
105 | }
106 | }
107 | // Add one frame to raw signal
108 | processFrame() {
109 | try {
110 | if (!this.frameGray.empty()) {
111 | this.frameGray.copyTo(this.lastFrameGray); // Save last frame
112 | }
113 | this.cap.read(this.frameRGB); // Save current frame
114 | let time = Date.now()
115 | let rescanFlag = false;
116 | cv.cvtColor(this.frameRGB, this.frameGray, cv.COLOR_RGBA2GRAY);
117 | // Need to find the face
118 | if (!this.faceValid) {
119 | this.lastScanTime = time;
120 | this.detectFace(this.frameGray);
121 | }
122 | // Scheduled face rescan
123 | else if (time - this.lastScanTime >= RESCAN_INTERVAL) {
124 | this.lastScanTime = time
125 | this.detectFace(this.frameGray);
126 | rescanFlag = true;
127 | }
128 | // Track face
129 | else {
130 | // Disable for now,
131 | //this.trackFace(this.lastFrameGray, this.frameGray);
132 | }
133 | // Update the signal
134 | if (this.faceValid) {
135 | // Shift signal buffer
136 | while (this.signal.length > this.targetFps * this.windowSize) {
137 | this.signal.shift();
138 | this.timestamps.shift();
139 | this.rescan.shift();
140 | }
141 | // Get ROI mask (upper part of the detected face)
142 | let mask = this.makeMask(this.frameGray, this.face);
143 |
144 | // New values
145 | let means = cv.mean(this.frameRGB, mask);
146 | mask.delete();
147 | // Add new values to raw signal buffer
148 | this.signal.push(means.slice(0, 3));
149 | this.timestamps.push(time);
150 | this.rescan.push(rescanFlag);
151 | }
152 | // Draw face
153 | cv.rectangle(this.frameRGB, new cv.Point(this.face.x, this.face.y),
154 | new cv.Point(this.face.x + this.face.width, this.face.y + this.face.height),
155 | [0, 255, 0, 255]);
156 | // Apply overlayMask
157 | this.frameRGB.setTo([255, 0, 0, 255], this.overlayMask);
158 | cv.imshow(this.canvasId, this.frameRGB);
159 | } catch (e) {
160 | console.log("Error capturing frame:");
161 | console.log(e);
162 | }
163 | }
164 | // Run face classifier
165 | detectFace(gray) {
166 | let faces = new cv.RectVector();
167 | this.classifier.detectMultiScale(gray, faces, 1.1, 3, 0);
168 | if (faces.size() > 0) {
169 | this.face = faces.get(0);
170 | this.faceValid = true;
171 | } else {
172 | console.log("No faces");
173 | this.invalidateFace();
174 | }
175 | faces.delete();
176 | }
177 | // Make ROI mask from face
178 | makeMask(frameGray, face) {
179 | let result = cv.Mat.zeros(frameGray.rows, frameGray.cols, cv.CV_8UC1);
180 | let white = new cv.Scalar(255, 255, 255, 255);
181 | let pt1 = new cv.Point(Math.round(face.x + 0.3 * face.width),
182 | Math.round(face.y + 0.1 * face.height));
183 | let pt2 = new cv.Point(Math.round(face.x + 0.7 * face.width),
184 | Math.round(face.y + 0.25 * face.height));
185 | cv.rectangle(result, pt1, pt2, white, -1);
186 | return result;
187 | }
188 | // Invalidate the face
189 | invalidateFace() {
190 | this.signal = [];
191 | this.timestamps = [];
192 | this.rescan = [];
193 | this.overlayMask.setTo([0, 0, 0, 0]);
194 | this.face = new cv.Rect();
195 | this.faceValid = false;
196 | this.corners = [];
197 | }
198 | // Track the face
199 | trackFace(lastFrameGray, frameGray) {
200 | // If not available, detect some good corners to track within face
201 | let trackingMask = cv.Mat.zeros(frameGray.rows, frameGray.cols, cv.CV_8UC1);
202 | let squarePointData = new Uint8Array([
203 | this.face.x + 0.22 * this.face.width, this.face.y + 0.21 * this.face.height,
204 | this.face.x + 0.78 * this.face.width, this.face.y + 0.21 * this.face.height,
205 | this.face.x + 0.70 * this.face.width, this.face.y + 0.65 * this.face.height,
206 | this.face.x + 0.30 * this.face.width, this.face.y + 0.65 * this.face.height]);
207 | let squarePoints = cv.matFromArray(4, 1, cv.CV_32SC2, squarePointData);
208 | let pts = new cv.MatVector();
209 | let corners = new cv.Mat();
210 | pts.push_back(squarePoints);
211 | cv.fillPoly(trackingMask, pts, [255, 255, 255, 255]);
212 | cv.goodFeaturesToTrack(lastFrameGray, corners, MAX_CORNERS,
213 | QUALITY_LEVEL, MIN_DISTANCE, trackingMask, 3);
214 | trackingMask.delete(); squarePoints.delete(); pts.delete();
215 |
216 | // Calculate optical flow
217 | let corners_1 = new cv.Mat();
218 | let st = new cv.Mat();
219 | let err = new cv.Mat();
220 | let winSize = new cv.Size(15, 15);
221 | let maxLevel = 2;
222 | let criteria = new cv.TermCriteria(
223 | cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 0.03);
224 | cv.calcOpticalFlowPyrLK(lastFrameGray, frameGray, corners, corners_1,
225 | st, err, winSize, maxLevel, criteria);
226 |
227 | // Backtrack once
228 | let corners_0 = new cv.Mat();
229 | cv.calcOpticalFlowPyrLK(frameGray, lastFrameGray, corners_1, corners_0,
230 | st, err, winSize, maxLevel, criteria);
231 | // TODO exclude unmatched corners
232 |
233 | // Clean up
234 | st.delete(); err.delete();
235 |
236 | if (corners_1.rows >= MIN_CORNERS) {
237 | // Estimate affine transform
238 | const [s, tx, ty] = this.estimateAffineTransform(corners_0, corners_1);
239 | // Apply affine transform
240 | this.face = new cv.Rect(
241 | this.face.x * s + tx, this.face.y * s + ty,
242 | this.face.width * s, this.face.height * s);
243 | } else {
244 | this.invalidateFace();
245 | }
246 |
247 | corners.delete(); corners_1.delete(); corners_0.delete();
248 | }
249 | // For some reason this is not available in opencv.js, so implemented it
250 | estimateAffineTransform(corners_0, corners_1) {
251 | // Construct X and Y matrix
252 | let t_x = cv.matFromArray(corners_0.rows * 2, 1, cv.CV_32FC1,
253 | Array.from(corners_0.data32F));
254 | let y = cv.matFromArray(corners_1.rows * 2, 1, cv.CV_32FC1,
255 | Array.from(corners_1.data32F));
256 | let x = new cv.Mat(corners_0.rows * 2, 3, cv.CV_32FC1);
257 | let t_10 = new cv.Mat(); let t_01 = new cv.Mat();
258 | cv.repeat(cv.matFromArray(2, 1, cv.CV_32FC1, [1, 0]), corners_0.rows, 1, t_10);
259 | cv.repeat(cv.matFromArray(2, 1, cv.CV_32FC1, [0, 1]), corners_0.rows, 1, t_01);
260 | t_x.copyTo(x.col(0));
261 | t_10.copyTo(x.col(1));
262 | t_01.copyTo(x.col(2));
263 |
264 | // Solve
265 | let res = cv.Mat.zeros(3, 1, cv.CV_32FC1);
266 | cv.solve(x, y, res, cv.DECOMP_SVD);
267 |
268 | // Clean up
269 | t_01.delete(); t_10.delete(); x.delete(); t_x.delete(); y.delete();
270 |
271 | return [res.data32F[0], res.data32F[1], res.data32F[2]];
272 | }
273 | // Compute rppg signal and estimate HR
274 | rppg() {
275 | // Update fps
276 | let fps = this.getFps(this.timestamps);
277 | // If valid signal is large enough: estimate
278 | if (this.signal.length >= this.targetFps * this.windowSize) {
279 |
280 | console.log("Running____")
281 | // Work with cv.Mat from here
282 | let signal = cv.matFromArray(this.signal.length, 1, cv.CV_32FC3,
283 | [].concat.apply([], this.signal));
284 | // Filtering
285 |
286 |
287 |
288 | this.denoise(signal, this.rescan);
289 |
290 | this.standardize(signal);
291 |
292 | this.detrend(signal, fps);
293 |
294 | this.movingAverage(signal, 3, Math.max(Math.floor(fps / 6), 2));
295 |
296 | // HR estimation
297 |
298 |
299 |
300 | signal = this.selectGreen(signal);
301 | // console.log(signal) // rPPG rPPG rPPG rPPG rPPG rPPG rPPG rPPG rPPG
302 | // Draw time domain signal
303 |
304 |
305 |
306 | this.overlayMask.setTo([0, 0, 0, 0]);
307 |
308 | // this.drawTime(signal); //
309 | this.timeToFrequency(signal, true);
310 | // Calculate band spectrum limits
311 | let low = Math.floor(signal.rows * LOW_BPM / SEC_PER_MIN / fps);
312 | let high = Math.ceil(signal.rows * HIGH_BPM / SEC_PER_MIN / fps);
313 | if (!signal.empty()) {
314 | // Mask for infeasible frequencies
315 | let bandMask = cv.matFromArray(signal.rows, 1, cv.CV_8U,
316 | new Array(signal.rows).fill(0).fill(1, low, high + 1));
317 | // this.drawFrequency(signal, low, high, bandMask); //
318 | // Identify feasible frequency with maximum magnitude
319 | let result = cv.minMaxLoc(signal, bandMask);
320 | bandMask.delete();
321 | // Infer BPM
322 | let bpm = result.maxLoc.y * fps / signal.rows * SEC_PER_MIN;
323 | // console.log(bpm);
324 | // Draw BPM
325 | // this.drawBPM(bpm);
326 | }
327 | this.ENES(signal);
328 | signal.delete();
329 | } else {
330 | console.log("signal too small");
331 | }
332 | }
333 |
334 | // Standardize the first 180 samples of the signal and POST them to the Flask backend (/enes) for BP prediction
335 | ENES(signal){
336 | var ekle = []; // buffer for the first 180 values of the signal
337 | for (var i = 1; i < signal.rows; i++) {
338 |
339 | if (i < 181) {
340 |
341 | ekle.push(signal.data32F[i]);
342 |
343 | }
344 |
345 | }
346 |
347 | var avg = ekle.reduce((a, b) => a + b) / ekle.length;
348 | var std = Math.sqrt(ekle.map(x => Math.pow(x - avg, 2)).reduce((a, b) => a + b) / ekle.length);
349 | var newList = ekle.map(function (num) {
350 | return (num - avg) / std;
351 | })
352 | //Math.sqrt(list.map(x => Math.pow(x - avg, 2)).reduce((a, b) => a + b) / list.length)
353 | // console.log("BANA BAK" + newList + "\n");
354 |
355 | if(frequencyCounter==20){
356 | $.ajax({
357 | url: "http://127.0.0.1:5000/enes",
358 | type:"POST",
359 | //dataType:"json",
360 | // contentType: 'application/json;charset=UTF-8',
361 | data : {data:JSON.stringify(newList)},
362 | success: function (response) {
363 | // alert(response)
364 | $("#dataOne").html(response);
365 |
366 | // $("#dataOne").html(response.data2);
367 | }
368 | });
369 | console.log("SIGNAL=" + newList)
370 | console.log("\n")
371 | clearInterval(this.rppgTimer);
372 |
373 | }
374 | frequencyCounter++;
375 |
376 | }
377 |
378 | // Calculate fps from timestamps
379 | getFps(timestamps, timeBase = 1000) {
380 | if (Array.isArray(timestamps) && timestamps.length) {
381 | if (timestamps.length == 1) {
382 | return DEFAULT_FPS;
383 | } else {
384 | let diff = timestamps[timestamps.length - 1] - timestamps[0];
385 | return timestamps.length / diff * timeBase;
386 | }
387 | } else {
388 | return DEFAULT_FPS;
389 | }
390 | }
391 | // Remove noise from face rescanning
392 | denoise(signal, rescan) {
393 | let diff = new cv.Mat();
394 | cv.subtract(signal.rowRange(1, signal.rows), signal.rowRange(0, signal.rows - 1), diff);
395 | for (var i = 1; i < signal.rows; i++) {
396 | if (rescan[i] == true) {
397 | let adjV = new cv.MatVector();
398 | let adjR = cv.matFromArray(signal.rows, 1, cv.CV_32FC1,
399 | new Array(signal.rows).fill(0).fill(diff.data32F[(i - 1) * 3], i, signal.rows));
400 | let adjG = cv.matFromArray(signal.rows, 1, cv.CV_32FC1,
401 | new Array(signal.rows).fill(0).fill(diff.data32F[(i - 1) * 3 + 1], i, signal.rows));
402 | let adjB = cv.matFromArray(signal.rows, 1, cv.CV_32FC1,
403 | new Array(signal.rows).fill(0).fill(diff.data32F[(i - 1) * 3 + 2], i, signal.rows));
404 | adjV.push_back(adjR); adjV.push_back(adjG); adjV.push_back(adjB);
405 | let adj = new cv.Mat();
406 | cv.merge(adjV, adj);
407 | cv.subtract(signal, adj, signal);
408 | adjV.delete(); adjR.delete(); adjG.delete(); adjB.delete();
409 | adj.delete();
410 | }
411 | }
412 | diff.delete();
413 | }
414 | // Standardize signal
415 | standardize(signal) {
416 | let mean = new cv.Mat();
417 | let stdDev = new cv.Mat();
418 | let t1 = new cv.Mat();
419 | cv.meanStdDev(signal, mean, stdDev, t1);
420 | let means_c3 = cv.matFromArray(1, 1, cv.CV_32FC3, [mean.data64F[0], mean.data64F[1], mean.data64F[2]]);
421 | let stdDev_c3 = cv.matFromArray(1, 1, cv.CV_32FC3, [stdDev.data64F[0], stdDev.data64F[1], stdDev.data64F[2]]);
422 | let means = new cv.Mat(signal.rows, 1, cv.CV_32FC3);
423 | let stdDevs = new cv.Mat(signal.rows, 1, cv.CV_32FC3);
424 | cv.repeat(means_c3, signal.rows, 1, means);
425 | cv.repeat(stdDev_c3, signal.rows, 1, stdDevs);
426 | cv.subtract(signal, means, signal, t1, -1);
427 | cv.divide(signal, stdDevs, signal, 1, -1);
428 | mean.delete(); stdDev.delete(); t1.delete();
429 | means_c3.delete(); stdDev_c3.delete();
430 | means.delete(); stdDevs.delete();
431 | }
432 | // Remove trend in signal (smoothness-priors detrending: signal := (I - (I + lambda^2 * D^T * D)^-1) * signal, with D the second-difference operator)
433 | detrend(signal, lambda) {
434 | let h = cv.Mat.zeros(signal.rows - 2, signal.rows, cv.CV_32FC1);
435 | let i = cv.Mat.eye(signal.rows, signal.rows, cv.CV_32FC1);
436 | let t1 = cv.Mat.ones(signal.rows - 2, 1, cv.CV_32FC1)
437 | let t2 = cv.matFromArray(signal.rows - 2, 1, cv.CV_32FC1,
438 | new Array(signal.rows - 2).fill(-2));
439 | let t3 = new cv.Mat();
440 | t1.copyTo(h.diag(0)); t2.copyTo(h.diag(1)); t1.copyTo(h.diag(2));
441 | cv.gemm(h, h, lambda * lambda, t3, 0, h, cv.GEMM_1_T);
442 | cv.add(i, h, h, t3, -1);
443 | cv.invert(h, h, cv.DECOMP_LU);
444 | cv.subtract(i, h, h, t3, -1);
445 | let s = new cv.MatVector();
446 | cv.split(signal, s);
447 | cv.gemm(h, s.get(0), 1, t3, 0, s.get(0), 0);
448 | cv.gemm(h, s.get(1), 1, t3, 0, s.get(1), 0);
449 | cv.gemm(h, s.get(2), 1, t3, 0, s.get(2), 0);
450 | cv.merge(s, signal);
451 | h.delete(); i.delete();
452 | t1.delete(); t2.delete(); t3.delete();
453 | s.delete();
454 | }
455 | // Moving average on signal
456 | movingAverage(signal, n, kernelSize) {
457 | for (var i = 0; i < n; i++) {
458 | cv.blur(signal, signal, { height: kernelSize, width: 1 });
459 | }
460 | }
461 |
462 |
463 |
464 | // TODO solve this more elegantly
465 | selectGreen(signal) {
466 | let rgb = new cv.MatVector();
467 | cv.split(signal, rgb);
468 | // TODO possible memory leak, delete rgb?
469 |
470 | let result = rgb.get(1); //
471 |
472 |
473 | rgb.delete();
474 | return result;
475 | }
476 | // Convert from time to frequency domain
477 | timeToFrequency(signal, magnitude) {
478 | // Prepare planes
479 | let planes = new cv.MatVector();
480 | planes.push_back(signal);
481 | planes.push_back(new cv.Mat.zeros(signal.rows, 1, cv.CV_32F))
482 | let powerSpectrum = new cv.Mat();
483 | cv.merge(planes, signal);
484 | // Fourier transform
485 | cv.dft(signal, signal, cv.DFT_COMPLEX_OUTPUT);
486 | if (magnitude) {
487 | cv.split(signal, planes);
488 | cv.magnitude(planes.get(0), planes.get(1), signal);
489 | }
490 | }
491 |
492 | // Draw time domain signal to overlayMask
493 | drawTime(signal) {
494 | // Display size
495 | let displayHeight = this.face.height / 2.0;
496 | let displayWidth = this.face.width * 0.8;
497 | // Signal
498 | let result = cv.minMaxLoc(signal);
499 | let heightMult = displayHeight / (result.maxVal - result.minVal);
500 | let widthMult = displayWidth / (signal.rows - 1);
501 | let drawAreaTlX = this.face.x + this.face.width + 10;
502 | let drawAreaTlY = this.face.y
503 | let start = new cv.Point(drawAreaTlX,
504 | drawAreaTlY + (result.maxVal - signal.data32F[0]) * heightMult);
505 | var list = [];
506 | var bune = [];
507 |
508 | // console.log(signal.rows)
509 |
510 | for (var i = 1; i < signal.rows; i++) {
511 | let end = new cv.Point(drawAreaTlX + i * widthMult,
512 | drawAreaTlY + (result.maxVal - signal.data32F[i]) * heightMult);
513 |
514 |
515 | cv.line(this.overlayMask, start, end, [255, 255, 255, 255], 2, cv.LINE_4, 0);
516 | if (i < 181) {
517 |
518 | bune.push(signal.data32F[i]);
519 |
520 | list.push(end.y);
521 | }
522 |
523 | start = end;
524 | }
525 |
526 | var avg = bune.reduce((a, b) => a + b) / bune.length;
527 | var std = Math.sqrt(bune.map(x => Math.pow(x - avg, 2)).reduce((a, b) => a + b) / bune.length);
528 | var newList = bune.map(function (num) {
529 | return (num - avg) / std;
530 | })
531 | //Math.sqrt(list.map(x => Math.pow(x - avg, 2)).reduce((a, b) => a + b) / list.length)
532 | console.log("BUNE" + bune + "\n");
533 | console.log("NEWLIST" + newList)
534 | // console.log(bune);
535 | console.log('-------');
536 | console.log('\n');
537 | // frequencyCounter increases every 250 ms, so it passes 50 after about 12.5 s
538 | if(isPostData==false&&frequencyCounter>50){
539 | // send the data
540 |
541 | // $.ajax({
542 | // url: "url",
543 | // success: function(response){ response type json -> returned response response=10 response{
544 | // data1:10,
545 | //data2:20
546 | // response[0]
547 | // response[1]//if its an array
548 | //}console.log(response)
549 | // $("#dataOne").html(response.data2);
550 | // }
551 | // });
552 | isPostData=true;
553 | }
554 | frequencyCounter++;
555 | // console.log(list);
556 | }
557 | // Draw frequency domain signal to overlayMask
558 | drawFrequency(signal, low, high, bandMask) {
559 | // Display size
560 | let displayHeight = this.face.height / 2.0;
561 | let displayWidth = this.face.width * 0.8;
562 | // Signal
563 | let result = cv.minMaxLoc(signal, bandMask);
564 | let heightMult = displayHeight / (result.maxVal - result.minVal);
565 | let widthMult = displayWidth / (high - low);
566 | let drawAreaTlX = this.face.x + this.face.width + 10;
567 | let drawAreaTlY = this.face.y + this.face.height / 2.0;
568 | let start = new cv.Point(drawAreaTlX,
569 | drawAreaTlY + (result.maxVal - signal.data32F[low]) * heightMult);
570 | for (var i = low + 1; i <= high; i++) {
571 | let end = new cv.Point(drawAreaTlX + (i - low) * widthMult,
572 | drawAreaTlY + (result.maxVal - signal.data32F[i]) * heightMult);
573 | cv.line(this.overlayMask, start, end, [255, 0, 0, 255], 2, cv.LINE_4, 0);
574 | start = end;
575 | }
576 | }
577 | // Draw tracking corners
578 | drawCorners(corners) {
579 | for (var i = 0; i < corners.rows; i++) {
580 | cv.circle(this.frameRGB, new cv.Point(
581 | corners.data32F[i * 2], corners.data32F[i * 2 + 1]),
582 | 5, [0, 255, 0, 255], -1);
583 | // circle(frameRGB, corners[i], r, WHITE, -1, 8, 0);
584 | // line(frameRGB, Point(corners[i].x-5,corners[i].y), Point(corners[i].x+5,corners[i].y), GREEN, 1);
585 | // line(frameRGB, Point(corners[i].x,corners[i].y-5), Point(corners[i].x,corners[i].y+5), GREEN, 1);
586 | }
587 | }
588 | // Draw bpm string to overlayMask
589 | drawBPM(bpm) {
590 | cv.putText(this.overlayMask, bpm.toFixed(0).toString(),
591 | new cv.Point(this.face.x, this.face.y - 10),
592 | cv.FONT_HERSHEY_PLAIN, 1.5, [255, 0, 0, 255], 2);
593 | }
594 | // Clean up resources
595 | stop() {
596 | clearInterval(this.rppgTimer);
597 | clearInterval(this.scanTimer);
598 | if (this.webcamVideoElement) {
599 | this.webcamVideoElement.pause();
600 | this.webcamVideoElement.srcObject = null;
601 | }
602 | if (this.stream) {
603 | this.stream.getVideoTracks()[0].stop();
604 | }
605 | this.invalidateFace();
606 | this.streaming = false;
607 | this.frameRGB.delete();
608 | this.lastFrameGray.delete();
609 | this.frameGray.delete();
610 | this.overlayMask.delete();
611 | }
612 | }
--------------------------------------------------------------------------------
/images/1st.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/enesbasbug/Blood_Pressure_Estimation_with_Webcam_using_Deep_Learning/80a4477b0a87411eee63afb8dc0a5363d95839ab/images/1st.png
--------------------------------------------------------------------------------
/images/2nd.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/enesbasbug/Blood_Pressure_Estimation_with_Webcam_using_Deep_Learning/80a4477b0a87411eee63afb8dc0a5363d95839ab/images/2nd.png
--------------------------------------------------------------------------------
/img/doctor.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/img/doctor_1.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/img/quest.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/img/quest2.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
61 |
64 | If you have normal blood pressure, your blood pressure is less than 120/80.
65 | Stick with an active lifestyle and healthy diet to keep that going.
66 |
67 |
68 | Is your blood pressure above the normal range, in either or both systolic
69 | and diastolic levels? Your doctor will want to have more than one blood pressure reading before
70 | diagnosing hypertension.
71 |
116 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora delectus assumenda
117 | soluta facilis, atque ad!
118 |
119 | Read More
120 |
141 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora delectus assumenda
142 | soluta facilis, atque ad!
143 |
144 | Read More
145 |
161 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora delectus assumenda
162 | soluta facilis, atque ad!
163 |
164 | Read More
165 |
182 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Mollitia earum veniam possimus
183 | inventore ratione nulla.
184 |
185 |
186 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Debitis expedita nam a sit sapiente est
187 | facilis, voluptatem nobis. Eaque dolores sapiente cupiditate assumenda. Laboriosam, quod.
188 |
189 |
190 | Read More
191 |
192 |
193 |
208 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Mollitia earum veniam possimus
209 | inventore ratione nulla.
210 |
211 |
212 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Debitis expedita nam a sit sapiente est
213 | facilis, voluptatem nobis. Eaque dolores sapiente cupiditate assumenda. Laboriosam, quod.
214 |
215 |
216 | Read More
217 |
218 | Our doctors all have 7+ years working experience in the
338 | hospitals.
339 |
60 |
63 | If you have normal blood pressure, your blood pressure is less than 120/80.
64 | Stick with an active lifestyle and healthy diet to keep that going.
65 |
66 |
67 | Is your blood pressure above the normal range, in either or both systolic
68 | and diastolic levels? Your doctor will want to have more than one blood pressure reading before
69 | diagnosing hypertension.
70 |
115 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora delectus assumenda
116 | soluta facilis, atque ad!
117 |
118 | Read More
119 |
140 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora delectus assumenda
141 | soluta facilis, atque ad!
142 |
143 | Read More
144 |
160 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempora delectus assumenda
161 | soluta facilis, atque ad!
162 |
163 | Read More
164 |
181 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Mollitia earum veniam possimus
182 | inventore ratione nulla.
183 |
184 |
185 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Debitis expedita nam a sit sapiente est
186 | facilis, voluptatem nobis. Eaque dolores sapiente cupiditate assumenda. Laboriosam, quod.
187 |
188 |
189 | Read More
190 |
191 |
192 |
207 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Mollitia earum veniam possimus
208 | inventore ratione nulla.
209 |
210 |
211 | Lorem ipsum dolor sit amet consectetur adipisicing elit. Debitis expedita nam a sit sapiente est
212 | facilis, voluptatem nobis. Eaque dolores sapiente cupiditate assumenda. Laboriosam, quod.
213 |
214 |
215 | Read More
216 |
217 | Our doctors all have 7+ years working experience in the
337 | hospitals.
338 |