├── .gitattributes ├── README.md ├── app.py ├── attendance_taker.py ├── features_extraction_to_csv.py ├── get_faces_from_camera_tkinter.py ├── requirements.txt └── templates └── index.html /.gitattributes: -------------------------------------------------------------------------------- 1 | # Auto detect text files and perform LF normalization 2 | * text=auto 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Smart Attendance System using Raspberry Pi in Python 2 | 3 | This project runs on a Raspberry Pi 4 Model B. It uses machine learning to detect a person's face through a USB camera or Pi Camera and records whether that person is present or absent for the day. If the person is present, the local database logs the time at which he/she was detected. The database keeps a daily log of everyone the camera has seen, so you can later pick any date and look up a person's attendance. Face detection and recognition rely mainly on OpenCV and dlib. 4 | 5 | 6 | A folder called "data" needs to be copied into your local repository. Since GitHub only allows uploads of up to 25 MB, use the drive link below to download it. 7 | 8 | 9 | 10 | Note: 11 | 12 | MAKE SURE THAT THE ENTIRE "data" FOLDER ITSELF IS PRESENT IN YOUR LOCAL REPOSITORY. DON'T JUST DOWNLOAD THE FOLDERS INSIDE THE "data" FOLDER, i.e., DON'T DOWNLOAD "data_dlib" AND "data_faces_from_camera" SEPARATELY. EVEN IF YOU DO, MAKE SURE TO PUT THEM INSIDE A PARENT FOLDER CALLED "data", OR ELSE YOUR CODE WILL THROW ERRORS. 13 | https://drive.google.com/drive/folders/1L84hdJCYviUpZ2RNl6FunQgr8A9XLvnB?usp=sharing 14 | 15 | 16 | 17 | Please read the requirements.txt file before trying to install the libraries.
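Once `attendance_taker.py` has logged some records, the daily log can also be queried directly from Python, outside the Flask app. Below is a minimal sketch (the function name `attendance_for_date` is purely illustrative); it assumes the default `attendance.db` schema that `attendance_taker.py` creates (`name TEXT, time TEXT, date DATE`):

```python
import sqlite3

def attendance_for_date(db_path, date_str):
    """Return the (name, time) rows logged for a given 'YYYY-MM-DD' date."""
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.cursor()
        # Parameterized query, same as the one app.py runs behind the date-picker form
        cursor.execute("SELECT name, time FROM attendance WHERE date = ?", (date_str,))
        return cursor.fetchall()
    finally:
        conn.close()

# Example: attendance_for_date("attendance.db", "2024-03-01")
```

An empty list means nobody was marked present on that date, which is the case `app.py` reports as "no data".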
I have simplified the implementation process as much as possible so that you don't have to waste your time fixing errors. 18 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, render_template, request 2 | import sqlite3 3 | from datetime import datetime 4 | 5 | app = Flask(__name__) 6 | 7 | @app.route('/') 8 | def index(): 9 | return render_template('index.html', selected_date='', no_data=False) 10 | 11 | @app.route('/attendance', methods=['POST']) 12 | def attendance(): 13 | selected_date = request.form.get('selected_date') 14 | selected_date_obj = datetime.strptime(selected_date, '%Y-%m-%d')  # parsing validates the YYYY-MM-DD format 15 | formatted_date = selected_date_obj.strftime('%Y-%m-%d') 16 | 17 | conn = sqlite3.connect('attendance.db') 18 | cursor = conn.cursor() 19 | 20 | cursor.execute("SELECT name, time FROM attendance WHERE date = ?", (formatted_date,)) 21 | attendance_data = cursor.fetchall() 22 | 23 | conn.close() 24 | 25 | if not attendance_data: 26 | return render_template('index.html', selected_date=selected_date, no_data=True) 27 | 28 | return render_template('index.html', selected_date=selected_date, attendance_data=attendance_data) 29 | 30 | if __name__ == '__main__': 31 | app.run(debug=True) 32 | -------------------------------------------------------------------------------- /attendance_taker.py: -------------------------------------------------------------------------------- 1 | import dlib 2 | import numpy as np 3 | import cv2 4 | import os 5 | import pandas as pd 6 | import time 7 | import logging 8 | import sqlite3 9 | import datetime 10 | 11 | 12 | # Use the frontal face detector of Dlib 13 | detector = dlib.get_frontal_face_detector() 14 | 15 | # Get face landmarks 16 | predictor = dlib.shape_predictor('data/data_dlib/shape_predictor_68_face_landmarks.dat') 17 | 18 | # Use the Dlib ResNet model to get
128D face descriptor 19 | face_reco_model = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat") 20 | 21 | # Create a connection to the database 22 | conn = sqlite3.connect("attendance.db") 23 | cursor = conn.cursor() 24 | 25 | # Create a table for the current date 26 | current_date = datetime.datetime.now().strftime("%Y_%m_%d") # Replace hyphens with underscores 27 | table_name = "attendance" 28 | create_table_sql = f"CREATE TABLE IF NOT EXISTS {table_name} (name TEXT, time TEXT, date DATE, UNIQUE(name, date))" 29 | cursor.execute(create_table_sql) 30 | 31 | 32 | # Commit changes and close the connection 33 | conn.commit() 34 | conn.close() 35 | 36 | 37 | class Face_Recognizer: 38 | def __init__(self): 39 | self.font = cv2.FONT_ITALIC 40 | 41 | # FPS 42 | self.frame_time = 0 43 | self.frame_start_time = 0 44 | self.fps = 0 45 | self.fps_show = 0 46 | self.start_time = time.time() 47 | 48 | # cnt for frame 49 | self.frame_cnt = 0 50 | 51 | # Save the features of faces in the database 52 | self.face_features_known_list = [] 53 | # / Save the name of faces in the database 54 | self.face_name_known_list = [] 55 | 56 | # List to save centroid positions of ROI in frame N-1 and N 57 | self.last_frame_face_centroid_list = [] 58 | self.current_frame_face_centroid_list = [] 59 | 60 | # List to save names of objects in frame N-1 and N 61 | self.last_frame_face_name_list = [] 62 | self.current_frame_face_name_list = [] 63 | 64 | # cnt for faces in frame N-1 and N 65 | self.last_frame_face_cnt = 0 66 | self.current_frame_face_cnt = 0 67 | 68 | # Save the e-distance for faceX when recognizing 69 | self.current_frame_face_X_e_distance_list = [] 70 | 71 | # Save the positions and names of current faces captured 72 | self.current_frame_face_position_list = [] 73 | # Save the features of people in current frame 74 | self.current_frame_face_feature_list = [] 75 | 76 | # e distance between centroid of ROI in last and current frame 77 | 
self.last_current_frame_centroid_e_distance = 0 78 | 79 | # Reclassify after 'reclassify_interval' frames 80 | self.reclassify_interval_cnt = 0 81 | self.reclassify_interval = 10 82 | 83 | # Get known faces from "features_all.csv" 84 | def get_face_database(self): 85 | if os.path.exists("data/features_all.csv"): 86 | path_features_known_csv = "data/features_all.csv" 87 | csv_rd = pd.read_csv(path_features_known_csv, header=None) 88 | for i in range(csv_rd.shape[0]): 89 | features_someone_arr = [] 90 | self.face_name_known_list.append(csv_rd.iloc[i][0]) 91 | for j in range(1, 129): 92 | if csv_rd.iloc[i][j] == '': 93 | features_someone_arr.append('0') 94 | else: 95 | features_someone_arr.append(csv_rd.iloc[i][j]) 96 | self.face_features_known_list.append(features_someone_arr) 97 | logging.info("Faces in Database: %d", len(self.face_features_known_list)) 98 | return 1 99 | else: 100 | logging.warning("'features_all.csv' not found!") 101 | logging.warning("Please run 'get_faces_from_camera_tkinter.py' " 102 | "and 'features_extraction_to_csv.py' before 'attendance_taker.py'") 103 | return 0 104 | 105 | def update_fps(self): 106 | now = time.time() 107 | # Refresh fps once per second 108 | if str(self.start_time).split(".")[0] != str(now).split(".")[0]: 109 | self.fps_show = self.fps 110 | self.start_time = now 111 | self.frame_time = now - self.frame_start_time 112 | self.fps = 1.0 / self.frame_time 113 | self.frame_start_time = now 114 | 115 | # Compute the Euclidean distance between two 128D features 116 | @staticmethod 117 | def return_euclidean_distance(feature_1, feature_2): 118 | feature_1 = np.array(feature_1) 119 | feature_2 = np.array(feature_2) 120 | dist = np.sqrt(np.sum(np.square(feature_1 - feature_2))) 121 | return dist 122 | 123 | # Use a centroid tracker to link face_x in the current frame with person_x in the last frame 124 | def centroid_tracker(self): 125 | for i in range(len(self.current_frame_face_centroid_list)): 126 | 
e_distance_current_frame_person_x_list = [] 127 | # For object 1 in current_frame, compute e-distance with object 1/2/3/4/... in last frame 128 | for j in range(len(self.last_frame_face_centroid_list)): 129 | self.last_current_frame_centroid_e_distance = self.return_euclidean_distance( 130 | self.current_frame_face_centroid_list[i], self.last_frame_face_centroid_list[j]) 131 | 132 | e_distance_current_frame_person_x_list.append( 133 | self.last_current_frame_centroid_e_distance) 134 | 135 | last_frame_num = e_distance_current_frame_person_x_list.index( 136 | min(e_distance_current_frame_person_x_list)) 137 | self.current_frame_face_name_list[i] = self.last_frame_face_name_list[last_frame_num] 138 | 139 | # cv2 window / putText on cv2 window 140 | def draw_note(self, img_rd): 141 | # / Add some info on windows 142 | cv2.putText(img_rd, "Face Recognizer with Deep Learning", (20, 40), self.font, 1, (255, 255, 255), 1, cv2.LINE_AA) 143 | cv2.putText(img_rd, "Frame: " + str(self.frame_cnt), (20, 100), self.font, 0.8, (0, 255, 0), 1, 144 | cv2.LINE_AA) 145 | cv2.putText(img_rd, "FPS: " + str(self.fps.__round__(2)), (20, 130), self.font, 0.8, (0, 255, 0), 1, 146 | cv2.LINE_AA) 147 | cv2.putText(img_rd, "Faces: " + str(self.current_frame_face_cnt), (20, 160), self.font, 0.8, (0, 255, 0), 1, 148 | cv2.LINE_AA) 149 | cv2.putText(img_rd, "Q: Quit", (20, 450), self.font, 0.8, (255, 255, 255), 1, cv2.LINE_AA) 150 | 151 | for i in range(len(self.current_frame_face_name_list)): 152 | img_rd = cv2.putText(img_rd, "Face_" + str(i + 1), tuple( 153 | [int(self.current_frame_face_centroid_list[i][0]), int(self.current_frame_face_centroid_list[i][1])]), 154 | self.font, 155 | 0.8, (255, 190, 0), 156 | 1, 157 | cv2.LINE_AA) 158 | # insert data in database 159 | 160 | def attendance(self, name): 161 | current_date = datetime.datetime.now().strftime('%Y-%m-%d') 162 | conn = sqlite3.connect("attendance.db") 163 | cursor = conn.cursor() 164 | # Check if the name already has an entry for the 
current date 165 | cursor.execute("SELECT * FROM attendance WHERE name = ? AND date = ?", (name, current_date)) 166 | existing_entry = cursor.fetchone() 167 | 168 | if existing_entry: 169 | print(f"{name} is already marked as present for {current_date}") 170 | else: 171 | current_time = datetime.datetime.now().strftime('%H:%M:%S') 172 | cursor.execute("INSERT INTO attendance (name, time, date) VALUES (?, ?, ?)", (name, current_time, current_date)) 173 | conn.commit() 174 | print(f"{name} marked as present for {current_date} at {current_time}") 175 | 176 | conn.close() 177 | 178 | # Face detection and recognition with OT (object tracking) from the input video stream 179 | def process(self, stream): 180 | # 1. Get known faces from "features_all.csv" 181 | if self.get_face_database(): 182 | while stream.isOpened(): 183 | self.frame_cnt += 1 184 | logging.debug("Frame " + str(self.frame_cnt) + " starts") 185 | flag, img_rd = stream.read() 186 | kk = cv2.waitKey(1) 187 | 188 | # 2. Detect faces for frame X 189 | faces = detector(img_rd, 0) 190 | 191 | # 3. Update cnt for faces in frames 192 | self.last_frame_face_cnt = self.current_frame_face_cnt 193 | self.current_frame_face_cnt = len(faces) 194 | 195 | # 4. Update the face name list of the last frame 196 | self.last_frame_face_name_list = self.current_frame_face_name_list[:] 197 | 198 | # 5. 
update frame centroid list 199 | self.last_frame_face_centroid_list = self.current_frame_face_centroid_list 200 | self.current_frame_face_centroid_list = [] 201 | 202 | # 6.1 if cnt not changes 203 | if (self.current_frame_face_cnt == self.last_frame_face_cnt) and ( 204 | self.reclassify_interval_cnt != self.reclassify_interval): 205 | logging.debug("scene 1: No face cnt changes in this frame!!!") 206 | 207 | self.current_frame_face_position_list = [] 208 | 209 | if "unknown" in self.current_frame_face_name_list: 210 | self.reclassify_interval_cnt += 1 211 | 212 | if self.current_frame_face_cnt != 0: 213 | for k, d in enumerate(faces): 214 | self.current_frame_face_position_list.append(tuple( 215 | [faces[k].left(), int(faces[k].bottom() + (faces[k].bottom() - faces[k].top()) / 4)])) 216 | self.current_frame_face_centroid_list.append( 217 | [int(faces[k].left() + faces[k].right()) / 2, 218 | int(faces[k].top() + faces[k].bottom()) / 2]) 219 | 220 | img_rd = cv2.rectangle(img_rd, 221 | tuple([d.left(), d.top()]), 222 | tuple([d.right(), d.bottom()]), 223 | (255, 255, 255), 2) 224 | 225 | # Multi-faces in current frame, use centroid-tracker to track 226 | if self.current_frame_face_cnt != 1: 227 | self.centroid_tracker() 228 | 229 | for i in range(self.current_frame_face_cnt): 230 | # 6.2 Write names under ROI 231 | img_rd = cv2.putText(img_rd, self.current_frame_face_name_list[i], 232 | self.current_frame_face_position_list[i], self.font, 0.8, (0, 255, 255), 1, 233 | cv2.LINE_AA) 234 | self.draw_note(img_rd) 235 | 236 | # 6.2 If cnt of faces changes, 0->1 or 1->0 or ... 237 | else: 238 | logging.debug("scene 2: / Faces cnt changes in this frame") 239 | self.current_frame_face_position_list = [] 240 | self.current_frame_face_X_e_distance_list = [] 241 | self.current_frame_face_feature_list = [] 242 | self.reclassify_interval_cnt = 0 243 | 244 | # 6.2.1 Face cnt decreases: 1->0, 2->1, ... 
245 | if self.current_frame_face_cnt == 0: 246 | logging.debug(" / No faces in this frame!!!") 247 | # clear list of names and features 248 | self.current_frame_face_name_list = [] 249 | # 6.2.2 / Face cnt increase: 0->1, 0->2, ..., 1->2, ... 250 | else: 251 | logging.debug(" scene 2.2 Get faces in this frame and do face recognition") 252 | self.current_frame_face_name_list = [] 253 | for i in range(len(faces)): 254 | shape = predictor(img_rd, faces[i]) 255 | self.current_frame_face_feature_list.append( 256 | face_reco_model.compute_face_descriptor(img_rd, shape)) 257 | self.current_frame_face_name_list.append("unknown") 258 | 259 | # 6.2.2.1 Traversal all the faces in the database 260 | for k in range(len(faces)): 261 | logging.debug(" For face %d in current frame:", k + 1) 262 | self.current_frame_face_centroid_list.append( 263 | [int(faces[k].left() + faces[k].right()) / 2, 264 | int(faces[k].top() + faces[k].bottom()) / 2]) 265 | 266 | self.current_frame_face_X_e_distance_list = [] 267 | 268 | # 6.2.2.2 Positions of faces captured 269 | self.current_frame_face_position_list.append(tuple( 270 | [faces[k].left(), int(faces[k].bottom() + (faces[k].bottom() - faces[k].top()) / 4)])) 271 | 272 | # 6.2.2.3 273 | # For every faces detected, compare the faces in the database 274 | for i in range(len(self.face_features_known_list)): 275 | # 276 | if str(self.face_features_known_list[i][0]) != '0.0': 277 | e_distance_tmp = self.return_euclidean_distance( 278 | self.current_frame_face_feature_list[k], 279 | self.face_features_known_list[i]) 280 | logging.debug(" with person %d, the e-distance: %f", i + 1, e_distance_tmp) 281 | self.current_frame_face_X_e_distance_list.append(e_distance_tmp) 282 | else: 283 | # person_X 284 | self.current_frame_face_X_e_distance_list.append(999999999) 285 | 286 | # 6.2.2.4 / Find the one with minimum e distance 287 | similar_person_num = self.current_frame_face_X_e_distance_list.index( 288 | min(self.current_frame_face_X_e_distance_list)) 
289 | 290 | if min(self.current_frame_face_X_e_distance_list) < 0.4: 291 | self.current_frame_face_name_list[k] = self.face_name_known_list[similar_person_num] 292 | logging.debug(" Face recognition result: %s", 293 | self.face_name_known_list[similar_person_num]) 294 | 295 | # Insert attendance record 296 | name = self.face_name_known_list[similar_person_num] 297 | 298 | 299 | 300 | self.attendance(name) 301 | else: 302 | logging.debug(" Face recognition result: Unknown person") 303 | 304 | # 7. Add note on the cv2 window 305 | self.draw_note(img_rd) 306 | 307 | # 8. Press 'q' to exit 308 | if kk == ord('q'): 309 | break 310 | 311 | self.update_fps() 312 | cv2.namedWindow("camera", 1) 313 | cv2.imshow("camera", img_rd) 314 | 315 | logging.debug("Frame ends\n\n") 316 | 317 | 318 | 319 | 320 | def run(self): 321 | # cap = cv2.VideoCapture("video.mp4") # Get video stream from video file 322 | cap = cv2.VideoCapture(0) # Get video stream from camera 323 | self.process(cap) 324 | 325 | cap.release() 326 | cv2.destroyAllWindows() 327 | 328 | 329 | 330 | 331 | def main(): 332 | # logging.basicConfig(level=logging.DEBUG) # Set log level to 'logging.DEBUG' to print debug info of every frame 333 | logging.basicConfig(level=logging.INFO) 334 | Face_Recognizer_con = Face_Recognizer() 335 | Face_Recognizer_con.run() 336 | 337 | 338 | if __name__ == '__main__': 339 | main() 340 | -------------------------------------------------------------------------------- /features_extraction_to_csv.py: -------------------------------------------------------------------------------- 1 | # Extract features from images and save into "features_all.csv" 2 | 3 | import os 4 | import dlib 5 | import csv 6 | import numpy as np 7 | import logging 8 | import cv2 9 | 10 | # Path of cropped faces 11 | path_images_from_camera = "data/data_faces_from_camera/" 12 | 13 | # Use frontal face detector of Dlib 14 | detector = 
dlib.get_frontal_face_detector() 15 | 16 | # Get face landmarks 17 | predictor = dlib.shape_predictor('data/data_dlib/shape_predictor_68_face_landmarks.dat') 18 | 19 | # Use the Dlib ResNet model to get the 128D face descriptor 20 | face_reco_model = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat") 21 | 22 | 23 | # Return 128D features for a single image 24 | 25 | def return_128d_features(path_img): 26 | img_rd = cv2.imread(path_img) 27 | faces = detector(img_rd, 1) 28 | 29 | logging.info("%-40s %-20s", " Image with faces detected:", path_img) 30 | 31 | # For the face photos saved, we need to make sure that we can detect faces from the cropped images 32 | if len(faces) != 0: 33 | shape = predictor(img_rd, faces[0]) 34 | face_descriptor = face_reco_model.compute_face_descriptor(img_rd, shape) 35 | else: 36 | face_descriptor = 0 37 | logging.warning("no face") 38 | return face_descriptor 39 | 40 | 41 | # Return the mean value of the 128D face descriptors for person X 42 | 43 | def return_features_mean_personX(path_face_personX): 44 | features_list_personX = [] 45 | photos_list = os.listdir(path_face_personX) 46 | if photos_list: 47 | for i in range(len(photos_list)): 48 | # Get 128D features for a single image of personX 49 | logging.info("%-40s %-20s", " Reading image:", path_face_personX + "/" + photos_list[i]) 50 | features_128d = return_128d_features(path_face_personX + "/" + photos_list[i]) 51 | # Skip the image if no face was detected in it 52 | if features_128d != 0: 53 | features_list_personX.append(features_128d) 54 | 55 | 56 | else: 57 | logging.warning(" Warning: No images in %s/", path_face_personX) 58 | 59 | 60 | if features_list_personX: 61 | features_mean_personX = np.array(features_list_personX, dtype=object).mean(axis=0) 62 | else: 63 | features_mean_personX = np.zeros(128, dtype=object, order='C') 64 | return features_mean_personX 65 | 66 | 67 | def main(): 68 | 
logging.basicConfig(level=logging.INFO) 69 | # Get the order of latest person 70 | person_list = os.listdir("data/data_faces_from_camera/") 71 | person_list.sort() 72 | 73 | with open("data/features_all.csv", "w", newline="") as csvfile: 74 | writer = csv.writer(csvfile) 75 | for person in person_list: 76 | # Get the mean/average features of face/personX, it will be a list with a length of 128D 77 | logging.info("%sperson_%s", path_images_from_camera, person) 78 | features_mean_personX = return_features_mean_personX(path_images_from_camera + person) 79 | 80 | if len(person.split('_', 2)) == 2: 81 | # "person_x" 82 | person_name = person 83 | else: 84 | # "person_x_tom" 85 | person_name = person.split('_', 2)[-1] 86 | features_mean_personX = np.insert(features_mean_personX, 0, person_name, axis=0) 87 | # features_mean_personX will be 129D, person name + 128 features 88 | writer.writerow(features_mean_personX) 89 | logging.info('\n') 90 | logging.info("Save all the features of faces registered into: data/features_all.csv") 91 | 92 | 93 | if __name__ == '__main__': 94 | main() -------------------------------------------------------------------------------- /get_faces_from_camera_tkinter.py: -------------------------------------------------------------------------------- 1 | import dlib 2 | import numpy as np 3 | import cv2 4 | import os 5 | import shutil 6 | import time 7 | import logging 8 | import tkinter as tk 9 | from tkinter import font as tkFont 10 | from PIL import Image, ImageTk 11 | 12 | # Use frontal face detector of Dlib 13 | detector = dlib.get_frontal_face_detector() 14 | 15 | 16 | class Face_Register: 17 | def __init__(self): 18 | 19 | self.current_frame_faces_cnt = 0 # cnt for counting faces in current frame 20 | self.existing_faces_cnt = 0 # cnt for counting saved faces 21 | self.ss_cnt = 0 # cnt for screen shots 22 | 23 | # Tkinter GUI 24 | self.win = tk.Tk() 25 | self.win.title("Face Register") 26 | 27 | # PLease modify window size here if needed 28 
| self.win.geometry("1000x500") 29 | 30 | # GUI left part 31 | self.frame_left_camera = tk.Frame(self.win) 32 | self.label = tk.Label(self.win) 33 | self.label.pack(side=tk.LEFT) 34 | self.frame_left_camera.pack() 35 | 36 | # GUI right part 37 | self.frame_right_info = tk.Frame(self.win) 38 | self.label_cnt_face_in_database = tk.Label(self.frame_right_info, text=str(self.existing_faces_cnt)) 39 | self.label_fps_info = tk.Label(self.frame_right_info, text="") 40 | self.input_name = tk.Entry(self.frame_right_info) 41 | self.input_name_char = "" 42 | self.label_warning = tk.Label(self.frame_right_info) 43 | self.label_face_cnt = tk.Label(self.frame_right_info, text="Faces in current frame: ") 44 | self.log_all = tk.Label(self.frame_right_info) 45 | 46 | self.font_title = tkFont.Font(family='Helvetica', size=20, weight='bold') 47 | self.font_step_title = tkFont.Font(family='Helvetica', size=15, weight='bold') 48 | self.font_warning = tkFont.Font(family='Helvetica', size=15, weight='bold') 49 | 50 | self.path_photos_from_camera = "data/data_faces_from_camera/" 51 | self.current_face_dir = "" 52 | self.font = cv2.FONT_ITALIC 53 | 54 | # Current frame and face ROI position 55 | self.current_frame = np.ndarray 56 | self.face_ROI_image = np.ndarray 57 | self.face_ROI_width_start = 0 58 | self.face_ROI_height_start = 0 59 | self.face_ROI_width = 0 60 | self.face_ROI_height = 0 61 | self.ww = 0 62 | self.hh = 0 63 | 64 | self.out_of_range_flag = False 65 | self.face_folder_created_flag = False 66 | 67 | # FPS 68 | self.frame_time = 0 69 | self.frame_start_time = 0 70 | self.fps = 0 71 | self.fps_show = 0 72 | self.start_time = time.time() 73 | 74 | self.cap = cv2.VideoCapture(0) # Get video stream from camera 75 | 76 | # self.cap = cv2.VideoCapture("test.mp4") # Input local video 77 | 78 | # Delete old face folders 79 | def GUI_clear_data(self): 80 | # "/data_faces_from_camera/person_x/"... 
81 | folders_rd = os.listdir(self.path_photos_from_camera) 82 | for i in range(len(folders_rd)): 83 | shutil.rmtree(self.path_photos_from_camera + folders_rd[i]) 84 | if os.path.isfile("data/features_all.csv"): 85 | os.remove("data/features_all.csv") 86 | self.label_cnt_face_in_database['text'] = "0" 87 | self.existing_faces_cnt = 0 88 | self.log_all["text"] = "Face images and `features_all.csv` removed!" 89 | 90 | def GUI_get_input_name(self): 91 | self.input_name_char = self.input_name.get() 92 | self.create_face_folder() 93 | self.label_cnt_face_in_database['text'] = str(self.existing_faces_cnt) 94 | 95 | def GUI_info(self): 96 | tk.Label(self.frame_right_info, 97 | text="Face register", 98 | font=self.font_title).grid(row=0, column=0, columnspan=3, sticky=tk.W, padx=2, pady=20) 99 | 100 | tk.Label(self.frame_right_info, text="FPS: ").grid(row=1, column=0, sticky=tk.W, padx=5, pady=2) 101 | self.label_fps_info.grid(row=1, column=1, sticky=tk.W, padx=5, pady=2) 102 | 103 | tk.Label(self.frame_right_info, text="Faces in database: ").grid(row=2, column=0, sticky=tk.W, padx=5, pady=2) 104 | self.label_cnt_face_in_database.grid(row=2, column=1, sticky=tk.W, padx=5, pady=2) 105 | 106 | tk.Label(self.frame_right_info, 107 | text="Faces in current frame: ").grid(row=3, column=0, columnspan=2, sticky=tk.W, padx=5, pady=2) 108 | self.label_face_cnt.grid(row=3, column=2, columnspan=3, sticky=tk.W, padx=5, pady=2) 109 | 110 | self.label_warning.grid(row=4, column=0, columnspan=3, sticky=tk.W, padx=5, pady=2) 111 | 112 | # Step 1: Clear old data 113 | tk.Label(self.frame_right_info, 114 | font=self.font_step_title, 115 | text="Step 1: Clear face photos").grid(row=5, column=0, columnspan=2, sticky=tk.W, padx=5, pady=20) 116 | tk.Button(self.frame_right_info, 117 | text='Clear', 118 | command=self.GUI_clear_data).grid(row=6, column=0, columnspan=3, sticky=tk.W, padx=5, pady=2) 119 | 120 | # Step 2: Input name and create folders for face 121 | tk.Label(self.frame_right_info, 
122 | font=self.font_step_title, 123 | text="Step 2: Input name").grid(row=7, column=0, columnspan=2, sticky=tk.W, padx=5, pady=20) 124 | 125 | tk.Label(self.frame_right_info, text="Name: ").grid(row=8, column=0, sticky=tk.W, padx=5, pady=0) 126 | self.input_name.grid(row=8, column=1, sticky=tk.W, padx=0, pady=2) 127 | 128 | tk.Button(self.frame_right_info, 129 | text='Input', 130 | command=self.GUI_get_input_name).grid(row=8, column=2, padx=5) 131 | 132 | # Step 3: Save current face in frame 133 | tk.Label(self.frame_right_info, 134 | font=self.font_step_title, 135 | text="Step 3: Save face image").grid(row=9, column=0, columnspan=2, sticky=tk.W, padx=5, pady=20) 136 | 137 | tk.Button(self.frame_right_info, 138 | text='Save current face', 139 | command=self.save_current_face).grid(row=10, column=0, columnspan=3, sticky=tk.W) 140 | 141 | # Show log in GUI 142 | self.log_all.grid(row=11, column=0, columnspan=20, sticky=tk.W, padx=5, pady=20) 143 | 144 | self.frame_right_info.pack() 145 | 146 | # Mkdir for saving photos and csv 147 | def pre_work_mkdir(self): 148 | # Create folders to save face images and csv 149 | if os.path.isdir(self.path_photos_from_camera): 150 | pass 151 | else: 152 | os.mkdir(self.path_photos_from_camera) 153 | 154 | # Start from person_x+1 155 | def check_existing_faces_cnt(self): 156 | if os.listdir("data/data_faces_from_camera/"): 157 | # Get the order of latest person 158 | person_list = os.listdir("data/data_faces_from_camera/") 159 | person_num_list = [] 160 | for person in person_list: 161 | person_order = person.split('_')[1].split('_')[0] 162 | person_num_list.append(int(person_order)) 163 | self.existing_faces_cnt = max(person_num_list) 164 | 165 | # Start from person_1 166 | else: 167 | self.existing_faces_cnt = 0 168 | 169 | # Update FPS of Video stream 170 | def update_fps(self): 171 | now = time.time() 172 | # Refresh fps per second 173 | if str(self.start_time).split(".")[0] != str(now).split(".")[0]: 174 | self.fps_show = 
self.fps 175 | self.start_time = now 176 | self.frame_time = now - self.frame_start_time 177 | self.fps = 1.0 / self.frame_time 178 | self.frame_start_time = now 179 | 180 | self.label_fps_info["text"] = str(self.fps.__round__(2)) 181 | 182 | def create_face_folder(self): 183 | # Create the folders for saving faces 184 | self.existing_faces_cnt += 1 185 | if self.input_name_char: 186 | self.current_face_dir = self.path_photos_from_camera + \ 187 | "person_" + str(self.existing_faces_cnt) + "_" + \ 188 | self.input_name_char 189 | else: 190 | self.current_face_dir = self.path_photos_from_camera + \ 191 | "person_" + str(self.existing_faces_cnt) 192 | os.makedirs(self.current_face_dir) 193 | self.log_all["text"] = "\"" + self.current_face_dir + "/\" created!" 194 | logging.info("\n%-40s %s", "Create folders:", self.current_face_dir) 195 | 196 | self.ss_cnt = 0 # Clear the cnt of screen shots 197 | self.face_folder_created_flag = True # Face folder already created 198 | 199 | def save_current_face(self): 200 | if self.face_folder_created_flag: 201 | if self.current_frame_faces_cnt == 1: 202 | if not self.out_of_range_flag: 203 | self.ss_cnt += 1 204 | # Create blank image according to the size of face detected 205 | self.face_ROI_image = np.zeros((int(self.face_ROI_height * 2), self.face_ROI_width * 2, 3), 206 | np.uint8) 207 | for ii in range(self.face_ROI_height * 2): 208 | for jj in range(self.face_ROI_width * 2): 209 | self.face_ROI_image[ii][jj] = self.current_frame[self.face_ROI_height_start - self.hh + ii][ 210 | self.face_ROI_width_start - self.ww + jj] 211 | self.log_all["text"] = "\"" + self.current_face_dir + "/img_face_" + str( 212 | self.ss_cnt) + ".jpg\"" + " saved!" 
213 | self.face_ROI_image = cv2.cvtColor(self.face_ROI_image, cv2.COLOR_BGR2RGB) 214 | 215 | cv2.imwrite(self.current_face_dir + "/img_face_" + str(self.ss_cnt) + ".jpg", self.face_ROI_image) 216 | logging.info("%-40s %s/img_face_%s.jpg", "Save into:", 217 | str(self.current_face_dir), str(self.ss_cnt) + ".jpg") 218 | else: 219 | self.log_all["text"] = "Please do not out of range!" 220 | else: 221 | self.log_all["text"] = "No face in current frame!" 222 | else: 223 | self.log_all["text"] = "Please run step 2!" 224 | 225 | def get_frame(self): 226 | try: 227 | if self.cap.isOpened(): 228 | ret, frame = self.cap.read() 229 | frame = cv2.resize(frame, (640,480)) 230 | return ret, cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) 231 | except: 232 | print("Error: No video input!!!") 233 | 234 | # Main process of face detection and saving 235 | def process(self): 236 | ret, self.current_frame = self.get_frame() 237 | faces = detector(self.current_frame, 0) 238 | # Get frame 239 | if ret: 240 | self.update_fps() 241 | self.label_face_cnt["text"] = str(len(faces)) 242 | # Face detected 243 | if len(faces) != 0: 244 | # Show the ROI of faces 245 | for k, d in enumerate(faces): 246 | self.face_ROI_width_start = d.left() 247 | self.face_ROI_height_start = d.top() 248 | # Compute the size of rectangle box 249 | self.face_ROI_height = (d.bottom() - d.top()) 250 | self.face_ROI_width = (d.right() - d.left()) 251 | self.hh = int(self.face_ROI_height / 2) 252 | self.ww = int(self.face_ROI_width / 2) 253 | 254 | # If the size of ROI > 480x640 255 | if (d.right() + self.ww) > 640 or (d.bottom() + self.hh > 480) or (d.left() - self.ww < 0) or ( 256 | d.top() - self.hh < 0): 257 | self.label_warning["text"] = "OUT OF RANGE" 258 | self.label_warning['fg'] = 'red' 259 | self.out_of_range_flag = True 260 | color_rectangle = (255, 0, 0) 261 | else: 262 | self.out_of_range_flag = False 263 | self.label_warning["text"] = "" 264 | color_rectangle = (255, 255, 255) 265 | self.current_frame = 
cv2.rectangle(self.current_frame, 266 | tuple([d.left() - self.ww, d.top() - self.hh]), 267 | tuple([d.right() + self.ww, d.bottom() + self.hh]), 268 | color_rectangle, 2) 269 | self.current_frame_faces_cnt = len(faces) 270 | 271 | # Convert the frame (numpy array) to an ImageTk.PhotoImage for display 272 | img_Image = Image.fromarray(self.current_frame) 273 | img_PhotoImage = ImageTk.PhotoImage(image=img_Image) 274 | self.label.img_tk = img_PhotoImage  # keep a reference so Tkinter doesn't garbage-collect the image 275 | self.label.configure(image=img_PhotoImage) 276 | 277 | # Refresh frame 278 | self.win.after(20, self.process) 279 | 280 | def run(self): 281 | self.pre_work_mkdir() 282 | self.check_existing_faces_cnt() 283 | self.GUI_info() 284 | self.process() 285 | self.win.mainloop() 286 | 287 | 288 | def main(): 289 | logging.basicConfig(level=logging.INFO) 290 | Face_Register_con = Face_Register() 291 | Face_Register_con.run() 292 | 293 | 294 | if __name__ == '__main__': 295 | main() 296 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | These are the updated 2024 methods of installing the required libraries on your Raspberry Pi. Note that this project can also be implemented easily in PyCharm without much hassle. 2 | >> The only issue with Raspberry Pi is that, in most cases, the standard commands build the libraries from source, which either takes a lot of time (about 2 hours) or gets stuck at a certain point. For example, even now, when you use 3 | pip install opencv-python, it tends to get stuck at "Building wheel for opencv-python (PEP 517)" and throws up errors after a few hours of patient waiting. 
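>> Whichever install route you take, it helps to confirm afterwards that every library actually imports before running the project. The sketch below is a hedged example (the module list is an assumption based on the imports used across this repo; adjust it to your setup):

```python
import importlib

def check_imports(modules):
    """Try to import each module; return a dict of name -> True/False."""
    results = {}
    for mod in modules:
        try:
            importlib.import_module(mod)
            results[mod] = True
        except ImportError:
            results[mod] = False
    return results

if __name__ == "__main__":
    # Modules this repo imports: dlib, cv2 (OpenCV), numpy, pandas, flask
    for mod, ok in check_imports(["dlib", "cv2", "numpy", "pandas", "flask"]).items():
        print(mod, "OK" if ok else "MISSING -- revisit the matching install step below")
```

If any module prints MISSING, go back to the corresponding numbered section and reinstall it.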
4 | 
5 | >> So, without wasting any more time, here are the shortcuts that might save you precious time and effort while installing the libraries:
6 | 
7 | 1) dlib:
8 |    i) If you are using PyCharm, install dlib-binary from the built-in package installer. The package installer can be found here:
9 |       - File > New Projects Setup > Settings for New Projects
10 |      - Select Python Interpreter
11 |      - Use the dropdown and select your interpreter (example: Python 3.11 (Face-Recognition-System-main))
12 |      - Use the + icon below it and search for dlib-binary (Note: selecting the regular dlib package might throw errors and fail to install properly; trying to figure it out is just a waste of time, and dlib-binary does the job)
13 |      - Select dlib-binary and install the package.
14 |    Follow the same steps to install all the other libraries if you are using PyCharm on Windows / Mac.
15 | 
16 |    ii) For Raspberry Pi users:
17 |      - Open a terminal (located on the top navigation bar)
18 |      - Type the following commands one after the other:
19 |        sudo apt update
20 |        sudo apt install build-essential cmake libopenblas-dev liblapack-dev libjpeg-dev libx11-dev libgtk-3-dev
21 |        git clone https://github.com/davisking/dlib.git
22 |        cd dlib
23 |        sudo python3 setup.py install   (after this step has completed, close the terminal and open it again)
24 | 
25 | 
26 | 2) numpy:
27 |    i) There is no issue with numpy in PyCharm. Install the regular numpy package (any version) using the package installer.
28 | 
29 |    ii) For Raspberry Pi users: sudo apt-get install python3-numpy
30 | 
31 | 
32 | 3) pandas:
33 |    i) There is no issue with pandas in PyCharm either. Install the regular pandas package (any version) using the package installer.
34 | 
35 |    ii) For Raspberry Pi users:
36 |       Pandas is usually installed by default. 
To confirm, open a Python shell (run python3 in the terminal) and type:
37 |       import pandas as pd
38 |       print(pd.__version__)
39 |       If you get an error (module not found, invalid version, etc.), install pandas with: sudo pip3 install pandas
40 | 
41 | 4) scikit-image:
42 |    i) There is no issue with scikit-image in PyCharm either. Install the regular scikit-image package (any version) using the package installer.
43 | 
44 |    ii) For Raspberry Pi users:
45 |       scikit-image is also cumbersome to install from scratch (source), so use the following alternative that gets the job done: sudo apt-get install python3-skimage
46 | 
47 | 
48 | 5) flask:
49 |    i) There is no issue with flask in PyCharm. Install the regular flask package (any version) using the package installer.
50 | 
51 |    ii) For Raspberry Pi users: sudo pip3 install flask
52 | 
53 | 
54 | 6) opencv:
55 |    i) opencv generally doesn't throw errors in PyCharm, so just install the regular package using the package installer.
56 | 
57 |    ii) For Raspberry Pi users:
58 |       git clone https://github.com/opencv/opencv.git
59 |       cd opencv
60 |       mkdir build
61 |       cd build
62 |       cmake -DCMAKE_INSTALL_PREFIX=/usr/local ..
63 |       sudo make install   (close and reopen the terminal after this step)
64 | 
65 | So, that's it. Now you have successfully installed all the required libraries for your Python program. Feel free to contact me on Telegram: https://t.me/Ggproboi or email me at kv10291029@gmail.com for any queries.
66 | 
67 | 
68 | 
69 | 
70 | 
71 | 
72 | 
73 | 
74 | 
75 | 
76 | 
77 | 
78 | 
79 | 
80 | 
81 | 
82 | 
-------------------------------------------------------------------------------- /templates/index.html: -------------------------------------------------------------------------------- 
1 | 
2 | 
3 | 
4 | 
5 | 
<!-- lines 6-79: the template's head, styles, and form markup were stripped during extraction and are not recoverable -->
80 |                 <th>Name</th>
81 |                 <th>Time</th>
82 |             </tr>
<!-- lines 83-86: row markup stripped during extraction -->
87 |                 <td>{{ name }}</td>
88 |                 <td>{{ time }}</td>
89 |             </tr>
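The guide in requirements.txt walks through installing dlib, numpy, pandas, scikit-image, flask, and opencv one by one. As a quick sanity check after following those steps, a small standalone script along these lines can confirm that each library actually imports. This is a hedged sketch, not part of the repository; the module list is assumed from the guide, and `check_libraries` is a hypothetical helper name.

```python
# Hypothetical post-install check (not part of this repository).
# Probes each library the requirements.txt guide installs and reports its version.
import importlib

# Import names assumed from the guide: scikit-image imports as "skimage",
# opencv imports as "cv2".
REQUIRED = ["dlib", "numpy", "pandas", "skimage", "flask", "cv2"]

def check_libraries(names=REQUIRED):
    """Map each module name to its version string, 'unknown' if the module
    defines no __version__ attribute, or None if it cannot be imported."""
    report = {}
    for name in names:
        try:
            module = importlib.import_module(name)
            report[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            report[name] = None
    return report

if __name__ == "__main__":
    for name, version in check_libraries().items():
        print(f"{name:>8}: {version if version is not None else 'MISSING'}")
```

Running it with python3 after the install steps should print a version for every entry; a MISSING line tells you which install step to revisit before launching attendance_taker.py or app.py.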