├── target.npy
├── Research Publications
│   └── IJSRET_V8_issue3_354_Durgesh_Rao.pdf
├── Detecting_Masks.py
├── Data-Preprocessing.py
├── Training_Model.py
└── README.md

/target.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DURGESH716/Face-Mask-Detection-CNN/HEAD/target.npy
--------------------------------------------------------------------------------
/Research Publications/IJSRET_V8_issue3_354_Durgesh_Rao.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DURGESH716/Face-Mask-Detection-CNN/HEAD/Research Publications/IJSRET_V8_issue3_354_Durgesh_Rao.pdf
--------------------------------------------------------------------------------
/Detecting_Masks.py:
--------------------------------------------------------------------------------
# Detecting masks in real time with the camera

from keras.models import load_model
import cv2
import numpy as np

# load the trained CNN model saved by Training_Model.py
model = load_model('model-017.model')

# Haar cascade classifier for face detection
face_clsfr = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# open the default camera
source = cv2.VideoCapture(0)

labels_dict = {0: 'MASK', 1: 'NO MASK'}
color_dict = {0: (0, 255, 0), 1: (0, 0, 255)}  # green for mask, red for no mask

while True:

    ret, img = source.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_clsfr.detectMultiScale(gray, 1.3, 5)

    for x, y, w, h in faces:

        # crop the face region (rows are indexed by height, columns by width)
        face_img = gray[y:y + h, x:x + w]
        resized = cv2.resize(face_img, (100, 100))
        normalized = resized / 255.0
        reshaped = np.reshape(normalized, (1, 100, 100, 1))
        result = model.predict(reshaped)

        label = np.argmax(result, axis=1)[0]

        cv2.rectangle(img, (x, y), (x + w, y + h), color_dict[label], 2)
        cv2.rectangle(img, (x, y - 40), (x + w, y), color_dict[label], -1)
        cv2.putText(img, labels_dict[label], (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)

    cv2.imshow('LIVE', img)
    key = cv2.waitKey(1)

    # press Esc to quit
    if key == 27:
        break

source.release()
cv2.destroyAllWindows()
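# ------------------------------------------------------------------
# Optional sketch (not part of the original script): run the same
# pipeline on a single saved photo instead of the webcam feed.
# 'photo.jpg' is a hypothetical file name used purely for
# illustration.
# ------------------------------------------------------------------
# img = cv2.imread('photo.jpg')
# gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# for x, y, w, h in face_clsfr.detectMultiScale(gray, 1.3, 5):
#     face = cv2.resize(gray[y:y + h, x:x + w], (100, 100)) / 255.0
#     result = model.predict(np.reshape(face, (1, 100, 100, 1)))
#     print(labels_dict[np.argmax(result, axis=1)[0]])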
--------------------------------------------------------------------------------
/Data-Preprocessing.py:
--------------------------------------------------------------------------------
# Data pre-processing: prepares the image data before analysis and training

import os

import cv2
import numpy as np
from keras.utils import to_categorical  # np_utils has been removed from recent Keras releases

path = 'C:/users/Lenovo/Desktop/dataset'
categories = os.listdir(path)

labels = [i for i in range(len(categories))]

# map the 'withmask' and 'withoutmask' folder names to integer labels
label_dict = dict(zip(categories, labels))

print(categories)
print(labels)
print(label_dict)

size = 100
data = []
target = []

for j in categories:
    folder = os.path.join(path, j)
    img_names = os.listdir(folder)

    for k in img_names:
        img_path = os.path.join(folder, k)
        img = cv2.imread(img_path)

        try:
            # convert the BGR image to grayscale and resize it to 100 x 100 px
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            resized = cv2.resize(gray, (size, size))
            data.append(resized)
            target.append(label_dict[j])

        except Exception as e:
            # if any exception is raised, print it and pass to the next image
            print('Exception:', e)

# normalize pixel values and add a single grayscale channel dimension
data = np.array(data) / 255.0
data = np.reshape(data, (data.shape[0], size, size, 1))
target = np.array(target)

# one-hot encode the labels for the two categories
new_target = to_categorical(target)

# save the 'data' and 'target' arrays for the training script
np.save('data', data)
np.save('target', new_target)
--------------------------------------------------------------------------------
/Training_Model.py:
--------------------------------------------------------------------------------
# Training the CNN model

import numpy as np

# load the numpy arrays saved by Data-Preprocessing.py
data = np.load('data.npy')
target = np.load('target.npy')

from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten, Dropout
from keras.layers import Conv2D, MaxPooling2D
from keras.callbacks import ModelCheckpoint

model = Sequential()

# the first convolution layer followed by ReLU and max-pooling layers
model.add(Conv2D(200, (3, 3), input_shape=data.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# the second convolution layer followed by ReLU and max-pooling layers
model.add(Conv2D(100, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# flatten the feature maps from the second convolution layer and apply dropout
model.add(Flatten())
model.add(Dropout(0.5))

# dense layer of 50 neurons
model.add(Dense(50, activation='relu'))

# the final layer with two outputs for the two categories
model.add(Dense(2, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# using sklearn to split the data into training and testing sets
from sklearn.model_selection import train_test_split
train_data, test_data, train_target, test_target = train_test_split(data, target, test_size=0.1)

# checkpoint the model whenever the validation loss improves
checkpoint = ModelCheckpoint('model-{epoch:03d}.model', monitor='val_loss',
                             verbose=0, save_best_only=True, mode='auto')
history = model.fit(train_data, train_target, epochs=20,
                    callbacks=[checkpoint], validation_split=0.2)

import matplotlib.pyplot as plt

plt.plot(history.history['loss'], 'r', label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('# epochs')
plt.ylabel('loss')
plt.legend()
plt.show()

plt.plot(history.history['accuracy'], 'r', label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('# epochs')
plt.ylabel('accuracy')
plt.legend()
plt.show()

print(model.evaluate(test_data, test_target))
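# ------------------------------------------------------------------
# Optional sketch (not part of the original script): per-class
# precision and recall on the held-out test set via sklearn's
# classification_report. The class names assume label 0 = mask and
# label 1 = no mask, matching the label order in Detecting_Masks.py.
# ------------------------------------------------------------------
# from sklearn.metrics import classification_report
# pred = np.argmax(model.predict(test_data), axis=1)
# true = np.argmax(test_target, axis=1)
# print(classification_report(true, pred, target_names=['MASK', 'NO MASK']))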

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Face Mask Detection with CNN (Convolutional Neural Networks)

### Problem Statement and Business Case:-
Scientists predict that many new deadly diseases will emerge in the next ten years. As with Coronavirus disease (COVID-19), the best response to such diseases is precaution: use sanitizer and wear a face mask. Wearing a face mask helps prevent the spread of infection and protects the wearer from contracting airborne infectious germs. When someone coughs, talks or sneezes, they can release germs into the air that may infect others.

The main problem is that many people do not take seriously how important masks are. To deal with this, a machine learning model has been built that detects within seconds whether people are wearing masks, even in highly crowded public places. Since cameras are already installed in most organizations, the model can flag anyone not wearing a mask with a red signal and let people who wear masks pass with a green one.

### Pre-Requisites / Technologies Used:-
- Python programming (intermediate); basic knowledge of Deep Learning, CNNs and Machine Learning
- Libraries: NumPy, Pandas, Matplotlib, Seaborn, Scikit-learn, OpenCV and TensorFlow

### Step_1: Data Pre-Processing:-
- Creating a dataset of 500 images each, with and without mask.
- Data cleaning and transformation of every image to a uniform size of 100x100 px.
- Converting BGR [Blue Green Red] images to grayscale [Black & White].
- Visualizing the data using graphs like histograms, count-plots, etc.
- Saving the processed 'data' and 'target' arrays for training (see the sanity-check snippet at the end of this README).

### Step_2: Training the Model:-
- Building and training the model as a CNN (Convolutional Neural Network).
- Used Conv2D, MaxPooling and other sequential layers, with a dense layer of 50 neurons.
- Then performing hyper-parameter tuning on the model.

### Step_3: Testing and Measuring the Performance of the Model:-
- Splitting the dataset into training (90%) and testing (10%) sets, with 20% of the training data held out for validation during fitting.
- Training the CNN model on the training set.
- Achieved an accuracy of 0.94 (94%) on the testing dataset.


#### **Show 💗 by ⭐ My Repository**
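### Sanity-Checking the Preprocessed Arrays:-
A minimal sketch, not part of the original scripts, for verifying the arrays produced in Step_1. It assumes they were saved as `data.npy` and `target.npy`, as done at the end of Data-Preprocessing.py.

```python
import numpy as np

data = np.load('data.npy')
target = np.load('target.npy')

print(data.shape)    # expected: (num_images, 100, 100, 1)
print(target.shape)  # expected: (num_images, 2) after one-hot encoding
print(data.min(), data.max())  # pixel values should lie in [0.0, 1.0]
```

If the shapes or value ranges differ, re-run Data-Preprocessing.py before training.

--------------------------------------------------------------------------------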