├── LICENSE.txt
├── README.md
├── cascades
│   └── haarcascade_frontalface_default.xml
├── data
│   └── READ THIS.txt
├── demo.gif
├── images
│   ├── keypoints_test_results.png
│   ├── sunglasses.png
│   ├── sunglasses_2.png
│   ├── sunglasses_3.jpg
│   ├── sunglasses_4.png
│   ├── sunglasses_5.jpg
│   └── sunglasses_6.png
├── model_builder.py
├── my_CNN_model.py
├── my_model.h5
├── shades.py
└── utils.py
/LICENSE.txt:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2018 Akshay Chandra Lagandula
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ### [UPDATE]: I will not be responding to issues or emails related to this repo anymore as I am currently occupied with other research commitments. Also, the libraries used are pretty old and outdated. Thank you.
2 |
3 | # Selfie Filters Using Facial Landmarks [](https://github.com/akshaychandra21/Selfie_Filters_OpenCV/blob/master/LICENSE.txt)
4 |
5 | This deep learning application in Python overlays various sunglasses on a detected face (I am calling them 'Selfie Filters') by locating 15 unique facial keypoints. These keypoints mark important areas of the face - the eyes, the corners of the mouth, the nose, etc.
6 |
7 | ## Working Example
8 |
9 | <img src="demo.gif">
10 | ## Data Description
11 | OpenCV is often used in practice with other machine learning and deep learning libraries to produce interesting results. Employing a **Convolutional Neural Network (CNN)** in [Keras](https://keras.io/) along with OpenCV, I built a couple of selfie filters (very boring ones).
12 |
13 | Facial keypoints can be used in a variety of machine learning applications, from face and emotion recognition to commercial applications like the image filters popularized by Snapchat.
14 |
15 | <img src="images/keypoints_test_results.png">
16 |
17 | Facial keypoints (also called facial landmarks) are the small blue-green dots shown on each of the faces in the image above - there are 15 keypoints marked in each image.
18 |
19 | I used [this dataset from Kaggle](https://www.kaggle.com/c/facial-keypoints-detection/data) to train a CNN model that predicts the 15 facial keypoints for a given face, and then used those keypoints to place the desired filter on the face (as shown below).
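
For reference, here is a minimal sketch of how the trained model is used at prediction time (mirroring what `shades.py` does; the dummy face crop is only there to keep the snippet self-contained):

```python
import cv2
import numpy as np
from my_CNN_model import load_my_CNN_model

model = load_my_CNN_model('my_model')  # loads my_model.h5, built by model_builder.py

# Stand-in for a grayscale face crop from the Haar cascade detector
gray_face = np.zeros((150, 150), dtype=np.uint8)

face = cv2.resize(gray_face, (96, 96)) / 255.0        # scale pixels to [0, 1]
keypoints = model.predict(face.reshape(1, 96, 96, 1))[0]
keypoints = keypoints * 48 + 48                       # undo the [-1, 1] target scaling
points = list(zip(keypoints[0::2], keypoints[1::2]))  # 15 (x, y) pairs in 96x96 space
```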
20 |
21 | ## Code Requirements
22 | The code is in Python (version 3.6 or higher). You also need the OpenCV and Keras libraries installed, along with NumPy, pandas, scikit-learn, and Matplotlib (all imported by the scripts in this repo).
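
One possible way to install them (the package names below are the usual PyPI ones; the repo does not pin versions, and Keras additionally needs a backend such as TensorFlow, so treat this as a starting point):

``` pip install numpy pandas scikit-learn matplotlib opencv-python keras tensorflow ```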
23 |
24 | ## Execution
25 | Order of Execution is as follows:
26 |
27 | Step 0 - Download the _'training.zip'_ file from [here](https://www.kaggle.com/c/facial-keypoints-detection/data) and extract it into the _'data'_ folder.
28 |
29 | Step 1 - Execute ``` python model_builder.py ```
30 |
31 | Step 2 - This could take a while, so feel free to take a break.
32 |
33 | Step 3 - Execute ``` python shades.py ```
34 |
35 | Step 4 - Pick a filter of your choice (hold up a blue object, like a bottle cap, and move it over the 'NEXT FILTER' button to switch filters).
36 |
37 | Step 5 - And don't forget to SMILE!
38 |
--------------------------------------------------------------------------------
/data/READ THIS.txt:
--------------------------------------------------------------------------------
1 | I was having trouble pushing the dataset to the repo.
2 |
3 | So I urge you to download the dataset from https://www.kaggle.com/c/facial-keypoints-detection/data
4 | and extract the zip to this folder.
5 |
6 | Cheers!
--------------------------------------------------------------------------------
/demo.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/demo.gif
--------------------------------------------------------------------------------
/images/keypoints_test_results.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/images/keypoints_test_results.png
--------------------------------------------------------------------------------
/images/sunglasses.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/images/sunglasses.png
--------------------------------------------------------------------------------
/images/sunglasses_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/images/sunglasses_2.png
--------------------------------------------------------------------------------
/images/sunglasses_3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/images/sunglasses_3.jpg
--------------------------------------------------------------------------------
/images/sunglasses_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/images/sunglasses_4.png
--------------------------------------------------------------------------------
/images/sunglasses_5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/images/sunglasses_5.jpg
--------------------------------------------------------------------------------
/images/sunglasses_6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/images/sunglasses_6.png
--------------------------------------------------------------------------------
/model_builder.py:
--------------------------------------------------------------------------------
1 | from utils import load_data
2 | from my_CNN_model import *
4 |
5 |
6 | # Load training set
7 | X_train, y_train = load_data()
8 |
9 | # NOTE: Please check the load_data() method in utils.py to see how the data is preprocessed (pixel values are scaled to [0, 1] and keypoint targets to [-1, 1])
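# (After that preprocessing, X_train has shape (n_samples, 96, 96, 1) with pixel values
# in [0, 1], and y_train has shape (n_samples, 30), i.e. 15 (x, y) keypoint pairs
# scaled to [-1, 1].)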
10 |
11 |
12 | # Setting the CNN architecture
13 | my_model = get_my_CNN_model_architecture()
14 |
15 | # Compiling the CNN model with an appropriate optimizer, loss and metrics ('accuracy' is not very informative for a regression task like this; the MSE loss is the number to watch)
16 | compile_my_CNN_model(my_model, optimizer = 'adam', loss = 'mean_squared_error', metrics = ['accuracy'])
17 |
18 | # Training the model
19 | hist = train_my_CNN_model(my_model, X_train, y_train)
20 |
21 | # train_my_CNN_model returns a History object. History.history attribute is a record of training loss values and metrics
22 | # values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
23 |
24 | # Saving the model
25 | save_my_CNN_model(my_model, 'my_model')
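
# As a quick example, the History records mentioned above can be plotted to inspect
# the learning curves (a minimal sketch; 'val_loss' exists because train_my_CNN_model
# uses validation_split=0.2):
import matplotlib.pyplot as plt
plt.plot(hist.history['loss'], label='training loss')
plt.plot(hist.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('mean squared error')
plt.legend()
plt.show()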
26 |
27 | '''
28 | # You can skip all the steps above (from 'Setting the CNN architecture') after running the script for the first time.
29 | # Just load the recent model using load_my_CNN_model and use it to predict keypoints on any face data
30 | my_model = load_my_CNN_model('my_model')
31 | '''
32 |
--------------------------------------------------------------------------------
/my_CNN_model.py:
--------------------------------------------------------------------------------
1 | from keras.models import Sequential
2 | from keras.models import load_model
3 | from keras.layers import Convolution2D, MaxPooling2D, Dropout
4 | from keras.layers import Flatten, Dense
6 |
7 | def get_my_CNN_model_architecture():
8 | '''
9 | The network should accept a 96x96 grayscale image as input, and it should output a vector with 30 entries,
10 | corresponding to the predicted (horizontal and vertical) locations of 15 facial keypoints.
11 | '''
12 | model = Sequential()
13 | model.add(Convolution2D(32, (5, 5), input_shape=(96,96,1), activation='relu'))
14 | model.add(MaxPooling2D(pool_size=(2, 2)))
15 |
16 | model.add(Convolution2D(64, (3, 3), activation='relu'))
17 | model.add(MaxPooling2D(pool_size=(2, 2)))
18 | model.add(Dropout(0.1))
19 |
20 | model.add(Convolution2D(128, (3, 3), activation='relu'))
21 | model.add(MaxPooling2D(pool_size=(2, 2)))
22 | model.add(Dropout(0.2))
23 |
24 | model.add(Convolution2D(30, (3, 3), activation='relu'))
25 | model.add(MaxPooling2D(pool_size=(2, 2)))
26 | model.add(Dropout(0.3))
27 |
28 | model.add(Flatten())
29 |
30 | model.add(Dense(64, activation='relu'))
31 | model.add(Dense(128, activation='relu'))
32 | model.add(Dense(256, activation='relu'))
33 | model.add(Dense(64, activation='relu'))
34 | model.add(Dense(30))
35 |
36 |     return model
37 |
38 | def compile_my_CNN_model(model, optimizer, loss, metrics):
39 | model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
40 |
41 | def train_my_CNN_model(model, X_train, y_train):
42 | return model.fit(X_train, y_train, epochs=100, batch_size=200, verbose=1, validation_split=0.2)
43 |
44 | def save_my_CNN_model(model, fileName):
45 | model.save(fileName + '.h5')
46 |
47 | def load_my_CNN_model(fileName):
48 | return load_model(fileName + '.h5')
49 |
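
# A quick way to sanity-check the architecture (a minimal sketch, not wired into the
# training pipeline): build the model and print its layer summary.
if __name__ == '__main__':
    model = get_my_CNN_model_architecture()
    model.summary()  # the final Dense layer should report 30 outputs (15 x/y pairs)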
--------------------------------------------------------------------------------
/my_model.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/acl21/Selfie_Filters_OpenCV/de218ba3e8efdb6c69762c32a3b67afbe2aefff0/my_model.h5
--------------------------------------------------------------------------------
/shades.py:
--------------------------------------------------------------------------------
1 | from my_CNN_model import *
2 | import cv2
3 | import numpy as np
4 |
5 | # Load the model built in the previous step
6 | my_model = load_my_CNN_model('my_model')
7 |
8 | # Face cascade to detect faces
9 | face_cascade = cv2.CascadeClassifier('cascades/haarcascade_frontalface_default.xml')
10 |
11 | # Define the upper and lower HSV boundaries for a color to be considered "blue" (a blue object, e.g. a bottle cap, acts as the pointer that hits the 'NEXT FILTER' button)
12 | blueLower = np.array([100, 60, 60])
13 | blueUpper = np.array([140, 255, 255])
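# (In OpenCV's HSV representation the hue channel runs 0-179, so hues of roughly 100-140 correspond to blue.)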
14 |
15 | # Define a 5x5 kernel for erosion and dilation
16 | kernel = np.ones((5, 5), np.uint8)
17 |
18 | # Define the filter images (the overlay logic below keys on non-black pixels rather than an alpha channel, so both the PNG and JPG files work)
19 | filters = ['images/sunglasses.png', 'images/sunglasses_2.png', 'images/sunglasses_3.jpg', 'images/sunglasses_4.png', 'images/sunglasses_5.jpg', 'images/sunglasses_6.png']
20 | filterIndex = 0
21 |
22 | # Load the video
23 | camera = cv2.VideoCapture(0)
24 |
25 | # Keep looping
26 | while True:
27 |     # Grab the current frame from the camera
28 | (grabbed, frame) = camera.read()
29 | frame = cv2.flip(frame, 1)
30 | frame2 = np.copy(frame)
31 | hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
32 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
33 |
34 | # Add the 'Next Filter' button to the frame
35 | frame = cv2.rectangle(frame, (500,10), (620,65), (235,50,50), -1)
36 | cv2.putText(frame, "NEXT FILTER", (512, 37), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
37 |
38 | # Detect faces
39 | faces = face_cascade.detectMultiScale(gray, 1.25, 6)
40 |
41 | # Determine which pixels fall within the blue boundaries and then blur the binary image
42 | blueMask = cv2.inRange(hsv, blueLower, blueUpper)
43 | blueMask = cv2.erode(blueMask, kernel, iterations=2)
44 | blueMask = cv2.morphologyEx(blueMask, cv2.MORPH_OPEN, kernel)
45 | blueMask = cv2.dilate(blueMask, kernel, iterations=1)
46 |
47 |     # Find contours (the bottle cap, in my case) in the image; note that this three-value return signature is OpenCV 3.x-specific (OpenCV 4.x returns only contours and hierarchy)
48 |     (_, cnts, _) = cv2.findContours(blueMask.copy(), cv2.RETR_EXTERNAL,
49 |                                     cv2.CHAIN_APPROX_SIMPLE)
50 | center = None
51 |
52 | # Check to see if any contours were found
53 | if len(cnts) > 0:
54 | # Sort the contours and find the largest one -- we
55 |         # will assume this contour corresponds to the area of the bottle cap
56 | cnt = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
57 | # Get the radius of the enclosing circle around the found contour
58 | ((x, y), radius) = cv2.minEnclosingCircle(cnt)
59 | # Draw the circle around the contour
60 | cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
61 | # Get the moments to calculate the center of the contour (in this case Circle)
62 | M = cv2.moments(cnt)
63 | center = (int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))
64 |
65 | if center[1] <= 65:
66 | if 500 <= center[0] <= 620: # Next Filter
67 | filterIndex += 1
68 | filterIndex %= 6
69 | continue
70 |
71 | for (x, y, w, h) in faces:
72 |
73 | # Grab the face
74 | gray_face = gray[y:y+h, x:x+w]
75 | color_face = frame[y:y+h, x:x+w]
76 |
77 |         # Normalize the pixel values to [0, 1] to match the input format of the model
78 |         gray_normalized = gray_face / 255
79 |
80 |         # Resize it to 96x96 to match the input format of the model
81 |         original_shape = gray_face.shape  # keep the original size so the face can be pasted back into the frame later
82 | face_resized = cv2.resize(gray_normalized, (96, 96), interpolation = cv2.INTER_AREA)
83 | face_resized_copy = face_resized.copy()
84 | face_resized = face_resized.reshape(1, 96, 96, 1)
85 |
86 | # Predicting the keypoints using the model
87 | keypoints = my_model.predict(face_resized)
88 |
89 | # De-Normalize the keypoints values
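        # (this inverts the (y - 48) / 48 scaling applied in utils.load_data, mapping predictions back to 96x96 pixel coordinates)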
90 | keypoints = keypoints * 48 + 48
91 |
92 | # Map the Keypoints back to the original image
93 | face_resized_color = cv2.resize(color_face, (96, 96), interpolation = cv2.INTER_AREA)
94 | face_resized_color2 = np.copy(face_resized_color)
95 |
96 |         # Pair the x and y coordinates together into 15 (x, y) points
97 |         points = list(zip(keypoints[0][0::2], keypoints[0][1::2]))
100 |
101 | # Add FILTER to the frame
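        # (Going by the Kaggle keypoint ordering, points[7] and points[9] are the left and right
        # eyebrow outer ends, points[8] is the right eyebrow inner end, and points[10] is the nose
        # tip, so the sunglasses are sized to span the eyebrows and reach down toward the nose.)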
102 | sunglasses = cv2.imread(filters[filterIndex], cv2.IMREAD_UNCHANGED)
103 | sunglass_width = int((points[7][0]-points[9][0])*1.1)
104 | sunglass_height = int((points[10][1]-points[8][1])/1.1)
105 | sunglass_resized = cv2.resize(sunglasses, (sunglass_width, sunglass_height), interpolation = cv2.INTER_CUBIC)
106 |         opaque_region = sunglass_resized[:,:,:3] != 0  # non-black pixels are treated as the opaque part of the filter image
107 |         face_resized_color[int(points[9][1]):int(points[9][1])+sunglass_height, int(points[9][0]):int(points[9][0])+sunglass_width,:][opaque_region] = sunglass_resized[:,:,:3][opaque_region]
108 |
109 |         # Resize the face_resized_color image back to its original shape (cv2.resize expects (width, height), but the Haar detector returns square regions, so passing the (h, w) shape is harmless here)
110 | frame[y:y+h, x:x+w] = cv2.resize(face_resized_color, original_shape, interpolation = cv2.INTER_CUBIC)
111 |
112 | # Add KEYPOINTS to the frame2
113 | for keypoint in points:
114 |             cv2.circle(face_resized_color2, (int(keypoint[0]), int(keypoint[1])), 1, (0,255,0), 1)
115 |
116 | frame2[y:y+h, x:x+w] = cv2.resize(face_resized_color2, original_shape, interpolation = cv2.INTER_CUBIC)
117 |
118 | # Show the frame and the frame2
119 | cv2.imshow("Selfie Filters", frame)
120 | cv2.imshow("Facial Keypoints", frame2)
121 |
122 | # If the 'q' key is pressed, stop the loop
123 | if cv2.waitKey(1) & 0xFF == ord("q"):
124 | break
125 |
126 | # Cleanup the camera and close any open windows
127 | camera.release()
128 | cv2.destroyAllWindows()
129 |
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import numpy as np
3 |
4 | from pandas.io.parsers import read_csv
5 | from sklearn.utils import shuffle
9 |
10 | def load_data(test=False):
11 | """
12 | Loads data from FTEST if *test* is True, otherwise from FTRAIN.
13 | Important that the files are in a `data` directory
14 | """
15 | FTRAIN = 'data/training.csv'
16 | FTEST = 'data/test.csv'
17 | fname = FTEST if test else FTRAIN
18 |     df = read_csv(os.path.expanduser(fname))  # load the CSV into a pandas DataFrame
19 |
20 |     # The Image column has pixel values separated by spaces; convert them to numpy arrays
21 |     # (np.fromstring is deprecated in newer NumPy; np.array(im.split(), dtype=float) is the modern equivalent)
22 |     df['Image'] = df['Image'].apply(lambda im: np.fromstring(im, sep=' '))
23 |
24 | df = df.dropna() # drop all rows that have missing values in them
25 |
26 | X = np.vstack(df['Image'].values) / 255. # scale pixel values to [0, 1] (Normalizing)
27 | X = X.astype(np.float32)
28 |     X = X.reshape(-1, 96, 96, 1)  # reshape each image to 96 x 96 x 1
29 |
30 | if not test: # only FTRAIN has target columns
31 | y = df[df.columns[:-1]].values
32 | y = (y - 48) / 48 # scale target coordinates to [-1, 1] (Normalizing)
33 | X, y = shuffle(X, y, random_state=42) # shuffle train data
34 | y = y.astype(np.float32)
35 | else:
36 | y = None
37 |
38 | return X, y
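
# A minimal usage sketch (assumes 'training.zip' from Kaggle has been extracted into
# the data folder, as described in 'data/READ THIS.txt'):
if __name__ == '__main__':
    X, y = load_data()
    print(X.shape)  # (n_samples, 96, 96, 1) grayscale faces, pixels scaled to [0, 1]
    print(y.shape)  # (n_samples, 30): 15 (x, y) keypoint pairs scaled to [-1, 1]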
39 |
--------------------------------------------------------------------------------