├── LICENSE
├── OpenCV-Face-Recognition-Python.html
├── OpenCV-Face-Recognition-Python.ipynb
├── OpenCV-Face-Recognition-Python.py
├── README.md
├── face-recognition-demo.mov
├── opencv-files
│   ├── haarcascade_frontalface_alt.xml
│   └── lbpcascade_frontalface.xml
├── output
│   ├── output.png
│   └── tom-shahrukh.png
├── test-data
│   ├── test1.jpg
│   └── test2.jpg
├── training-data
│   ├── s1
│   │   ├── 1.jpg
│   │   ├── 10.jpg
│   │   ├── 11.jpg
│   │   ├── 12.jpg
│   │   ├── 2.jpg
│   │   ├── 3.jpg
│   │   ├── 4.jpg
│   │   ├── 5.jpg
│   │   ├── 6.jpg
│   │   ├── 7.jpg
│   │   ├── 8.jpg
│   │   └── 9.jpg
│   └── s2
│       ├── 1.jpg
│       ├── 10.jpeg
│       ├── 11.jpeg
│       ├── 12.jpg
│       ├── 2.jpg
│       ├── 3.jpg
│       ├── 4.jpg
│       ├── 5.jpeg
│       ├── 6.jpg
│       ├── 7.jpg
│       ├── 8.jpeg
│       └── 9.jpeg
└── visualization
    ├── eigenfaces_opencv.png
    ├── fisherfaces_opencv.png
    ├── histogram.png
    ├── illumination-changes.png
    ├── lbp-labeling.png
    ├── lbph-faces.jpg
    ├── test-images.png
    └── tom-shahrukh.png
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 Ramiz Raja
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/OpenCV-Face-Recognition-Python.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # Face Recognition with OpenCV
5 |
6 | # To detect faces, I will use the code from my previous article on [face detection](https://www.superdatascience.com/opencv-face-detection/). So if you have not read it, I encourage you to do so to understand how face detection works and its Python coding.
7 |
8 | # ### Import Required Modules
9 |
10 | # Before starting the actual coding we need to import the required modules for coding. So let's import them first.
11 | #
12 | # - **cv2:** the _OpenCV_ module for Python, which we will use for face detection and face recognition.
13 | # - **os:** We will use this Python module to read our training directories and file names.
14 | # - **numpy:** We will use this module to convert Python lists to numpy arrays as OpenCV face recognizers accept numpy arrays.
15 |
16 | # In[1]:
17 |
18 | #import OpenCV module
19 | import cv2
20 | #import os module for reading training data directories and paths
21 | import os
22 | #import numpy to convert python lists to numpy arrays as
23 | #it is needed by OpenCV face recognizers
24 | import numpy as np
25 |
26 |
27 | # ### Training Data
28 |
29 | # The more images used in training the better. Normally a lot of images are used for training a face recognizer so that it can learn different looks of the same person, for example with glasses, without glasses, laughing, sad, happy, crying, with beard, without beard etc. To keep our tutorial simple we are going to use only 12 images for each person.
30 | #
31 | # So our training data consists of a total of 2 persons with 12 images of each person. All training data is inside the _`training-data`_ folder, which contains one folder for each person and **each folder is named with the format `sLabel (e.g. s1, s2)`, where the label is actually the integer label assigned to that person**. For example, the folder named s1 contains the images of person 1. The directory structure tree for training data is as follows:
32 | #
33 | # ```
34 | # training-data
35 | # |-------------- s1
36 | # | |-- 1.jpg
37 | # | |-- ...
38 | # | |-- 12.jpg
39 | # |-------------- s2
40 | # | |-- 1.jpg
41 | # | |-- ...
42 | # | |-- 12.jpg
43 | # ```
44 | #
45 | # The _`test-data`_ folder contains images that we will use to test our face recognizer after it has been successfully trained.
46 |
47 | # As the OpenCV face recognizer accepts labels as integers, we need to define a mapping between integer labels and persons' actual names, so below I am defining a mapping of persons' integer labels and their respective names.
48 | #
49 | # **Note:** As we have not assigned `label 0` to any person, **the mapping for label 0 is empty**.
50 |
51 | # In[2]:
52 |
53 | #there is no label 0 in our training data so subject name for index/label 0 is empty
54 | subjects = ["", "Ramiz Raja", "Elvis Presley"]
55 |
56 |
57 | # ### Prepare training data
58 |
59 | # You may be wondering why data preparation, right? Well, OpenCV face recognizer accepts data in a specific format. It accepts two vectors, one vector of faces of all the persons and a second vector of integer labels for each face, so that when processing a face the face recognizer knows which person that particular face belongs to.
60 | #
61 | # For example, if we had 2 persons and 2 images for each person.
62 | #
63 | # ```
64 | # PERSON-1 PERSON-2
65 | #
66 | # img1 img1
67 | # img2 img2
68 | # ```
69 | #
70 | # Then the prepare data step will produce following face and label vectors.
71 | #
72 | # ```
73 | # FACES LABELS
74 | #
75 | # person1_img1_face 1
76 | # person1_img2_face 1
77 | # person2_img1_face 2
78 | # person2_img2_face 2
79 | # ```
80 | #
81 | #
82 | # Preparing data step can be further divided into following sub-steps.
83 | #
84 | # 1. Read all the folder names of subjects/persons provided in training data folder. So for example, in this tutorial we have folder names: `s1, s2`.
85 | # 2. For each subject, extract label number. **Do you remember that our folders have a special naming convention?** Folder names follow the format `sLabel` where `Label` is an integer representing the label we have assigned to that subject. So for example, folder name `s1` means that the subject has label 1, s2 means subject label is 2 and so on. The label extracted in this step is assigned to each face detected in the next step.
86 | # 3. Read all the images of the subject, detect face from each image.
87 | # 4. Add each face to faces vector with corresponding subject label (extracted in above step) added to labels vector.
88 | #
89 | # **[There should be a visualization for above steps here]**
90 |
91 | # Did you read my last article on [face detection](https://www.superdatascience.com/opencv-face-detection/)? No? Then you better do so right now because to detect faces, I am going to use the code from my previous article on [face detection](https://www.superdatascience.com/opencv-face-detection/). So if you have not read it, I encourage you to do so to understand how face detection works and its coding. Below is the same code.
92 |
93 | # In[3]:
94 |
95 | #function to detect face using OpenCV
96 | def detect_face(img):
97 | #convert the test image to gray image as opencv face detector expects gray images
98 | gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
99 |
100 | #load OpenCV face detector, I am using LBP which is fast
101 | #there is also a more accurate but slow Haar classifier
102 | face_cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')
103 |
104 | #let's detect multiscale (some images may be closer to camera than others) images
105 | #result is a list of faces
106 | faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5);
107 |
108 | #if no faces are detected then return original img
109 | if (len(faces) == 0):
110 | return None, None
111 |
112 | #under the assumption that there will be only one face,
113 | #extract the face area
114 | (x, y, w, h) = faces[0]
115 |
116 | #return only the face part of the image
117 |     return gray[y:y+h, x:x+w], faces[0]
118 |
119 |
120 | # I am using OpenCV's **LBP face detector**. On _line 4_, I convert the image to grayscale because most operations in OpenCV are performed in grayscale, then on _line 8_ I load the LBP face detector using the `cv2.CascadeClassifier` class. After that, on _line 12_ I use the `cv2.CascadeClassifier` class' `detectMultiScale` method to detect all the faces in the image. On _line 20_, from the detected faces I pick only the first face because in one image there will be only one face (under the assumption that there will be only one prominent face). As the faces returned by the `detectMultiScale` method are actually rectangles (x, y, width, height) and not actual face images, we have to extract the face image area from the main image. So on _line 23_ I extract the face area from the gray image and return both the face image area and the face rectangle.
121 | #
122 | # Now you have got a face detector and you know the 4 steps to prepare the data, so are you ready to code the prepare data step? Yes? So let's do it.
123 |
124 | # In[4]:
125 |
126 | #this function will read all persons' training images, detect face from each image
127 | #and will return two lists of exactly same size, one list
128 | # of faces and another list of labels for each face
129 | def prepare_training_data(data_folder_path):
130 |
131 | #------STEP-1--------
132 | #get the directories (one directory for each subject) in data folder
133 | dirs = os.listdir(data_folder_path)
134 |
135 | #list to hold all subject faces
136 | faces = []
137 | #list to hold labels for all subjects
138 | labels = []
139 |
140 | #let's go through each directory and read images within it
141 | for dir_name in dirs:
142 |
143 | #our subject directories start with letter 's' so
144 | #ignore any non-relevant directories if any
145 | if not dir_name.startswith("s"):
146 | continue;
147 |
148 | #------STEP-2--------
149 | #extract label number of subject from dir_name
150 | #format of dir name = slabel
151 | #, so removing letter 's' from dir_name will give us label
152 | label = int(dir_name.replace("s", ""))
153 |
154 |         #build path of directory containing images for current subject
155 | #sample subject_dir_path = "training-data/s1"
156 | subject_dir_path = data_folder_path + "/" + dir_name
157 |
158 | #get the images names that are inside the given subject directory
159 | subject_images_names = os.listdir(subject_dir_path)
160 |
161 | #------STEP-3--------
162 | #go through each image name, read image,
163 | #detect face and add face to list of faces
164 | for image_name in subject_images_names:
165 |
166 | #ignore system files like .DS_Store
167 | if image_name.startswith("."):
168 | continue;
169 |
170 | #build image path
171 |             #sample image path = training-data/s1/1.jpg
172 | image_path = subject_dir_path + "/" + image_name
173 |
174 | #read image
175 | image = cv2.imread(image_path)
176 |
177 | #display an image window to show the image
178 | cv2.imshow("Training on image...", cv2.resize(image, (400, 500)))
179 | cv2.waitKey(100)
180 |
181 | #detect face
182 | face, rect = detect_face(image)
183 |
184 | #------STEP-4--------
185 | #for the purpose of this tutorial
186 | #we will ignore faces that are not detected
187 | if face is not None:
188 | #add face to list of faces
189 | faces.append(face)
190 | #add label for this face
191 | labels.append(label)
192 |
193 | cv2.destroyAllWindows()
194 | cv2.waitKey(1)
195 | cv2.destroyAllWindows()
196 |
197 | return faces, labels
198 |
199 |
200 | # I have defined a function that takes the path, where training subjects' folders are stored, as parameter. This function follows the same 4 prepare data substeps mentioned above.
201 | #
202 | # **(step-1)** On _line 8_ I am using the `os.listdir` method to read the names of all folders stored on the path passed to the function as a parameter. On _lines 10-13_ I am defining the labels and faces vectors.
203 | #
204 | # **(step-2)** After that I traverse through all subjects' folder names and from each subject's folder name, on _line 27_, I extract the label information. As folder names follow the `sLabel` naming convention, removing the letter `s` from the folder name gives us the label assigned to that subject.
205 | #
206 | # **(step-3)** On _line 34_, I read all the image names of the current subject being traversed and on _lines 39-66_ I traverse those images one by one. On _lines 53-54_ I am using OpenCV's `imshow(window_title, image)` along with OpenCV's `waitKey(interval)` method to display the current image being traversed. The `waitKey(interval)` method pauses the code flow for the given interval (milliseconds); I am using it with a 100ms interval so that we can view the image window for 100ms. On _line 57_, I detect the face from the current image being traversed.
207 | #
208 | # **(step-4)** On _line 62-66_, I add the detected face and label to their respective vectors.
209 |
210 | # But a function can't do anything unless we call it on some data that it has to prepare, right? Don't worry, I have got data of two beautiful and famous celebrities. I am sure you will recognize them!
211 | #
212 | # 
213 | #
214 | # Let's call this function on images of these beautiful celebrities to prepare data for training of our Face Recognizer. Below is a simple code to do that.
215 |
216 | # In[5]:
217 |
218 | #let's first prepare our training data
219 | #data will be in two lists of same size
220 | #one list will contain all the faces
221 | #and other list will contain respective labels for each face
222 | print("Preparing data...")
223 | faces, labels = prepare_training_data("training-data")
224 | print("Data prepared")
225 |
226 | #print total faces and labels
227 | print("Total faces: ", len(faces))
228 | print("Total labels: ", len(labels))
229 |
230 |
231 | # This was probably the boring part, right? Don't worry, the fun stuff is coming up next. It's time to train our own face recognizer so that once trained it can recognize new faces of the persons it was trained on. Ready? Ok then let's train our face recognizer.
232 |
233 | # ### Train Face Recognizer
234 |
235 | # As we know, OpenCV comes equipped with three face recognizers.
236 | #
237 | # 1. EigenFace Recognizer: This can be created with `cv2.face.EigenFaceRecognizer_create()`
238 | # 2. FisherFace Recognizer: This can be created with `cv2.face.FisherFaceRecognizer_create()`
239 | # 3. Local Binary Patterns Histograms (LBPH) Recognizer: This can be created with `cv2.face.LBPHFaceRecognizer_create()`
240 | #
241 | # I am going to use LBPH face recognizer but you can use any face recognizer of your choice. No matter which of the OpenCV's face recognizer you use the code will remain the same. You just have to change one line, the face recognizer initialization line given below.
242 |
243 | # In[6]:
244 |
245 | #create our LBPH face recognizer
246 | face_recognizer = cv2.face.LBPHFaceRecognizer_create()
247 |
248 | #or use EigenFaceRecognizer by replacing above line with
249 | #face_recognizer = cv2.face.EigenFaceRecognizer_create()
250 |
251 | #or use FisherFaceRecognizer by replacing above line with
252 | #face_recognizer = cv2.face.FisherFaceRecognizer_create()
253 |
254 |
255 | # Now that we have initialized our face recognizer and we also have prepared our training data, it's time to train the face recognizer. We will do that by calling the `train(faces-vector, labels-vector)` method of face recognizer.
256 |
257 | # In[7]:
258 |
259 | #train our face recognizer of our training faces
260 | face_recognizer.train(faces, np.array(labels))
261 |
262 |
263 | # **Did you notice** that instead of passing `labels` vector directly to face recognizer I am first converting it to **numpy** array? This is because OpenCV expects labels vector to be a `numpy` array.
264 | #
265 | # Still not satisfied? Want to see some action? Next step is the real action, I promise!
266 |
267 | # ### Prediction
268 |
269 | # Now comes my favorite part, the prediction part. This is where we actually get to see if our algorithm is recognizing our trained subjects' faces or not. We will take two test images of our celebrities, detect faces from each of them and then pass those faces to our trained face recognizer to see if it recognizes them.
270 | #
271 | # Below are some utility functions that we will use for drawing a bounding box (rectangle) around a face and putting the celebrity's name near the face bounding box.
272 |
273 | # In[8]:
274 |
275 | #function to draw rectangle on image
276 | #according to given (x, y) coordinates and
277 | #given width and height
278 | def draw_rectangle(img, rect):
279 | (x, y, w, h) = rect
280 | cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
281 |
282 | #function to draw text on given image starting from
283 | #passed (x, y) coordinates.
284 | def draw_text(img, text, x, y):
285 | cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)
286 |
287 |
288 | # First function `draw_rectangle` draws a rectangle on an image based on the passed rectangle coordinates. It uses OpenCV's built-in function `cv2.rectangle(img, topLeftPoint, bottomRightPoint, rgbColor, lineWidth)` to draw the rectangle. We will use it to draw a rectangle around the face detected in the test image.
289 | #
290 | # Second function `draw_text` uses OpenCV's built-in function `cv2.putText(img, text, startPoint, font, fontSize, rgbColor, lineWidth)` to draw text on an image.
291 | #
292 | # Now that we have the drawing functions, we just need to call the face recognizer's `predict(face)` method to test our face recognizer on test images. Following function does the prediction for us.
293 |
294 | # In[9]:
295 |
296 | #this function recognizes the person in image passed
297 | #and draws a rectangle around detected face with name of the
298 | #subject
299 | def predict(test_img):
300 |     #make a copy of the image as we don't want to change the original image
301 | img = test_img.copy()
302 | #detect face from the image
303 | face, rect = detect_face(img)
304 |
305 | #predict the image using our face recognizer
306 | label, confidence = face_recognizer.predict(face)
307 | #get name of respective label returned by face recognizer
308 | label_text = subjects[label]
309 |
310 | #draw a rectangle around face detected
311 | draw_rectangle(img, rect)
312 | #draw name of predicted person
313 | draw_text(img, label_text, rect[0], rect[1]-5)
314 |
315 | return img
316 |
317 | # Now that we have the prediction function well defined, next step is to actually call this function on our test images and display those test images to see if our face recognizer correctly recognized them. So let's do it. This is what we have been waiting for.
318 |
319 | # In[10]:
320 |
321 | print("Predicting images...")
322 |
323 | #load test images
324 | test_img1 = cv2.imread("test-data/test1.jpg")
325 | test_img2 = cv2.imread("test-data/test2.jpg")
326 |
327 | #perform a prediction
328 | predicted_img1 = predict(test_img1)
329 | predicted_img2 = predict(test_img2)
330 | print("Prediction complete")
331 |
332 | #display both images
333 | cv2.imshow(subjects[1], cv2.resize(predicted_img1, (400, 500)))
334 | cv2.imshow(subjects[2], cv2.resize(predicted_img2, (400, 500)))
335 | cv2.waitKey(0)
336 | cv2.destroyAllWindows()
337 | cv2.waitKey(1)
338 | cv2.destroyAllWindows()
339 |
340 |
341 |
342 |
343 |
344 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # Face Recognition with OpenCV and Python
3 |
4 | ## Introduction
5 |
6 | What is face recognition? Or what is recognition? When you look at an apple fruit, your mind immediately tells you that this is an apple fruit. This process, your mind telling you that this is an apple fruit, is recognition in simple words. So what is face recognition then? I am sure you have guessed it right. When you look at your friend walking down the street or a picture of him, you recognize that he is your friend Paulo. Interestingly, when you look at your friend or a picture of him, you look at his face first before looking at anything else. Ever wondered why you do that? This is so that you can recognize him by looking at his face. Well, this is you doing face recognition.
7 |
8 | But the real question is: how does face recognition work? It is quite simple and intuitive. Take a real-life example: when you meet someone for the first time in your life, you don't recognize him, right? While he talks or shakes hands with you, you look at his face, eyes, nose, mouth, color and overall look. This is your mind learning or training for the face recognition of that person by gathering face data. Then he tells you that his name is Paulo. At this point your mind knows that the face data it just learned belongs to Paulo. Now your mind is trained and ready to do face recognition on Paulo's face. The next time you see Paulo or his face in a picture, you will immediately recognize him. This is how face recognition works. The more you meet Paulo, the more data your mind will collect about Paulo and especially his face, and the better you will become at recognizing him.
9 |
10 | Now the next question is how to code face recognition with OpenCV, after all this is the only reason why you are reading this article, right? OK then. You might say that our mind can do these things easily but actually coding them into a computer is difficult? Don't worry, it is not. Thanks to OpenCV, coding face recognition is easier than it sounds. The coding steps for face recognition are the same as the ones we discussed in the real-life example above.
11 |
12 | - **Training Data Gathering:** Gather face data (face images in this case) of the persons you want to recognize
13 | - **Training of Recognizer:** Feed that face data (and respective names of each face) to the face recognizer so that it can learn.
14 | - **Recognition:** Feed new faces of the persons and see if the face recognizer you just trained recognizes them.
15 |
16 | OpenCV comes equipped with a built-in face recognizer; all you have to do is feed it the face data. It's that simple, and this is how it will look once we are done coding it.
17 |
18 | 
19 |
20 | ## OpenCV Face Recognizers
21 |
22 | OpenCV has three built-in face recognizers and, thanks to OpenCV's clean coding, you can use any of them by just changing a single line of code. Below are the names of those face recognizers and their OpenCV calls.
23 |
24 | 1. EigenFaces Face Recognizer - `cv2.face.createEigenFaceRecognizer()`
25 | 2. FisherFaces Face Recognizer - `cv2.face.createFisherFaceRecognizer()`
26 | 3. Local Binary Patterns Histograms (LBPH) Face Recognizer - `cv2.face.createLBPHFaceRecognizer()`
27 |
28 | We have got three face recognizers but do you know which one to use and when? Or which one is better? I guess not. So why not go through a brief summary of each, what do you say? I am assuming you said yes :) So let's dive into the theory of each.
29 |
30 | ### EigenFaces Face Recognizer
31 |
32 | This algorithm considers the fact that not all parts of a face are equally important and equally useful. When you look at someone you recognize him/her by distinct features like the eyes, nose, cheeks and forehead, and how they vary with respect to each other. So you are actually focusing on the areas of maximum change (mathematically speaking, this change is variance) of the face. For example, from eyes to nose there is a significant change and same is the case from nose to mouth. When you look at multiple faces you compare them by looking at these parts of the faces because these parts are the most useful and important components of a face. Important because they catch the maximum change among faces, change that helps you differentiate one face from the other. This is exactly how EigenFaces face recognizer works.
33 |
34 | EigenFaces face recognizer looks at all the training images of all the persons as a whole and tries to extract the components which are important and useful (the components that catch the maximum variance/change) and discards the rest. This way it not only extracts the important components from the training data but also saves memory by discarding the less important components. These important components it extracts are called **principal components**. Below is an image showing the principal components extracted from a list of faces.
35 |
36 | **Principal Components**
37 | 
38 | **[source](http://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html)**
39 |
40 | You can see that principal components actually represent faces and these faces are called **eigen faces** and hence the name of the algorithm.
41 |
42 | So this is how EigenFaces face recognizer trains itself (by extracting principal components). Remember, it also keeps a record of which principal component belongs to which person. One thing to note in above image is that **Eigenfaces algorithm also considers illumination as an important component**.
43 |
44 | Later during recognition, when you feed a new image to the algorithm, it repeats the same process on that image as well. It extracts the principal components from that new image, compares them with the list of components it stored during training, finds the best match and returns the person label associated with that best match.
45 |
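To make this concrete, here is a minimal numpy sketch of the eigenfaces idea, not OpenCV's actual implementation: it assumes `train_faces` is a list of equally sized grayscale face images (numpy arrays) and `train_labels` their integer labels, both hypothetical names used only for illustration.

```python
import numpy as np

#stack each training face as one row vector (assumes all faces have the same size)
X = np.array([face.flatten() for face in train_faces], dtype=np.float64)
mean_face = X.mean(axis=0)

#the principal components (eigenfaces) are the right singular vectors of the centered data
_, _, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
num_components = 10                  #keep only the strongest components
eigenfaces = Vt[:num_components]

#represent every training face by its weights along the eigenfaces
train_weights = (X - mean_face) @ eigenfaces.T

def recognize(face):
    #project the new face the same way and return the label of the closest training face
    weights = (face.flatten() - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(train_weights - weights, axis=1)
    return train_labels[int(np.argmin(distances))]
```

This is roughly what OpenCV's EigenFace recognizer does for you internally when you call its `train()` and `predict()` methods.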
46 | Easy peasy, right? Next one is easier than this one.
47 |
48 | ### FisherFaces Face Recognizer
49 |
50 | This algorithm is an improved version of EigenFaces face recognizer. Eigenfaces face recognizer looks at all the training faces of all the persons at once and finds principal components from all of them combined. By capturing principal components from all of them combined you are not focusing on the features that discriminate one person from the other but on the features that represent all the persons in the training data as a whole.
51 |
52 | This approach has drawbacks, for example, **images with sharp changes (like light changes, which is not a useful feature at all) may dominate the rest of the images** and you may end up with features that come from an external source like light and are not useful for discrimination at all. In the end, your principal components will represent light changes and not the actual face features.
53 |
54 | The Fisherfaces algorithm, instead of extracting useful features that represent all the faces of all the persons, extracts useful features that discriminate one person from the others. This way features of one person do not dominate over the others and you have the features that discriminate one person from the others.
55 |
56 | Below is an image of features extracted using Fisherfaces algorithm.
57 |
58 | **Fisher Faces**
59 | 
60 | **[source](http://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html)**
61 |
62 | You can see that features extracted actually represent faces and these faces are called **fisher faces** and hence the name of the algorithm.
63 |
64 | One thing to note here is that **even in the Fisherfaces algorithm, if multiple persons have images with sharp changes due to external sources like light, those changes will dominate over other features and affect recognition accuracy**.
65 |
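If you want to experiment with the idea outside OpenCV, here is a rough sketch using scikit-learn (an extra dependency this tutorial does not otherwise use); it assumes `X` is an array of flattened training faces (one row per face) and `labels` their integer labels, as in the eigenfaces sketch above. The PCA step is only there, as Fisherfaces implementations usually do, to keep the LDA step well conditioned.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

#reduce dimensionality first, then find the directions that best separate the persons
pca = PCA(n_components=min(50, len(X) - 1)).fit(X)
lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
fisher_weights = lda.transform(pca.transform(X))

def recognize(face):
    #project a new face through the same PCA + LDA and pick the closest training face
    w = lda.transform(pca.transform(face.reshape(1, -1)))
    distances = np.linalg.norm(fisher_weights - w, axis=1)
    return labels[int(np.argmin(distances))]
```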
66 | Getting bored with this theory? Don't worry, only one face recognizer is left and then we will dive deep into the coding part.
67 |
68 | ### Local Binary Patterns Histograms (LBPH) Face Recognizer
69 |
70 | I wrote a detailed explanation of Local Binary Patterns Histograms in my previous article on [face detection](https://www.superdatascience.com/opencv-face-detection/) using local binary patterns histograms. So here I will just give a brief overview of how it works.
71 |
72 | We know that Eigenfaces and Fisherfaces are both affected by light and in real life we can't guarantee perfect light conditions. LBPH face recognizer is an improvement to overcome this drawback.
73 |
74 | The idea is not to look at the image as a whole but instead to find its local features. The LBPH algorithm tries to find the local structure of an image, and it does that by comparing each pixel with its neighboring pixels.
75 |
76 | Take a 3x3 window and move it over the whole image; at each move (each local part of the image), compare the pixel at the center with its neighbor pixels. The neighbors with an intensity value less than or equal to the center pixel are denoted by 1 and the others by 0. Then you read these 0/1 values under the 3x3 window in clockwise order and you will have a binary pattern like 11100011, and this pattern is local to some area of the image. You do this on the whole image and you will have a list of local binary patterns.
77 |
78 | **LBP Labeling**
79 | 
80 |
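As a rough illustration of the labeling step (the idea only, not OpenCV's exact implementation), the sketch below computes the pattern for one pixel of a grayscale numpy image, using the convention described above:

```python
def lbp_value(gray, row, col):
    #center pixel and its 8 neighbors, read clockwise starting at the top-left corner
    center = gray[row, col]
    neighbors = [gray[row-1, col-1], gray[row-1, col], gray[row-1, col+1],
                 gray[row,   col+1], gray[row+1, col+1], gray[row+1, col],
                 gray[row+1, col-1], gray[row,   col-1]]
    #neighbors less than or equal to the center become 1, others 0 (the convention above),
    #and the resulting 8 bits are read as a decimal number between 0 and 255
    bits = ''.join('1' if value <= center else '0' for value in neighbors)
    return int(bits, 2)
```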
81 | Now do you get why this algorithm has Local Binary Patterns in its name? Because you get a list of local binary patterns. Now you may be wondering, what about the histogram part of LBPH? Well, after you get a list of local binary patterns, you convert each binary pattern into a decimal number (as shown in the above image) and then you make a [histogram](https://www.mathsisfun.com/data/histograms.html) of all of those values. A sample histogram looks like this.
82 |
83 | **Sample Histogram**
84 | 
85 |
86 |
87 | I guess this answers the question about the histogram part. So in the end you will have **one histogram for each face** image in the training data set. That means if there were 100 images in the training data set then LBPH will extract 100 histograms after training and store them for later recognition. Remember, **the algorithm also keeps track of which histogram belongs to which person**.
88 |
89 | Later during recognition, when you feed a new image to the recognizer, it will generate a histogram for that new image, compare that histogram with the histograms it already has, find the best matching histogram and return the person label associated with that best matching histogram.
90 |
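To see the histogram and matching steps in code, here is a hedged sketch that reuses the `lbp_value` function from the snippet above; note that a real LBPH recognizer divides the face into a grid and concatenates one histogram per region, while this collapses everything into a single histogram per face just to show the idea:

```python
import numpy as np

def lbp_histogram(gray):
    #compute the LBP value of every interior pixel and histogram the 256 possible patterns
    rows, cols = gray.shape
    values = [lbp_value(gray, r, c) for r in range(1, rows - 1) for c in range(1, cols - 1)]
    hist, _ = np.histogram(values, bins=256, range=(0, 256), density=True)
    return hist

def chi_square(h1, h2):
    #a common way to compare two histograms; smaller means more similar
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-10))

def recognize(face, train_faces, train_labels):
    #pick the label of the training face whose histogram is closest to the new face's histogram
    hist = lbp_histogram(face)
    distances = [chi_square(hist, lbp_histogram(f)) for f in train_faces]
    return train_labels[int(np.argmin(distances))]
```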
91 | Below is a list of faces and their respective local binary patterns images. You can see that the LBP images are not affected by changes in light conditions.
92 |
93 | **LBP Faces**
94 | 
95 | **[source](http://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html)**
96 |
97 |
98 | The theory part is over and now comes the coding part! Ready to dive into coding? Let's get into it then.
99 |
100 | # Coding Face Recognition with OpenCV
101 |
102 | The Face Recognition process in this tutorial is divided into three steps.
103 |
104 | 1. **Prepare training data:** In this step we will read training images for each person/subject along with their labels, detect faces from each image and assign each detected face an integer label of the person it belongs to.
105 | 2. **Train Face Recognizer:** In this step we will train OpenCV's LBPH face recognizer by feeding it the data we prepared in step 1.
106 | 3. **Testing:** In this step we will pass some test images to face recognizer and see if it predicts them correctly.
107 |
108 | **[There should be a visualization diagram for above steps here]**
109 |
110 | To detect faces, I will use the code from my previous article on [face detection](https://www.superdatascience.com/opencv-face-detection/). So if you have not read it, I encourage you to do so to understand how face detection works and its Python coding.
111 |
112 | ### Import Required Modules
113 |
114 | Before starting the actual coding we need to import the required modules for coding. So let's import them first.
115 |
116 | - **cv2:** the _OpenCV_ module for Python, which we will use for face detection and face recognition.
117 | - **os:** We will use this Python module to read our training directories and file names.
118 | - **numpy:** We will use this module to convert Python lists to numpy arrays as OpenCV face recognizers accept numpy arrays.
119 |
120 |
121 | ```python
122 | #import OpenCV module
123 | import cv2
124 | #import os module for reading training data directories and paths
125 | import os
126 | #import numpy to convert python lists to numpy arrays as
127 | #it is needed by OpenCV face recognizers
128 | import numpy as np
129 |
130 | #matplotlib for display our images
131 | import matplotlib.pyplot as plt
132 | %matplotlib inline
133 | ```
134 |
135 | ### Training Data
136 |
137 | The more images used in training the better. Normally a lot of images are used for training a face recognizer so that it can learn different looks of the same person, for example with glasses, without glasses, laughing, sad, happy, crying, with beard, without beard etc. To keep our tutorial simple we are going to use only 12 images for each person.
138 |
139 | So our training data consists of a total of 2 persons with 12 images of each person. All training data is inside the _`training-data`_ folder, which contains one folder for each person and **each folder is named with the format `sLabel (e.g. s1, s2)`, where the label is actually the integer label assigned to that person**. For example, the folder named s1 contains the images of person 1. The directory structure tree for training data is as follows:
140 |
141 | ```
142 | training-data
143 | |-------------- s1
144 | | |-- 1.jpg
145 | | |-- ...
146 | | |-- 12.jpg
147 | |-------------- s2
148 | | |-- 1.jpg
149 | | |-- ...
150 | | |-- 12.jpg
151 | ```
152 |
153 | The _`test-data`_ folder contains images that we will use to test our face recognizer after it has been successfully trained.
154 |
155 | As the OpenCV face recognizer accepts labels as integers, we need to define a mapping between integer labels and persons' actual names, so below I am defining a mapping of persons' integer labels and their respective names.
156 |
157 | **Note:** As we have not assigned `label 0` to any person, **the mapping for label 0 is empty**.
158 |
159 |
160 | ```python
161 | #there is no label 0 in our training data so subject name for index/label 0 is empty
162 | subjects = ["", "Tom Cruise", "Shahrukh Khan"]
163 | ```
164 |
165 | ### Prepare training data
166 |
167 | You may be wondering why data preparation, right? Well, OpenCV face recognizer accepts data in a specific format. It accepts two vectors, one vector of faces of all the persons and a second vector of integer labels for each face, so that when processing a face the face recognizer knows which person that particular face belongs to.
168 |
169 | For example, if we had 2 persons and 2 images for each person.
170 |
171 | ```
172 | PERSON-1 PERSON-2
173 |
174 | img1 img1
175 | img2 img2
176 | ```
177 |
178 | Then the prepare data step will produce following face and label vectors.
179 |
180 | ```
181 | FACES LABELS
182 |
183 | person1_img1_face 1
184 | person1_img2_face 1
185 | person2_img1_face 2
186 | person2_img2_face 2
187 | ```
188 |
189 |
190 | Preparing data step can be further divided into following sub-steps.
191 |
192 | 1. Read all the folder names of subjects/persons provided in training data folder. So for example, in this tutorial we have folder names: `s1, s2`.
193 | 2. For each subject, extract label number. **Do you remember that our folders have a special naming convention?** Folder names follow the format `sLabel` where `Label` is an integer representing the label we have assigned to that subject. So for example, folder name `s1` means that the subject has label 1, s2 means subject label is 2 and so on. The label extracted in this step is assigned to each face detected in the next step.
194 | 3. Read all the images of the subject, detect face from each image.
195 | 4. Add each face to faces vector with corresponding subject label (extracted in above step) added to labels vector.
196 |
197 | **[There should be a visualization for above steps here]**
198 |
199 | Did you read my last article on [face detection](https://www.superdatascience.com/opencv-face-detection/)? No? Then you better do so right now because to detect faces, I am going to use the code from my previous article on [face detection](https://www.superdatascience.com/opencv-face-detection/). So if you have not read it, I encourage you to do so to understand how face detection works and its coding. Below is the same code.
200 |
201 |
202 | ```python
203 | #function to detect face using OpenCV
204 | def detect_face(img):
205 | #convert the test image to gray image as opencv face detector expects gray images
206 | gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
207 |
208 | #load OpenCV face detector, I am using LBP which is fast
209 | #there is also a more accurate but slow Haar classifier
210 | face_cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')
211 |
212 | #let's detect multiscale (some images may be closer to camera than others) images
213 | #result is a list of faces
214 | faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5);
215 |
216 | #if no faces are detected then return original img
217 | if (len(faces) == 0):
218 | return None, None
219 |
220 | #under the assumption that there will be only one face,
221 | #extract the face area
222 | (x, y, w, h) = faces[0]
223 |
224 | #return only the face part of the image
225 |     return gray[y:y+h, x:x+w], faces[0]
226 | ```
227 |
228 | I am using OpenCV's **LBP face detector**. On _line 4_, I convert the image to grayscale because most operations in OpenCV are performed in grayscale, then on _line 8_ I load the LBP face detector using the `cv2.CascadeClassifier` class. After that, on _line 12_ I use the `cv2.CascadeClassifier` class' `detectMultiScale` method to detect all the faces in the image. On _line 20_, from the detected faces I pick only the first face because in one image there will be only one face (under the assumption that there will be only one prominent face). As the faces returned by the `detectMultiScale` method are actually rectangles (x, y, width, height) and not actual face images, we have to extract the face image area from the main image. So on _line 23_ I extract the face area from the gray image and return both the face image area and the face rectangle.
229 |
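If you want to sanity-check the detector on its own before wiring it into the data preparation code, a quick optional test could look like this, using one of the test images that ships with this repository:

```python
import cv2

#quick standalone check of the detect_face function defined above
img = cv2.imread("test-data/test1.jpg")
face, rect = detect_face(img)

if face is None:
    print("No face found")
else:
    (x, y, w, h) = rect
    print("Face found at x={}, y={} with size {}x{}".format(x, y, w, h))
```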
230 | Now you have got a face detector and you know the 4 steps to prepare the data, so are you ready to code the prepare data step? Yes? So let's do it.
231 |
232 |
233 | ```python
234 | #this function will read all persons' training images, detect face from each image
235 | #and will return two lists of exactly same size, one list
236 | # of faces and another list of labels for each face
237 | def prepare_training_data(data_folder_path):
238 |
239 | #------STEP-1--------
240 | #get the directories (one directory for each subject) in data folder
241 | dirs = os.listdir(data_folder_path)
242 |
243 | #list to hold all subject faces
244 | faces = []
245 | #list to hold labels for all subjects
246 | labels = []
247 |
248 | #let's go through each directory and read images within it
249 | for dir_name in dirs:
250 |
251 | #our subject directories start with letter 's' so
252 | #ignore any non-relevant directories if any
253 | if not dir_name.startswith("s"):
254 | continue;
255 |
256 | #------STEP-2--------
257 | #extract label number of subject from dir_name
258 | #format of dir name = slabel
259 | #, so removing letter 's' from dir_name will give us label
260 | label = int(dir_name.replace("s", ""))
261 |
262 |         #build path of directory containing images for current subject
263 | #sample subject_dir_path = "training-data/s1"
264 | subject_dir_path = data_folder_path + "/" + dir_name
265 |
266 | #get the images names that are inside the given subject directory
267 | subject_images_names = os.listdir(subject_dir_path)
268 |
269 | #------STEP-3--------
270 | #go through each image name, read image,
271 | #detect face and add face to list of faces
272 | for image_name in subject_images_names:
273 |
274 | #ignore system files like .DS_Store
275 | if image_name.startswith("."):
276 | continue;
277 |
278 | #build image path
279 |             #sample image path = training-data/s1/1.jpg
280 | image_path = subject_dir_path + "/" + image_name
281 |
282 | #read image
283 | image = cv2.imread(image_path)
284 |
285 | #display an image window to show the image
286 | cv2.imshow("Training on image...", image)
287 | cv2.waitKey(100)
288 |
289 | #detect face
290 | face, rect = detect_face(image)
291 |
292 | #------STEP-4--------
293 | #for the purpose of this tutorial
294 | #we will ignore faces that are not detected
295 | if face is not None:
296 | #add face to list of faces
297 | faces.append(face)
298 | #add label for this face
299 | labels.append(label)
300 |
301 | cv2.destroyAllWindows()
302 | cv2.waitKey(1)
303 | cv2.destroyAllWindows()
304 |
305 | return faces, labels
306 | ```
307 |
308 | I have defined a function that takes the path, where training subjects' folders are stored, as parameter. This function follows the same 4 prepare data substeps mentioned above.
309 |
310 | **(step-1)** On _line 8_ I am using the `os.listdir` method to read the names of all folders stored on the path passed to the function as a parameter. On _lines 10-13_ I am defining the labels and faces vectors.
311 |
312 | **(step-2)** After that I traverse through all subjects' folder names and from each subject's folder name, on _line 27_, I extract the label information. As folder names follow the `sLabel` naming convention, removing the letter `s` from the folder name gives us the label assigned to that subject.
313 |
314 | **(step-3)** On _line 34_, I read all the image names of the current subject being traversed and on _lines 39-66_ I traverse those images one by one. On _lines 53-54_ I am using OpenCV's `imshow(window_title, image)` along with OpenCV's `waitKey(interval)` method to display the current image being traversed. The `waitKey(interval)` method pauses the code flow for the given interval (milliseconds); I am using it with a 100ms interval so that we can view the image window for 100ms. On _line 57_, I detect the face from the current image being traversed.
315 |
316 | **(step-4)** On _line 62-66_, I add the detected face and label to their respective vectors.
317 |
318 | But a function can't do anything unless we call it on some data that it has to prepare, right? Don't worry, I have got data of two beautiful and famous celebrities. I am sure you will recognize them!
319 |
320 | 
321 |
322 | Let's call this function on images of these beautiful celebrities to prepare data for training of our Face Recognizer. Below is a simple code to do that.
323 |
324 |
325 | ```python
326 | #let's first prepare our training data
327 | #data will be in two lists of same size
328 | #one list will contain all the faces
329 | #and other list will contain respective labels for each face
330 | print("Preparing data...")
331 | faces, labels = prepare_training_data("training-data")
332 | print("Data prepared")
333 |
334 | #print total faces and labels
335 | print("Total faces: ", len(faces))
336 | print("Total labels: ", len(labels))
337 | ```
338 |
339 | Preparing data...
340 | Data prepared
341 | Total faces: 23
342 | Total labels: 23
343 |
344 |
345 | This was probably the boring part, right? Don't worry, the fun stuff is coming up next. It's time to train our own face recognizer so that once trained it can recognize new faces of the persons it was trained on. Ready? Ok then let's train our face recognizer.
346 |
347 | ### Train Face Recognizer
348 |
349 | As we know, OpenCV comes equipped with three face recognizers.
350 |
351 | 1. EigenFace Recognizer: This can be created with `cv2.face.createEigenFaceRecognizer()`
352 | 2. FisherFace Recognizer: This can be created with `cv2.face.createFisherFaceRecognizer()`
353 | 3. Local Binary Patterns Histograms (LBPH) Recognizer: This can be created with `cv2.face.createLBPHFaceRecognizer()`
354 |
355 | I am going to use LBPH face recognizer but you can use any face recognizer of your choice. No matter which of the OpenCV's face recognizer you use the code will remain the same. You just have to change one line, the face recognizer initialization line given below.
356 |
357 |
358 | ```python
359 | #create our LBPH face recognizer
360 | face_recognizer = cv2.face.createLBPHFaceRecognizer()
361 |
362 | #or use EigenFaceRecognizer by replacing above line with
363 | #face_recognizer = cv2.face.createEigenFaceRecognizer()
364 |
365 | #or use FisherFaceRecognizer by replacing above line with
366 | #face_recognizer = cv2.face.createFisherFaceRecognizer()
367 | ```
368 |
369 | Now that we have initialized our face recognizer and we also have prepared our training data, it's time to train the face recognizer. We will do that by calling the `train(faces-vector, labels-vector)` method of face recognizer.
370 |
371 |
372 | ```python
373 | #train our face recognizer of our training faces
374 | face_recognizer.train(faces, np.array(labels))
375 | ```
376 |
377 | **Did you notice** that instead of passing `labels` vector directly to face recognizer I am first converting it to **numpy** array? This is because OpenCV expects labels vector to be a `numpy` array.
378 |
379 | Still not satisfied? Want to see some action? Next step is the real action, I promise!
380 |
381 | ### Prediction
382 |
383 | Now comes my favorite part, the prediction part. This is where we actually get to see if our algorithm is recognizing our trained subjects' faces or not. We will take two test images of our celebrities, detect faces from each of them and then pass those faces to our trained face recognizer to see if it recognizes them.
384 |
385 | Below are some utility functions that we will use for drawing a bounding box (rectangle) around a face and putting the celebrity's name near the face bounding box.
386 |
387 |
388 | ```python
389 | #function to draw rectangle on image
390 | #according to given (x, y) coordinates and
391 | #given width and height
392 | def draw_rectangle(img, rect):
393 | (x, y, w, h) = rect
394 | cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
395 |
396 | #function to draw text on given image starting from
397 | #passed (x, y) coordinates.
398 | def draw_text(img, text, x, y):
399 | cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)
400 | ```
401 |
402 | First function `draw_rectangle` draws a rectangle on an image based on the passed rectangle coordinates. It uses OpenCV's built-in function `cv2.rectangle(img, topLeftPoint, bottomRightPoint, rgbColor, lineWidth)` to draw the rectangle. We will use it to draw a rectangle around the face detected in the test image.
403 |
404 | Second function `draw_text` uses OpenCV's built-in function `cv2.putText(img, text, startPoint, font, fontSize, rgbColor, lineWidth)` to draw text on an image.
405 |
406 | Now that we have the drawing functions, we just need to call the face recognizer's `predict(face)` method to test our face recognizer on test images. Following function does the prediction for us.
407 |
408 |
409 | ```python
410 | #this function recognizes the person in image passed
411 | #and draws a rectangle around detected face with name of the
412 | #subject
413 | def predict(test_img):
414 |     #make a copy of the image as we don't want to change the original image
415 | img = test_img.copy()
416 | #detect face from the image
417 | face, rect = detect_face(img)
418 |
419 | #predict the image using our face recognizer
420 |     label, confidence = face_recognizer.predict(face)
421 | #get name of respective label returned by face recognizer
422 | label_text = subjects[label]
423 |
424 | #draw a rectangle around face detected
425 | draw_rectangle(img, rect)
426 | #draw name of predicted person
427 | draw_text(img, label_text, rect[0], rect[1]-5)
428 |
429 | return img
430 | ```
431 |
432 | * **line-6** make a copy of the test image so that the original is not changed
433 | * **line-8** detect the face from the test image
434 | * **line-11** recognize the face by calling face recognizer's `predict(face)` method. This method will return a label and a confidence value
435 | * **line-13** get the name associated with the label
436 | * **line-16** draw a rectangle around the detected face
437 | * **line-18** draw the name of the predicted subject above the face rectangle
438 |
439 | Now that we have the prediction function well defined, next step is to actually call this function on our test images and display those test images to see if our face recognizer correctly recognized them. So let's do it. This is what we have been waiting for.
440 |
441 |
442 | ```python
443 | print("Predicting images...")
444 |
445 | #load test images
446 | test_img1 = cv2.imread("test-data/test1.jpg")
447 | test_img2 = cv2.imread("test-data/test2.jpg")
448 |
449 | #perform a prediction
450 | predicted_img1 = predict(test_img1)
451 | predicted_img2 = predict(test_img2)
452 | print("Prediction complete")
453 |
454 | #create a figure of 2 plots (one for each test image)
455 | f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
456 |
457 | #display test image1 result
458 | ax1.imshow(cv2.cvtColor(predicted_img1, cv2.COLOR_BGR2RGB))
459 |
460 | #display test image2 result
461 | ax2.imshow(cv2.cvtColor(predicted_img2, cv2.COLOR_BGR2RGB))
462 |
463 | #display both images
464 | cv2.imshow("Tom cruise test", predicted_img1)
465 | cv2.imshow("Shahrukh Khan test", predicted_img2)
466 | cv2.waitKey(0)
467 | cv2.destroyAllWindows()
468 | cv2.waitKey(1)
469 | cv2.destroyAllWindows()
470 | ```
471 |
472 | Predicting images...
473 | Prediction complete
474 |
475 |
476 |
477 | 
478 |
479 |
480 | Woohoo! Isn't it beautiful? Indeed, it is!
481 |
482 | ## End Notes
483 |
484 | Face Recognition is a fascinating idea to work on and OpenCV has made it extremely simple and easy for us to code it. It just takes a few lines of code to have a fully working face recognition application and we can switch between all three face recognizers with a single line of code change. It's that simple.
485 |
486 | Although the EigenFaces, FisherFaces and LBPH face recognizers are good, there are even better ways to perform face recognition, like using Histograms of Oriented Gradients (HOGs) and Neural Networks. The more advanced face recognition algorithms are nowadays implemented using a combination of OpenCV and machine learning. I have plans to write some articles on those more advanced methods as well, so stay tuned!
487 |
488 |
492 |
--------------------------------------------------------------------------------
/face-recognition-demo.mov:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/face-recognition-demo.mov
--------------------------------------------------------------------------------
/opencv-files/lbpcascade_frontalface.xml:
--------------------------------------------------------------------------------
1 |
2 |
6 |
7 |
8 | BOOST
9 | LBP
10 | 24
11 | 24
12 |
13 | GAB
14 | 0.9950000047683716
15 | 0.5000000000000000
16 | 0.9500000000000000
17 | 1
18 | 100
19 |
20 | 256
21 | 20
22 |
23 |
24 | <_>
25 | 3
26 | -0.7520892024040222
27 |
28 |
29 | <_>
30 |
31 | 0 -1 46 -67130709 -21569 -1426120013 -1275125205 -21585
32 | -16385 587145899 -24005
33 |
34 | -0.6543210148811340 0.8888888955116272
35 |
36 | <_>
37 |
38 | 0 -1 13 -163512766 -769593758 -10027009 -262145 -514457854
39 | -193593353 -524289 -1
40 |
41 | -0.7739216089248657 0.7278633713722229
42 |
43 | <_>
44 |
45 | 0 -1 2 -363936790 -893203669 -1337948010 -136907894
46 | 1088782736 -134217726 -741544961 -1590337
47 |
48 | -0.7068563103675842 0.6761534214019775
49 |
50 | <_>
51 | 4
52 | -0.4872078299522400
53 |
54 |
55 | <_>
56 |
57 | 0 -1 84 2147483647 1946124287 -536870913 2147450879
58 | 738132490 1061101567 243204619 2147446655
59 |
60 | -0.8083735704421997 0.7685696482658386
61 |
62 | <_>
63 |
64 | 0 -1 21 2147483647 263176079 1879048191 254749487 1879048191
65 | -134252545 -268435457 801111999
66 |
67 | -0.7698410153388977 0.6592915654182434
68 |
69 | <_>
70 |
71 | 0 -1 106 -98110272 1610939566 -285484400 -850010381
72 | -189334372 -1671954433 -571026695 -262145
73 |
74 | -0.7506558895111084 0.5444605946540833
75 |
76 | <_>
77 |
78 | 0 -1 48 -798690576 -131075 1095771153 -237144073 -65569 -1
79 | -216727745 -69206049
80 |
81 | -0.7775990366935730 0.5465461611747742
82 |
83 | <_>
84 | 4
85 | -1.1592328548431396
86 |
87 |
88 | <_>
89 |
90 | 0 -1 47 -21585 -20549 -100818262 -738254174 -20561 -36865
91 | -151016790 -134238549
92 |
93 | -0.5601882934570313 0.7743113040924072
94 |
95 | <_>
96 |
97 | 0 -1 12 -286003217 183435247 -268994614 -421330945
98 | -402686081 1090387966 -286785545 -402653185
99 |
100 | -0.6124526262283325 0.6978127956390381
101 |
102 | <_>
103 |
104 | 0 -1 26 -50347012 970882927 -50463492 -1253377 -134218251
105 | -50364513 -33619992 -172490753
106 |
107 | -0.6114496588706970 0.6537628173828125
108 |
109 | <_>
110 |
111 | 0 -1 8 -273 -135266321 1877977738 -2088243418 -134217987
112 | 2146926575 -18910642 1095231247
113 |
114 | -0.6854077577590942 0.5403239130973816
115 |
116 | <_>
117 | 5
118 | -0.7562355995178223
119 |
120 |
121 | <_>
122 |
123 | 0 -1 96 -1273 1870659519 -20971602 -67633153 -134250731
124 | 2004875127 -250 -150995969
125 |
126 | -0.4051094949245453 0.7584033608436585
127 |
128 | <_>
129 |
130 | 0 -1 33 -868162224 -76810262 -4262145 -257 1465211989
131 | -268959873 -2656269 -524289
132 |
133 | -0.7388162612915039 0.5340843200683594
134 |
135 | <_>
136 |
137 | 0 -1 57 -12817 -49 -541103378 -152950 -38993 -20481 -1153876
138 | -72478976
139 |
140 | -0.6582943797111511 0.5339496731758118
141 |
142 | <_>
143 |
144 | 0 -1 125 -269484161 -452984961 -319816180 -1594032130 -2111
145 | -990117891 -488975296 -520947741
146 |
147 | -0.5981323719024658 0.5323504805564880
148 |
149 | <_>
150 |
151 | 0 -1 53 557787431 670265215 -1342193665 -1075892225
152 | 1998528318 1056964607 -33570977 -1
153 |
154 | -0.6498787999153137 0.4913350641727448
155 |
156 | <_>
157 | 5
158 | -0.8085358142852783
159 |
160 |
161 | <_>
162 |
163 | 0 -1 60 -536873708 880195381 -16842788 -20971521 -176687276
164 | -168427659 -16777260 -33554626
165 |
166 | -0.5278195738792419 0.6946372389793396
167 |
168 | <_>
169 |
170 | 0 -1 7 -1 -62981529 -1090591130 805330978 -8388827 -41945787
171 | -39577 -531118985
172 |
173 | -0.5206505060195923 0.6329920291900635
174 |
175 | <_>
176 |
177 | 0 -1 98 -725287348 1347747543 -852489 -16809993 1489881036
178 | -167903241 -1 -1
179 |
180 | -0.7516061067581177 0.4232024252414703
181 |
182 | <_>
183 |
184 | 0 -1 44 -32777 1006582562 -65 935312171 -8388609 -1078198273
185 | -1 733886267
186 |
187 | -0.7639313936233521 0.4123568832874298
188 |
189 | <_>
190 |
191 | 0 -1 24 -85474705 2138828511 -1036436754 817625855
192 | 1123369029 -58796809 -1013468481 -194513409
193 |
194 | -0.5123769044876099 0.5791834592819214
195 |
196 | <_>
197 | 5
198 | -0.5549971461296082
199 |
200 |
201 | <_>
202 |
203 | 0 -1 42 -17409 -20481 -268457797 -134239493 -17473 -1 -21829
204 | -21846
205 |
206 | -0.3763174116611481 0.7298233509063721
207 |
208 | <_>
209 |
210 | 0 -1 6 -805310737 -2098262358 -269504725 682502698
211 | 2147483519 1740574719 -1090519233 -268472385
212 |
213 | -0.5352765917778015 0.5659480094909668
214 |
215 | <_>
216 |
217 | 0 -1 61 -67109678 -6145 -8 -87884584 -20481 -1073762305
218 | -50856216 -16849696
219 |
220 | -0.5678374171257019 0.4961479902267456
221 |
222 | <_>
223 |
224 | 0 -1 123 -138428633 1002418167 -1359008245 -1908670465
225 | -1346685918 910098423 -1359010520 -1346371657
226 |
227 | -0.5706262588500977 0.4572288393974304
228 |
229 | <_>
230 |
231 | 0 -1 9 -89138513 -4196353 1256531674 -1330665426 1216308261
232 | -36190633 33498198 -151796633
233 |
234 | -0.5344601869583130 0.4672054052352905
235 |
236 | <_>
237 | 5
238 | -0.8776460289955139
239 |
240 |
241 | <_>
242 |
243 | 0 -1 105 1073769576 206601725 -34013449 -33554433 -789514004
244 | -101384321 -690225153 -264193
245 |
246 | -0.7700348496437073 0.5943940877914429
247 |
248 | <_>
249 |
250 | 0 -1 30 -1432340997 -823623681 -49153 -34291724 -269484035
251 | -1342767105 -1078198273 -1277955
252 |
253 | -0.5043668746948242 0.6151274442672730
254 |
255 | <_>
256 |
257 | 0 -1 35 -1067385040 -195758209 -436748425 -134217731
258 | -50855988 -129 -1 -1
259 |
260 | -0.6808040738105774 0.4667325913906097
261 |
262 | <_>
263 |
264 | 0 -1 119 832534325 -34111555 -26050561 -423659521 -268468364
265 | 2105014143 -2114244 -17367185
266 |
267 | -0.4927591383457184 0.5401885509490967
268 |
269 | <_>
270 |
271 | 0 -1 82 -1089439888 -1080524865 2143059967 -1114121
272 | -1140949004 -3 -2361356 -739516
273 |
274 | -0.6445107460021973 0.4227822124958038
275 |
276 | <_>
277 | 6
278 | -1.1139287948608398
279 |
280 |
281 | <_>
282 |
283 | 0 -1 52 -1074071553 -1074003969 -1 -1280135430 -5324817 -1
284 | -335548482 582134442
285 |
286 | -0.5307556986808777 0.6258179545402527
287 |
288 | <_>
289 |
290 | 0 -1 99 -706937396 -705364068 -540016724 -570495027
291 | -570630659 -587857963 -33628164 -35848193
292 |
293 | -0.5227634310722351 0.5049746036529541
294 |
295 | <_>
296 |
297 | 0 -1 18 -2035630093 42119158 -268503053 -1671444 261017599
298 | 1325432815 1954394111 -805306449
299 |
300 | -0.4983572661876679 0.5106441378593445
301 |
302 | <_>
303 |
304 | 0 -1 111 -282529488 -1558073088 1426018736 -170526448
305 | -546832487 -5113037 -34243375 -570427929
306 |
307 | -0.4990860521793366 0.5060507059097290
308 |
309 | <_>
310 |
311 | 0 -1 92 1016332500 -606301707 915094269 -1080086049
312 | -1837027144 -1361600280 2147318747 1067975613
313 |
314 | -0.5695009231567383 0.4460467398166657
315 |
316 | <_>
317 |
318 | 0 -1 51 -656420166 -15413034 -141599534 -603435836
319 | 1505950458 -787556946 -79823438 -1326199134
320 |
321 | -0.6590405106544495 0.3616424500942230
322 |
323 | <_>
324 | 7
325 | -0.8243625760078430
326 |
327 |
328 | <_>
329 |
330 | 0 -1 28 -901591776 -201916417 -262 -67371009 -143312112
331 | -524289 -41943178 -1
332 |
333 | -0.4972776770591736 0.6027074456214905
334 |
335 | <_>
336 |
337 | 0 -1 112 -4507851 -411340929 -268437513 -67502145 -17350859
338 | -32901 -71344315 -29377
339 |
340 | -0.4383158981800079 0.5966237187385559
341 |
342 | <_>
343 |
344 | 0 -1 69 -75894785 -117379438 -239063587 -12538500 1485072126
345 | 2076233213 2123118847 801906927
346 |
347 | -0.6386105418205261 0.3977999985218048
348 |
349 | <_>
350 |
351 | 0 -1 19 -823480413 786628589 -16876049 -1364262914 242165211
352 | 1315930109 -696268833 -455082829
353 |
354 | -0.5512794256210327 0.4282079637050629
355 |
356 | <_>
357 |
358 | 0 -1 73 -521411968 6746762 -1396236286 -2038436114
359 | -185612509 57669627 -143132877 -1041235973
360 |
361 | -0.6418755054473877 0.3549866080284119
362 |
363 | <_>
364 |
365 | 0 -1 126 -478153869 1076028979 -1645895615 1365298272
366 | -557859073 -339771473 1442574528 -1058802061
367 |
368 | -0.4841901361942291 0.4668019413948059
369 |
370 | <_>
371 |
372 | 0 -1 45 -246350404 -1650402048 -1610612745 -788400696
373 | 1467604861 -2787397 1476263935 -4481349
374 |
375 | -0.5855734348297119 0.3879135847091675
376 |
377 | <_>
378 | 7
379 | -1.2237116098403931
380 |
381 |
382 | <_>
383 |
384 | 0 -1 114 -24819 1572863935 -16809993 -67108865 2146778388
385 | 1433927541 -268608444 -34865205
386 |
387 | -0.2518476545810700 0.7088654041290283
388 |
389 | <_>
390 |
391 | 0 -1 97 -1841359 -134271049 -32769 -5767369 -1116675 -2185
392 | -8231 -33603327
393 |
394 | -0.4303432404994965 0.5283288359642029
395 |
396 | <_>
397 |
398 | 0 -1 25 -1359507589 -1360593090 -1073778729 -269553812
399 | -809512977 1744707583 -41959433 -134758978
400 |
401 | -0.4259553551673889 0.5440809130668640
402 |
403 | <_>
404 |
405 | 0 -1 34 729753407 -134270989 -1140907329 -235200777
406 | 658456383 2147467263 -1140900929 -16385
407 |
408 | -0.5605589151382446 0.4220733344554901
409 |
410 | <_>
411 |
412 | 0 -1 134 -310380553 -420675595 -193005472 -353568129
413 | 1205338070 -990380036 887604324 -420544526
414 |
415 | -0.5192656517028809 0.4399855434894562
416 |
417 | <_>
418 |
419 | 0 -1 16 -1427119361 1978920959 -287119734 -487068946
420 | 114759245 -540578051 -707510259 -671660453
421 |
422 | -0.5013077259063721 0.4570254683494568
423 |
424 | <_>
425 |
426 | 0 -1 74 -738463762 -889949281 -328301948 -121832450
427 | -1142658284 -1863576559 2146417353 -263185
428 |
429 | -0.4631414115428925 0.4790246188640595
430 |
431 | <_>
432 | 7
433 | -0.5544230937957764
434 |
435 |
436 | <_>
437 |
438 | 0 -1 113 -76228780 -65538 -1 -67174401 -148007 -33 -221796
439 | -272842924
440 |
441 | -0.3949716091156006 0.6082032322883606
442 |
443 | <_>
444 |
445 | 0 -1 110 369147696 -1625232112 2138570036 -1189900 790708019
446 | -1212613127 799948719 -4456483
447 |
448 | -0.4855885505676270 0.4785369932651520
449 |
450 | <_>
451 |
452 | 0 -1 37 784215839 -290015241 536832799 -402984963
453 | -1342414991 -838864897 -176769 -268456129
454 |
455 | -0.4620285332202911 0.4989669024944305
456 |
457 | <_>
458 |
459 | 0 -1 41 -486418688 -171915327 -340294900 -21938 -519766032
460 | -772751172 -73096060 -585322623
461 |
462 | -0.6420643329620361 0.3624351918697357
463 |
464 | <_>
465 |
466 | 0 -1 117 -33554953 -475332625 -1423463824 -2077230421
467 | -4849669 -2080505925 -219032928 -1071915349
468 |
469 | -0.4820112884044647 0.4632140696048737
470 |
471 | <_>
472 |
473 | 0 -1 65 -834130468 -134217476 -1349314083 -1073803559
474 | -619913764 -1449131844 -1386890321 -1979118423
475 |
476 | -0.4465552568435669 0.5061788558959961
477 |
478 | <_>
479 |
480 | 0 -1 56 -285249779 1912569855 -16530 -1731022870 -1161904146
481 | -1342177297 -268439634 -1464078708
482 |
483 | -0.5190586447715759 0.4441480338573456
484 |
485 | <_>
486 | 7
487 | -0.7161560654640198
488 |
489 |
490 | <_>
491 |
492 | 0 -1 20 1246232575 1078001186 -10027057 60102 -277348353
493 | -43646987 -1210581153 1195769615
494 |
495 | -0.4323809444904327 0.5663768053054810
496 |
497 | <_>
498 |
499 | 0 -1 15 -778583572 -612921106 -578775890 -4036478
500 | -1946580497 -1164766570 -1986687009 -12103599
501 |
502 | -0.4588732719421387 0.4547033011913300
503 |
504 | <_>
505 |
506 | 0 -1 129 -1073759445 2013231743 -1363169553 -1082459201
507 | -1414286549 868185983 -1356133589 -1077936257
508 |
509 | -0.5218553543090820 0.4111092388629913
510 |
511 | <_>
512 |
513 | 0 -1 102 -84148365 -2093417722 -1204850272 564290299
514 | -67121221 -1342177350 -1309195902 -776734797
515 |
516 | -0.4920000731945038 0.4326725304126740
517 |
518 | <_>
519 |
520 | 0 -1 88 -25694458 67104495 -290216278 -168563037 2083877442
521 | 1702788383 -144191964 -234882162
522 |
523 | -0.4494568109512329 0.4448510706424713
524 |
525 | <_>
526 |
527 | 0 -1 59 -857980836 904682741 -1612267521 232279415
528 | 1550862252 -574825221 -357380888 -4579409
529 |
530 | -0.5180826783180237 0.3888972699642181
531 |
532 | <_>
533 |
534 | 0 -1 27 -98549440 -137838400 494928389 -246013630 939541351
535 | -1196072350 -620603549 2137216273
536 |
537 | -0.6081240773200989 0.3333222270011902
538 |
539 | <_>
540 | 8
541 | -0.6743940711021423
542 |
543 |
544 | <_>
545 |
546 | 0 -1 29 -150995201 2071191945 -1302151626 536934335
547 | -1059008937 914128709 1147328110 -268369925
548 |
549 | -0.1790193915367127 0.6605972051620483
550 |
551 | <_>
552 |
553 | 0 -1 128 -134509479 1610575703 -1342177289 1861484541
554 | -1107833788 1577058173 -333558568 -136319041
555 |
556 | -0.3681024610996246 0.5139749646186829
557 |
558 | <_>
559 |
560 | 0 -1 70 -1 1060154476 -1090984524 -630918524 -539492875
561 | 779616255 -839568424 -321
562 |
563 | -0.3217232525348663 0.6171553134918213
564 |
565 | <_>
566 |
567 | 0 -1 4 -269562385 -285029906 -791084350 -17923776 235286671
568 | 1275504943 1344390399 -966276889
569 |
570 | -0.4373284578323364 0.4358185231685638
571 |
572 | <_>
573 |
574 | 0 -1 76 17825984 -747628419 595427229 1474759671 575672208
575 | -1684005538 872217086 -1155858277
576 |
577 | -0.4404836893081665 0.4601220190525055
578 |
579 | <_>
580 |
581 | 0 -1 124 -336593039 1873735591 -822231622 -355795238
582 | -470820869 -1997537409 -1057132384 -1015285005
583 |
584 | -0.4294152259826660 0.4452161788940430
585 |
586 | <_>
587 |
588 | 0 -1 54 -834212130 -593694721 -322142257 -364892500
589 | -951029539 -302125121 -1615106053 -79249765
590 |
591 | -0.3973052501678467 0.4854526817798615
592 |
593 | <_>
594 |
595 | 0 -1 95 1342144479 2147431935 -33554561 -47873 -855685912 -1
596 | 1988052447 536827383
597 |
598 | -0.7054683566093445 0.2697997391223908
599 |
600 | <_>
601 | 9
602 | -1.2042298316955566
603 |
604 |
605 | <_>
606 |
607 | 0 -1 39 1431368960 -183437936 -537002499 -137497097
608 | 1560590321 -84611081 -2097193 -513
609 |
610 | -0.5905947685241699 0.5101932883262634
611 |
612 | <_>
613 |
614 | 0 -1 120 -1645259691 2105491231 2130706431 1458995007
615 | -8567536 -42483883 -33780003 -21004417
616 |
617 | -0.4449204802513123 0.4490709304809570
618 |
619 | <_>
620 |
621 | 0 -1 89 -612381022 -505806938 -362027516 -452985106
622 | 275854917 1920431639 -12600561 -134221825
623 |
624 | -0.4693818688392639 0.4061094820499420
625 |
626 | <_>
627 |
628 | 0 -1 14 -805573153 -161 -554172679 -530519488 -16779441
629 | 2000682871 -33604275 -150997129
630 |
631 | -0.3600351214408875 0.5056326985359192
632 |
633 | <_>
634 |
635 | 0 -1 67 6192 435166195 1467449341 2046691505 -1608493775
636 | -4755729 -1083162625 -71365637
637 |
638 | -0.4459891915321350 0.4132415652275085
639 |
640 | <_>
641 |
642 | 0 -1 86 -41689215 -3281034 1853357967 -420712635 -415924289
643 | -270209208 -1088293113 -825311232
644 |
645 | -0.4466069042682648 0.4135067760944367
646 |
647 | <_>
648 |
649 | 0 -1 80 -117391116 -42203396 2080374461 -188709 -542008165
650 | -356831940 -1091125345 -1073796897
651 |
652 | -0.3394956290721893 0.5658645033836365
653 |
654 | <_>
655 |
656 | 0 -1 75 -276830049 1378714472 -1342181951 757272098
657 | 1073740607 -282199241 -415761549 170896931
658 |
659 | -0.5346512198448181 0.3584479391574860
660 |
661 | <_>
662 |
663 | 0 -1 55 -796075825 -123166849 2113667055 -217530421
664 | -1107432194 -16385 -806359809 -391188771
665 |
666 | -0.4379335641860962 0.4123645126819611
667 |
668 | <_>
669 | 10
670 | -0.8402050137519836
671 |
672 |
673 | <_>
674 |
675 | 0 -1 71 -890246622 15525883 -487690486 47116238 -1212319899
676 | -1291847681 -68159890 -469829921
677 |
678 | -0.2670986354351044 0.6014143228530884
679 |
680 | <_>
681 |
682 | 0 -1 31 -1361180685 -1898008841 -1090588811 -285410071
683 | -1074016265 -840443905 2147221487 -262145
684 |
685 | -0.4149844348430634 0.4670888185501099
686 |
687 | <_>
688 |
689 | 0 -1 40 1426190596 1899364271 2142731795 -142607505
690 | -508232452 -21563393 -41960001 -65
691 |
692 | -0.4985891580581665 0.3719584941864014
693 |
694 | <_>
695 |
696 | 0 -1 109 -201337965 10543906 -236498096 -746195597
697 | 1974565825 -15204415 921907633 -190058309
698 |
699 | -0.4568729996681213 0.3965812027454376
700 |
701 | <_>
702 |
703 | 0 -1 130 -595026732 -656401928 -268649235 -571490699
704 | -440600392 -133131 -358810952 -2004088646
705 |
706 | -0.4770836830139160 0.3862601518630981
707 |
708 | <_>
709 |
710 | 0 -1 66 941674740 -1107882114 1332789109 -67691015
711 | -1360463693 -1556612430 -609108546 733546933
712 |
713 | -0.4877715110778809 0.3778986334800720
714 |
715 | <_>
716 |
717 | 0 -1 49 -17114945 -240061474 1552871558 -82775604 -932393844
718 | -1308544889 -532635478 -99042357
719 |
720 | -0.3721654713153839 0.4994400143623352
721 |
722 | <_>
723 |
724 | 0 -1 133 -655906006 1405502603 -939205164 1884929228
725 | -498859222 559417357 -1928559445 -286264385
726 |
727 | -0.3934195041656494 0.4769641458988190
728 |
729 | <_>
730 |
731 | 0 -1 0 -335837777 1860677295 -90 -1946186226 931096183
732 | 251612987 2013265917 -671232197
733 |
734 | -0.4323300719261169 0.4342164099216461
735 |
736 | <_>
737 |
738 | 0 -1 103 37769424 -137772680 374692301 2002666345 -536176194
739 | -1644484728 807009019 1069089930
740 |
741 | -0.4993278682231903 0.3665378093719482
742 |
743 | <_>
744 | 9
745 | -1.1974394321441650
746 |
747 |
748 | <_>
749 |
750 | 0 -1 43 -5505 2147462911 2143265466 -4511070 -16450 -257
751 | -201348440 -71333206
752 |
753 | -0.3310225307941437 0.5624626278877258
754 |
755 | <_>
756 |
757 | 0 -1 90 -136842268 -499330741 2015250980 -87107126
758 | -641665744 -788524639 -1147864792 -134892563
759 |
760 | -0.5266560912132263 0.3704403042793274
761 |
762 | <_>
763 |
764 | 0 -1 104 -146800880 -1780368555 2111170033 -140904684
765 | -16777551 -1946681885 -1646463595 -839131947
766 |
767 | -0.4171888828277588 0.4540435671806335
768 |
769 | <_>
770 |
771 | 0 -1 85 -832054034 -981663763 -301990281 -578814081
772 | -932319000 -1997406723 -33555201 -69206017
773 |
774 | -0.4556705355644226 0.3704262077808380
775 |
776 | <_>
777 |
778 | 0 -1 24 -118492417 -1209026825 1119023838 -1334313353
779 | 1112948738 -297319313 1378887291 -139469193
780 |
781 | -0.4182529747486115 0.4267231225967407
782 |
783 | <_>
784 |
785 | 0 -1 78 -1714382628 -2353704 -112094959 -549613092
786 | -1567058760 -1718550464 -342315012 -1074972227
787 |
788 | -0.3625369668006897 0.4684656262397766
789 |
790 | <_>
791 |
792 | 0 -1 5 -85219702 316836394 -33279 1904970288 2117267315
793 | -260901769 -621461759 -88607770
794 |
795 | -0.4742925167083740 0.3689507246017456
796 |
797 | <_>
798 |
799 | 0 -1 11 -294654041 -353603585 -1641159686 -50331921
800 | -2080899877 1145569279 -143132713 -152044037
801 |
802 | -0.3666271567344666 0.4580127298831940
803 |
804 | <_>
805 |
806 | 0 -1 32 1887453658 -638545712 -1877976819 -34320972
807 | -1071067983 -661345416 -583338277 1060190561
808 |
809 | -0.4567637443542481 0.3894708156585693
810 |
811 | <_>
812 | 9
813 | -0.5733128190040588
814 |
815 |
816 | <_>
817 |
818 | 0 -1 122 -994063296 1088745462 -318837116 -319881377
819 | 1102566613 1165490103 -121679694 -134744129
820 |
821 | -0.4055117964744568 0.5487945079803467
822 |
823 | <_>
824 |
825 | 0 -1 68 -285233233 -538992907 1811935199 -369234005 -529
826 | -20593 -20505 -1561401854
827 |
828 | -0.3787897229194641 0.4532003402709961
829 |
830 | <_>
831 |
832 | 0 -1 58 -1335245632 1968917183 1940861695 536816369
833 | -1226071367 -570908176 457026619 1000020667
834 |
835 | -0.4258328974246979 0.4202791750431061
836 |
837 | <_>
838 |
839 | 0 -1 94 -1360318719 -1979797897 -50435249 -18646473
840 | -608879292 -805306691 -269304244 -17840167
841 |
842 | -0.4561023116111755 0.4002747833728790
843 |
844 | <_>
845 |
846 | 0 -1 87 2062765935 -16449 -1275080721 -16406 45764335
847 | -1090552065 -772846337 -570464322
848 |
849 | -0.4314672648906708 0.4086346626281738
850 |
851 | <_>
852 |
853 | 0 -1 127 -536896021 1080817663 -738234288 -965478709
854 | -2082767969 1290855887 1993822934 -990381609
855 |
856 | -0.4174543321132660 0.4249868988990784
857 |
858 | <_>
859 |
860 | 0 -1 3 -818943025 168730891 -293610428 -79249354 669224671
861 | 621166734 1086506807 1473768907
862 |
863 | -0.4321364760398865 0.4090838730335236
864 |
865 | <_>
866 |
867 | 0 -1 79 -68895696 -67107736 -1414315879 -841676168
868 | -619843344 -1180610531 -1081990469 1043203389
869 |
870 | -0.5018386244773865 0.3702533841133118
871 |
872 | <_>
873 |
874 | 0 -1 116 -54002134 -543485719 -2124882422 -1437445858
875 | -115617074 -1195787391 -1096024366 -2140472445
876 |
877 | -0.5037505626678467 0.3564981222152710
878 |
879 | <_>
880 | 9
881 | -0.4892596900463104
882 |
883 |
884 | <_>
885 |
886 | 0 -1 132 -67113211 2003808111 1862135111 846461923 -2752
887 | 2002237273 -273154752 1937223539
888 |
889 | -0.2448196411132813 0.5689709186553955
890 |
891 | <_>
892 |
893 | 0 -1 62 1179423888 -78064940 -611839555 -539167899
894 | -1289358360 -1650810108 -892540499 -1432827684
895 |
896 | -0.4633283913135529 0.3587929606437683
897 |
898 | <_>
899 |
900 | 0 -1 23 -285212705 -78450761 -656212031 -264050110 -27787425
901 | -1334349961 -547662981 -135796924
902 |
903 | -0.3731099069118500 0.4290455579757690
904 |
905 | <_>
906 |
907 | 0 -1 77 341863476 403702016 -550588417 1600194541
908 | -1080690735 951127993 -1388580949 -1153717473
909 |
910 | -0.3658909499645233 0.4556473195552826
911 |
912 | <_>
913 |
914 | 0 -1 22 -586880702 -204831512 -100644596 -39319550
915 | -1191150794 705692513 457203315 -75806957
916 |
917 | -0.5214384198188782 0.3221037387847900
918 |
919 | <_>
920 |
921 | 0 -1 72 -416546870 545911370 -673716192 -775559454
922 | -264113598 139424 -183369982 -204474641
923 |
924 | -0.4289036989212036 0.4004956185817719
925 |
926 | <_>
927 |
928 | 0 -1 50 -1026505020 -589692154 -1740499937 -1563770497
929 | 1348491006 -60710713 -1109853489 -633909413
930 |
931 | -0.4621542394161224 0.3832748532295227
932 |
933 | <_>
934 |
935 | 0 -1 108 -1448872304 -477895040 -1778390608 -772418127
936 | -1789923416 -1612057181 -805306693 -1415842113
937 |
938 | -0.3711548447608948 0.4612701535224915
939 |
940 | <_>
941 |
942 | 0 -1 92 407905424 -582449988 52654751 -1294472 -285103725
943 | -74633006 1871559083 1057955850
944 |
945 | -0.5180652141571045 0.3205870389938355
946 |
947 | <_>
948 | 10
949 | -0.5911940932273865
950 |
951 |
952 | <_>
953 |
954 | 0 -1 81 4112 -1259563825 -846671428 -100902460 1838164148
955 | -74153752 -90653988 -1074263896
956 |
957 | -0.2592592537403107 0.5873016119003296
958 |
959 | <_>
960 |
961 | 0 -1 1 -285216785 -823206977 -1085589 -1081346 1207959293
962 | 1157103471 2097133565 -2097169
963 |
964 | -0.3801195919513702 0.4718827307224274
965 |
966 | <_>
967 |
968 | 0 -1 121 -12465 -536875169 2147478367 2130706303 -37765492
969 | -866124467 -318782328 -1392509185
970 |
971 | -0.3509117066860199 0.5094807147979736
972 |
973 | <_>
974 |
975 | 0 -1 38 2147449663 -20741 -16794757 1945873146 -16710 -1
976 | -8406341 -67663041
977 |
978 | -0.4068757295608521 0.4130136370658875
979 |
980 | <_>
981 |
982 | 0 -1 17 -155191713 866117231 1651407483 548272812 -479201468
983 | -447742449 1354229504 -261884429
984 |
985 | -0.4557141065597534 0.3539792001247406
986 |
987 | <_>
988 |
989 | 0 -1 100 -225319378 -251682065 -492783986 -792341777
990 | -1287261695 1393643841 -11274182 -213909521
991 |
992 | -0.4117803275585175 0.4118592441082001
993 |
994 | <_>
995 |
996 | 0 -1 63 -382220122 -2002072729 -51404800 -371201558
997 | -923011069 -2135301457 -2066104743 -1042557441
998 |
999 | -0.4008397758007050 0.4034757018089294
1000 |
1001 | <_>
1002 |
1003 | 0 -1 101 -627353764 -48295149 1581203952 -436258614
1004 | -105268268 -1435893445 -638126888 -1061107126
1005 |
1006 | -0.5694189667701721 0.2964762747287750
1007 |
1008 | <_>
1009 |
1010 | 0 -1 118 -8399181 1058107691 -621022752 -251003468 -12582915
1011 | -574619739 -994397789 -1648362021
1012 |
1013 | -0.3195341229438782 0.5294018983840942
1014 |
1015 | <_>
1016 |
1017 | 0 -1 92 -348343812 -1078389516 1717960437 364735981
1018 | -1783841602 -4883137 -457572354 -1076950384
1019 |
1020 | -0.3365339040756226 0.5067458748817444
1021 |
1022 | <_>
1023 | 10
1024 | -0.7612916231155396
1025 |
1026 |
1027 | <_>
1028 |
1029 | 0 -1 10 -1976661318 -287957604 -1659497122 -782068 43591089
1030 | -453637880 1435470000 -1077438561
1031 |
1032 | -0.4204545319080353 0.5165745615959168
1033 |
1034 | <_>
1035 |
1036 | 0 -1 131 -67110925 14874979 -142633168 -1338923040
1037 | 2046713291 -2067933195 1473503712 -789579837
1038 |
1039 | -0.3762553930282593 0.4075302779674530
1040 |
1041 | <_>
1042 |
1043 | 0 -1 83 -272814301 -1577073 -1118685 -305156120 -1052289
1044 | -1073813756 -538971154 -355523038
1045 |
1046 | -0.4253497421741486 0.3728055357933044
1047 |
1048 | <_>
1049 |
1050 | 0 -1 135 -2233 -214486242 -538514758 573747007 -159390971
1051 | 1994225489 -973738098 -203424005
1052 |
1053 | -0.3601998090744019 0.4563256204128265
1054 |
1055 | <_>
1056 |
1057 | 0 -1 115 -261031688 -1330369299 -641860609 1029570301
1058 | -1306461192 -1196149518 -1529767778 683139823
1059 |
1060 | -0.4034293889999390 0.4160816967487335
1061 |
1062 | <_>
1063 |
1064 | 0 -1 64 -572993608 -34042628 -417865 -111109 -1433365268
1065 | -19869715 -1920939864 -1279457063
1066 |
1067 | -0.3620899617671967 0.4594142735004425
1068 |
1069 | <_>
1070 |
1071 | 0 -1 36 -626275097 -615256993 1651946018 805366393
1072 | 2016559730 -430780849 -799868165 -16580645
1073 |
1074 | -0.3903816640377045 0.4381459355354309
1075 |
1076 | <_>
1077 |
1078 | 0 -1 93 1354797300 -1090957603 1976418270 -1342502178
1079 | -1851873892 -1194637077 -1153521668 -1108399474
1080 |
1081 | -0.3591445386409760 0.4624078869819641
1082 |
1083 | <_>
1084 |
1085 | 0 -1 91 68157712 1211368313 -304759523 1063017136 798797750
1086 | -275513546 648167355 -1145357350
1087 |
1088 | -0.4297670423984528 0.4023293554782867
1089 |
1090 | <_>
1091 |
1092 | 0 -1 107 -546318240 -1628569602 -163577944 -537002306
1093 | -545456389 -1325465645 -380446736 -1058473386
1094 |
1095 | -0.5727006793022156 0.2995934784412384
1096 |
1097 | <_>
1098 |
1099 | 0 0 3 5
1100 | <_>
1101 |
1102 | 0 0 4 2
1103 | <_>
1104 |
1105 | 0 0 6 3
1106 | <_>
1107 |
1108 | 0 1 2 3
1109 | <_>
1110 |
1111 | 0 1 3 3
1112 | <_>
1113 |
1114 | 0 1 3 7
1115 | <_>
1116 |
1117 | 0 4 3 3
1118 | <_>
1119 |
1120 | 0 11 3 4
1121 | <_>
1122 |
1123 | 0 12 8 4
1124 | <_>
1125 |
1126 | 0 14 4 3
1127 | <_>
1128 |
1129 | 1 0 5 3
1130 | <_>
1131 |
1132 | 1 1 2 2
1133 | <_>
1134 |
1135 | 1 3 3 1
1136 | <_>
1137 |
1138 | 1 7 4 4
1139 | <_>
1140 |
1141 | 1 12 2 2
1142 | <_>
1143 |
1144 | 1 13 4 1
1145 | <_>
1146 |
1147 | 1 14 4 3
1148 | <_>
1149 |
1150 | 1 17 3 2
1151 | <_>
1152 |
1153 | 2 0 2 3
1154 | <_>
1155 |
1156 | 2 1 2 2
1157 | <_>
1158 |
1159 | 2 2 4 6
1160 | <_>
1161 |
1162 | 2 3 4 4
1163 | <_>
1164 |
1165 | 2 7 2 1
1166 | <_>
1167 |
1168 | 2 11 2 3
1169 | <_>
1170 |
1171 | 2 17 3 2
1172 | <_>
1173 |
1174 | 3 0 2 2
1175 | <_>
1176 |
1177 | 3 1 7 3
1178 | <_>
1179 |
1180 | 3 7 2 1
1181 | <_>
1182 |
1183 | 3 7 2 4
1184 | <_>
1185 |
1186 | 3 18 2 2
1187 | <_>
1188 |
1189 | 4 0 2 3
1190 | <_>
1191 |
1192 | 4 3 2 1
1193 | <_>
1194 |
1195 | 4 6 2 1
1196 | <_>
1197 |
1198 | 4 6 2 5
1199 | <_>
1200 |
1201 | 4 7 5 2
1202 | <_>
1203 |
1204 | 4 8 4 3
1205 | <_>
1206 |
1207 | 4 18 2 2
1208 | <_>
1209 |
1210 | 5 0 2 2
1211 | <_>
1212 |
1213 | 5 3 4 4
1214 | <_>
1215 |
1216 | 5 6 2 5
1217 | <_>
1218 |
1219 | 5 9 2 2
1220 | <_>
1221 |
1222 | 5 10 2 2
1223 | <_>
1224 |
1225 | 6 3 4 4
1226 | <_>
1227 |
1228 | 6 4 4 3
1229 | <_>
1230 |
1231 | 6 5 2 3
1232 | <_>
1233 |
1234 | 6 5 2 5
1235 | <_>
1236 |
1237 | 6 5 4 3
1238 | <_>
1239 |
1240 | 6 6 4 2
1241 | <_>
1242 |
1243 | 6 6 4 4
1244 | <_>
1245 |
1246 | 6 18 1 2
1247 | <_>
1248 |
1249 | 6 21 2 1
1250 | <_>
1251 |
1252 | 7 0 3 7
1253 | <_>
1254 |
1255 | 7 4 2 3
1256 | <_>
1257 |
1258 | 7 9 5 1
1259 | <_>
1260 |
1261 | 7 21 2 1
1262 | <_>
1263 |
1264 | 8 0 1 4
1265 | <_>
1266 |
1267 | 8 5 2 2
1268 | <_>
1269 |
1270 | 8 5 3 2
1271 | <_>
1272 |
1273 | 8 17 3 1
1274 | <_>
1275 |
1276 | 8 18 1 2
1277 | <_>
1278 |
1279 | 9 0 5 3
1280 | <_>
1281 |
1282 | 9 2 2 6
1283 | <_>
1284 |
1285 | 9 5 1 1
1286 | <_>
1287 |
1288 | 9 11 1 1
1289 | <_>
1290 |
1291 | 9 16 1 1
1292 | <_>
1293 |
1294 | 9 16 2 1
1295 | <_>
1296 |
1297 | 9 17 1 1
1298 | <_>
1299 |
1300 | 9 18 1 1
1301 | <_>
1302 |
1303 | 10 5 1 2
1304 | <_>
1305 |
1306 | 10 5 3 3
1307 | <_>
1308 |
1309 | 10 7 1 5
1310 | <_>
1311 |
1312 | 10 8 1 1
1313 | <_>
1314 |
1315 | 10 9 1 1
1316 | <_>
1317 |
1318 | 10 10 1 1
1319 | <_>
1320 |
1321 | 10 10 1 2
1322 | <_>
1323 |
1324 | 10 14 3 3
1325 | <_>
1326 |
1327 | 10 15 1 1
1328 | <_>
1329 |
1330 | 10 15 2 1
1331 | <_>
1332 |
1333 | 10 16 1 1
1334 | <_>
1335 |
1336 | 10 16 2 1
1337 | <_>
1338 |
1339 | 10 17 1 1
1340 | <_>
1341 |
1342 | 10 21 1 1
1343 | <_>
1344 |
1345 | 11 3 2 2
1346 | <_>
1347 |
1348 | 11 5 1 2
1349 | <_>
1350 |
1351 | 11 5 3 3
1352 | <_>
1353 |
1354 | 11 5 4 6
1355 | <_>
1356 |
1357 | 11 6 1 1
1358 | <_>
1359 |
1360 | 11 7 2 2
1361 | <_>
1362 |
1363 | 11 8 1 2
1364 | <_>
1365 |
1366 | 11 10 1 1
1367 | <_>
1368 |
1369 | 11 10 1 2
1370 | <_>
1371 |
1372 | 11 15 1 1
1373 | <_>
1374 |
1375 | 11 17 1 1
1376 | <_>
1377 |
1378 | 11 18 1 1
1379 | <_>
1380 |
1381 | 12 0 2 2
1382 | <_>
1383 |
1384 | 12 1 2 5
1385 | <_>
1386 |
1387 | 12 2 4 1
1388 | <_>
1389 |
1390 | 12 3 1 3
1391 | <_>
1392 |
1393 | 12 7 3 4
1394 | <_>
1395 |
1396 | 12 10 3 2
1397 | <_>
1398 |
1399 | 12 11 1 1
1400 | <_>
1401 |
1402 | 12 12 3 2
1403 | <_>
1404 |
1405 | 12 14 4 3
1406 | <_>
1407 |
1408 | 12 17 1 1
1409 | <_>
1410 |
1411 | 12 21 2 1
1412 | <_>
1413 |
1414 | 13 6 2 5
1415 | <_>
1416 |
1417 | 13 7 3 5
1418 | <_>
1419 |
1420 | 13 11 3 2
1421 | <_>
1422 |
1423 | 13 17 2 2
1424 | <_>
1425 |
1426 | 13 17 3 2
1427 | <_>
1428 |
1429 | 13 18 1 2
1430 | <_>
1431 |
1432 | 13 18 2 2
1433 | <_>
1434 |
1435 | 14 0 2 2
1436 | <_>
1437 |
1438 | 14 1 1 3
1439 | <_>
1440 |
1441 | 14 2 3 2
1442 | <_>
1443 |
1444 | 14 7 2 1
1445 | <_>
1446 |
1447 | 14 13 2 1
1448 | <_>
1449 |
1450 | 14 13 3 3
1451 | <_>
1452 |
1453 | 14 17 2 2
1454 | <_>
1455 |
1456 | 15 0 2 2
1457 | <_>
1458 |
1459 | 15 0 2 3
1460 | <_>
1461 |
1462 | 15 4 3 2
1463 | <_>
1464 |
1465 | 15 4 3 6
1466 | <_>
1467 |
1468 | 15 6 3 2
1469 | <_>
1470 |
1471 | 15 11 3 4
1472 | <_>
1473 |
1474 | 15 13 3 2
1475 | <_>
1476 |
1477 | 15 17 2 2
1478 | <_>
1479 |
1480 | 15 17 3 2
1481 | <_>
1482 |
1483 | 16 1 2 3
1484 | <_>
1485 |
1486 | 16 3 2 4
1487 | <_>
1488 |
1489 | 16 6 1 1
1490 | <_>
1491 |
1492 | 16 16 2 2
1493 | <_>
1494 |
1495 | 17 1 2 2
1496 | <_>
1497 |
1498 | 17 1 2 5
1499 | <_>
1500 |
1501 | 17 12 2 2
1502 | <_>
1503 |
1504 | 18 0 2 2
1505 |
1506 |
--------------------------------------------------------------------------------
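(The block above is the tail of one of the bundled detector files — by its integer node masks and feature rectangles it appears to be opencv-files/lbpcascade_frontalface.xml, the LBP cascade used for face detection in this tutorial. As a minimal sketch of how such a cascade file is consumed, assuming opencv-python is installed and the repo files are on disk — test-data/test1.jpg is one of the repo's own test images:)

# minimal sketch: load the bundled LBP cascade and detect faces in a test image
import cv2

# cascade file shipped in this repo (assumption: run from the repo root)
cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')

img = cv2.imread('test-data/test1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # cascades operate on grayscale images

# scaleFactor/minNeighbors are typical values, not tuned constants from this repo
faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)

for (x, y, w, h) in faces:
    # draw a green rectangle around each detected face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

--------------------------------------------------------------------------------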
/output/output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/output/output.png
--------------------------------------------------------------------------------
/output/tom-shahrukh.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/output/tom-shahrukh.png
--------------------------------------------------------------------------------
/test-data/test1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/test-data/test1.jpg
--------------------------------------------------------------------------------
/test-data/test2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/test-data/test2.jpg
--------------------------------------------------------------------------------
/training-data/s1/1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/1.jpg
--------------------------------------------------------------------------------
/training-data/s1/10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/10.jpg
--------------------------------------------------------------------------------
/training-data/s1/11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/11.jpg
--------------------------------------------------------------------------------
/training-data/s1/12.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/12.jpg
--------------------------------------------------------------------------------
/training-data/s1/2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/2.jpg
--------------------------------------------------------------------------------
/training-data/s1/3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/3.jpg
--------------------------------------------------------------------------------
/training-data/s1/4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/4.jpg
--------------------------------------------------------------------------------
/training-data/s1/5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/5.jpg
--------------------------------------------------------------------------------
/training-data/s1/6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/6.jpg
--------------------------------------------------------------------------------
/training-data/s1/7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/7.jpg
--------------------------------------------------------------------------------
/training-data/s1/8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/8.jpg
--------------------------------------------------------------------------------
/training-data/s1/9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s1/9.jpg
--------------------------------------------------------------------------------
/training-data/s2/1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/1.jpg
--------------------------------------------------------------------------------
/training-data/s2/10.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/10.jpeg
--------------------------------------------------------------------------------
/training-data/s2/11.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/11.jpeg
--------------------------------------------------------------------------------
/training-data/s2/12.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/12.jpg
--------------------------------------------------------------------------------
/training-data/s2/2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/2.jpg
--------------------------------------------------------------------------------
/training-data/s2/3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/3.jpg
--------------------------------------------------------------------------------
/training-data/s2/4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/4.jpg
--------------------------------------------------------------------------------
/training-data/s2/5.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/5.jpeg
--------------------------------------------------------------------------------
/training-data/s2/6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/6.jpg
--------------------------------------------------------------------------------
/training-data/s2/7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/7.jpg
--------------------------------------------------------------------------------
/training-data/s2/8.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/8.jpeg
--------------------------------------------------------------------------------
/training-data/s2/9.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/training-data/s2/9.jpeg
--------------------------------------------------------------------------------
/visualization/eigenfaces_opencv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/visualization/eigenfaces_opencv.png
--------------------------------------------------------------------------------
/visualization/fisherfaces_opencv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/visualization/fisherfaces_opencv.png
--------------------------------------------------------------------------------
/visualization/histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/visualization/histogram.png
--------------------------------------------------------------------------------
/visualization/illumination-changes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/visualization/illumination-changes.png
--------------------------------------------------------------------------------
/visualization/lbp-labeling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/visualization/lbp-labeling.png
--------------------------------------------------------------------------------
/visualization/lbph-faces.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/visualization/lbph-faces.jpg
--------------------------------------------------------------------------------
/visualization/test-images.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/visualization/test-images.png
--------------------------------------------------------------------------------
/visualization/tom-shahrukh.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/informramiz/opencv-face-recognition-python/0edc6e061d7c2983b37b4e1ccbd2d1c86b7d5473/visualization/tom-shahrukh.png
--------------------------------------------------------------------------------