├── .travis.yml ├── README.md ├── align ├── __pycache__ │ ├── __init__.cpython-36.pyc │ └── detect_face.cpython-36.pyc ├── align_dataset_mtcnn.py ├── det1.npy ├── det2.npy ├── det3.npy └── detect_face.py ├── bounding.py ├── cascade.py ├── check.py ├── coordinates.py ├── dataset_util.py ├── generate_tfrecord.py ├── haarcascade_frontalface_default.xml ├── images ├── Steve Jobs │ ├── 10.jpg │ ├── 11.jpg │ ├── 12.jpg │ ├── 14.jpg │ ├── 16.jpg │ ├── 18.jpg │ ├── 19.jpg │ ├── 2.jpg │ ├── 20.jpg │ ├── 24.jpg │ ├── 25.jpg │ ├── 26.jpg │ ├── 28.jpg │ ├── 29.jpg │ ├── 3.jpg │ ├── 30.jpg │ ├── 31.jpg │ ├── 32.jpg │ ├── 33.jpg │ ├── 34.jpg │ ├── 36.jpg │ ├── 38.jpg │ ├── 39.jpg │ ├── 4.jpg │ ├── 41.jpg │ ├── 43.jpg │ ├── 45.jpg │ ├── 5.jpeg │ ├── 6.jpg │ ├── 7.jpg │ ├── 8.jpg │ └── 9.jpg └── Tim Cook │ ├── 1.jpg │ ├── 10.jpg │ ├── 11.jpg │ ├── 12.jpg │ ├── 13.jpg │ ├── 16.jpg │ ├── 18.jpg │ ├── 19.jpeg │ ├── 2.jpg │ ├── 23.jpg │ ├── 24.jpg │ ├── 26.jpg │ ├── 27.jpg │ ├── 3.jpg │ ├── 31.jpg │ ├── 32.jpg │ ├── 33.jpg │ ├── 35.jpg │ ├── 36.jpg │ ├── 37.jpg │ ├── 38.jpg │ ├── 43.jpg │ ├── 44.jpg │ ├── 45.jpg │ ├── 46.jpg │ ├── 47.jpg │ ├── 48.jpg │ ├── 49.jpeg │ ├── 5.jpeg │ ├── 50.jpg │ ├── 51.jpg │ ├── 54.jpg │ ├── 55.jpg │ ├── 56.jpg │ ├── 58.jpg │ ├── 59.jpg │ ├── 6.jpg │ ├── 60.jpg │ ├── 61.jpg │ ├── 62.jpg │ ├── 64.jpg │ ├── 65.jpg │ ├── 66.jpg │ ├── 67.jpg │ ├── 68.jpg │ ├── 69.jpg │ ├── 7.jpg │ ├── 8.jpg │ └── 9.jpg ├── main.py ├── output ├── Steve Jobs │ ├── 10.jpg │ ├── 11.jpg │ ├── 12.jpg │ ├── 14.jpg │ ├── 16.jpg │ ├── 18.jpg │ ├── 19.jpg │ ├── 2.jpg │ ├── 20.jpg │ ├── 24.jpg │ ├── 25.jpg │ ├── 26.jpg │ ├── 28.jpg │ ├── 29.jpg │ ├── 3.jpg │ ├── 30.jpg │ ├── 31.jpg │ ├── 32.jpg │ ├── 33.jpg │ ├── 34.jpg │ ├── 36.jpg │ ├── 38.jpg │ ├── 39.jpg │ ├── 4.jpg │ ├── 41.jpg │ ├── 43.jpg │ ├── 45.jpg │ ├── 5.jpeg │ ├── 6.jpg │ ├── 7.jpg │ ├── 8.jpg │ └── 9.jpg └── Tim Cook │ ├── 1.jpg │ ├── 10.jpg │ ├── 11.jpg │ ├── 12.jpg │ ├── 13.jpg │ ├── 16.jpg │ ├── 18.jpg │ ├── 19.jpeg │ ├── 2.jpg │ ├── 23.jpg │ ├── 24.jpg │ ├── 26.jpg │ ├── 27.jpg │ ├── 3.jpg │ ├── 31.jpg │ ├── 32.jpg │ ├── 33.jpg │ ├── 35.jpg │ ├── 36.jpg │ ├── 37.jpg │ ├── 38.jpg │ ├── 43.jpg │ ├── 44.jpg │ ├── 45.jpg │ ├── 46.jpg │ ├── 47.jpg │ ├── 48.jpg │ ├── 49.jpeg │ ├── 5.jpeg │ ├── 50.jpg │ ├── 51.jpg │ ├── 54.jpg │ ├── 55.jpg │ ├── 56.jpg │ ├── 58.jpg │ ├── 59.jpg │ ├── 6.jpg │ ├── 60.jpg │ ├── 61.jpg │ ├── 62.jpg │ ├── 64.jpg │ ├── 65.jpg │ ├── 66.jpg │ ├── 67.jpg │ ├── 68.jpg │ ├── 69.jpg │ ├── 7.jpg │ ├── 8.jpg │ └── 9.jpg ├── requirements.txt ├── resource ├── 18.jpg └── structue.png ├── train.csv └── train.record /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 3 | - "3.6" 4 | 5 | # command to install dependencies 6 | install: 7 | - pip install -r requirements.txt 8 | 9 | script: 10 | - python main.py train.csv train.record facenet 11 | 12 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Automatic-Face-Detection-Annotation-and-Preprocessing 2 | [![Build Status][travis-image]][travis] 3 | [![Codacy Badge](https://api.codacy.com/project/badge/Grade/26997f5031314d00960e8d2a8f8b9b2c)](https://app.codacy.com/app/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing?utm_source=github.com&utm_medium=referral&utm_content=robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing&utm_campaign=Badge_Grade_Dashboard) 
4 | 5 | [travis-image]: https://travis-ci.org/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing.svg?branch=master 6 | [travis]: https://travis-ci.org/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing 7 | 8 | To create a facial recognition model, we need facial landmark data for the images. Normally that means labelling every image by hand with a labelling tool, annotating each image with its coordinates, converting the annotations to a CSV file, and then preprocessing them into the required data file format such as TFRecord. To make this easier for AI developers, I wrote this module, which automatically detects faces, annotates them, collects the coordinates, and converts them to CSV and TFRecord. I also added a feature to visualize the detected face on each image, grouped by its class. 9 | 10 | 11 |
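For reference, the generated CSV holds one row per image. The layout below is only an illustration, inferred from the column order that `check.py` parses and the column names that `generate_tfrecord.py` reads; the sample values and the exact `width`/`height` header names are assumptions:

```
filename,width,height,class,xmin,ymin,xmax,ymax
2.jpg,600,400,Steve Jobs,152,88,341,297
```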

12 | 13 |

14 | 15 | ## Compatibility 16 | The code is developed and tested on Ubuntu 18.04 with Python 3.6, but it should run on most configurations. If you face issues, please open an issue on this repo. All the package dependencies are listed in requirements.txt. 17 | 18 | ## Workings 19 | 1. Preprocessing all the images to a standard size and format 20 | 2. Loading the preprocessed images 21 | 3. Detecting the face in each image using MTCNN or the Haar cascade algorithm and removing bad images 22 | 4. Getting the face coordinates 23 | 5. Writing them into a CSV file 24 | 6. Converting the CSV into a TFRecord 25 | 7. Exporting the images with the face bounding box for debugging 26 | 27 | **Core Functionality** (a short usage sketch of the two detectors follows this list) 28 | + `main.py` - parses the arguments, loads the images, and calls the detection, coordinates, and preprocessing functions 29 | + `coordinates.py` - detects the facial boundary coordinates using MTCNN 30 | + `cascade.py` - detects the facial boundary coordinates using the Haar cascade 31 | + `generate_tfrecord.py` - generates the TFRecord from the CSV 32 | + `dataset_util.py` - utility functions for generate_tfrecord 33 | + `check.py` - loads the processed images and exports the output with the bounding boxes 34 | + `bounding.py` - draws the bounding box using OpenCV 35 | + `requirements.txt` - lists all the packages and their versions 36 | 37 |
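As a quick orientation, here is a minimal sketch (not part of the repo, and not `main.py` itself) of how the two detectors listed above can be called directly from the repo root; the image path comes from the bundled dataset, while the margin and GPU-memory values are only illustrative:

```python
from PIL import Image

import cascade       # Haar cascade detector (cascade.py)
import coordinates   # MTCNN detector (coordinates.py, needs TensorFlow)

img_path = "images/Steve Jobs/2.jpg"

# Haar cascade: takes the loaded image and returns the first face box.
xmin, ymin, xmax, ymax = cascade.detect_face(Image.open(img_path))

# MTCNN: takes the image path plus image_size, margin and gpu_memory_fraction.
xmin, ymin, xmax, ymax = coordinates.embeddings(img_path, 182, 32, 0.8)

print(xmin, ymin, xmax, ymax)
```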

38 | 39 |

40 | 41 | ### Note: The images should be in JPG format, and each image of a class should contain only one person. 42 | 43 | ## Steps to run the code for your custom dataset 44 | 1. Create a Python virtual environment and pip install the requirements.txt 45 | 2. Prepare the images folder according to the structure shown above 46 | 3. Create an empty output directory 47 | 4. To detect faces using FaceNet (MTCNN), run `python main.py csv_name tfrecord_name facenet`. Example: `python main.py train.csv train.record facenet` 48 | 5. To detect faces using the Haar cascade, run `python main.py csv_name tfrecord_name harr`. Example: `python main.py train.csv train.record harr` 49 | 6. After the run finishes, you will get a CSV and a TFRecord file. To check that the detections are correct, open the output directory, where every image is saved with its detection under its respective class. 50 | 7. Simple! 51 | 52 | ### Additional Feature 53 | It automatically removes badly formatted images and images containing multiple faces. 54 | 55 | ## For More Reference: 56 | + MTCNN : https://kpzhang93.github.io/MTCNN_face_detection_alignment/paper/spl.pdf 57 | + Haar cascade : https://docs.opencv.org/3.3.0/d7/d8b/tutorial_py_face_detection.html 58 | + To know more about TFRecord : https://www.skcript.com/svr/why-every-tensorflow-developer-should-know-about-tfrecord/ 59 | 60 | 61 | 62 | Copyright © 2021 Robin Reni. All rights reserved 63 | 64 | 65 | 66 | 67 | -------------------------------------------------------------------------------- /align/__pycache__/__init__.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/align/__pycache__/__init__.cpython-36.pyc -------------------------------------------------------------------------------- /align/__pycache__/detect_face.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/align/__pycache__/detect_face.cpython-36.pyc -------------------------------------------------------------------------------- /align/align_dataset_mtcnn.py: -------------------------------------------------------------------------------- 1 | """Performs face alignment and stores face thumbnails in the output directory.""" 2 | # MIT License 3 | # 4 | # Copyright (c) 2016 David Sandberg 5 | # 6 | # Permission is hereby granted, free of charge, to any person obtaining a copy 7 | # of this software and associated documentation files (the "Software"), to deal 8 | # in the Software without restriction, including without limitation the rights 9 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 10 | # copies of the Software, and to permit persons to whom the Software is 11 | # furnished to do so, subject to the following conditions: 12 | # 13 | # The above copyright notice and this permission notice shall be included in all 14 | # copies or substantial portions of the Software. 15 | # 16 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 19 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 22 | # SOFTWARE. 23 | 24 | from __future__ import absolute_import 25 | from __future__ import division 26 | from __future__ import print_function 27 | 28 | from scipy import misc 29 | import sys 30 | import os 31 | import argparse 32 | import tensorflow as tf 33 | import numpy as np 34 | import facenet 35 | import align.detect_face 36 | import random 37 | from time import sleep 38 | 39 | def main(args): 40 | sleep(random.random()) 41 | output_dir = os.path.expanduser(args.output_dir) 42 | if not os.path.exists(output_dir): 43 | os.makedirs(output_dir) 44 | # Store some git revision info in a text file in the log directory 45 | src_path,_ = os.path.split(os.path.realpath(__file__)) 46 | facenet.store_revision_info(src_path, output_dir, ' '.join(sys.argv)) 47 | dataset = facenet.get_dataset(args.input_dir) 48 | 49 | print('Creating networks and loading parameters') 50 | 51 | with tf.Graph().as_default(): 52 | gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_memory_fraction) 53 | sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False)) 54 | with sess.as_default(): 55 | pnet, rnet, onet = align.detect_face.create_mtcnn(sess, None) 56 | 57 | minsize = 20 # minimum size of face 58 | threshold = [ 0.6, 0.7, 0.7 ] # three steps's threshold 59 | factor = 0.709 # scale factor 60 | 61 | # Add a random key to the filename to allow alignment using multiple processes 62 | random_key = np.random.randint(0, high=99999) 63 | bounding_boxes_filename = os.path.join(output_dir, 'bounding_boxes_%05d.txt' % random_key) 64 | 65 | with open(bounding_boxes_filename, "w") as text_file: 66 | nrof_images_total = 0 67 | nrof_successfully_aligned = 0 68 | if args.random_order: 69 | random.shuffle(dataset) 70 | for cls in dataset: 71 | output_class_dir = os.path.join(output_dir, cls.name) 72 | if not os.path.exists(output_class_dir): 73 | os.makedirs(output_class_dir) 74 | if args.random_order: 75 | random.shuffle(cls.image_paths) 76 | for image_path in cls.image_paths: 77 | nrof_images_total += 1 78 | filename = os.path.splitext(os.path.split(image_path)[1])[0] 79 | output_filename = os.path.join(output_class_dir, filename+'.png') 80 | print(image_path) 81 | if not os.path.exists(output_filename): 82 | try: 83 | img = misc.imread(image_path) 84 | except (IOError, ValueError, IndexError) as e: 85 | errorMessage = '{}: {}'.format(image_path, e) 86 | print(errorMessage) 87 | else: 88 | if img.ndim<2: 89 | print('Unable to align "%s"' % image_path) 90 | text_file.write('%s\n' % (output_filename)) 91 | continue 92 | if img.ndim == 2: 93 | img = facenet.to_rgb(img) 94 | img = img[:,:,0:3] 95 | 96 | bounding_boxes, _ = align.detect_face.detect_face(img, minsize, pnet, rnet, onet, threshold, factor) 97 | nrof_faces = bounding_boxes.shape[0] 98 | if nrof_faces>0: 99 | det = bounding_boxes[:,0:4] 100 | det_arr = [] 101 | img_size = np.asarray(img.shape)[0:2] 102 | if nrof_faces>1: 103 | if args.detect_multiple_faces: 104 | for i in range(nrof_faces): 105 | det_arr.append(np.squeeze(det[i])) 106 | else: 107 | bounding_box_size = (det[:,2]-det[:,0])*(det[:,3]-det[:,1]) 108 | img_center = img_size / 2 109 | offsets = np.vstack([ (det[:,0]+det[:,2])/2-img_center[1], (det[:,1]+det[:,3])/2-img_center[0] ]) 110 | 
offset_dist_squared = np.sum(np.power(offsets,2.0),0) 111 | index = np.argmax(bounding_box_size-offset_dist_squared*2.0) # some extra weight on the centering 112 | det_arr.append(det[index,:]) 113 | else: 114 | det_arr.append(np.squeeze(det)) 115 | 116 | for i, det in enumerate(det_arr): 117 | det = np.squeeze(det) 118 | bb = np.zeros(4, dtype=np.int32) 119 | bb[0] = np.maximum(det[0]-args.margin/2, 0) 120 | bb[1] = np.maximum(det[1]-args.margin/2, 0) 121 | bb[2] = np.minimum(det[2]+args.margin/2, img_size[1]) 122 | bb[3] = np.minimum(det[3]+args.margin/2, img_size[0]) 123 | cropped = img[bb[1]:bb[3],bb[0]:bb[2],:] 124 | scaled = misc.imresize(cropped, (args.image_size, args.image_size), interp='bilinear') 125 | nrof_successfully_aligned += 1 126 | filename_base, file_extension = os.path.splitext(output_filename) 127 | if args.detect_multiple_faces: 128 | output_filename_n = "{}_{}{}".format(filename_base, i, file_extension) 129 | else: 130 | output_filename_n = "{}{}".format(filename_base, file_extension) 131 | misc.imsave(output_filename_n, scaled) 132 | text_file.write('%s %d %d %d %d\n' % (output_filename_n, bb[0], bb[1], bb[2], bb[3])) 133 | else: 134 | print('Unable to align "%s"' % image_path) 135 | text_file.write('%s\n' % (output_filename)) 136 | 137 | print('Total number of images: %d' % nrof_images_total) 138 | print('Number of successfully aligned images: %d' % nrof_successfully_aligned) 139 | 140 | 141 | def parse_arguments(argv): 142 | parser = argparse.ArgumentParser() 143 | 144 | parser.add_argument('input_dir', type=str, help='Directory with unaligned images.') 145 | parser.add_argument('output_dir', type=str, help='Directory with aligned face thumbnails.') 146 | parser.add_argument('--image_size', type=int, 147 | help='Image size (height, width) in pixels.', default=182) 148 | parser.add_argument('--margin', type=int, 149 | help='Margin for the crop around the bounding box (height, width) in pixels.', default=44) 150 | parser.add_argument('--random_order', 151 | help='Shuffles the order of images to enable alignment using multiple processes.', action='store_true') 152 | parser.add_argument('--gpu_memory_fraction', type=float, 153 | help='Upper bound on the amount of GPU memory that will be used by the process.', default=1.0) 154 | parser.add_argument('--detect_multiple_faces', type=bool, 155 | help='Detect and align multiple faces per image.', default=False) 156 | return parser.parse_args(argv) 157 | 158 | if __name__ == '__main__': 159 | main(parse_arguments(sys.argv[1:])) 160 | -------------------------------------------------------------------------------- /align/det1.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/align/det1.npy -------------------------------------------------------------------------------- /align/det2.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/align/det2.npy -------------------------------------------------------------------------------- /align/det3.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/align/det3.npy 
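For reference, the alignment script above is a standalone CLI taken from David Sandberg's facenet project (it imports `facenet`, so that package must be importable). It loads the MTCNN weights `det1.npy`-`det3.npy` shown here, crops one aligned PNG thumbnail per input image (or one per face with `--detect_multiple_faces`), and logs the boxes to a `bounding_boxes_%05d.txt` file in the output directory. A typical invocation using the script's own defaults would be `python align/align_dataset_mtcnn.py <input_dir> <output_dir> --image_size 182 --margin 44 --random_order`, where the directory names are placeholders.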
-------------------------------------------------------------------------------- /align/detect_face.py: -------------------------------------------------------------------------------- 1 | """ Tensorflow implementation of the face detection / alignment algorithm found at 2 | https://github.com/kpzhang93/MTCNN_face_detection_alignment 3 | """ 4 | # MIT License 5 | # 6 | # Copyright (c) 2016 David Sandberg 7 | # 8 | # Permission is hereby granted, free of charge, to any person obtaining a copy 9 | # of this software and associated documentation files (the "Software"), to deal 10 | # in the Software without restriction, including without limitation the rights 11 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 12 | # copies of the Software, and to permit persons to whom the Software is 13 | # furnished to do so, subject to the following conditions: 14 | # 15 | # The above copyright notice and this permission notice shall be included in all 16 | # copies or substantial portions of the Software. 17 | # 18 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 19 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 20 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 21 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 22 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 23 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 24 | # SOFTWARE. 25 | 26 | from __future__ import absolute_import 27 | from __future__ import division 28 | from __future__ import print_function 29 | from six import string_types, iteritems 30 | 31 | import numpy as np 32 | import tensorflow as tf 33 | #from math import floor 34 | import cv2 35 | import os 36 | 37 | def layer(op): 38 | """Decorator for composable network layers.""" 39 | 40 | def layer_decorated(self, *args, **kwargs): 41 | # Automatically set a name if not provided. 42 | name = kwargs.setdefault('name', self.get_unique_name(op.__name__)) 43 | # Figure out the layer inputs. 44 | if len(self.terminals) == 0: 45 | raise RuntimeError('No input variables found for layer %s.' % name) 46 | elif len(self.terminals) == 1: 47 | layer_input = self.terminals[0] 48 | else: 49 | layer_input = list(self.terminals) 50 | # Perform the operation and get the output. 51 | layer_output = op(self, layer_input, *args, **kwargs) 52 | # Add to layer LUT. 53 | self.layers[name] = layer_output 54 | # This output is now the input for the next layer. 55 | self.feed(layer_output) 56 | # Return self for chained calls. 57 | return self 58 | 59 | return layer_decorated 60 | 61 | class Network(object): 62 | 63 | def __init__(self, inputs, trainable=True): 64 | # The input nodes for this network 65 | self.inputs = inputs 66 | # The current list of terminal nodes 67 | self.terminals = [] 68 | # Mapping from layer names to layers 69 | self.layers = dict(inputs) 70 | # If true, the resulting variables are set as trainable 71 | self.trainable = trainable 72 | 73 | self.setup() 74 | 75 | def setup(self): 76 | """Construct the network. """ 77 | raise NotImplementedError('Must be implemented by the subclass.') 78 | 79 | def load(self, data_path, session, ignore_missing=False): 80 | """Load network weights. 81 | data_path: The path to the numpy-serialized network weights 82 | session: The current TensorFlow session 83 | ignore_missing: If true, serialized weights for missing layers are ignored. 
84 | """ 85 | data_dict = np.load(data_path, encoding='latin1').item() #pylint: disable=no-member 86 | 87 | for op_name in data_dict: 88 | with tf.variable_scope(op_name, reuse=True): 89 | for param_name, data in iteritems(data_dict[op_name]): 90 | try: 91 | var = tf.get_variable(param_name) 92 | session.run(var.assign(data)) 93 | except ValueError: 94 | if not ignore_missing: 95 | raise 96 | 97 | def feed(self, *args): 98 | """Set the input(s) for the next operation by replacing the terminal nodes. 99 | The arguments can be either layer names or the actual layers. 100 | """ 101 | assert len(args) != 0 102 | self.terminals = [] 103 | for fed_layer in args: 104 | if isinstance(fed_layer, string_types): 105 | try: 106 | fed_layer = self.layers[fed_layer] 107 | except KeyError: 108 | raise KeyError('Unknown layer name fed: %s' % fed_layer) 109 | self.terminals.append(fed_layer) 110 | return self 111 | 112 | def get_output(self): 113 | """Returns the current network output.""" 114 | return self.terminals[-1] 115 | 116 | def get_unique_name(self, prefix): 117 | """Returns an index-suffixed unique name for the given prefix. 118 | This is used for auto-generating layer names based on the type-prefix. 119 | """ 120 | ident = sum(t.startswith(prefix) for t, _ in self.layers.items()) + 1 121 | return '%s_%d' % (prefix, ident) 122 | 123 | def make_var(self, name, shape): 124 | """Creates a new TensorFlow variable.""" 125 | return tf.get_variable(name, shape, trainable=self.trainable) 126 | 127 | def validate_padding(self, padding): 128 | """Verifies that the padding is one of the supported ones.""" 129 | assert padding in ('SAME', 'VALID') 130 | 131 | @layer 132 | def conv(self, 133 | inp, 134 | k_h, 135 | k_w, 136 | c_o, 137 | s_h, 138 | s_w, 139 | name, 140 | relu=True, 141 | padding='SAME', 142 | group=1, 143 | biased=True): 144 | # Verify that the padding is acceptable 145 | self.validate_padding(padding) 146 | # Get the number of channels in the input 147 | c_i = int(inp.get_shape()[-1]) 148 | # Verify that the grouping parameter is valid 149 | assert c_i % group == 0 150 | assert c_o % group == 0 151 | # Convolution for a given input and kernel 152 | convolve = lambda i, k: tf.nn.conv2d(i, k, [1, s_h, s_w, 1], padding=padding) 153 | with tf.variable_scope(name) as scope: 154 | kernel = self.make_var('weights', shape=[k_h, k_w, c_i // group, c_o]) 155 | # This is the common-case. Convolve the input without any further complications. 156 | output = convolve(inp, kernel) 157 | # Add the biases 158 | if biased: 159 | biases = self.make_var('biases', [c_o]) 160 | output = tf.nn.bias_add(output, biases) 161 | if relu: 162 | # ReLU non-linearity 163 | output = tf.nn.relu(output, name=scope.name) 164 | return output 165 | 166 | @layer 167 | def prelu(self, inp, name): 168 | with tf.variable_scope(name): 169 | i = int(inp.get_shape()[-1]) 170 | alpha = self.make_var('alpha', shape=(i,)) 171 | output = tf.nn.relu(inp) + tf.multiply(alpha, -tf.nn.relu(-inp)) 172 | return output 173 | 174 | @layer 175 | def max_pool(self, inp, k_h, k_w, s_h, s_w, name, padding='SAME'): 176 | self.validate_padding(padding) 177 | return tf.nn.max_pool(inp, 178 | ksize=[1, k_h, k_w, 1], 179 | strides=[1, s_h, s_w, 1], 180 | padding=padding, 181 | name=name) 182 | 183 | @layer 184 | def fc(self, inp, num_out, name, relu=True): 185 | with tf.variable_scope(name): 186 | input_shape = inp.get_shape() 187 | if input_shape.ndims == 4: 188 | # The input is spatial. Vectorize it first. 
189 | dim = 1 190 | for d in input_shape[1:].as_list(): 191 | dim *= int(d) 192 | feed_in = tf.reshape(inp, [-1, dim]) 193 | else: 194 | feed_in, dim = (inp, input_shape[-1].value) 195 | weights = self.make_var('weights', shape=[dim, num_out]) 196 | biases = self.make_var('biases', [num_out]) 197 | op = tf.nn.relu_layer if relu else tf.nn.xw_plus_b 198 | fc = op(feed_in, weights, biases, name=name) 199 | return fc 200 | 201 | 202 | """ 203 | Multi dimensional softmax, 204 | refer to https://github.com/tensorflow/tensorflow/issues/210 205 | compute softmax along the dimension of target 206 | the native softmax only supports batch_size x dimension 207 | """ 208 | @layer 209 | def softmax(self, target, axis, name=None): 210 | max_axis = tf.reduce_max(target, axis, keepdims=True) 211 | target_exp = tf.exp(target-max_axis) 212 | normalize = tf.reduce_sum(target_exp, axis, keepdims=True) 213 | softmax = tf.div(target_exp, normalize, name) 214 | return softmax 215 | 216 | class PNet(Network): 217 | def setup(self): 218 | (self.feed('data') #pylint: disable=no-value-for-parameter, no-member 219 | .conv(3, 3, 10, 1, 1, padding='VALID', relu=False, name='conv1') 220 | .prelu(name='PReLU1') 221 | .max_pool(2, 2, 2, 2, name='pool1') 222 | .conv(3, 3, 16, 1, 1, padding='VALID', relu=False, name='conv2') 223 | .prelu(name='PReLU2') 224 | .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv3') 225 | .prelu(name='PReLU3') 226 | .conv(1, 1, 2, 1, 1, relu=False, name='conv4-1') 227 | .softmax(3,name='prob1')) 228 | 229 | (self.feed('PReLU3') #pylint: disable=no-value-for-parameter 230 | .conv(1, 1, 4, 1, 1, relu=False, name='conv4-2')) 231 | 232 | class RNet(Network): 233 | def setup(self): 234 | (self.feed('data') #pylint: disable=no-value-for-parameter, no-member 235 | .conv(3, 3, 28, 1, 1, padding='VALID', relu=False, name='conv1') 236 | .prelu(name='prelu1') 237 | .max_pool(3, 3, 2, 2, name='pool1') 238 | .conv(3, 3, 48, 1, 1, padding='VALID', relu=False, name='conv2') 239 | .prelu(name='prelu2') 240 | .max_pool(3, 3, 2, 2, padding='VALID', name='pool2') 241 | .conv(2, 2, 64, 1, 1, padding='VALID', relu=False, name='conv3') 242 | .prelu(name='prelu3') 243 | .fc(128, relu=False, name='conv4') 244 | .prelu(name='prelu4') 245 | .fc(2, relu=False, name='conv5-1') 246 | .softmax(1,name='prob1')) 247 | 248 | (self.feed('prelu4') #pylint: disable=no-value-for-parameter 249 | .fc(4, relu=False, name='conv5-2')) 250 | 251 | class ONet(Network): 252 | def setup(self): 253 | (self.feed('data') #pylint: disable=no-value-for-parameter, no-member 254 | .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv1') 255 | .prelu(name='prelu1') 256 | .max_pool(3, 3, 2, 2, name='pool1') 257 | .conv(3, 3, 64, 1, 1, padding='VALID', relu=False, name='conv2') 258 | .prelu(name='prelu2') 259 | .max_pool(3, 3, 2, 2, padding='VALID', name='pool2') 260 | .conv(3, 3, 64, 1, 1, padding='VALID', relu=False, name='conv3') 261 | .prelu(name='prelu3') 262 | .max_pool(2, 2, 2, 2, name='pool3') 263 | .conv(2, 2, 128, 1, 1, padding='VALID', relu=False, name='conv4') 264 | .prelu(name='prelu4') 265 | .fc(256, relu=False, name='conv5') 266 | .prelu(name='prelu5') 267 | .fc(2, relu=False, name='conv6-1') 268 | .softmax(1, name='prob1')) 269 | 270 | (self.feed('prelu5') #pylint: disable=no-value-for-parameter 271 | .fc(4, relu=False, name='conv6-2')) 272 | 273 | (self.feed('prelu5') #pylint: disable=no-value-for-parameter 274 | .fc(10, relu=False, name='conv6-3')) 275 | 276 | def create_mtcnn(sess, model_path): 277 | if not 
model_path: 278 | model_path,_ = os.path.split(os.path.realpath(__file__)) 279 | 280 | with tf.variable_scope('pnet'): 281 | data = tf.placeholder(tf.float32, (None,None,None,3), 'input') 282 | pnet = PNet({'data':data}) 283 | pnet.load(os.path.join(model_path, 'det1.npy'), sess) 284 | with tf.variable_scope('rnet'): 285 | data = tf.placeholder(tf.float32, (None,24,24,3), 'input') 286 | rnet = RNet({'data':data}) 287 | rnet.load(os.path.join(model_path, 'det2.npy'), sess) 288 | with tf.variable_scope('onet'): 289 | data = tf.placeholder(tf.float32, (None,48,48,3), 'input') 290 | onet = ONet({'data':data}) 291 | onet.load(os.path.join(model_path, 'det3.npy'), sess) 292 | 293 | pnet_fun = lambda img : sess.run(('pnet/conv4-2/BiasAdd:0', 'pnet/prob1:0'), feed_dict={'pnet/input:0':img}) 294 | rnet_fun = lambda img : sess.run(('rnet/conv5-2/conv5-2:0', 'rnet/prob1:0'), feed_dict={'rnet/input:0':img}) 295 | onet_fun = lambda img : sess.run(('onet/conv6-2/conv6-2:0', 'onet/conv6-3/conv6-3:0', 'onet/prob1:0'), feed_dict={'onet/input:0':img}) 296 | return pnet_fun, rnet_fun, onet_fun 297 | 298 | def detect_face(img, minsize, pnet, rnet, onet, threshold, factor): 299 | """Detects faces in an image, and returns bounding boxes and points for them. 300 | img: input image 301 | minsize: minimum faces' size 302 | pnet, rnet, onet: caffemodel 303 | threshold: threshold=[th1, th2, th3], th1-3 are three steps's threshold 304 | factor: the factor used to create a scaling pyramid of face sizes to detect in the image. 305 | """ 306 | factor_count=0 307 | total_boxes=np.empty((0,9)) 308 | points=np.empty(0) 309 | h=img.shape[0] 310 | w=img.shape[1] 311 | minl=np.amin([h, w]) 312 | m=12.0/minsize 313 | minl=minl*m 314 | # create scale pyramid 315 | scales=[] 316 | while minl>=12: 317 | scales += [m*np.power(factor, factor_count)] 318 | minl = minl*factor 319 | factor_count += 1 320 | 321 | # first stage 322 | for scale in scales: 323 | hs=int(np.ceil(h*scale)) 324 | ws=int(np.ceil(w*scale)) 325 | im_data = imresample(img, (hs, ws)) 326 | im_data = (im_data-127.5)*0.0078125 327 | img_x = np.expand_dims(im_data, 0) 328 | img_y = np.transpose(img_x, (0,2,1,3)) 329 | out = pnet(img_y) 330 | out0 = np.transpose(out[0], (0,2,1,3)) 331 | out1 = np.transpose(out[1], (0,2,1,3)) 332 | 333 | boxes, _ = generateBoundingBox(out1[0,:,:,1].copy(), out0[0,:,:,:].copy(), scale, threshold[0]) 334 | 335 | # inter-scale nms 336 | pick = nms(boxes.copy(), 0.5, 'Union') 337 | if boxes.size>0 and pick.size>0: 338 | boxes = boxes[pick,:] 339 | total_boxes = np.append(total_boxes, boxes, axis=0) 340 | 341 | numbox = total_boxes.shape[0] 342 | if numbox>0: 343 | pick = nms(total_boxes.copy(), 0.7, 'Union') 344 | total_boxes = total_boxes[pick,:] 345 | regw = total_boxes[:,2]-total_boxes[:,0] 346 | regh = total_boxes[:,3]-total_boxes[:,1] 347 | qq1 = total_boxes[:,0]+total_boxes[:,5]*regw 348 | qq2 = total_boxes[:,1]+total_boxes[:,6]*regh 349 | qq3 = total_boxes[:,2]+total_boxes[:,7]*regw 350 | qq4 = total_boxes[:,3]+total_boxes[:,8]*regh 351 | total_boxes = np.transpose(np.vstack([qq1, qq2, qq3, qq4, total_boxes[:,4]])) 352 | total_boxes = rerec(total_boxes.copy()) 353 | total_boxes[:,0:4] = np.fix(total_boxes[:,0:4]).astype(np.int32) 354 | dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(total_boxes.copy(), w, h) 355 | 356 | numbox = total_boxes.shape[0] 357 | if numbox>0: 358 | # second stage 359 | tempimg = np.zeros((24,24,3,numbox)) 360 | for k in range(0,numbox): 361 | tmp = np.zeros((int(tmph[k]),int(tmpw[k]),3)) 362 | 
tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:] 363 | if tmp.shape[0]>0 and tmp.shape[1]>0 or tmp.shape[0]==0 and tmp.shape[1]==0: 364 | tempimg[:,:,:,k] = imresample(tmp, (24, 24)) 365 | else: 366 | return np.empty() 367 | tempimg = (tempimg-127.5)*0.0078125 368 | tempimg1 = np.transpose(tempimg, (3,1,0,2)) 369 | out = rnet(tempimg1) 370 | out0 = np.transpose(out[0]) 371 | out1 = np.transpose(out[1]) 372 | score = out1[1,:] 373 | ipass = np.where(score>threshold[1]) 374 | total_boxes = np.hstack([total_boxes[ipass[0],0:4].copy(), np.expand_dims(score[ipass].copy(),1)]) 375 | mv = out0[:,ipass[0]] 376 | if total_boxes.shape[0]>0: 377 | pick = nms(total_boxes, 0.7, 'Union') 378 | total_boxes = total_boxes[pick,:] 379 | total_boxes = bbreg(total_boxes.copy(), np.transpose(mv[:,pick])) 380 | total_boxes = rerec(total_boxes.copy()) 381 | 382 | numbox = total_boxes.shape[0] 383 | if numbox>0: 384 | # third stage 385 | total_boxes = np.fix(total_boxes).astype(np.int32) 386 | dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(total_boxes.copy(), w, h) 387 | tempimg = np.zeros((48,48,3,numbox)) 388 | for k in range(0,numbox): 389 | tmp = np.zeros((int(tmph[k]),int(tmpw[k]),3)) 390 | tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:] 391 | if tmp.shape[0]>0 and tmp.shape[1]>0 or tmp.shape[0]==0 and tmp.shape[1]==0: 392 | tempimg[:,:,:,k] = imresample(tmp, (48, 48)) 393 | else: 394 | return np.empty() 395 | tempimg = (tempimg-127.5)*0.0078125 396 | tempimg1 = np.transpose(tempimg, (3,1,0,2)) 397 | out = onet(tempimg1) 398 | out0 = np.transpose(out[0]) 399 | out1 = np.transpose(out[1]) 400 | out2 = np.transpose(out[2]) 401 | score = out2[1,:] 402 | points = out1 403 | ipass = np.where(score>threshold[2]) 404 | points = points[:,ipass[0]] 405 | total_boxes = np.hstack([total_boxes[ipass[0],0:4].copy(), np.expand_dims(score[ipass].copy(),1)]) 406 | mv = out0[:,ipass[0]] 407 | 408 | w = total_boxes[:,2]-total_boxes[:,0]+1 409 | h = total_boxes[:,3]-total_boxes[:,1]+1 410 | points[0:5,:] = np.tile(w,(5, 1))*points[0:5,:] + np.tile(total_boxes[:,0],(5, 1))-1 411 | points[5:10,:] = np.tile(h,(5, 1))*points[5:10,:] + np.tile(total_boxes[:,1],(5, 1))-1 412 | if total_boxes.shape[0]>0: 413 | total_boxes = bbreg(total_boxes.copy(), np.transpose(mv)) 414 | pick = nms(total_boxes.copy(), 0.7, 'Min') 415 | total_boxes = total_boxes[pick,:] 416 | points = points[:,pick] 417 | 418 | return total_boxes, points 419 | 420 | 421 | def bulk_detect_face(images, detection_window_size_ratio, pnet, rnet, onet, threshold, factor): 422 | """Detects faces in a list of images 423 | images: list containing input images 424 | detection_window_size_ratio: ratio of minimum face size to smallest image dimension 425 | pnet, rnet, onet: caffemodel 426 | threshold: threshold=[th1 th2 th3], th1-3 are three steps's threshold [0-1] 427 | factor: the factor used to create a scaling pyramid of face sizes to detect in the image. 
428 | """ 429 | all_scales = [None] * len(images) 430 | images_with_boxes = [None] * len(images) 431 | 432 | for i in range(len(images)): 433 | images_with_boxes[i] = {'total_boxes': np.empty((0, 9))} 434 | 435 | # create scale pyramid 436 | for index, img in enumerate(images): 437 | all_scales[index] = [] 438 | h = img.shape[0] 439 | w = img.shape[1] 440 | minsize = int(detection_window_size_ratio * np.minimum(w, h)) 441 | factor_count = 0 442 | minl = np.amin([h, w]) 443 | if minsize <= 12: 444 | minsize = 12 445 | 446 | m = 12.0 / minsize 447 | minl = minl * m 448 | while minl >= 12: 449 | all_scales[index].append(m * np.power(factor, factor_count)) 450 | minl = minl * factor 451 | factor_count += 1 452 | 453 | # # # # # # # # # # # # # 454 | # first stage - fast proposal network (pnet) to obtain face candidates 455 | # # # # # # # # # # # # # 456 | 457 | images_obj_per_resolution = {} 458 | 459 | # TODO: use some type of rounding to number module 8 to increase probability that pyramid images will have the same resolution across input images 460 | 461 | for index, scales in enumerate(all_scales): 462 | h = images[index].shape[0] 463 | w = images[index].shape[1] 464 | 465 | for scale in scales: 466 | hs = int(np.ceil(h * scale)) 467 | ws = int(np.ceil(w * scale)) 468 | 469 | if (ws, hs) not in images_obj_per_resolution: 470 | images_obj_per_resolution[(ws, hs)] = [] 471 | 472 | im_data = imresample(images[index], (hs, ws)) 473 | im_data = (im_data - 127.5) * 0.0078125 474 | img_y = np.transpose(im_data, (1, 0, 2)) # caffe uses different dimensions ordering 475 | images_obj_per_resolution[(ws, hs)].append({'scale': scale, 'image': img_y, 'index': index}) 476 | 477 | for resolution in images_obj_per_resolution: 478 | images_per_resolution = [i['image'] for i in images_obj_per_resolution[resolution]] 479 | outs = pnet(images_per_resolution) 480 | 481 | for index in range(len(outs[0])): 482 | scale = images_obj_per_resolution[resolution][index]['scale'] 483 | image_index = images_obj_per_resolution[resolution][index]['index'] 484 | out0 = np.transpose(outs[0][index], (1, 0, 2)) 485 | out1 = np.transpose(outs[1][index], (1, 0, 2)) 486 | 487 | boxes, _ = generateBoundingBox(out1[:, :, 1].copy(), out0[:, :, :].copy(), scale, threshold[0]) 488 | 489 | # inter-scale nms 490 | pick = nms(boxes.copy(), 0.5, 'Union') 491 | if boxes.size > 0 and pick.size > 0: 492 | boxes = boxes[pick, :] 493 | images_with_boxes[image_index]['total_boxes'] = np.append(images_with_boxes[image_index]['total_boxes'], 494 | boxes, 495 | axis=0) 496 | 497 | for index, image_obj in enumerate(images_with_boxes): 498 | numbox = image_obj['total_boxes'].shape[0] 499 | if numbox > 0: 500 | h = images[index].shape[0] 501 | w = images[index].shape[1] 502 | pick = nms(image_obj['total_boxes'].copy(), 0.7, 'Union') 503 | image_obj['total_boxes'] = image_obj['total_boxes'][pick, :] 504 | regw = image_obj['total_boxes'][:, 2] - image_obj['total_boxes'][:, 0] 505 | regh = image_obj['total_boxes'][:, 3] - image_obj['total_boxes'][:, 1] 506 | qq1 = image_obj['total_boxes'][:, 0] + image_obj['total_boxes'][:, 5] * regw 507 | qq2 = image_obj['total_boxes'][:, 1] + image_obj['total_boxes'][:, 6] * regh 508 | qq3 = image_obj['total_boxes'][:, 2] + image_obj['total_boxes'][:, 7] * regw 509 | qq4 = image_obj['total_boxes'][:, 3] + image_obj['total_boxes'][:, 8] * regh 510 | image_obj['total_boxes'] = np.transpose(np.vstack([qq1, qq2, qq3, qq4, image_obj['total_boxes'][:, 4]])) 511 | image_obj['total_boxes'] = 
rerec(image_obj['total_boxes'].copy()) 512 | image_obj['total_boxes'][:, 0:4] = np.fix(image_obj['total_boxes'][:, 0:4]).astype(np.int32) 513 | dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(image_obj['total_boxes'].copy(), w, h) 514 | 515 | numbox = image_obj['total_boxes'].shape[0] 516 | tempimg = np.zeros((24, 24, 3, numbox)) 517 | 518 | if numbox > 0: 519 | for k in range(0, numbox): 520 | tmp = np.zeros((int(tmph[k]), int(tmpw[k]), 3)) 521 | tmp[dy[k] - 1:edy[k], dx[k] - 1:edx[k], :] = images[index][y[k] - 1:ey[k], x[k] - 1:ex[k], :] 522 | if tmp.shape[0] > 0 and tmp.shape[1] > 0 or tmp.shape[0] == 0 and tmp.shape[1] == 0: 523 | tempimg[:, :, :, k] = imresample(tmp, (24, 24)) 524 | else: 525 | return np.empty() 526 | 527 | tempimg = (tempimg - 127.5) * 0.0078125 528 | image_obj['rnet_input'] = np.transpose(tempimg, (3, 1, 0, 2)) 529 | 530 | # # # # # # # # # # # # # 531 | # second stage - refinement of face candidates with rnet 532 | # # # # # # # # # # # # # 533 | 534 | bulk_rnet_input = np.empty((0, 24, 24, 3)) 535 | for index, image_obj in enumerate(images_with_boxes): 536 | if 'rnet_input' in image_obj: 537 | bulk_rnet_input = np.append(bulk_rnet_input, image_obj['rnet_input'], axis=0) 538 | 539 | out = rnet(bulk_rnet_input) 540 | out0 = np.transpose(out[0]) 541 | out1 = np.transpose(out[1]) 542 | score = out1[1, :] 543 | 544 | i = 0 545 | for index, image_obj in enumerate(images_with_boxes): 546 | if 'rnet_input' not in image_obj: 547 | continue 548 | 549 | rnet_input_count = image_obj['rnet_input'].shape[0] 550 | score_per_image = score[i:i + rnet_input_count] 551 | out0_per_image = out0[:, i:i + rnet_input_count] 552 | 553 | ipass = np.where(score_per_image > threshold[1]) 554 | image_obj['total_boxes'] = np.hstack([image_obj['total_boxes'][ipass[0], 0:4].copy(), 555 | np.expand_dims(score_per_image[ipass].copy(), 1)]) 556 | 557 | mv = out0_per_image[:, ipass[0]] 558 | 559 | if image_obj['total_boxes'].shape[0] > 0: 560 | h = images[index].shape[0] 561 | w = images[index].shape[1] 562 | pick = nms(image_obj['total_boxes'], 0.7, 'Union') 563 | image_obj['total_boxes'] = image_obj['total_boxes'][pick, :] 564 | image_obj['total_boxes'] = bbreg(image_obj['total_boxes'].copy(), np.transpose(mv[:, pick])) 565 | image_obj['total_boxes'] = rerec(image_obj['total_boxes'].copy()) 566 | 567 | numbox = image_obj['total_boxes'].shape[0] 568 | 569 | if numbox > 0: 570 | tempimg = np.zeros((48, 48, 3, numbox)) 571 | image_obj['total_boxes'] = np.fix(image_obj['total_boxes']).astype(np.int32) 572 | dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(image_obj['total_boxes'].copy(), w, h) 573 | 574 | for k in range(0, numbox): 575 | tmp = np.zeros((int(tmph[k]), int(tmpw[k]), 3)) 576 | tmp[dy[k] - 1:edy[k], dx[k] - 1:edx[k], :] = images[index][y[k] - 1:ey[k], x[k] - 1:ex[k], :] 577 | if tmp.shape[0] > 0 and tmp.shape[1] > 0 or tmp.shape[0] == 0 and tmp.shape[1] == 0: 578 | tempimg[:, :, :, k] = imresample(tmp, (48, 48)) 579 | else: 580 | return np.empty() 581 | tempimg = (tempimg - 127.5) * 0.0078125 582 | image_obj['onet_input'] = np.transpose(tempimg, (3, 1, 0, 2)) 583 | 584 | i += rnet_input_count 585 | 586 | # # # # # # # # # # # # # 587 | # third stage - further refinement and facial landmarks positions with onet 588 | # # # # # # # # # # # # # 589 | 590 | bulk_onet_input = np.empty((0, 48, 48, 3)) 591 | for index, image_obj in enumerate(images_with_boxes): 592 | if 'onet_input' in image_obj: 593 | bulk_onet_input = np.append(bulk_onet_input, image_obj['onet_input'], axis=0) 594 | 595 | out 
= onet(bulk_onet_input) 596 | 597 | out0 = np.transpose(out[0]) 598 | out1 = np.transpose(out[1]) 599 | out2 = np.transpose(out[2]) 600 | score = out2[1, :] 601 | points = out1 602 | 603 | i = 0 604 | ret = [] 605 | for index, image_obj in enumerate(images_with_boxes): 606 | if 'onet_input' not in image_obj: 607 | ret.append(None) 608 | continue 609 | 610 | onet_input_count = image_obj['onet_input'].shape[0] 611 | 612 | out0_per_image = out0[:, i:i + onet_input_count] 613 | score_per_image = score[i:i + onet_input_count] 614 | points_per_image = points[:, i:i + onet_input_count] 615 | 616 | ipass = np.where(score_per_image > threshold[2]) 617 | points_per_image = points_per_image[:, ipass[0]] 618 | 619 | image_obj['total_boxes'] = np.hstack([image_obj['total_boxes'][ipass[0], 0:4].copy(), 620 | np.expand_dims(score_per_image[ipass].copy(), 1)]) 621 | mv = out0_per_image[:, ipass[0]] 622 | 623 | w = image_obj['total_boxes'][:, 2] - image_obj['total_boxes'][:, 0] + 1 624 | h = image_obj['total_boxes'][:, 3] - image_obj['total_boxes'][:, 1] + 1 625 | points_per_image[0:5, :] = np.tile(w, (5, 1)) * points_per_image[0:5, :] + np.tile( 626 | image_obj['total_boxes'][:, 0], (5, 1)) - 1 627 | points_per_image[5:10, :] = np.tile(h, (5, 1)) * points_per_image[5:10, :] + np.tile( 628 | image_obj['total_boxes'][:, 1], (5, 1)) - 1 629 | 630 | if image_obj['total_boxes'].shape[0] > 0: 631 | image_obj['total_boxes'] = bbreg(image_obj['total_boxes'].copy(), np.transpose(mv)) 632 | pick = nms(image_obj['total_boxes'].copy(), 0.7, 'Min') 633 | image_obj['total_boxes'] = image_obj['total_boxes'][pick, :] 634 | points_per_image = points_per_image[:, pick] 635 | 636 | ret.append((image_obj['total_boxes'], points_per_image)) 637 | else: 638 | ret.append(None) 639 | 640 | i += onet_input_count 641 | 642 | return ret 643 | 644 | 645 | # function [boundingbox] = bbreg(boundingbox,reg) 646 | def bbreg(boundingbox,reg): 647 | """Calibrate bounding boxes""" 648 | if reg.shape[1]==1: 649 | reg = np.reshape(reg, (reg.shape[2], reg.shape[3])) 650 | 651 | w = boundingbox[:,2]-boundingbox[:,0]+1 652 | h = boundingbox[:,3]-boundingbox[:,1]+1 653 | b1 = boundingbox[:,0]+reg[:,0]*w 654 | b2 = boundingbox[:,1]+reg[:,1]*h 655 | b3 = boundingbox[:,2]+reg[:,2]*w 656 | b4 = boundingbox[:,3]+reg[:,3]*h 657 | boundingbox[:,0:4] = np.transpose(np.vstack([b1, b2, b3, b4 ])) 658 | return boundingbox 659 | 660 | def generateBoundingBox(imap, reg, scale, t): 661 | """Use heatmap to generate bounding boxes""" 662 | stride=2 663 | cellsize=12 664 | 665 | imap = np.transpose(imap) 666 | dx1 = np.transpose(reg[:,:,0]) 667 | dy1 = np.transpose(reg[:,:,1]) 668 | dx2 = np.transpose(reg[:,:,2]) 669 | dy2 = np.transpose(reg[:,:,3]) 670 | y, x = np.where(imap >= t) 671 | if y.shape[0]==1: 672 | dx1 = np.flipud(dx1) 673 | dy1 = np.flipud(dy1) 674 | dx2 = np.flipud(dx2) 675 | dy2 = np.flipud(dy2) 676 | score = imap[(y,x)] 677 | reg = np.transpose(np.vstack([ dx1[(y,x)], dy1[(y,x)], dx2[(y,x)], dy2[(y,x)] ])) 678 | if reg.size==0: 679 | reg = np.empty((0,3)) 680 | bb = np.transpose(np.vstack([y,x])) 681 | q1 = np.fix((stride*bb+1)/scale) 682 | q2 = np.fix((stride*bb+cellsize-1+1)/scale) 683 | boundingbox = np.hstack([q1, q2, np.expand_dims(score,1), reg]) 684 | return boundingbox, reg 685 | 686 | # function pick = nms(boxes,threshold,type) 687 | def nms(boxes, threshold, method): 688 | if boxes.size==0: 689 | return np.empty((0,3)) 690 | x1 = boxes[:,0] 691 | y1 = boxes[:,1] 692 | x2 = boxes[:,2] 693 | y2 = boxes[:,3] 694 | s = boxes[:,4] 695 | area 
= (x2-x1+1) * (y2-y1+1) 696 | I = np.argsort(s) 697 | pick = np.zeros_like(s, dtype=np.int16) 698 | counter = 0 699 | while I.size>0: 700 | i = I[-1] 701 | pick[counter] = i 702 | counter += 1 703 | idx = I[0:-1] 704 | xx1 = np.maximum(x1[i], x1[idx]) 705 | yy1 = np.maximum(y1[i], y1[idx]) 706 | xx2 = np.minimum(x2[i], x2[idx]) 707 | yy2 = np.minimum(y2[i], y2[idx]) 708 | w = np.maximum(0.0, xx2-xx1+1) 709 | h = np.maximum(0.0, yy2-yy1+1) 710 | inter = w * h 711 | if method is 'Min': 712 | o = inter / np.minimum(area[i], area[idx]) 713 | else: 714 | o = inter / (area[i] + area[idx] - inter) 715 | I = I[np.where(o<=threshold)] 716 | pick = pick[0:counter] 717 | return pick 718 | 719 | # function [dy edy dx edx y ey x ex tmpw tmph] = pad(total_boxes,w,h) 720 | def pad(total_boxes, w, h): 721 | """Compute the padding coordinates (pad the bounding boxes to square)""" 722 | tmpw = (total_boxes[:,2]-total_boxes[:,0]+1).astype(np.int32) 723 | tmph = (total_boxes[:,3]-total_boxes[:,1]+1).astype(np.int32) 724 | numbox = total_boxes.shape[0] 725 | 726 | dx = np.ones((numbox), dtype=np.int32) 727 | dy = np.ones((numbox), dtype=np.int32) 728 | edx = tmpw.copy().astype(np.int32) 729 | edy = tmph.copy().astype(np.int32) 730 | 731 | x = total_boxes[:,0].copy().astype(np.int32) 732 | y = total_boxes[:,1].copy().astype(np.int32) 733 | ex = total_boxes[:,2].copy().astype(np.int32) 734 | ey = total_boxes[:,3].copy().astype(np.int32) 735 | 736 | tmp = np.where(ex>w) 737 | edx.flat[tmp] = np.expand_dims(-ex[tmp]+w+tmpw[tmp],1) 738 | ex[tmp] = w 739 | 740 | tmp = np.where(ey>h) 741 | edy.flat[tmp] = np.expand_dims(-ey[tmp]+h+tmph[tmp],1) 742 | ey[tmp] = h 743 | 744 | tmp = np.where(x<1) 745 | dx.flat[tmp] = np.expand_dims(2-x[tmp],1) 746 | x[tmp] = 1 747 | 748 | tmp = np.where(y<1) 749 | dy.flat[tmp] = np.expand_dims(2-y[tmp],1) 750 | y[tmp] = 1 751 | 752 | return dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph 753 | 754 | # function [bboxA] = rerec(bboxA) 755 | def rerec(bboxA): 756 | """Convert bboxA to square.""" 757 | h = bboxA[:,3]-bboxA[:,1] 758 | w = bboxA[:,2]-bboxA[:,0] 759 | l = np.maximum(w, h) 760 | bboxA[:,0] = bboxA[:,0]+w*0.5-l*0.5 761 | bboxA[:,1] = bboxA[:,1]+h*0.5-l*0.5 762 | bboxA[:,2:4] = bboxA[:,0:2] + np.transpose(np.tile(l,(2,1))) 763 | return bboxA 764 | 765 | def imresample(img, sz): 766 | im_data = cv2.resize(img, (sz[1], sz[0]), interpolation=cv2.INTER_AREA) #@UndefinedVariable 767 | return im_data 768 | 769 | # This method is kept for debugging purpose 770 | # h=img.shape[0] 771 | # w=img.shape[1] 772 | # hs, ws = sz 773 | # dx = float(w) / ws 774 | # dy = float(h) / hs 775 | # im_data = np.zeros((hs,ws,3)) 776 | # for a1 in range(0,hs): 777 | # for a2 in range(0,ws): 778 | # for a3 in range(0,3): 779 | # im_data[a1,a2,a3] = img[int(floor(a1*dy)),int(floor(a2*dx)),a3] 780 | # return im_data 781 | 782 | -------------------------------------------------------------------------------- /bounding.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import os 3 | 4 | def draw_bounding_box(filename,width,height,label,xmin,ymin,xmax,ymax,save_path): 5 | 6 | # get the file path 7 | images = "images" 8 | 9 | # Image Path 10 | img_path = os.path.join(os.getcwd(),images,label,filename) 11 | 12 | # Reading the Image 13 | im = cv2.imread(img_path) 14 | cv2.rectangle(im,(xmin,ymin),(xmax,ymax),(0,255,0),2) 15 | cv2.imwrite(save_path, im) 16 | 17 | # for i in range(0, len(contours)): 18 | # if (i % 2 == 0): 19 | # cnt = contours[i] 20 | # #mask = 
np.zeros(im2.shape,np.uint8) 21 | # #cv2.drawContours(mask,[cnt],0,255,-1) 22 | # x,y,w,h = cv2.boundingRect(cnt) 23 | cv2.waitKey() 24 | cv2.destroyAllWindows() 25 | 26 | 27 | -------------------------------------------------------------------------------- /cascade.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import os 3 | import numpy as np 4 | 5 | CASE_PATH=os.path.join(os.getcwd(),"haarcascade_frontalface_default.xml") 6 | face_size = 64 7 | 8 | def crop_face( imgarray, section, margin=40, size=64): 9 | """ 10 | :param imgarray: full image 11 | :param section: face detected area (x, y, w, h) 12 | :param margin: add some margin to the face detected area to include a full head 13 | :param size: the result image resolution with be (size x size) 14 | :return: resized image in numpy array with shape (size x size x 3) 15 | """ 16 | img_h, img_w, _ = imgarray.shape 17 | if section is None: 18 | section = [0, 0, img_w, img_h] 19 | (x, y, w, h) = section 20 | margin = int(min(w,h) * margin / 100) 21 | x_a = x - margin 22 | y_a = y - margin 23 | x_b = x + w + margin 24 | y_b = y + h + margin 25 | if x_a < 0: 26 | x_b = min(x_b - x_a, img_w-1) 27 | x_a = 0 28 | if y_a < 0: 29 | y_b = min(y_b - y_a, img_h-1) 30 | y_a = 0 31 | if x_b > img_w: 32 | x_a = max(x_a - (x_b - img_w), 0) 33 | x_b = img_w 34 | if y_b > img_h: 35 | y_a = max(y_a - (y_b - img_h), 0) 36 | y_b = img_h 37 | cropped = imgarray[y_a: y_b, x_a: x_b] 38 | resized_img = cv2.resize(cropped, (size, size), interpolation=cv2.INTER_AREA) 39 | resized_img = np.array(resized_img) 40 | return resized_img, (x_a, y_a, x_b - x_a, y_b - y_a) 41 | 42 | 43 | 44 | 45 | 46 | def detect_face(image): 47 | xmin = ymin = xmax = ymax = 0 48 | frame = np.asarray(image) 49 | face_cascade = cv2.CascadeClassifier(CASE_PATH) 50 | gray_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) 51 | faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5,minSize=(face_size, face_size), flags=cv2.CASCADE_SCALE_IMAGE) 52 | face_imgs = np.empty((len(faces), face_size, face_size, 3)) 53 | for i, face in enumerate(faces): 54 | face_img, cropped = crop_face(frame, face, margin=40, size=face_size) 55 | (x, y, w, h) = cropped 56 | if i == 0: 57 | xmin = x 58 | ymin = y 59 | xmax = x+w 60 | ymax = y+h 61 | 62 | return xmin , ymin , xmax , ymax -------------------------------------------------------------------------------- /check.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import argparse 4 | from bounding import draw_bounding_box 5 | 6 | 7 | def read_csv(csv_name): 8 | # reading the file 9 | file = open(csv_name, 'r') 10 | file_data = file.read() 11 | 12 | # Split lines into list. 13 | file_data_lines = file_data.split('\n') 14 | file_data_lines 15 | 16 | # Create the final cleaned list. 17 | cleaned_file = [] 18 | # Loop to iterate and process each line. 
19 | for line in file_data_lines: 20 | processed_line = line.split(',') 21 | cleaned_file.append(processed_line) 22 | 23 | return cleaned_file 24 | 25 | def create_output_dirs(class_label): 26 | # defining the output directory 27 | output_path=os.path.join(os.getcwd(),"output") 28 | # defining the output label directory 29 | for i in class_label: 30 | # saving directory 31 | save_path=os.path.join(output_path,i) 32 | if not os.path.exists(save_path): 33 | os.mkdir(save_path) 34 | 35 | return output_path 36 | 37 | 38 | 39 | def export(csv_name,class_label): 40 | 41 | # read the csv and return the list 42 | data_list = read_csv(csv_name) 43 | 44 | # setting up the output directory 45 | primary_path = create_output_dirs(class_label) 46 | 47 | # Drawing boundary box for the images 48 | for i in range(1,len(data_list)): 49 | if len(data_list[i]) == 8 : 50 | filename=data_list[i][0] 51 | width=int(float(data_list[i][1])) 52 | height=int(float(data_list[i][2])) 53 | label=data_list[i][3] 54 | xmin=int(float(data_list[i][4])) 55 | ymin=int(float(data_list[i][5])) 56 | xmax=int(float(data_list[i][6])) 57 | ymax=int(float(data_list[i][7])) 58 | 59 | save_path = os.path.join(primary_path,label,filename) 60 | draw_bounding_box(filename,width,height,label,xmin,ymin,xmax,ymax,save_path) 61 | 62 | print("Successfully Exported the Output") 63 | 64 | -------------------------------------------------------------------------------- /coordinates.py: -------------------------------------------------------------------------------- 1 | from scipy import misc 2 | import tensorflow as tf 3 | import numpy as np 4 | import os 5 | import align.detect_face 6 | 7 | 8 | 9 | 10 | def embeddings(image_path,image_size,margin,gpu_memory_fraction): 11 | 12 | minsize = 10 # minimum size of face 13 | threshold = [0.6,0.7,0.7] # p,r,o nets threshold 14 | factor = 0.709 # Standard Scaling Factor 15 | x = y = w = h = 0 16 | 17 | with tf.Graph().as_default(): 18 | #creating a tf graph and also setting the gpu options 19 | gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction) 20 | 21 | #Defining the session to run 22 | sess= tf.Session(config=tf.ConfigProto(gpu_options=gpu_options,log_device_placement=False)) 23 | with sess.as_default(): 24 | # Structuring pnet , rnet and onet 25 | pnet , rnet , onet = align.detect_face.create_mtcnn(sess,None) 26 | 27 | # Reading the image using misc 28 | img=misc.imread(image_path) 29 | img = img[:,:,0:3] 30 | img_size = np.asarray(img.shape)[0:2] 31 | 32 | # detecting the face and getting the coordinates of the bounding box 33 | bounding_boxes , _ = align.detect_face.detect_face(img,minsize,pnet,rnet,onet,threshold,factor) 34 | nrof_faces = bounding_boxes.shape[0] 35 | 36 | if nrof_faces > 0: 37 | det = bounding_boxes[:,0:4] 38 | det_arr =[] 39 | det_arr.append(np.squeeze(det)) 40 | 41 | for i , det in enumerate(det_arr): 42 | det = np.squeeze(det) 43 | 44 | if (len(det.shape)==1): 45 | bb = np.zeros(4,dtype=np.int32) 46 | bb[0] = x = np.maximum(det[0]-margin/2,0) 47 | bb[1] = y = np.maximum(det[1]-margin/2,0) 48 | bb[2] = w = np.minimum(det[2]+margin/2, img_size[1]) 49 | bb[3] = h = np.minimum(det[3]+margin/2, img_size[0]) 50 | 51 | else : 52 | x = y = w = h = 0 53 | 54 | # Calculating the coordinates 55 | xmin , ymin , xmax , ymax = x , y , w , h 56 | 57 | return xmin , ymin , xmax , ymax 58 | 59 | 60 | 61 | 62 | 63 | 64 | -------------------------------------------------------------------------------- /dataset_util.py: 
-------------------------------------------------------------------------------- 1 | """Utility functions for creating TFRecord data sets.""" 2 | 3 | import tensorflow as tf 4 | 5 | 6 | def int64_feature(value): 7 | return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) 8 | 9 | 10 | def int64_list_feature(value): 11 | return tf.train.Feature(int64_list=tf.train.Int64List(value=value)) 12 | 13 | 14 | def bytes_feature(value): 15 | return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) 16 | 17 | 18 | def bytes_list_feature(value): 19 | return tf.train.Feature(bytes_list=tf.train.BytesList(value=value)) 20 | 21 | 22 | def float_list_feature(value): 23 | return tf.train.Feature(float_list=tf.train.FloatList(value=value)) 24 | 25 | 26 | def read_examples_list(path): 27 | """Read list of training or validation examples. 28 | 29 | The file is assumed to contain a single example per line where the first 30 | token in the line is an identifier that allows us to find the image and 31 | annotation xml for that example. 32 | 33 | For example, the line: 34 | xyz 3 35 | would allow us to find files xyz.jpg and xyz.xml (the 3 would be ignored). 36 | 37 | Args: 38 | path: absolute path to examples list file. 39 | 40 | Returns: 41 | list of example identifiers (strings). 42 | """ 43 | with tf.gfile.GFile(path) as fid: 44 | lines = fid.readlines() 45 | return [line.strip().split(' ')[0] for line in lines] 46 | 47 | 48 | def recursive_parse_xml_to_dict(xml): 49 | """Recursively parses XML contents to python dict. 50 | 51 | We assume that `object` tags are the only ones that can appear 52 | multiple times at the same level of a tree. 53 | 54 | Args: 55 | xml: xml tree obtained by parsing XML file contents using lxml.etree 56 | 57 | Returns: 58 | Python dictionary holding XML contents. 
59 | """ 60 | if not xml: 61 | return {xml.tag: xml.text} 62 | result = {} 63 | for child in xml: 64 | child_result = recursive_parse_xml_to_dict(child) 65 | if child.tag != 'object': 66 | result[child.tag] = child_result[child.tag] 67 | else: 68 | if child.tag not in result: 69 | result[child.tag] = [] 70 | result[child.tag].append(child_result[child.tag]) 71 | return {xml.tag: result} 72 | 73 | 74 | -------------------------------------------------------------------------------- /generate_tfrecord.py: -------------------------------------------------------------------------------- 1 | import os 2 | import io 3 | import pandas as pd 4 | import tensorflow as tf 5 | 6 | from PIL import Image 7 | import dataset_util 8 | from collections import namedtuple , OrderedDict 9 | 10 | 11 | def class_img_dict(path): 12 | # Getting the class as a list 13 | class_list = os.listdir(path) 14 | 15 | class_dict ={} 16 | 17 | for i in range(0,len(class_list)): 18 | class_dict[class_list[i]] = i+1 19 | 20 | return class_dict 21 | 22 | 23 | 24 | def split(df,group): 25 | # Spliting the object from the files 26 | data = namedtuple('data',['filename','label','object']) 27 | gb = df.groupby(group) 28 | li=[] 29 | for key, x in zip(gb.groups.keys(), gb.groups): 30 | d = data(key[0],key[1],gb.get_group(x)) 31 | li.append(d) 32 | return li 33 | 34 | def create_tf_example(group, path): 35 | # Class numeric labels as dict 36 | class_dict=class_img_dict(path) 37 | 38 | #Opening and readinf the files 39 | with tf.gfile.GFile(os.path.join(path,'{}/{}'.format(group.label,group.filename)),'rb') as fid: 40 | encoded_jpg = fid.read() 41 | 42 | # Encode the image in jpeg format to array values 43 | encoded_jpg_io= io.BytesIO(encoded_jpg) 44 | image = Image.open(encoded_jpg_io) 45 | 46 | # Setting up the image size 47 | width , height = image.size 48 | 49 | #Creating the boundary box coordinate instances such as xmin,ymin,xmax,ymax 50 | filename = group.filename.encode('utf8') 51 | image_format = b'jpg' 52 | xmins = [] 53 | xmaxs = [] 54 | ymins = [] 55 | ymaxs = [] 56 | classes_text = [] 57 | classes = [] 58 | 59 | for index, row in group.object.iterrows(): 60 | xmins.append(row['xmin'] / width) 61 | xmaxs.append(row['xmax'] /width) 62 | ymins.append(row['ymin'] / height) 63 | ymaxs.append(row['ymax'] / height) 64 | classes_text.append(row['class'].encode('utf8')) 65 | classes.append(class_dict[row['class']]) 66 | 67 | # This is already exisiting code to convert csv to tfrecord 68 | tf_example = tf.train.Example(features=tf.train.Features(feature={ 69 | 'image/height': dataset_util.int64_feature(height), 70 | 'image/width': dataset_util.int64_feature(width), 71 | 'image/filename': dataset_util.bytes_feature(filename), 72 | 'image/source_id': dataset_util.bytes_feature(filename), 73 | 'image/encoded': dataset_util.bytes_feature(encoded_jpg), 74 | 'image/format': dataset_util.bytes_feature(image_format), 75 | 'image/object/bbox/xmin': dataset_util.float_list_feature(xmins), 76 | 'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs), 77 | 'image/object/bbox/ymin': dataset_util.float_list_feature(ymins), 78 | 'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs), 79 | 'image/object/class/text': dataset_util.bytes_list_feature(classes_text), 80 | 'image/object/class/label': dataset_util.int64_list_feature(classes), 81 | })) 82 | return tf_example 83 | 84 | 85 | def generate_tf(csv_name,tf_name,img_dir): 86 | #Creating a TFRecordWriter 87 | writer = tf.python_io.TFRecordWriter(tf_name) 88 | 89 | # selecting 
the path to the image folder 90 | path = os.path.join(os.getcwd(),'images') 91 | 92 | # Reading the csv from the data folder 93 | examples = pd.read_csv(csv_name) 94 | grouped = split(examples, ['filename','class']) 95 | for group in grouped: 96 | tf_example = create_tf_example(group,path) 97 | writer.write(tf_example.SerializeToString()) 98 | 99 | writer.close() 100 | 101 | # After the conversion display the message 102 | output_path = os.path.join(os.getcwd(),tf_name) 103 | print('Successfully created the tfrecord for the images: {}'.format(output_path)) 104 | 105 | 106 | if __name__ == '__main__': 107 | tf.app.run() 108 | -------------------------------------------------------------------------------- /images/Steve Jobs/10.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/10.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/11.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/11.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/12.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/12.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/14.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/14.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/16.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/16.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/18.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/18.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/19.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/19.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/2.jpg -------------------------------------------------------------------------------- 
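Once generate_tf above has written train.record, a quick sanity check is to read a few serialized examples back and confirm the feature keys and values match what create_tf_example wrote. The following is a minimal sketch, not part of the repository; the helper name inspect_tfrecord is illustrative, and it assumes the TensorFlow 1.x API pinned in requirements.txt.

import tensorflow as tf

def inspect_tfrecord(record_path, limit=3):
    # Iterate over the serialized tf.train.Example protos stored in the TFRecord file
    for i, serialized in enumerate(tf.python_io.tf_record_iterator(record_path)):
        if i >= limit:
            break
        example = tf.train.Example()
        example.ParseFromString(serialized)
        feats = example.features.feature
        # Print the filename, class names and normalized xmin values written by create_tf_example
        print(feats['image/filename'].bytes_list.value[0].decode('utf8'),
              [c.decode('utf8') for c in feats['image/object/class/text'].bytes_list.value],
              list(feats['image/object/bbox/xmin'].float_list.value))

if __name__ == '__main__':
    inspect_tfrecord('train.record')

If the printed coordinates fall outside [0, 1] or a class name is missing, the CSV produced by main.py is the first place to look.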
/images/Steve Jobs/20.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/20.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/24.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/24.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/25.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/25.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/26.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/26.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/28.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/28.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/29.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/29.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/3.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/30.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/30.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/31.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/31.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/32.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/32.jpg -------------------------------------------------------------------------------- 
/images/Steve Jobs/33.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/33.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/34.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/34.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/36.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/36.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/38.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/38.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/39.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/39.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/4.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/41.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/41.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/43.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/43.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/45.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/45.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/5.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/5.jpeg -------------------------------------------------------------------------------- 
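The per-class folders under images/ (Steve Jobs, Tim Cook) are what class_img_dict in generate_tfrecord.py turns into the numeric labels stored in the TFRecord. A minimal sketch of that mapping, assuming the interpreter is started from the repository root; note the ordering follows os.listdir and is not guaranteed to be alphabetical:

from generate_tfrecord import class_img_dict

# With the two folders shipped in this repository the result is a two-entry map,
# e.g. {'Steve Jobs': 1, 'Tim Cook': 2} (the exact numbering depends on os.listdir order).
print(class_img_dict('images'))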
/images/Steve Jobs/6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/6.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/7.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/7.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/8.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/8.jpg -------------------------------------------------------------------------------- /images/Steve Jobs/9.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Steve Jobs/9.jpg -------------------------------------------------------------------------------- /images/Tim Cook/1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/1.jpg -------------------------------------------------------------------------------- /images/Tim Cook/10.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/10.jpg -------------------------------------------------------------------------------- /images/Tim Cook/11.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/11.jpg -------------------------------------------------------------------------------- /images/Tim Cook/12.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/12.jpg -------------------------------------------------------------------------------- /images/Tim Cook/13.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/13.jpg -------------------------------------------------------------------------------- /images/Tim Cook/16.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/16.jpg -------------------------------------------------------------------------------- /images/Tim Cook/18.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/18.jpg -------------------------------------------------------------------------------- /images/Tim Cook/19.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/19.jpeg -------------------------------------------------------------------------------- /images/Tim Cook/2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/2.jpg -------------------------------------------------------------------------------- /images/Tim Cook/23.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/23.jpg -------------------------------------------------------------------------------- /images/Tim Cook/24.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/24.jpg -------------------------------------------------------------------------------- /images/Tim Cook/26.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/26.jpg -------------------------------------------------------------------------------- /images/Tim Cook/27.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/27.jpg -------------------------------------------------------------------------------- /images/Tim Cook/3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/3.jpg -------------------------------------------------------------------------------- /images/Tim Cook/31.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/31.jpg -------------------------------------------------------------------------------- /images/Tim Cook/32.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/32.jpg -------------------------------------------------------------------------------- /images/Tim Cook/33.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/33.jpg -------------------------------------------------------------------------------- /images/Tim Cook/35.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/35.jpg -------------------------------------------------------------------------------- /images/Tim Cook/36.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/36.jpg -------------------------------------------------------------------------------- /images/Tim Cook/37.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/37.jpg -------------------------------------------------------------------------------- /images/Tim Cook/38.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/38.jpg -------------------------------------------------------------------------------- /images/Tim Cook/43.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/43.jpg -------------------------------------------------------------------------------- /images/Tim Cook/44.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/44.jpg -------------------------------------------------------------------------------- /images/Tim Cook/45.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/45.jpg -------------------------------------------------------------------------------- /images/Tim Cook/46.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/46.jpg -------------------------------------------------------------------------------- /images/Tim Cook/47.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/47.jpg -------------------------------------------------------------------------------- /images/Tim Cook/48.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/48.jpg -------------------------------------------------------------------------------- /images/Tim Cook/49.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/49.jpeg -------------------------------------------------------------------------------- /images/Tim Cook/5.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/5.jpeg -------------------------------------------------------------------------------- /images/Tim Cook/50.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/50.jpg -------------------------------------------------------------------------------- /images/Tim Cook/51.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/51.jpg -------------------------------------------------------------------------------- /images/Tim Cook/54.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/54.jpg -------------------------------------------------------------------------------- /images/Tim Cook/55.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/55.jpg -------------------------------------------------------------------------------- /images/Tim Cook/56.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/56.jpg -------------------------------------------------------------------------------- /images/Tim Cook/58.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/58.jpg -------------------------------------------------------------------------------- /images/Tim Cook/59.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/59.jpg -------------------------------------------------------------------------------- /images/Tim Cook/6.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/6.jpg -------------------------------------------------------------------------------- /images/Tim Cook/60.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/60.jpg -------------------------------------------------------------------------------- /images/Tim Cook/61.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/61.jpg -------------------------------------------------------------------------------- /images/Tim Cook/62.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/62.jpg -------------------------------------------------------------------------------- /images/Tim Cook/64.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/64.jpg -------------------------------------------------------------------------------- /images/Tim Cook/65.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/65.jpg -------------------------------------------------------------------------------- /images/Tim Cook/66.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/66.jpg -------------------------------------------------------------------------------- /images/Tim Cook/67.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/67.jpg -------------------------------------------------------------------------------- /images/Tim Cook/68.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/68.jpg -------------------------------------------------------------------------------- /images/Tim Cook/69.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/69.jpg -------------------------------------------------------------------------------- /images/Tim Cook/7.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/7.jpg -------------------------------------------------------------------------------- /images/Tim Cook/8.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/8.jpg -------------------------------------------------------------------------------- /images/Tim Cook/9.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/images/Tim Cook/9.jpg -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | import argparse 4 | import csv 5 | from PIL import Image 6 | from coordinates import embeddings 7 | from generate_tfrecord import generate_tf 8 | from cascade import detect_face 9 | import cv2 10 | import numpy as np 11 | from check import export 12 | 13 | 14 | def writeCsvFile(file_name,data,*args,**kwargs): 15 | # Opening and writing the csv file 16 | with open(file_name,"w") as f: 17 | writer =csv.writer(f) 18 | writer.writerows(data) 19 | 20 | print ("****** Successfully CSV File Generated *************") 21 | 22 | 23 | def standard_size(image_dir): 24 | for dirs in os.listdir(image_dir): 25 | dir_path=os.path.join(image_dir,dirs) 26 | for f in os.listdir(dir_path): 27 | img_p = os.path.join(image_dir,dirs,f) 28 | im = Image.open(img_p) 29 | image_data = np.asarray(im) 30 | file_name=f 31 | imResize = cv2.resize(image_data, (500, 500)) 32 | try: 33 | im = Image.fromarray(imResize, 'RGB') 34 | except ValueError: 35 | os.remove(img_p) 36 | print("Bad Image Format, Removing") 37 | continue 38 | im.save(dir_path+'/'+f, 'JPEG', quality=90) 39 | 40 | 41 | def image_process(image_dir,data_list,method): 42 | 43 | margin = 44 # Default Margin Value of the image 44 | counter = 0 # To count how many faces detected 45 | gpu_memory_fraction =1.0 # if we use GPU ,we define the upper bound of the GPU Memory 46 | standard_size(image_dir) # Converting all the image to standard size 47 | for dirs in os.listdir(image_dir): 48 | dir_path=os.path.join(image_dir,dirs) 49 | for f in os.listdir(dir_path): 50 | img_p = os.path.join(image_dir,dirs,f) 51 | label=img_p.split('/')[-2] 52 | file_name=f 53 | im = Image.open(img_p) # Opening the image using PILLOW 54 | w,h = im.size # getting the width and the height of the image 55 | size = im.size # for passing the face embeddings parameters 56 | if method == "harr": 57 | xmin,ymin,xmax,ymax=detect_face(im) # Using opencv method 58 | if method == "facenet": 59 | xmin,ymin,xmax,ymax = embeddings(img_p,size,margin,gpu_memory_fraction) # Calling the facenet embedding function 60 | 61 | if xmin == ymin == xmax == ymax == 0: 62 | # It will remove the undetected and error image 63 | os.remove(img_p) 64 | print("*"+img_p+"*") 65 | print("********** Error With the Image , So Removing **********") 66 | 67 | else: 68 | # It will add the detected image 69 | counter += 1 70 | print("Face Detected and Processed : {}".format(counter)) 71 | 
data_list.append([file_name,w,h,label,xmin,ymin,xmax,ymax]) # Appending in a list format 72 | 73 | print("**** Successfully image processed *********") 74 | 75 | return data_list 76 | 77 | 78 | def parse_arguments(argv): 79 | # Defining the parser 80 | parser = argparse.ArgumentParser() 81 | parser.add_argument('csv_name',type=str,help='Name of the csv') 82 | parser.add_argument('tfrecord_name',type=str,help='Name of the tfrecord') 83 | parser.add_argument('method',type=str,help='Method to use to detect Face') 84 | 85 | return parser.parse_args(argv) 86 | 87 | # image directory 88 | image_dir = os.path.join(os.getcwd(),"images") 89 | 90 | # parsing the arguments 91 | args=parse_arguments(sys.argv[1:]) 92 | 93 | # defining the list structure to convert to csv 94 | data_list = [['filename','width','height','class','xmin','ymin','xmax','ymax']] 95 | 96 | # process the images and getting the coordinate values as list 97 | coordinate_list = image_process(image_dir,data_list,args.method) 98 | 99 | 100 | 101 | # writing into csv file 102 | writeCsvFile(args.csv_name, coordinate_list) 103 | 104 | # conversion to tfrecord 105 | generate_tf(args.csv_name,args.tfrecord_name,image_dir) 106 | 107 | # Exporting the output 108 | class_labels=[] 109 | for dirs in os.listdir(image_dir): 110 | class_labels.append(dirs) 111 | export(args.csv_name,class_labels) # Calling the export function from check 112 | -------------------------------------------------------------------------------- /output/Steve Jobs/10.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/10.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/11.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/11.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/12.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/12.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/14.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/14.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/16.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/16.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/18.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/18.jpg 
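main.py writes one row per detected face to the CSV (filename, width, height, class, xmin, ymin, xmax, ymax) before handing it to generate_tf. A minimal sketch, not part of the repository, for eyeballing those boxes with OpenCV; the helper name draw_boxes and the preview/ output folder are illustrative, and it assumes the source images have already been resized to 500x500 by standard_size:

import os
import cv2
import pandas as pd

def draw_boxes(csv_name='train.csv', image_dir='images', out_dir='preview'):
    # Draw each annotated box onto its (already resized) source image
    os.makedirs(out_dir, exist_ok=True)
    df = pd.read_csv(csv_name)
    for _, row in df.iterrows():
        img_path = os.path.join(image_dir, row['class'], row['filename'])
        img = cv2.imread(img_path)
        if img is None:
            continue
        cv2.rectangle(img,
                      (int(row['xmin']), int(row['ymin'])),
                      (int(row['xmax']), int(row['ymax'])),
                      (0, 255, 0), 2)
        # Prefix the class name so files from different folders do not overwrite each other
        out_name = row['class'].replace(' ', '_') + '_' + row['filename']
        cv2.imwrite(os.path.join(out_dir, out_name), img)

if __name__ == '__main__':
    draw_boxes()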
-------------------------------------------------------------------------------- /output/Steve Jobs/19.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/19.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/2.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/20.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/20.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/24.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/24.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/25.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/25.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/26.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/26.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/28.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/28.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/29.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/29.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/3.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/30.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/30.jpg 
-------------------------------------------------------------------------------- /output/Steve Jobs/31.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/31.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/32.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/32.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/33.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/33.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/34.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/34.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/36.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/36.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/38.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/38.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/39.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/39.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/4.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/41.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/41.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/43.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/43.jpg 
-------------------------------------------------------------------------------- /output/Steve Jobs/45.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/45.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/5.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/5.jpeg -------------------------------------------------------------------------------- /output/Steve Jobs/6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/6.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/7.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/7.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/8.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/8.jpg -------------------------------------------------------------------------------- /output/Steve Jobs/9.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Steve Jobs/9.jpg -------------------------------------------------------------------------------- /output/Tim Cook/1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/1.jpg -------------------------------------------------------------------------------- /output/Tim Cook/10.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/10.jpg -------------------------------------------------------------------------------- /output/Tim Cook/11.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/11.jpg -------------------------------------------------------------------------------- /output/Tim Cook/12.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/12.jpg 
-------------------------------------------------------------------------------- /output/Tim Cook/13.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/13.jpg -------------------------------------------------------------------------------- /output/Tim Cook/16.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/16.jpg -------------------------------------------------------------------------------- /output/Tim Cook/18.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/18.jpg -------------------------------------------------------------------------------- /output/Tim Cook/19.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/19.jpeg -------------------------------------------------------------------------------- /output/Tim Cook/2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/2.jpg -------------------------------------------------------------------------------- /output/Tim Cook/23.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/23.jpg -------------------------------------------------------------------------------- /output/Tim Cook/24.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/24.jpg -------------------------------------------------------------------------------- /output/Tim Cook/26.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/26.jpg -------------------------------------------------------------------------------- /output/Tim Cook/27.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/27.jpg -------------------------------------------------------------------------------- /output/Tim Cook/3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/3.jpg 
-------------------------------------------------------------------------------- /output/Tim Cook/31.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/31.jpg -------------------------------------------------------------------------------- /output/Tim Cook/32.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/32.jpg -------------------------------------------------------------------------------- /output/Tim Cook/33.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/33.jpg -------------------------------------------------------------------------------- /output/Tim Cook/35.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/35.jpg -------------------------------------------------------------------------------- /output/Tim Cook/36.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/36.jpg -------------------------------------------------------------------------------- /output/Tim Cook/37.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/37.jpg -------------------------------------------------------------------------------- /output/Tim Cook/38.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/38.jpg -------------------------------------------------------------------------------- /output/Tim Cook/43.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/43.jpg -------------------------------------------------------------------------------- /output/Tim Cook/44.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/44.jpg -------------------------------------------------------------------------------- /output/Tim Cook/45.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/45.jpg 
-------------------------------------------------------------------------------- /output/Tim Cook/46.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/46.jpg -------------------------------------------------------------------------------- /output/Tim Cook/47.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/47.jpg -------------------------------------------------------------------------------- /output/Tim Cook/48.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/48.jpg -------------------------------------------------------------------------------- /output/Tim Cook/49.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/49.jpeg -------------------------------------------------------------------------------- /output/Tim Cook/5.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/5.jpeg -------------------------------------------------------------------------------- /output/Tim Cook/50.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/50.jpg -------------------------------------------------------------------------------- /output/Tim Cook/51.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/51.jpg -------------------------------------------------------------------------------- /output/Tim Cook/54.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/54.jpg -------------------------------------------------------------------------------- /output/Tim Cook/55.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/55.jpg -------------------------------------------------------------------------------- /output/Tim Cook/56.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/56.jpg 
-------------------------------------------------------------------------------- /output/Tim Cook/58.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/58.jpg -------------------------------------------------------------------------------- /output/Tim Cook/59.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/59.jpg -------------------------------------------------------------------------------- /output/Tim Cook/6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/6.jpg -------------------------------------------------------------------------------- /output/Tim Cook/60.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/60.jpg -------------------------------------------------------------------------------- /output/Tim Cook/61.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/61.jpg -------------------------------------------------------------------------------- /output/Tim Cook/62.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/62.jpg -------------------------------------------------------------------------------- /output/Tim Cook/64.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/64.jpg -------------------------------------------------------------------------------- /output/Tim Cook/65.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/65.jpg -------------------------------------------------------------------------------- /output/Tim Cook/66.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/66.jpg -------------------------------------------------------------------------------- /output/Tim Cook/67.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/67.jpg 
--------------------------------------------------------------------------------
/output/Tim Cook/68.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/68.jpg
--------------------------------------------------------------------------------
/output/Tim Cook/69.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/69.jpg
--------------------------------------------------------------------------------
/output/Tim Cook/7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/7.jpg
--------------------------------------------------------------------------------
/output/Tim Cook/8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/8.jpg
--------------------------------------------------------------------------------
/output/Tim Cook/9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/output/Tim Cook/9.jpg
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | absl-py==0.5.0
2 | astor==0.7.1
3 | gast==0.2.0
4 | grpcio==1.15.0
5 | Markdown==2.6.11
6 | numpy==1.14.5
7 | opencv-python==3.4.3.18
8 | pandas==0.23.4
9 | Pillow==5.2.0
10 | protobuf==3.6.1
11 | python-dateutil==2.7.3
12 | pytz==2018.5
13 | scipy==1.1.0
14 | six==1.11.0
15 | tensorboard==1.10.0
16 | tensorflow==1.5
17 | termcolor==1.1.0
18 | Werkzeug==0.14.1
19 |
--------------------------------------------------------------------------------
/resource/18.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/resource/18.jpg
--------------------------------------------------------------------------------
/resource/structue.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/resource/structue.png
--------------------------------------------------------------------------------
/train.csv:
--------------------------------------------------------------------------------
1 | filename,width,height,class,xmin,ymin,xmax,ymax
2 | 36.jpg,500,500,Steve Jobs,171.53427171707153,40.27968019247055,293.6941141486168,194.24924451112747
3 | 31.jpg,500,500,Steve Jobs,20.856376826763153,61.38668072223663,265.7656219303608,423.2184410095215
4 | 12.jpg,500,500,Steve Jobs,263.5558865070343,77.57165282964706,439.00265419483185,332.80702716857195
5 | 4.jpg,500,500,Steve Jobs,134.9934187680483,102.3432895988226,315.4985604286194,358.0450374316424
6 | 19.jpg,500,500,Steve Jobs,260.16265738755465,186.07838281802833,353.4275072142482,285.98285686969757
7 | 39.jpg,500,500,Steve Jobs,132.85471452772617,43.72793362289667,406.84071741998196,344.7960756011307
8 | 30.jpg,500,500,Steve Jobs,69.6488224864006,103.89483960717916,375.934373177588,445.36543129198253
9 | 26.jpg,500,500,Steve Jobs,174.88001748919487,67.54025060683489,292.9362176284194,211.80037638545036
10 | 45.jpg,500,500,Steve Jobs,112.9232965707779,50.246217004954815,319.0321378707886,365.74996127188206
11 | 3.jpg,500,500,Steve Jobs,153.37354189157486,87.71736541390419,322.62566661834717,315.29533237218857
12 | 38.jpg,500,500,Steve Jobs,119.17405650019646,71.04310619086027,335.71576872467995,386.6238234397024
13 | 2.jpg,500,500,Steve Jobs,94.16867733001709,85.49527981877327,391.0485260486603,421.4040758293122
14 | 16.jpg,500,500,Steve Jobs,119.40123045444489,62.19941744208336,349.55695663392544,343.7139991745353
15 | 7.jpg,500,500,Steve Jobs,70.9281195551157,114.38175730407238,381.53218069672585,479.8401178345084
16 | 18.jpg,500,500,Steve Jobs,49.280452102422714,56.28637653589249,236.07860572636127,321.6046568453312
17 | 28.jpg,500,500,Steve Jobs,276.79405860602856,77.40585582703352,445.4402040913701,329.14954851567745
18 | 34.jpg,500,500,Steve Jobs,160.32525795698166,0.0,419.2535489052534,335.3202270567417
19 | 8.jpg,500,500,Steve Jobs,191.9572937786579,28.009204253554344,327.0374589562416,223.15527799725533
20 | 41.jpg,500,500,Steve Jobs,143.2198634147644,0.0,375.5425228998065,284.7547485437244
21 | 9.jpg,500,500,Steve Jobs,146.68806380033493,77.73356232047081,315.03020241856575,335.83591252565384
22 | 10.jpg,500,500,Steve Jobs,225.13706648349762,2.398385599255562,361.0492430701852,154.7256064414978
23 | 6.jpg,500,500,Steve Jobs,218.12086181342602,8.11365981400013,343.0373011380434,144.2709012106061
24 | 11.jpg,500,500,Steve Jobs,244.65993185341358,84.82998438179493,457.49497958272696,391.64134995639324
25 | 43.jpg,500,500,Steve Jobs,184.63716019690037,66.1186697781086,289.2974757179618,205.20922099798918
26 | 25.jpg,500,500,Steve Jobs,65.07906196266413,118.83178769052029,305.5486790537834,452.98920045234263
27 | 5.jpeg,500,500,Steve Jobs,258.1336739063263,82.46733379364014,412.70067942142487,295.176632123068
28 | 29.jpg,500,500,Steve Jobs,290.2734102010727,14.02329371869564,392.66675551980734,151.8797261044383
29 | 32.jpg,500,500,Steve Jobs,174.5612851381302,64.4969094991684,399.8769828081131,365.0464195907116
30 | 14.jpg,500,500,Steve Jobs,120.8039962053299,0.0,360.8931660503149,332.9454622119665
31 | 33.jpg,500,500,Steve Jobs,175.4041292965412,46.94836711883545,400.03346379101276,323.7934452742338
32 | 24.jpg,500,500,Steve Jobs,109.88684584200382,134.08298152685165,360.8996148407459,446.91195073723793
33 | 20.jpg,500,500,Steve Jobs,133.7601788789034,76.41618722677231,269.9057053029537,272.0421134829521
34 | 58.jpg,500,500,Tim Cook,145.1617490798235,77.79348681867123,304.15142683684826,246.61871741898358
35 | 59.jpg,500,500,Tim Cook,209.26165553927422,99.50566458702087,311.23967906832695,236.30761414766312
36 | 65.jpg,500,500,Tim Cook,171.1068091392517,58.0843586102128,260.09109196066856,173.77179324626923
37 | 36.jpg,500,500,Tim Cook,94.24210131168365,134.7042466700077,294.74460627138615,401.6068183928728
38 | 31.jpg,500,500,Tim Cook,88.71308527886868,119.32604068517685,195.81208154559135,261.2351347319782
39 | 1.jpg,500,500,Tim Cook,138.7424283400178,57.43666069209576,356.4918710887432,364.8453649505973
40 | 66.jpg,500,500,Tim Cook,193.90937635302544,85.78355178236961,377.4630342721939,331.06251605413854
41 | 12.jpg,500,500,Tim Cook,201.04393975436687,61.03041268885136,321.1926921904087,225.50918701291084
42 | 62.jpg,500,500,Tim Cook,108.27165423333645,78.35507720708847,339.80464677512646,400.1638348158449
43 | 51.jpg,500,500,Tim Cook,167.98967625200748,28.243788480758667,328.21968583762646,238.17200105637312
44 | 47.jpg,500,500,Tim Cook,261.1581438779831,41.30834736675024,475.5712353885174,366.7664822284132
45 | 26.jpg,500,500,Tim Cook,55.13479648157954,230.58568020537496,131.7380462884903,319.0580322239548
46 | 45.jpg,500,500,Tim Cook,238.96991322934628,66.15845806896687,340.0155978947878,189.7120985686779
47 | 3.jpg,500,500,Tim Cook,131.97085511684418,23.375316470861435,429.0734723955393,409.9286998193711
48 | 38.jpg,500,500,Tim Cook,180.6731907427311,109.2551458477974,369.706024043262,368.0653402209282
49 | 64.jpg,500,500,Tim Cook,176.12478201836348,80.97224187850952,346.8815124928951,302.7344298362732
50 | 2.jpg,500,500,Tim Cook,130.35535702109337,55.788608849048615,318.04762771725655,351.5606455206871
51 | 16.jpg,500,500,Tim Cook,90.1915619969368,49.568554759025574,365.0864183306694,395.9880170226097
52 | 46.jpg,500,500,Tim Cook,74.5439283773303,112.7702138274908,234.0143654793501,335.4605439007282
53 | 7.jpg,500,500,Tim Cook,85.71998123824596,108.36914601922035,281.6766269803047,339.00113795511425
54 | 23.jpg,500,500,Tim Cook,178.53562933206558,57.87383475899696,328.01815658807755,253.42727002501488
55 | 37.jpg,500,500,Tim Cook,103.07761377096176,80.29253044724464,339.8693901002407,373.1692569553852
56 | 18.jpg,500,500,Tim Cook,98.20617580413818,54.846071004867554,322.1488095521927,361.17918434739113
57 | 48.jpg,500,500,Tim Cook,62.90743473172188,77.87760281562805,354.9854444563389,427.6464519947767
58 | 8.jpg,500,500,Tim Cook,136.35785099864006,47.5033420920372,403.55128115415573,448.85420718602836
59 | 49.jpeg,500,500,Tim Cook,96.78456282615662,43.11342504620552,380.62895504385233,395.05495394580066
60 | 9.jpg,500,500,Tim Cook,154.25770449638367,58.94842228293419,333.88705982267857,323.98244766145945
61 | 19.jpeg,500,500,Tim Cook,131.16991011798382,71.66776690632105,324.49768133461475,317.0154584888369
62 | 10.jpg,500,500,Tim Cook,229.72305999696255,105.6425204128027,416.722112134099,369.4993640705943
63 | 6.jpg,500,500,Tim Cook,134.0563516393304,31.391966372728348,348.24291322380304,319.5538585484028
64 | 67.jpg,500,500,Tim Cook,68.7516679763794,105.971385627985,262.9316173195839,367.1877672225237
65 | 50.jpg,500,500,Tim Cook,208.72644520550966,123.21461126208305,317.6124521046877,261.56570184230804
66 | 35.jpg,500,500,Tim Cook,189.03446793556213,26.41620969772339,322.59043407440186,194.43461275100708
67 | 11.jpg,500,500,Tim Cook,216.23747350275517,107.8433733060956,346.9579378515482,245.98768024146557
68 | 60.jpg,500,500,Tim Cook,160.69035148620605,60.01768986880779,344.6268826946616,309.40651085972786
69 | 54.jpg,500,500,Tim Cook,106.15717387199402,102.04869528114796,247.98449602723122,284.8523946925998
70 | 43.jpg,500,500,Tim Cook,205.6414727345109,81.81095653772354,346.5953520089388,271.2438712120056
71 | 56.jpg,500,500,Tim Cook,190.26858639717102,80.96642634272575,384.7701907157898,350.5161276701838
72 | 5.jpeg,500,500,Tim Cook,173.45566235482693,82.91859129071236,319.0773280262947,265.87047067284584
73 | 69.jpg,500,500,Tim Cook,192.77310627698898,260.121173620224,328.89927838742733,429.2760783396661
74 | 44.jpg,500,500,Tim Cook,255.5412304699421,99.52657423913479,349.57018576562405,211.28186763823032
75 | 13.jpg,500,500,Tim Cook,143.15577447414398,129.9544226527214,265.2668823748827,260.149150878191
76 | 32.jpg,500,500,Tim Cook,86.37660647928715,55.35516995936632,351.10457775741816,371.5751672629267
77 | 68.jpg,500,500,Tim Cook,151.32515504956245,0.0,307.6344681829214,178.8307080771774
78 | 27.jpg,500,500,Tim Cook,42.70212519168854,116.69068264961243,222.66294527053833,367.2019245624542
79 | 33.jpg,500,500,Tim Cook,189.63145922124386,103.35713970661163,301.883116543293,244.2731357961893
80 | 55.jpg,500,500,Tim Cook,179.6229516863823,79.94993408024311,321.0401841402054,275.98883505165577
81 | 61.jpg,500,500,Tim Cook,171.21220479160547,53.7367187589407,299.2701509743929,216.05458861589432
82 | 24.jpg,500,500,Tim Cook,304.19089172780514,152.54564601182938,428.0053465887904,325.58133359253407
83 |
--------------------------------------------------------------------------------
/train.record:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/robinreni96/Automatic-Face-Detection-Annotation-and-Preprocessing/3a8e62c9a58de44043a7968e8d658d5d5d566f3e/train.record
--------------------------------------------------------------------------------
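Note on the annotation format: each row of train.csv above describes one labelled face box for an image (filename, image width and height in pixels, class label, and the xmin/ymin/xmax/ymax box corners). As a minimal sketch of how such rows could be read back, assuming only that pandas (pinned in requirements.txt) is installed and that train.csv sits in the working directory, one might do:

    # Minimal sketch (not part of the repository): load the bounding-box
    # annotations from train.csv with pandas, which is pinned in requirements.txt.
    import pandas as pd

    annotations = pd.read_csv("train.csv")

    # Each row is one face: filename, image size, class label, and the box
    # corners (xmin, ymin, xmax, ymax) in pixel coordinates.
    for _, row in annotations.iterrows():
        box = (row["xmin"], row["ymin"], row["xmax"], row["ymax"])
        print(row["class"], row["filename"], box)

This is only an illustration of the CSV schema; the repository's own conversion to train.record is handled by its scripts rather than by this snippet.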