├── requirements_docs.txt ├── docs ├── authors.rst ├── history.rst ├── readme.rst ├── contributing.rst ├── modules.rst ├── face_recognition.rst ├── index.rst ├── installation.rst ├── usage.rst ├── Makefile ├── make.bat └── conf.py ├── tests ├── __init__.py ├── test_images │ ├── 32bit.png │ ├── biden.jpg │ ├── obama.jpg │ ├── obama2.jpg │ ├── obama3.jpg │ ├── obama_partial_face.jpg │ └── obama_partial_face2.jpg └── test_face_recognition.py ├── examples ├── biden.jpg ├── obama.jpg ├── obama2.jpg ├── obama-1080p.jpg ├── obama-240p.jpg ├── obama-480p.jpg ├── obama-720p.jpg ├── obama_small.jpg ├── two_people.jpg ├── alex-lacamoire.png ├── hamilton_clip.mp4 ├── lin-manuel-miranda.png ├── short_hamilton_clip.mp4 ├── knn_examples │ ├── test │ │ ├── obama1.jpg │ │ ├── kit_with_rose.jpg │ │ ├── alex_lacamoire1.jpg │ │ ├── johnsnow_test1.jpg │ │ └── obama_and_biden.jpg │ └── train │ │ ├── biden │ │ ├── biden.jpg │ │ └── biden2.jpg │ │ ├── obama │ │ ├── obama.jpg │ │ └── obama2.jpg │ │ ├── rose_leslie │ │ ├── img1.jpg │ │ └── img2.jpg │ │ ├── alex_lacamoire │ │ └── img1.jpg │ │ └── kit_harington │ │ ├── john1.jpeg │ │ └── john2.jpeg ├── find_faces_in_picture.py ├── find_faces_in_picture_cnn.py ├── find_facial_features_in_picture.py ├── recognize_faces_in_pictures.py ├── digital_makeup.py ├── blur_faces_on_webcam.py ├── face_distance.py ├── facerec_on_raspberry_pi.py ├── find_faces_in_batches.py ├── benchmark.py ├── identify_and_draw_boxes_on_faces.py ├── facerec_from_webcam.py ├── facerec_from_video_file.py ├── facerec_from_webcam_faster.py ├── web_service_example.py └── face_recognition_knn.py ├── requirements.txt ├── face_recognition ├── __init__.py ├── face_detection_cli.py ├── face_recognition_cli.py └── api.py ├── requirements_dev.txt ├── MANIFEST.in ├── .editorconfig ├── tox.ini ├── .travis.yml ├── .github └── ISSUE_TEMPLATE.md ├── AUTHORS.rst ├── setup.cfg ├── .gitignore ├── LICENSE ├── Dockerfile ├── setup.py ├── Makefile ├── CONTRIBUTING.rst ├── HISTORY.rst ├── 
README.md └── README.rst /requirements_docs.txt: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /docs/authors.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../AUTHORS.rst 2 | -------------------------------------------------------------------------------- /docs/history.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../HISTORY.rst 2 | -------------------------------------------------------------------------------- /docs/readme.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../README.rst 2 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | -------------------------------------------------------------------------------- /docs/contributing.rst: -------------------------------------------------------------------------------- 1 | .. 
include:: ../CONTRIBUTING.rst 2 | -------------------------------------------------------------------------------- /examples/biden.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/biden.jpg -------------------------------------------------------------------------------- /examples/obama.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/obama.jpg -------------------------------------------------------------------------------- /examples/obama2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/obama2.jpg -------------------------------------------------------------------------------- /examples/obama-1080p.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/obama-1080p.jpg -------------------------------------------------------------------------------- /examples/obama-240p.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/obama-240p.jpg -------------------------------------------------------------------------------- /examples/obama-480p.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/obama-480p.jpg -------------------------------------------------------------------------------- /examples/obama-720p.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/obama-720p.jpg 
-------------------------------------------------------------------------------- /examples/obama_small.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/obama_small.jpg -------------------------------------------------------------------------------- /examples/two_people.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/two_people.jpg -------------------------------------------------------------------------------- /examples/alex-lacamoire.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/alex-lacamoire.png -------------------------------------------------------------------------------- /examples/hamilton_clip.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/hamilton_clip.mp4 -------------------------------------------------------------------------------- /tests/test_images/32bit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/tests/test_images/32bit.png -------------------------------------------------------------------------------- /tests/test_images/biden.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/tests/test_images/biden.jpg -------------------------------------------------------------------------------- /tests/test_images/obama.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/tests/test_images/obama.jpg 
-------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | face_recognition_models 2 | Click>=6.0 3 | dlib>=19.3.0 4 | numpy 5 | Pillow 6 | scipy>=0.17.0 7 | -------------------------------------------------------------------------------- /tests/test_images/obama2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/tests/test_images/obama2.jpg -------------------------------------------------------------------------------- /tests/test_images/obama3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/tests/test_images/obama3.jpg -------------------------------------------------------------------------------- /examples/lin-manuel-miranda.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/lin-manuel-miranda.png -------------------------------------------------------------------------------- /examples/short_hamilton_clip.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/short_hamilton_clip.mp4 -------------------------------------------------------------------------------- /docs/modules.rst: -------------------------------------------------------------------------------- 1 | face_recognition 2 | ================ 3 | 4 | .. 
toctree:: 5 | :maxdepth: 4 6 | 7 | face_recognition 8 | -------------------------------------------------------------------------------- /examples/knn_examples/test/obama1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/test/obama1.jpg -------------------------------------------------------------------------------- /tests/test_images/obama_partial_face.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/tests/test_images/obama_partial_face.jpg -------------------------------------------------------------------------------- /tests/test_images/obama_partial_face2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/tests/test_images/obama_partial_face2.jpg -------------------------------------------------------------------------------- /examples/knn_examples/test/kit_with_rose.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/test/kit_with_rose.jpg -------------------------------------------------------------------------------- /examples/knn_examples/train/biden/biden.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/biden/biden.jpg -------------------------------------------------------------------------------- /examples/knn_examples/train/biden/biden2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/biden/biden2.jpg 
-------------------------------------------------------------------------------- /examples/knn_examples/train/obama/obama.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/obama/obama.jpg -------------------------------------------------------------------------------- /examples/knn_examples/train/obama/obama2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/obama/obama2.jpg -------------------------------------------------------------------------------- /examples/knn_examples/test/alex_lacamoire1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/test/alex_lacamoire1.jpg -------------------------------------------------------------------------------- /examples/knn_examples/test/johnsnow_test1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/test/johnsnow_test1.jpg -------------------------------------------------------------------------------- /examples/knn_examples/test/obama_and_biden.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/test/obama_and_biden.jpg -------------------------------------------------------------------------------- /examples/knn_examples/train/rose_leslie/img1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/rose_leslie/img1.jpg -------------------------------------------------------------------------------- 
/examples/knn_examples/train/rose_leslie/img2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/rose_leslie/img2.jpg -------------------------------------------------------------------------------- /examples/knn_examples/train/alex_lacamoire/img1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/alex_lacamoire/img1.jpg -------------------------------------------------------------------------------- /examples/knn_examples/train/kit_harington/john1.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/kit_harington/john1.jpeg -------------------------------------------------------------------------------- /examples/knn_examples/train/kit_harington/john2.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Todo/face_recognition/master/examples/knn_examples/train/kit_harington/john2.jpeg -------------------------------------------------------------------------------- /docs/face_recognition.rst: -------------------------------------------------------------------------------- 1 | face_recognition package 2 | ======================== 3 | 4 | Module contents 5 | --------------- 6 | 7 | .. 
automodule:: face_recognition.api 8 | :members: 9 | :undoc-members: 10 | :show-inheritance: 11 | -------------------------------------------------------------------------------- /face_recognition/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | __author__ = """Adam Geitgey""" 4 | __email__ = 'ageitgey@gmail.com' 5 | __version__ = '1.2.2' 6 | 7 | from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance 8 | -------------------------------------------------------------------------------- /requirements_dev.txt: -------------------------------------------------------------------------------- 1 | pip==8.1.2 2 | bumpversion==0.5.3 3 | wheel==0.29.0 4 | watchdog==0.8.3 5 | flake8==2.6.0 6 | tox==2.3.1 7 | coverage==4.1 8 | Sphinx==1.4.8 9 | cryptography==1.7 10 | PyYAML==3.11 11 | face_recognition_models 12 | Click>=6.0 13 | dlib>=19.3.0 14 | numpy 15 | scipy 16 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | 2 | include AUTHORS.rst 3 | 4 | include CONTRIBUTING.rst 5 | include HISTORY.rst 6 | include LICENSE 7 | include README.rst 8 | 9 | recursive-include tests * 10 | recursive-exclude * __pycache__ 11 | recursive-exclude * *.py[co] 12 | 13 | recursive-include docs *.rst conf.py Makefile make.bat *.jpg *.png *.gif 14 | -------------------------------------------------------------------------------- /.editorconfig: -------------------------------------------------------------------------------- 1 | # http://editorconfig.org 2 | 3 | root = true 4 | 5 | [*] 6 | indent_style = space 7 | indent_size = 4 8 | trim_trailing_whitespace = true 9 | insert_final_newline = true 10 | charset = utf-8 11 | end_of_line = lf 12 | 13 | [*.bat] 14 | indent_style = tab 15 | end_of_line = crlf 16 | 17 | 
[LICENSE] 18 | insert_final_newline = false 19 | 20 | [Makefile] 21 | indent_style = tab 22 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | Welcome to Face Recognition's documentation! 2 | ============================================ 3 | 4 | Contents: 5 | 6 | .. toctree:: 7 | :maxdepth: 2 8 | 9 | readme 10 | installation 11 | usage 12 | modules 13 | contributing 14 | authors 15 | history 16 | 17 | Indices and tables 18 | ================== 19 | 20 | * :ref:`genindex` 21 | * :ref:`modindex` 22 | * :ref:`search` 23 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = 3 | py27 4 | py34 5 | py35 6 | py36 7 | flake8 8 | 9 | 10 | [travis] 11 | python = 12 | 2.7: py27, flake8 13 | 3.4: py34, flake8 14 | 3.5: py35, flake8 15 | 3.6: py36, flake8 16 | 17 | 18 | [testenv] 19 | commands = 20 | python setup.py test 21 | 22 | 23 | [testenv:flake8] 24 | deps = 25 | flake8==2.6.0 26 | 27 | commands = 28 | flake8 29 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | dist: trusty 2 | sudo: required 3 | language: python 4 | python: 5 | - "2.7" 6 | - "3.4" 7 | - "3.5" 8 | - "3.6" 9 | 10 | before_install: 11 | - sudo apt-get -qq update 12 | - sudo apt-get install -qq cmake python-numpy python-scipy libboost-python-dev 13 | - pip install git+https://github.com/ageitgey/face_recognition_models 14 | 15 | install: 16 | - pip install -r requirements.txt 17 | - pip install tox-travis 18 | 19 | script: tox 20 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE.md:
-------------------------------------------------------------------------------- 1 | * face_recognition version: 2 | * Python version: 3 | * Operating System: 4 | 5 | ### Description 6 | 7 | Describe what you were trying to get done. 8 | Tell us what happened, what went wrong, and what you expected to happen. 9 | IMPORTANT: If your issue is related to a specific picture, include it so others can reproduce the issue. 10 | 11 | ### What I Did 12 | 13 | ``` 14 | Paste the command(s) you ran and the output. 15 | If there was a crash, please include the traceback here. 16 | ``` 17 | -------------------------------------------------------------------------------- /AUTHORS.rst: -------------------------------------------------------------------------------- 1 | ======= 2 | Authors 3 | ======= 4 | 5 | * Adam Geitgey 6 | 7 | Thanks 8 | ------ 9 | 10 | * Many, many thanks to Davis King (@nulhom) 11 | for creating dlib and for providing the trained facial feature detection and face encoding models 12 | used in this library. 13 | * Thanks to everyone who works on all the awesome Python data science libraries like numpy, scipy, scikit-image, 14 | pillow, etc., that make this kind of stuff so easy and fun in Python. 15 | * Thanks to Cookiecutter and the audreyr/cookiecutter-pypackage project template 16 | for making Python project packaging way more tolerable.
17 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [bumpversion] 2 | current_version = 1.2.1 3 | commit = True 4 | tag = True 5 | 6 | [bumpversion:file:setup.py] 7 | search = version='{current_version}' 8 | replace = version='{new_version}' 9 | 10 | [bumpversion:file:face_recognition/__init__.py] 11 | search = __version__ = '{current_version}' 12 | replace = __version__ = '{new_version}' 13 | 14 | [bdist_wheel] 15 | universal = 1 16 | 17 | [flake8] 18 | exclude = 19 | .github, 20 | .idea, 21 | .eggs, 22 | examples, 23 | docs, 24 | .tox, 25 | bin, 26 | dist, 27 | tools, 28 | *.egg-info, 29 | __init__.py, 30 | *.yml 31 | max-line-length = 160 32 | -------------------------------------------------------------------------------- /examples/find_faces_in_picture.py: -------------------------------------------------------------------------------- 1 | from PIL import Image 2 | import face_recognition 3 | 4 | # Load the jpg file into a numpy array 5 | image = face_recognition.load_image_file("biden.jpg") 6 | 7 | # Find all the faces in the image using the default HOG-based model. 8 | # This method is fairly accurate, but not as accurate as the CNN model and not GPU accelerated. 
9 | # See also: find_faces_in_picture_cnn.py 10 | face_locations = face_recognition.face_locations(image) 11 | 12 | print("I found {} face(s) in this photograph.".format(len(face_locations))) 13 | 14 | for face_location in face_locations: 15 | 16 | # Print the location of each face in this image 17 | top, right, bottom, left = face_location 18 | print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right)) 19 | 20 | # You can access the actual face itself like this: 21 | face_image = image[top:bottom, left:right] 22 | pil_image = Image.fromarray(face_image) 23 | pil_image.show() 24 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | .DS_Store 52 | 53 | # Django stuff: 54 | *.log 55 | 56 | # Sphinx documentation 57 | docs/_build/ 58 | 59 | # PyBuilder 60 | target/ 61 | 62 | # pyenv python configuration file 63 | .python-version 64 | 65 | .idea/ 66 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | MIT License 3 | 4 | Copyright (c) 2017, Adam Geitgey 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 7 | 8 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 9 | 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
11 | 12 | -------------------------------------------------------------------------------- /examples/find_faces_in_picture_cnn.py: -------------------------------------------------------------------------------- 1 | from PIL import Image 2 | import face_recognition 3 | 4 | # Load the jpg file into a numpy array 5 | image = face_recognition.load_image_file("biden.jpg") 6 | 7 | # Find all the faces in the image using a pre-trained convolutional neural network. 8 | # This method is more accurate than the default HOG model, but it's slower 9 | # unless you have an nvidia GPU and dlib compiled with CUDA extensions. But if you do, 10 | # this will use GPU acceleration and perform well. 11 | # See also: find_faces_in_picture.py 12 | face_locations = face_recognition.face_locations(image, number_of_times_to_upsample=0, model="cnn") 13 | 14 | print("I found {} face(s) in this photograph.".format(len(face_locations))) 15 | 16 | for face_location in face_locations: 17 | 18 | # Print the location of each face in this image 19 | top, right, bottom, left = face_location 20 | print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right)) 21 | 22 | # You can access the actual face itself like this: 23 | face_image = image[top:bottom, left:right] 24 | pil_image = Image.fromarray(face_image) 25 | pil_image.show() 26 | -------------------------------------------------------------------------------- /examples/find_facial_features_in_picture.py: -------------------------------------------------------------------------------- 1 | from PIL import Image, ImageDraw 2 | import face_recognition 3 | 4 | # Load the jpg file into a numpy array 5 | image = face_recognition.load_image_file("biden.jpg") 6 | 7 | # Find all facial features in all the faces in the image 8 | face_landmarks_list = face_recognition.face_landmarks(image) 9 | 10 | print("I found {} face(s) in this photograph.".format(len(face_landmarks_list))) 11 | 12 | for 
face_landmarks in face_landmarks_list: 13 | 14 | # Print the location of each facial feature in this image 15 | facial_features = [ 16 | 'chin', 17 | 'left_eyebrow', 18 | 'right_eyebrow', 19 | 'nose_bridge', 20 | 'nose_tip', 21 | 'left_eye', 22 | 'right_eye', 23 | 'top_lip', 24 | 'bottom_lip' 25 | ] 26 | 27 | for facial_feature in facial_features: 28 | print("The {} in this face has the following points: {}".format(facial_feature, face_landmarks[facial_feature])) 29 | 30 | # Let's trace out each facial feature in the image with a line! 31 | pil_image = Image.fromarray(image) 32 | d = ImageDraw.Draw(pil_image) 33 | 34 | for facial_feature in facial_features: 35 | d.line(face_landmarks[facial_feature], width=5) 36 | 37 | pil_image.show() 38 | -------------------------------------------------------------------------------- /docs/installation.rst: -------------------------------------------------------------------------------- 1 | .. highlight:: shell 2 | 3 | ============ 4 | Installation 5 | ============ 6 | 7 | 8 | Stable release 9 | -------------- 10 | 11 | To install Face Recognition, run this command in your terminal: 12 | 13 | .. code-block:: console 14 | 15 | $ pip3 install face_recognition 16 | 17 | This is the preferred method to install Face Recognition, as it will always install the most recent stable release. 18 | 19 | If you don't have `pip`_ installed, this `Python installation guide`_ can guide 20 | you through the process. 21 | 22 | .. _pip: https://pip.pypa.io 23 | .. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/ 24 | 25 | 26 | From sources 27 | ------------ 28 | 29 | The sources for Face Recognition can be downloaded from the `Github repo`_. 30 | 31 | You can either clone the public repository: 32 | 33 | .. code-block:: console 34 | 35 | $ git clone git://github.com/ageitgey/face_recognition 36 | 37 | Or download the `tarball`_: 38 | 39 | .. 
code-block:: console 40 | 41 | $ curl -OL https://github.com/ageitgey/face_recognition/tarball/master 42 | 43 | Once you have a copy of the source, you can install it with: 44 | 45 | .. code-block:: console 46 | 47 | $ python setup.py install 48 | 49 | 50 | .. _Github repo: https://github.com/ageitgey/face_recognition 51 | .. _tarball: https://github.com/ageitgey/face_recognition/tarball/master 52 | -------------------------------------------------------------------------------- /examples/recognize_faces_in_pictures.py: -------------------------------------------------------------------------------- 1 | import face_recognition 2 | 3 | # Load the jpg files into numpy arrays 4 | biden_image = face_recognition.load_image_file("biden.jpg") 5 | obama_image = face_recognition.load_image_file("obama.jpg") 6 | unknown_image = face_recognition.load_image_file("obama2.jpg") 7 | 8 | # Get the face encodings for each face in each image file 9 | # Since there could be more than one face in each image, it returns a list of encodings. 10 | # But since I know each image only has one face, I only care about the first encoding in each image, so I grab index 0. 11 | try: 12 | biden_face_encoding = face_recognition.face_encodings(biden_image)[0] 13 | obama_face_encoding = face_recognition.face_encodings(obama_image)[0] 14 | unknown_face_encoding = face_recognition.face_encodings(unknown_image)[0] 15 | except IndexError: 16 | print("I wasn't able to locate any faces in at least one of the images. Check the image files. Aborting...") 17 | quit() 18 | 19 | known_faces = [ 20 | biden_face_encoding, 21 | obama_face_encoding 22 | ] 23 | 24 | # results is an array of True/False telling if the unknown face matched anyone in the known_faces array 25 | results = face_recognition.compare_faces(known_faces, unknown_face_encoding) 26 | 27 | print("Is the unknown face a picture of Biden? {}".format(results[0])) 28 | print("Is the unknown face a picture of Obama? 
{}".format(results[1])) 29 | print("Is the unknown face a new person that we've never seen before? {}".format(not True in results)) 30 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | # This is a sample Dockerfile you can modify to deploy your own app based on face_recognition 2 | 3 | FROM python:3.6-slim-stretch 4 | 5 | RUN apt-get -y update 6 | RUN apt-get install -y --fix-missing \ 7 | build-essential \ 8 | cmake \ 9 | gfortran \ 10 | git \ 11 | wget \ 12 | curl \ 13 | graphicsmagick \ 14 | libgraphicsmagick1-dev \ 15 | libatlas-dev \ 16 | libavcodec-dev \ 17 | libavformat-dev \ 18 | libgtk2.0-dev \ 19 | libjpeg-dev \ 20 | liblapack-dev \ 21 | libswscale-dev \ 22 | pkg-config \ 23 | python3-dev \ 24 | python3-numpy \ 25 | software-properties-common \ 26 | zip \ 27 | && apt-get clean && rm -rf /tmp/* /var/tmp/* 28 | 29 | RUN cd ~ && \ 30 | mkdir -p dlib && \ 31 | git clone -b 'v19.9' --single-branch https://github.com/davisking/dlib.git dlib/ && \ 32 | cd dlib/ && \ 33 | python3 setup.py install --yes USE_AVX_INSTRUCTIONS 34 | 35 | 36 | # The rest of this file just runs an example script. 37 | 38 | # If you wanted to use this Dockerfile to run your own app instead, maybe you would do this: 39 | # COPY . /root/your_app_or_whatever 40 | # RUN cd /root/your_app_or_whatever && \ 41 | # pip3 install -r requirements.txt 42 | # RUN whatever_command_you_run_to_start_your_app 43 | 44 | COPY . 
/root/face_recognition 45 | RUN cd /root/face_recognition && \ 46 | pip3 install -r requirements.txt && \ 47 | python3 setup.py install 48 | 49 | CMD cd /root/face_recognition/examples && \ 50 | python3 recognize_faces_in_pictures.py 51 | -------------------------------------------------------------------------------- /examples/digital_makeup.py: -------------------------------------------------------------------------------- 1 | from PIL import Image, ImageDraw 2 | import face_recognition 3 | 4 | # Load the jpg file into a numpy array 5 | image = face_recognition.load_image_file("biden.jpg") 6 | 7 | # Find all facial features in all the faces in the image 8 | face_landmarks_list = face_recognition.face_landmarks(image) 9 | 10 | for face_landmarks in face_landmarks_list: 11 | pil_image = Image.fromarray(image) 12 | d = ImageDraw.Draw(pil_image, 'RGBA') 13 | 14 | # Make the eyebrows into a nightmare 15 | d.polygon(face_landmarks['left_eyebrow'], fill=(68, 54, 39, 128)) 16 | d.polygon(face_landmarks['right_eyebrow'], fill=(68, 54, 39, 128)) 17 | d.line(face_landmarks['left_eyebrow'], fill=(68, 54, 39, 150), width=5) 18 | d.line(face_landmarks['right_eyebrow'], fill=(68, 54, 39, 150), width=5) 19 | 20 | # Gloss the lips 21 | d.polygon(face_landmarks['top_lip'], fill=(150, 0, 0, 128)) 22 | d.polygon(face_landmarks['bottom_lip'], fill=(150, 0, 0, 128)) 23 | d.line(face_landmarks['top_lip'], fill=(150, 0, 0, 64), width=8) 24 | d.line(face_landmarks['bottom_lip'], fill=(150, 0, 0, 64), width=8) 25 | 26 | # Sparkle the eyes 27 | d.polygon(face_landmarks['left_eye'], fill=(255, 255, 255, 30)) 28 | d.polygon(face_landmarks['right_eye'], fill=(255, 255, 255, 30)) 29 | 30 | # Apply some eyeliner 31 | d.line(face_landmarks['left_eye'] + [face_landmarks['left_eye'][0]], fill=(0, 0, 0, 110), width=6) 32 | d.line(face_landmarks['right_eye'] + [face_landmarks['right_eye'][0]], fill=(0, 0, 0, 110), width=6) 33 | 34 | pil_image.show() 35 | 
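The matching done by `compare_faces` in the example scripts above boils down to a Euclidean-distance threshold over face encodings. A minimal pure-Python sketch of that logic, assuming tiny made-up 3-dimensional vectors in place of the real 128-dimensional encodings (0.6 is the library's documented default tolerance; this is an illustration, not the library's actual numpy implementation):

```python
import math  # math.dist requires Python 3.8+


def face_distance(known_encodings, unknown_encoding):
    # Euclidean distance between each known encoding and the unknown one,
    # mirroring what face_recognition.face_distance computes with numpy.
    return [math.dist(known, unknown_encoding) for known in known_encodings]


def compare_faces(known_encodings, unknown_encoding, tolerance=0.6):
    # True wherever the distance is within the tolerance
    # (0.6 is the face_recognition default).
    return [d <= tolerance for d in face_distance(known_encodings, unknown_encoding)]


# Made-up 3-dimensional "encodings" purely for illustration.
known = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
unknown = [0.1, 0.0, 0.0]
print(compare_faces(known, unknown))  # → [True, False]
```

Lower distances mean more similar faces, which is why tightening the tolerance (e.g. 0.5) trades false positives for false negatives.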
-------------------------------------------------------------------------------- /docs/usage.rst: -------------------------------------------------------------------------------- 1 | ===== 2 | Usage 3 | ===== 4 | 5 | To use Face Recognition in a project:: 6 | 7 | import face_recognition 8 | 9 | See the examples in the /examples folder on github for how to use each function. 10 | 11 | You can also check the API docs for the 'face_recognition' module to see the possible parameters for each function. 12 | 13 | The basic idea is that first you load an image:: 14 | 15 | import face_recognition 16 | 17 | image = face_recognition.load_image_file("your_file.jpg") 18 | 19 | That loads the image into a numpy array. If you already have an image in a numpy array, you can skip this step. 20 | 21 | Then you can perform operations on the image, like finding faces, identifying facial features or finding face encodings:: 22 | 23 | # Find all the faces in the image 24 | face_locations = face_recognition.face_locations(image) 25 | 26 | # Or maybe find the facial features in the image 27 | face_landmarks_list = face_recognition.face_landmarks(image) 28 | 29 | # Or you could get face encodings for each face in the image: 30 | list_of_face_encodings = face_recognition.face_encodings(image) 31 | 32 | Face encodings can be compared against each other to see if the faces are a match. Note: Finding the encoding for a face 33 | is a bit slow, so you might want to save the results for each image in a database or cache if you need to refer back to 34 | it later. 35 | 36 | But once you have the encodings for faces, you can compare them like this:: 37 | 38 | # results is an array of True/False telling if the unknown face matched anyone in the known_faces array 39 | results = face_recognition.compare_faces(known_face_encodings, a_single_unknown_face_encoding) 40 | 41 | It's that simple! Check out the examples for more details. 
42 | -------------------------------------------------------------------------------- /examples/blur_faces_on_webcam.py: -------------------------------------------------------------------------------- 1 | import face_recognition 2 | import cv2 3 | 4 | # This is a demo of blurring faces in video. 5 | 6 | # PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam. 7 | # OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this 8 | # specific demo. If you have trouble installing it, try any of the other demos that don't require it instead. 9 | 10 | # Get a reference to webcam #0 (the default one) 11 | video_capture = cv2.VideoCapture(0) 12 | 13 | # Initialize some variables 14 | face_locations = [] 15 | 16 | while True: 17 | # Grab a single frame of video 18 | ret, frame = video_capture.read() 19 | 20 | # Resize frame of video to 1/4 size for faster face detection processing 21 | small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25) 22 | 23 | # Find all the faces and face encodings in the current frame of video 24 | face_locations = face_recognition.face_locations(small_frame, model="cnn") 25 | 26 | # Display the results 27 | for top, right, bottom, left in face_locations: 28 | # Scale back up face locations since the frame we detected in was scaled to 1/4 size 29 | top *= 4 30 | right *= 4 31 | bottom *= 4 32 | left *= 4 33 | 34 | # Extract the region of the image that contains the face 35 | face_image = frame[top:bottom, left:right] 36 | 37 | # Blur the face image 38 | face_image = cv2.GaussianBlur(face_image, (99, 99), 30) 39 | 40 | # Put the blurred face region back into the frame image 41 | frame[top:bottom, left:right] = face_image 42 | 43 | # Display the resulting image 44 | cv2.imshow('Video', frame) 45 | 46 | # Hit 'q' on the keyboard to quit! 
47 | if cv2.waitKey(1) & 0xFF == ord('q'): 48 | break 49 | 50 | # Release handle to the webcam 51 | video_capture.release() 52 | cv2.destroyAllWindows() 53 | -------------------------------------------------------------------------------- /examples/face_distance.py: -------------------------------------------------------------------------------- 1 | import face_recognition 2 | 3 | # Often instead of just checking if two faces match or not (True or False), it's helpful to see how similar they are. 4 | # You can do that by using the face_distance function. 5 | 6 | # The model was trained in a way that faces with a distance of 0.6 or less should be a match. But if you want to 7 | # be more strict, you can look for a smaller face distance. For example, using a 0.55 cutoff would reduce false 8 | # positive matches at the risk of more false negatives. 9 | 10 | # Note: This isn't exactly the same as a "percent match". The scale isn't linear. But you can assume that images with a 11 | # smaller distance are more similar to each other than ones with a larger distance. 
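As the comments above note, the distance is not a linear "percent match"; under the hood, face_distance is simply the Euclidean distance between two 128-dimensional encoding vectors. A minimal sketch of that comparison, using short made-up vectors in place of real encodings:

```python
import math

# Illustrative sketch of what a face-distance comparison boils down to:
# the Euclidean distance between two encoding vectors, checked against a
# cutoff. Real face_recognition encodings are 128-dimensional numpy
# arrays; these short lists are made-up stand-ins.
def euclidean_distance(encoding_a, encoding_b):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(encoding_a, encoding_b)))

known = [0.1, 0.2, 0.3]      # hypothetical known-face encoding
unknown = [0.1, 0.25, 0.3]   # hypothetical unknown-face encoding

distance = euclidean_distance(known, unknown)
print(distance < 0.6)  # matches under the default 0.6 cutoff
```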
12 | 13 | # Load some images to compare against 14 | known_obama_image = face_recognition.load_image_file("obama.jpg") 15 | known_biden_image = face_recognition.load_image_file("biden.jpg") 16 | 17 | # Get the face encodings for the known images 18 | obama_face_encoding = face_recognition.face_encodings(known_obama_image)[0] 19 | biden_face_encoding = face_recognition.face_encodings(known_biden_image)[0] 20 | 21 | known_encodings = [ 22 | obama_face_encoding, 23 | biden_face_encoding 24 | ] 25 | 26 | # Load a test image and get encodings for it 27 | image_to_test = face_recognition.load_image_file("obama2.jpg") 28 | image_to_test_encoding = face_recognition.face_encodings(image_to_test)[0] 29 | 30 | # See how far apart the test image is from the known faces 31 | face_distances = face_recognition.face_distance(known_encodings, image_to_test_encoding) 32 | 33 | for i, face_distance in enumerate(face_distances): 34 | print("The test image has a distance of {:.2} from known image #{}".format(face_distance, i)) 35 | print("- With a normal cutoff of 0.6, would the test image match the known image? {}".format(face_distance < 0.6)) 36 | print("- With a very strict cutoff of 0.5, would the test image match the known image? {}".format(face_distance < 0.5)) 37 | print() 38 | -------------------------------------------------------------------------------- /examples/facerec_on_raspberry_pi.py: -------------------------------------------------------------------------------- 1 | # This is a demo of running face recognition on a Raspberry Pi. 2 | # This program will print out the names of anyone it recognizes to the console. 3 | 4 | # To run this, you need a Raspberry Pi 2 (or greater) with face_recognition and 5 | # the picamera[array] module installed.
6 | # You can follow these installation instructions to get your RPi set up: 7 | # https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65 8 | 9 | import face_recognition 10 | import picamera 11 | import numpy as np 12 | 13 | # Get a reference to the Raspberry Pi camera. 14 | # If this fails, make sure you have a camera connected to the RPi and that you 15 | # enabled your camera in raspi-config and rebooted first. 16 | camera = picamera.PiCamera() 17 | camera.resolution = (320, 240) 18 | output = np.empty((240, 320, 3), dtype=np.uint8) 19 | 20 | # Load a sample picture and learn how to recognize it. 21 | print("Loading known face image(s)") 22 | obama_image = face_recognition.load_image_file("obama_small.jpg") 23 | obama_face_encoding = face_recognition.face_encodings(obama_image)[0] 24 | 25 | # Initialize some variables 26 | face_locations = [] 27 | face_encodings = [] 28 | 29 | while True: 30 | print("Capturing image.") 31 | # Grab a single frame of video from the RPi camera as a numpy array 32 | camera.capture(output, format="rgb") 33 | 34 | # Find all the faces and face encodings in the current frame of video 35 | face_locations = face_recognition.face_locations(output) 36 | print("Found {} faces in image.".format(len(face_locations))) 37 | face_encodings = face_recognition.face_encodings(output, face_locations) 38 | 39 | # Loop over each face found in the frame to see if it's someone we know.
40 | for face_encoding in face_encodings: 41 | # See if the face is a match for the known face(s) 42 | match = face_recognition.compare_faces([obama_face_encoding], face_encoding) 43 | name = "" 44 | 45 | if match[0]: 46 | name = "Barack Obama" 47 | 48 | print("I see someone named {}!".format(name)) 49 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | from setuptools import setup 5 | 6 | with open('README.rst') as readme_file: 7 | readme = readme_file.read() 8 | 9 | with open('HISTORY.rst') as history_file: 10 | history = history_file.read() 11 | 12 | requirements = [ 13 | 'face_recognition_models>=0.3.0', 14 | 'Click>=6.0', 15 | 'dlib>=19.7', 16 | 'numpy', 17 | 'Pillow' 18 | ] 19 | 20 | test_requirements = [ 21 | 'tox', 22 | 'flake8==2.6.0' 23 | ] 24 | 25 | setup( 26 | name='face_recognition', 27 | version='1.2.2', 28 | description="Recognize faces from Python or from the command line", 29 | long_description=readme + '\n\n' + history, 30 | author="Adam Geitgey", 31 | author_email='ageitgey@gmail.com', 32 | url='https://github.com/ageitgey/face_recognition', 33 | packages=[ 34 | 'face_recognition', 35 | ], 36 | package_dir={'face_recognition': 'face_recognition'}, 37 | package_data={ 38 | 'face_recognition': ['models/*.dat'] 39 | }, 40 | entry_points={ 41 | 'console_scripts': [ 42 | 'face_recognition=face_recognition.face_recognition_cli:main', 43 | 'face_detection=face_recognition.face_detection_cli:main' 44 | ] 45 | }, 46 | install_requires=requirements, 47 | license="MIT license", 48 | zip_safe=False, 49 | keywords='face_recognition', 50 | classifiers=[ 51 | 'Development Status :: 4 - Beta', 52 | 'Intended Audience :: Developers', 53 | 'License :: OSI Approved :: MIT License', 54 | 'Natural Language :: English', 55 | "Programming Language :: Python :: 2", 56 | 'Programming 
Language :: Python :: 2.6', 57 | 'Programming Language :: Python :: 2.7', 58 | 'Programming Language :: Python :: 3', 59 | 'Programming Language :: Python :: 3.3', 60 | 'Programming Language :: Python :: 3.4', 61 | 'Programming Language :: Python :: 3.5', 62 | 'Programming Language :: Python :: 3.6', 63 | ], 64 | test_suite='tests', 65 | tests_require=test_requirements 66 | ) 67 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: clean clean-test clean-pyc clean-build docs help 2 | .DEFAULT_GOAL := help 3 | define BROWSER_PYSCRIPT 4 | import os, webbrowser, sys 5 | try: 6 | from urllib import pathname2url 7 | except: 8 | from urllib.request import pathname2url 9 | 10 | webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1]))) 11 | endef 12 | export BROWSER_PYSCRIPT 13 | 14 | define PRINT_HELP_PYSCRIPT 15 | import re, sys 16 | 17 | for line in sys.stdin: 18 | match = re.match(r'^([a-zA-Z_-]+):.*?## (.*)$$', line) 19 | if match: 20 | target, help = match.groups() 21 | print("%-20s %s" % (target, help)) 22 | endef 23 | export PRINT_HELP_PYSCRIPT 24 | BROWSER := python3 -c "$$BROWSER_PYSCRIPT" 25 | 26 | help: 27 | @python3 -c "$$PRINT_HELP_PYSCRIPT" < $(MAKEFILE_LIST) 28 | 29 | clean: clean-build clean-pyc clean-test ## remove all build, test, coverage and Python artifacts 30 | 31 | 32 | clean-build: ## remove build artifacts 33 | rm -fr build/ 34 | rm -fr dist/ 35 | rm -fr .eggs/ 36 | find . -name '*.egg-info' -exec rm -fr {} + 37 | find . -name '*.egg' -exec rm -f {} + 38 | 39 | clean-pyc: ## remove Python file artifacts 40 | find . -name '*.pyc' -exec rm -f {} + 41 | find . -name '*.pyo' -exec rm -f {} + 42 | find . -name '*~' -exec rm -f {} + 43 | find . 
-name '__pycache__' -exec rm -fr {} + 44 | 45 | clean-test: ## remove test and coverage artifacts 46 | rm -fr .tox/ 47 | rm -f .coverage 48 | rm -fr htmlcov/ 49 | 50 | lint: ## check style with flake8 51 | flake8 face_recognition tests 52 | 53 | test: ## run tests quickly with the default Python 54 | 55 | python3 setup.py test 56 | 57 | test-all: ## run tests on every Python version with tox 58 | tox 59 | 60 | coverage: ## check code coverage quickly with the default Python 61 | 62 | coverage run --source face_recognition setup.py test 63 | 64 | coverage report -m 65 | coverage html 66 | $(BROWSER) htmlcov/index.html 67 | 68 | docs: ## generate Sphinx HTML documentation, including API docs 69 | sphinx-apidoc -o docs/ face_recognition 70 | $(MAKE) -C docs clean 71 | $(MAKE) -C docs html 72 | $(BROWSER) docs/_build/html/index.html 73 | 74 | servedocs: docs ## compile the docs watching for changes 75 | watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D . 76 | 77 | release: clean ## package and upload a release 78 | python3 setup.py sdist upload 79 | python3 setup.py bdist_wheel upload 80 | 81 | dist: clean ## builds source and wheel package 82 | python3 setup.py sdist 83 | python3 setup.py bdist_wheel 84 | ls -l dist 85 | 86 | install: clean ## install the package to the active Python's site-packages 87 | python3 setup.py install 88 | -------------------------------------------------------------------------------- /examples/find_faces_in_batches.py: -------------------------------------------------------------------------------- 1 | import face_recognition 2 | import cv2 3 | 4 | # This code finds all faces in a list of images using the CNN model. 5 | # 6 | # This demo is for the _special case_ when you need to find faces in LOTS of images very quickly and all the images 7 | # are the exact same size. This is common in video processing applications where you have lots of video frames 8 | # to process. 
9 | # 10 | # If you are processing a lot of images and using a GPU with CUDA, batch processing can be ~3x faster than processing 11 | # single images at a time. But if you aren't using a GPU, then batch processing isn't going to be very helpful. 12 | # 13 | # PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read the video file. 14 | # OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this 15 | # specific demo. If you have trouble installing it, try any of the other demos that don't require it instead. 16 | 17 | # Open video file 18 | video_capture = cv2.VideoCapture("short_hamilton_clip.mp4") 19 | 20 | frames = [] 21 | frame_count = 0 22 | 23 | while video_capture.isOpened(): 24 | # Grab a single frame of video 25 | ret, frame = video_capture.read() 26 | 27 | # Bail out when the video file ends 28 | if not ret: 29 | break 30 | 31 | # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses) 32 | frame = frame[:, :, ::-1] 33 | 34 | # Save each frame of the video to a list 35 | frame_count += 1 36 | frames.append(frame) 37 | 38 | # Every 128 frames (the default batch size), batch process the list of frames to find faces 39 | if len(frames) == 128: 40 | batch_of_face_locations = face_recognition.batch_face_locations(frames, number_of_times_to_upsample=0) 41 | 42 | # Now let's list all the faces we found in all 128 frames 43 | for frame_number_in_batch, face_locations in enumerate(batch_of_face_locations): 44 | number_of_faces_in_frame = len(face_locations) 45 | 46 | frame_number = frame_count - 128 + frame_number_in_batch 47 | print("I found {} face(s) in frame #{}.".format(number_of_faces_in_frame, frame_number)) 48 | 49 | for face_location in face_locations: 50 | # Print the location of each face in this frame 51 | top, right, bottom, left = face_location 52 | print(" - A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right:
{}".format(top, left, bottom, right)) 53 | 54 | # Clear the frames array to start the next batch 55 | frames = [] 56 | -------------------------------------------------------------------------------- /face_recognition/face_detection_cli.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | from __future__ import print_function 3 | import click 4 | import os 5 | import re 6 | import face_recognition.api as face_recognition 7 | import multiprocessing 8 | import sys 9 | import itertools 10 | 11 | 12 | def print_result(filename, location): 13 | top, right, bottom, left = location 14 | print("{},{},{},{},{}".format(filename, top, right, bottom, left)) 15 | 16 | 17 | def test_image(image_to_check, model): 18 | unknown_image = face_recognition.load_image_file(image_to_check) 19 | face_locations = face_recognition.face_locations(unknown_image, number_of_times_to_upsample=0, model=model) 20 | 21 | for face_location in face_locations: 22 | print_result(image_to_check, face_location) 23 | 24 | 25 | def image_files_in_folder(folder): 26 | return [os.path.join(folder, f) for f in os.listdir(folder) if re.match(r'.*\.(jpg|jpeg|png)', f, flags=re.I)] 27 | 28 | 29 | def process_images_in_process_pool(images_to_check, number_of_cpus, model): 30 | if number_of_cpus == -1: 31 | processes = None 32 | else: 33 | processes = number_of_cpus 34 | 35 | # macOS will crash due to a bug in libdispatch if you don't use 'forkserver' 36 | context = multiprocessing 37 | if "forkserver" in multiprocessing.get_all_start_methods(): 38 | context = multiprocessing.get_context("forkserver") 39 | 40 | pool = context.Pool(processes=processes) 41 | 42 | function_parameters = zip( 43 | images_to_check, 44 | itertools.repeat(model), 45 | ) 46 | 47 | pool.starmap(test_image, function_parameters) 48 | 49 | 50 | @click.command() 51 | @click.argument('image_to_check') 52 | @click.option('--cpus', default=1, help='number of CPU cores to use in 
parallel. -1 means "use all in system"') 53 | @click.option('--model', default="hog", help='Which face detection model to use. Options are "hog" or "cnn".') 54 | def main(image_to_check, cpus, model): 55 | # Multi-core processing only supported on Python 3.4 or greater 56 | if (sys.version_info < (3, 4)) and cpus != 1: 57 | click.echo("WARNING: Multi-processing support requires Python 3.4 or greater. Falling back to single-threaded processing!") 58 | cpus = 1 59 | 60 | if os.path.isdir(image_to_check): 61 | if cpus == 1: 62 | [test_image(image_file, model) for image_file in image_files_in_folder(image_to_check)] 63 | else: 64 | process_images_in_process_pool(image_files_in_folder(image_to_check), cpus, model) 65 | else: 66 | test_image(image_to_check, model) 67 | 68 | 69 | if __name__ == "__main__": 70 | main() 71 | -------------------------------------------------------------------------------- /examples/benchmark.py: -------------------------------------------------------------------------------- 1 | import timeit 2 | 3 | # Note: This example is only tested with Python 3 (not Python 2) 4 | 5 | # This is a very simple benchmark to give you an idea of how fast each step of face recognition will run on your system. 6 | # Notice that face detection gets very slow at large image sizes. So you might consider running face detection on a 7 | # scaled down version of your image and then running face encodings on the full size image.
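The scale-down trick described in the benchmark comment above needs one piece of bookkeeping: boxes detected on the small image must be multiplied back up before they are used on the full-size image (the blur_faces_on_webcam example does exactly this with a factor of 4). A sketch of that step, using made-up coordinates in place of a real detection result:

```python
# Sketch of the scale-down-then-detect trick mentioned above. Detection runs
# on a 1/4-size copy, so locations must be scaled back up by 4 before being
# used (e.g. with face_encodings) on the full-size image. The input here is
# a made-up stand-in for a face_recognition.face_locations result.
def upscale_locations(locations, factor=4):
    """Scale (top, right, bottom, left) boxes from a downscaled image back to full size."""
    return [tuple(coord * factor for coord in box) for box in locations]

small_frame_locations = [(10, 50, 40, 20)]  # hypothetical detection on the 1/4-size frame
print(upscale_locations(small_frame_locations))  # → [(40, 200, 160, 80)]
```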
8 | 9 | TEST_IMAGES = [ 10 | "obama-240p.jpg", 11 | "obama-480p.jpg", 12 | "obama-720p.jpg", 13 | "obama-1080p.jpg" 14 | ] 15 | 16 | 17 | def run_test(setup, test, iterations_per_test=5, tests_to_run=10): 18 | fastest_execution = min(timeit.Timer(test, setup=setup).repeat(tests_to_run, iterations_per_test)) 19 | execution_time = fastest_execution / iterations_per_test 20 | fps = 1.0 / execution_time 21 | return execution_time, fps 22 | 23 | 24 | setup_locate_faces = """ 25 | import face_recognition 26 | 27 | image = face_recognition.load_image_file("{}") 28 | """ 29 | 30 | test_locate_faces = """ 31 | face_locations = face_recognition.face_locations(image) 32 | """ 33 | 34 | setup_face_landmarks = """ 35 | import face_recognition 36 | 37 | image = face_recognition.load_image_file("{}") 38 | face_locations = face_recognition.face_locations(image) 39 | """ 40 | 41 | test_face_landmarks = """ 42 | landmarks = face_recognition.face_landmarks(image, face_locations=face_locations)[0] 43 | """ 44 | 45 | setup_encode_face = """ 46 | import face_recognition 47 | 48 | image = face_recognition.load_image_file("{}") 49 | face_locations = face_recognition.face_locations(image) 50 | """ 51 | 52 | test_encode_face = """ 53 | encoding = face_recognition.face_encodings(image, known_face_locations=face_locations)[0] 54 | """ 55 | 56 | setup_end_to_end = """ 57 | import face_recognition 58 | 59 | image = face_recognition.load_image_file("{}") 60 | """ 61 | 62 | test_end_to_end = """ 63 | encoding = face_recognition.face_encodings(image)[0] 64 | """ 65 | 66 | print("Benchmarks (Note: All benchmarks are only using a single CPU core)") 67 | print() 68 | 69 | for image in TEST_IMAGES: 70 | size = image.split("-")[1].split(".")[0] 71 | print("Timings at {}:".format(size)) 72 | 73 | print(" - Face locations: {:.4f}s ({:.2f} fps)".format(*run_test(setup_locate_faces.format(image), test_locate_faces))) 74 | print(" - Face landmarks: {:.4f}s ({:.2f} 
fps)".format(*run_test(setup_face_landmarks.format(image), test_face_landmarks))) 75 | print(" - Encode face (inc. landmarks): {:.4f}s ({:.2f} fps)".format(*run_test(setup_encode_face.format(image), test_encode_face))) 76 | print(" - End-to-end: {:.4f}s ({:.2f} fps)".format(*run_test(setup_end_to_end.format(image), test_end_to_end))) 77 | print() 78 | -------------------------------------------------------------------------------- /examples/identify_and_draw_boxes_on_faces.py: -------------------------------------------------------------------------------- 1 | import face_recognition 2 | from PIL import Image, ImageDraw 3 | 4 | # This is an example of running face recognition on a single image 5 | # and drawing a box around each person that was identified. 6 | 7 | # Load a sample picture and learn how to recognize it. 8 | obama_image = face_recognition.load_image_file("obama.jpg") 9 | obama_face_encoding = face_recognition.face_encodings(obama_image)[0] 10 | 11 | # Load a second sample picture and learn how to recognize it. 
12 | biden_image = face_recognition.load_image_file("biden.jpg") 13 | biden_face_encoding = face_recognition.face_encodings(biden_image)[0] 14 | 15 | # Create arrays of known face encodings and their names 16 | known_face_encodings = [ 17 | obama_face_encoding, 18 | biden_face_encoding 19 | ] 20 | known_face_names = [ 21 | "Barack Obama", 22 | "Joe Biden" 23 | ] 24 | 25 | # Load an image with an unknown face 26 | unknown_image = face_recognition.load_image_file("two_people.jpg") 27 | 28 | # Find all the faces and face encodings in the unknown image 29 | face_locations = face_recognition.face_locations(unknown_image) 30 | face_encodings = face_recognition.face_encodings(unknown_image, face_locations) 31 | 32 | # Convert the image to a PIL-format image so that we can draw on top of it with the Pillow library 33 | # See http://pillow.readthedocs.io/ for more about PIL/Pillow 34 | pil_image = Image.fromarray(unknown_image) 35 | # Create a Pillow ImageDraw Draw instance to draw with 36 | draw = ImageDraw.Draw(pil_image) 37 | 38 | # Loop through each face found in the unknown image 39 | for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings): 40 | # See if the face is a match for the known face(s) 41 | matches = face_recognition.compare_faces(known_face_encodings, face_encoding) 42 | 43 | name = "Unknown" 44 | 45 | # If a match was found in known_face_encodings, just use the first one. 
46 | if True in matches: 47 | first_match_index = matches.index(True) 48 | name = known_face_names[first_match_index] 49 | 50 | # Draw a box around the face using the Pillow module 51 | draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255)) 52 | 53 | # Draw a label with a name below the face 54 | text_width, text_height = draw.textsize(name) 55 | draw.rectangle(((left, bottom - text_height - 10), (right, bottom)), fill=(0, 0, 255), outline=(0, 0, 255)) 56 | draw.text((left + 6, bottom - text_height - 5), name, fill=(255, 255, 255, 255)) 57 | 58 | 59 | # Remove the drawing library from memory as per the Pillow docs 60 | del draw 61 | 62 | # Display the resulting image 63 | pil_image.show() 64 | 65 | # You can also save a copy of the new image to disk if you want by uncommenting this line 66 | # pil_image.save("image_with_boxes.jpg") 67 | -------------------------------------------------------------------------------- /examples/facerec_from_webcam.py: -------------------------------------------------------------------------------- 1 | import face_recognition 2 | import cv2 3 | 4 | # This is a super simple (but slow) example of running face recognition on live video from your webcam. 5 | # There's a second example that's a little more complicated but runs faster. 6 | 7 | # PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam. 8 | # OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this 9 | # specific demo. If you have trouble installing it, try any of the other demos that don't require it instead. 10 | 11 | # Get a reference to webcam #0 (the default one) 12 | video_capture = cv2.VideoCapture(0) 13 | 14 | # Load a sample picture and learn how to recognize it. 
15 | obama_image = face_recognition.load_image_file("obama.jpg") 16 | obama_face_encoding = face_recognition.face_encodings(obama_image)[0] 17 | 18 | # Load a second sample picture and learn how to recognize it. 19 | biden_image = face_recognition.load_image_file("biden.jpg") 20 | biden_face_encoding = face_recognition.face_encodings(biden_image)[0] 21 | 22 | # Create arrays of known face encodings and their names 23 | known_face_encodings = [ 24 | obama_face_encoding, 25 | biden_face_encoding 26 | ] 27 | known_face_names = [ 28 | "Barack Obama", 29 | "Joe Biden" 30 | ] 31 | 32 | while True: 33 | # Grab a single frame of video 34 | ret, frame = video_capture.read() 35 | 36 | # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses) 37 | rgb_frame = frame[:, :, ::-1] 38 | 39 | # Find all the faces and face encodings in the frame of video 40 | face_locations = face_recognition.face_locations(rgb_frame) 41 | face_encodings = face_recognition.face_encodings(rgb_frame, face_locations) 42 | 43 | # Loop through each face in this frame of video 44 | for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings): 45 | # See if the face is a match for the known face(s) 46 | matches = face_recognition.compare_faces(known_face_encodings, face_encoding) 47 | 48 | name = "Unknown" 49 | 50 | # If a match was found in known_face_encodings, just use the first one.
51 | if True in matches: 52 | first_match_index = matches.index(True) 53 | name = known_face_names[first_match_index] 54 | 55 | # Draw a box around the face 56 | cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2) 57 | 58 | # Draw a label with a name below the face 59 | cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED) 60 | font = cv2.FONT_HERSHEY_DUPLEX 61 | cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1) 62 | 63 | # Display the resulting image 64 | cv2.imshow('Video', frame) 65 | 66 | # Hit 'q' on the keyboard to quit! 67 | if cv2.waitKey(1) & 0xFF == ord('q'): 68 | break 69 | 70 | # Release handle to the webcam 71 | video_capture.release() 72 | cv2.destroyAllWindows() 73 | -------------------------------------------------------------------------------- /CONTRIBUTING.rst: -------------------------------------------------------------------------------- 1 | .. highlight:: shell 2 | 3 | ============ 4 | Contributing 5 | ============ 6 | 7 | Contributions are welcome, and they are greatly appreciated! Every 8 | little bit helps, and credit will always be given. 9 | 10 | You can contribute in many ways: 11 | 12 | Types of Contributions 13 | ---------------------- 14 | 15 | Report Bugs 16 | ~~~~~~~~~~~ 17 | 18 | Report bugs at https://github.com/ageitgey/face_recognition/issues. 19 | 20 | If you are reporting a bug, please include: 21 | 22 | * Your operating system name and version. 23 | * Any details about your local setup that might be helpful in troubleshooting. 24 | * Detailed steps to reproduce the bug. 25 | 26 | Submit Feedback 27 | ~~~~~~~~~~~~~~~ 28 | 29 | The best way to send feedback is to file an issue at https://github.com/ageitgey/face_recognition/issues. 30 | 31 | If you are proposing a feature: 32 | 33 | * Explain in detail how it would work. 34 | * Keep the scope as narrow as possible, to make it easier to implement. 
35 | * Remember that this is a volunteer-driven project, and that contributions 36 | are welcome :) 37 | 38 | Get Started! 39 | ------------ 40 | 41 | Ready to contribute? Here's how to set up `face_recognition` for local development. 42 | 43 | 1. Fork the `face_recognition` repo on GitHub. 44 | 2. Clone your fork locally:: 45 | 46 | $ git clone git@github.com:your_name_here/face_recognition.git 47 | 48 | 3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:: 49 | 50 | $ mkvirtualenv face_recognition 51 | $ cd face_recognition/ 52 | $ python setup.py develop 53 | 54 | 4. Create a branch for local development:: 55 | 56 | $ git checkout -b name-of-your-bugfix-or-feature 57 | 58 | Now you can make your changes locally. 59 | 60 | 5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:: 61 | 62 | $ flake8 face_recognition tests 63 | $ python setup.py test or py.test 64 | $ tox 65 | 66 | To get flake8 and tox, just pip install them into your virtualenv. 67 | 68 | 6. Commit your changes and push your branch to GitHub:: 69 | 70 | $ git add . 71 | $ git commit -m "Your detailed description of your changes." 72 | $ git push origin name-of-your-bugfix-or-feature 73 | 74 | 7. Submit a pull request through the GitHub website. 75 | 76 | Pull Request Guidelines 77 | ----------------------- 78 | 79 | Before you submit a pull request, check that it meets these guidelines: 80 | 81 | 1. The pull request should include tests. 82 | 2. If the pull request adds functionality, the docs should be updated. Put 83 | your new functionality into a function with a docstring, and add the 84 | feature to the list in README.rst. 85 | 3. The pull request should work for Python 2.6, 2.7, 3.3, 3.4 and 3.5, and for PyPy. 
Check 86 | https://travis-ci.org/ageitgey/face_recognition/pull_requests 87 | and make sure that the tests pass for all supported Python versions. 88 | 89 | Tips 90 | ---- 91 | 92 | To run a subset of tests:: 93 | 94 | 95 | $ python -m unittest tests.test_face_recognition 96 | -------------------------------------------------------------------------------- /examples/facerec_from_video_file.py: -------------------------------------------------------------------------------- 1 | import face_recognition 2 | import cv2 3 | 4 | # This is a demo of running face recognition on a video file and saving the results to a new video file. 5 | # 6 | # PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read and write video files. 7 | # OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this 8 | # specific demo. If you have trouble installing it, try any of the other demos that don't require it instead. 9 | 10 | # Open the input movie file 11 | input_movie = cv2.VideoCapture("hamilton_clip.mp4") 12 | length = int(input_movie.get(cv2.CAP_PROP_FRAME_COUNT)) 13 | 14 | # Create an output movie file (make sure resolution/frame rate matches input video!) 15 | fourcc = cv2.VideoWriter_fourcc(*'XVID') 16 | output_movie = cv2.VideoWriter('output.avi', fourcc, 29.97, (640, 360)) 17 | 18 | # Load some sample pictures and learn how to recognize them.
19 | lmm_image = face_recognition.load_image_file("lin-manuel-miranda.png") 20 | lmm_face_encoding = face_recognition.face_encodings(lmm_image)[0] 21 | 22 | al_image = face_recognition.load_image_file("alex-lacamoire.png") 23 | al_face_encoding = face_recognition.face_encodings(al_image)[0] 24 | 25 | known_faces = [ 26 | lmm_face_encoding, 27 | al_face_encoding 28 | ] 29 | 30 | # Initialize some variables 31 | face_locations = [] 32 | face_encodings = [] 33 | face_names = [] 34 | frame_number = 0 35 | 36 | while True: 37 | # Grab a single frame of video 38 | ret, frame = input_movie.read() 39 | frame_number += 1 40 | 41 | # Quit when the input video file ends 42 | if not ret: 43 | break 44 | 45 | # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses) 46 | rgb_frame = frame[:, :, ::-1] 47 | 48 | # Find all the faces and face encodings in the current frame of video 49 | face_locations = face_recognition.face_locations(rgb_frame) 50 | face_encodings = face_recognition.face_encodings(rgb_frame, face_locations) 51 | 52 | face_names = [] 53 | for face_encoding in face_encodings: 54 | # See if the face is a match for the known face(s) 55 | match = face_recognition.compare_faces(known_faces, face_encoding, tolerance=0.50) 56 | 57 | # If you had more than 2 faces, you could make this logic a lot prettier 58 | # but I kept it simple for the demo 59 | name = None 60 | if match[0]: 61 | name = "Lin-Manuel Miranda" 62 | elif match[1]: 63 | name = "Alex Lacamoire" 64 | 65 | face_names.append(name) 66 | 67 | # Label the results 68 | for (top, right, bottom, left), name in zip(face_locations, face_names): 69 | if not name: 70 | continue 71 | 72 | # Draw a box around the face 73 | cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2) 74 | 75 | # Draw a label with a name below the face 76 | cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 0, 255), cv2.FILLED) 77 | font = cv2.FONT_HERSHEY_DUPLEX 78 | 
cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.5, (255, 255, 255), 1) 79 | 80 | # Write the resulting image to the output video file 81 | print("Writing frame {} / {}".format(frame_number, length)) 82 | output_movie.write(frame) 83 | 84 | # All done! Release the input reader and finalize the output file. 85 | input_movie.release() 86 | output_movie.release() 87 | cv2.destroyAllWindows() 88 | -------------------------------------------------------------------------------- /HISTORY.rst: -------------------------------------------------------------------------------- 1 | History 2 | ======= 3 | 4 | 1.2.2 (2018-04-02) 5 | ------------------ 6 | 7 | * Added the face_detection CLI command 8 | * Removed dependencies on scipy to make installation easier 9 | * Cleaned up KNN example and fixed a bug with drawing fonts to label detected faces in the demo 10 | 11 | 12 | 1.2.1 (2018-02-01) 13 | ------------------ 14 | 15 | * Fixed version numbering inside of module code. 16 | 17 | 18 | 1.2.0 (2018-02-01) 19 | ------------------ 20 | 21 | * Fixed a bug where batch size parameter didn't work correctly when doing batch face detections on GPU. 22 | * Updated OpenCV examples to do proper BGR -> RGB conversion 23 | * Updated webcam examples to avoid common mistakes and reduce support questions 24 | * Added a KNN classification example 25 | * Added an example of automatically blurring faces in images or videos 26 | * Updated Dockerfile example to use dlib v19.9 which removes the boost dependency.
27 | 28 | 29 | 1.1.0 (2017-09-23) 30 | ------------------ 31 | 32 | * Will use dlib's 5-point face pose estimator when possible for speed (instead of the 68-point face pose estimator) 33 | * dlib v19.7 is now the minimum required version 34 | * face_recognition_models v0.3.0 is now the minimum required version 35 | 36 | 37 | 1.0.0 (2017-08-29) 38 | ------------------ 39 | 40 | * Added support for dlib's CNN face detection model via model="cnn" parameter on face detection call 41 | * Added support for GPU batched face detections using dlib's CNN face detector model 42 | * Added find_faces_in_picture_cnn.py to examples 43 | * Added find_faces_in_batches.py to examples 44 | * Added facerec_from_video_file.py to examples 45 | * dlib v19.5 is now the minimum required version 46 | * face_recognition_models v0.2.0 is now the minimum required version 47 | 48 | 49 | 0.2.2 (2017-07-07) 50 | ------------------ 51 | 52 | * Added --show-distance to cli 53 | * Fixed a bug where --tolerance was ignored in cli if testing a single image 54 | * Added benchmark.py to examples 55 | 56 | 57 | 0.2.1 (2017-07-03) 58 | ------------------ 59 | 60 | * Added --tolerance to cli 61 | 62 | 63 | 0.2.0 (2017-06-03) 64 | ------------------ 65 | 66 | * The CLI can now take advantage of multiple CPUs. Just pass in the --cpus X parameter where X is the number of CPUs to use. 67 | * Added face_distance.py example 68 | * Improved CLI tests to actually test the CLI functionality 69 | * Updated facerec_on_raspberry_pi.py to capture in RGB (not BGR) format. 70 | 71 | 72 | 0.1.14 (2017-04-22) 73 | ------------------- 74 | 75 | * Fixed a ValueError crash when using the CLI on Python 2.7 76 | 77 | 78 | 0.1.13 (2017-04-20) 79 | ------------------- 80 | 81 | * Raspberry Pi support. 82 | 83 | 84 | 0.1.12 (2017-04-13) 85 | ------------------- 86 | 87 | * Fixed: Face landmarks weren't returning all chin points.
88 | 89 | 90 | 0.1.11 (2017-03-30) 91 | ------------------- 92 | 93 | * Fixed a minor bug in the command-line interface. 94 | 95 | 96 | 0.1.10 (2017-03-21) 97 | ------------------- 98 | 99 | * Minor performance improvements in face comparisons. 100 | * Test updates. 101 | 102 | 103 | 0.1.9 (2017-03-16) 104 | ------------------ 105 | 106 | * Fix minimum scipy version required. 107 | 108 | 109 | 0.1.8 (2017-03-16) 110 | ------------------ 111 | 112 | * Fix missing Pillow dependency. 113 | 114 | 115 | 0.1.7 (2017-03-13) 116 | ------------------ 117 | 118 | * First working release. 119 | -------------------------------------------------------------------------------- /examples/facerec_from_webcam_faster.py: -------------------------------------------------------------------------------- 1 | import face_recognition 2 | import cv2 3 | 4 | # This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the 5 | # other example, but it includes some basic performance tweaks to make things run a lot faster: 6 | # 1. Process each video frame at 1/4 resolution (though still display it at full resolution) 7 | # 2. Only detect faces in every other frame of video. 8 | 9 | # PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam. 10 | # OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this 11 | # specific demo. If you have trouble installing it, try any of the other demos that don't require it instead. 12 | 13 | # Get a reference to webcam #0 (the default one) 14 | video_capture = cv2.VideoCapture(0) 15 | 16 | # Load a sample picture and learn how to recognize it. 17 | obama_image = face_recognition.load_image_file("obama.jpg") 18 | obama_face_encoding = face_recognition.face_encodings(obama_image)[0] 19 | 20 | # Load a second sample picture and learn how to recognize it.
21 | biden_image = face_recognition.load_image_file("biden.jpg") 22 | biden_face_encoding = face_recognition.face_encodings(biden_image)[0] 23 | 24 | # Create arrays of known face encodings and their names 25 | known_face_encodings = [ 26 | obama_face_encoding, 27 | biden_face_encoding 28 | ] 29 | known_face_names = [ 30 | "Barack Obama", 31 | "Joe Biden" 32 | ] 33 | 34 | # Initialize some variables 35 | face_locations = [] 36 | face_encodings = [] 37 | face_names = [] 38 | process_this_frame = True 39 | 40 | while True: 41 | # Grab a single frame of video 42 | ret, frame = video_capture.read() 43 | 44 | # Resize frame of video to 1/4 size for faster face recognition processing 45 | small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25) 46 | 47 | # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses) 48 | rgb_small_frame = small_frame[:, :, ::-1] 49 | 50 | # Only process every other frame of video to save time 51 | if process_this_frame: 52 | # Find all the faces and face encodings in the current frame of video 53 | face_locations = face_recognition.face_locations(rgb_small_frame) 54 | face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations) 55 | 56 | face_names = [] 57 | for face_encoding in face_encodings: 58 | # See if the face is a match for the known face(s) 59 | matches = face_recognition.compare_faces(known_face_encodings, face_encoding) 60 | name = "Unknown" 61 | 62 | # If a match was found in known_face_encodings, just use the first one. 
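An aside on the matching step that follows: taking the first `True` entry in `matches` can pick the wrong person when two known faces both fall within tolerance. A common refinement is to keep the known face with the smallest distance instead (`face_recognition.face_distance` returns those distances). Here is a minimal pure-Python sketch of that idea, with toy 3-dimensional vectors standing in for the real 128-dimensional encodings:

```python
import math

# Toy stand-ins for 128-dimensional face encodings (illustration only).
known_face_encodings = [[0.1, 0.2, 0.3], [0.8, 0.7, 0.9]]
known_face_names = ["Person A", "Person B"]
unknown_encoding = [0.12, 0.21, 0.33]

# Euclidean distance from each known encoding to the unknown one
# (this is what face_recognition.face_distance computes for real encodings).
distances = [math.dist(known, unknown_encoding) for known in known_face_encodings]

# Pick the closest known face, and accept it only if it is within tolerance.
best_index = min(range(len(distances)), key=distances.__getitem__)
name = known_face_names[best_index] if distances[best_index] <= 0.6 else "Unknown"
print(name)  # -> Person A
```

With real encodings, this closest-match logic is a drop-in alternative to the first-`True` lookup used in the loop below.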
63 | if True in matches: 64 | first_match_index = matches.index(True) 65 | name = known_face_names[first_match_index] 66 | 67 | face_names.append(name) 68 | 69 | process_this_frame = not process_this_frame 70 | 71 | 72 | # Display the results 73 | for (top, right, bottom, left), name in zip(face_locations, face_names): 74 | # Scale back up face locations since the frame we detected in was scaled to 1/4 size 75 | top *= 4 76 | right *= 4 77 | bottom *= 4 78 | left *= 4 79 | 80 | # Draw a box around the face 81 | cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2) 82 | 83 | # Draw a label with a name below the face 84 | cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED) 85 | font = cv2.FONT_HERSHEY_DUPLEX 86 | cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1) 87 | 88 | # Display the resulting image 89 | cv2.imshow('Video', frame) 90 | 91 | # Hit 'q' on the keyboard to quit! 92 | if cv2.waitKey(1) & 0xFF == ord('q'): 93 | break 94 | 95 | # Release handle to the webcam 96 | video_capture.release() 97 | cv2.destroyAllWindows() 98 | -------------------------------------------------------------------------------- /face_recognition/face_recognition_cli.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | from __future__ import print_function 3 | import click 4 | import os 5 | import re 6 | import face_recognition.api as face_recognition 7 | import multiprocessing 8 | import itertools 9 | import sys 10 | import PIL.Image 11 | import numpy as np 12 | 13 | 14 | def scan_known_people(known_people_folder): 15 | known_names = [] 16 | known_face_encodings = [] 17 | 18 | for file in image_files_in_folder(known_people_folder): 19 | basename = os.path.splitext(os.path.basename(file))[0] 20 | img = face_recognition.load_image_file(file) 21 | encodings = face_recognition.face_encodings(img) 22 | 23 | if len(encodings) > 1: 24 | 
click.echo("WARNING: More than one face found in {}. Only considering the first face.".format(file)) 25 | 26 | if len(encodings) == 0: 27 | click.echo("WARNING: No faces found in {}. Ignoring file.".format(file)) 28 | else: 29 | known_names.append(basename) 30 | known_face_encodings.append(encodings[0]) 31 | 32 | return known_names, known_face_encodings 33 | 34 | 35 | def print_result(filename, name, distance, show_distance=False): 36 | if show_distance: 37 | print("{},{},{}".format(filename, name, distance)) 38 | else: 39 | print("{},{}".format(filename, name)) 40 | 41 | 42 | def test_image(image_to_check, known_names, known_face_encodings, tolerance=0.6, show_distance=False): 43 | unknown_image = face_recognition.load_image_file(image_to_check) 44 | 45 | # Scale down image if it's giant so things run a little faster 46 | if max(unknown_image.shape) > 1600: 47 | pil_img = PIL.Image.fromarray(unknown_image) 48 | pil_img.thumbnail((1600, 1600), PIL.Image.LANCZOS) 49 | unknown_image = np.array(pil_img) 50 | 51 | unknown_encodings = face_recognition.face_encodings(unknown_image) 52 | 53 | for unknown_encoding in unknown_encodings: 54 | distances = face_recognition.face_distance(known_face_encodings, unknown_encoding) 55 | result = list(distances <= tolerance) 56 | 57 | if True in result: 58 | [print_result(image_to_check, name, distance, show_distance) for is_match, name, distance in zip(result, known_names, distances) if is_match] 59 | else: 60 | print_result(image_to_check, "unknown_person", None, show_distance) 61 | 62 | if not unknown_encodings: 63 | # print out fact that no faces were found in image 64 | print_result(image_to_check, "no_persons_found", None, show_distance) 65 | 66 | 67 | def image_files_in_folder(folder): 68 | return [os.path.join(folder, f) for f in os.listdir(folder) if re.match(r'.*\.(jpg|jpeg|png)', f, flags=re.I)] 69 | 70 | 71 | def process_images_in_process_pool(images_to_check, known_names, known_face_encodings, number_of_cpus, tolerance, 
show_distance): 72 | if number_of_cpus == -1: 73 | processes = None 74 | else: 75 | processes = number_of_cpus 76 | 77 | # macOS will crash due to a bug in libdispatch if you don't use 'forkserver' 78 | context = multiprocessing 79 | if "forkserver" in multiprocessing.get_all_start_methods(): 80 | context = multiprocessing.get_context("forkserver") 81 | 82 | pool = context.Pool(processes=processes) 83 | 84 | function_parameters = zip( 85 | images_to_check, 86 | itertools.repeat(known_names), 87 | itertools.repeat(known_face_encodings), 88 | itertools.repeat(tolerance), 89 | itertools.repeat(show_distance) 90 | ) 91 | 92 | pool.starmap(test_image, function_parameters) 93 | 94 | 95 | @click.command() 96 | @click.argument('known_people_folder') 97 | @click.argument('image_to_check') 98 | @click.option('--cpus', default=1, help='number of CPU cores to use in parallel (can speed up processing lots of images). -1 means "use all in system"') 99 | @click.option('--tolerance', default=0.6, help='Tolerance for face comparisons. Default is 0.6. Lower this if you get multiple matches for the same person.') 100 | @click.option('--show-distance', default=False, type=bool, help='Output face distance. Useful for tweaking tolerance setting.') 101 | def main(known_people_folder, image_to_check, cpus, tolerance, show_distance): 102 | known_names, known_face_encodings = scan_known_people(known_people_folder) 103 | 104 | # Multi-core processing only supported on Python 3.4 or greater 105 | if (sys.version_info < (3, 4)) and cpus != 1: 106 | click.echo("WARNING: Multi-processing support requires Python 3.4 or greater. 
Falling back to single-threaded processing!") 107 | cpus = 1 108 | 109 | if os.path.isdir(image_to_check): 110 | if cpus == 1: 111 | [test_image(image_file, known_names, known_face_encodings, tolerance, show_distance) for image_file in image_files_in_folder(image_to_check)] 112 | else: 113 | process_images_in_process_pool(image_files_in_folder(image_to_check), known_names, known_face_encodings, cpus, tolerance, show_distance) 114 | else: 115 | test_image(image_to_check, known_names, known_face_encodings, tolerance, show_distance) 116 | 117 | 118 | if __name__ == "__main__": 119 | main() 120 | -------------------------------------------------------------------------------- /examples/web_service_example.py: -------------------------------------------------------------------------------- 1 | # This is a _very simple_ example of a web service that recognizes faces in uploaded images. 2 | # Upload an image file and it will check if the image contains a picture of Barack Obama. 3 | # The result is returned as json. For example: 4 | # 5 | # $ curl -XPOST -F "file=@obama2.jpg" http://127.0.0.1:5001 6 | # 7 | # Returns: 8 | # 9 | # { 10 | # "face_found_in_image": true, 11 | # "is_picture_of_obama": true 12 | # } 13 | # 14 | # This example is based on the Flask file upload example: http://flask.pocoo.org/docs/0.12/patterns/fileuploads/ 15 | 16 | # NOTE: This example requires flask to be installed! You can install it with pip: 17 | # $ pip3 install flask 18 | 19 | import face_recognition 20 | from flask import Flask, jsonify, request, redirect 21 | 22 | # You can change this to any folder on your system 23 | ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'} 24 | 25 | app = Flask(__name__) 26 | 27 | 28 | def allowed_file(filename): 29 | return '.' 
in filename and \ 30 | filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS 31 | 32 | 33 | @app.route('/', methods=['GET', 'POST']) 34 | def upload_image(): 35 | # Check if a valid image file was uploaded 36 | if request.method == 'POST': 37 | if 'file' not in request.files: 38 | return redirect(request.url) 39 | 40 | file = request.files['file'] 41 | 42 | if file.filename == '': 43 | return redirect(request.url) 44 | 45 | if file and allowed_file(file.filename): 46 | # The image file seems valid! Detect faces and return the result. 47 | return detect_faces_in_image(file) 48 | 49 | # If no valid image file was uploaded, show the file upload form: 50 | return ''' 51 | <!doctype html> 52 | <title>Is this a picture of Obama?</title> 53 | <h1>Upload a picture and see if it's a picture of Obama!</h1> 54 | <form method="POST" enctype="multipart/form-data"> 55 | <input type="file" name="file"> 56 | <input type="submit" value="Upload"> 57 | </form>
58 | ''' 59 | 60 | 61 | def detect_faces_in_image(file_stream): 62 | # Pre-calculated face encoding of Obama generated with face_recognition.face_encodings(img) 63 | known_face_encoding = [-0.09634063, 0.12095481, -0.00436332, -0.07643753, 0.0080383, 64 | 0.01902981, -0.07184699, -0.09383309, 0.18518871, -0.09588896, 65 | 0.23951106, 0.0986533 , -0.22114635, -0.1363683 , 0.04405268, 66 | 0.11574756, -0.19899382, -0.09597053, -0.11969153, -0.12277931, 67 | 0.03416885, -0.00267565, 0.09203379, 0.04713435, -0.12731361, 68 | -0.35371891, -0.0503444 , -0.17841317, -0.00310897, -0.09844551, 69 | -0.06910533, -0.00503746, -0.18466514, -0.09851682, 0.02903969, 70 | -0.02174894, 0.02261871, 0.0032102 , 0.20312519, 0.02999607, 71 | -0.11646006, 0.09432904, 0.02774341, 0.22102901, 0.26725179, 72 | 0.06896867, -0.00490024, -0.09441824, 0.11115381, -0.22592428, 73 | 0.06230862, 0.16559327, 0.06232892, 0.03458837, 0.09459756, 74 | -0.18777156, 0.00654241, 0.08582542, -0.13578284, 0.0150229 , 75 | 0.00670836, -0.08195844, -0.04346499, 0.03347827, 0.20310158, 76 | 0.09987706, -0.12370517, -0.06683611, 0.12704916, -0.02160804, 77 | 0.00984683, 0.00766284, -0.18980607, -0.19641446, -0.22800779, 78 | 0.09010898, 0.39178532, 0.18818057, -0.20875394, 0.03097027, 79 | -0.21300618, 0.02532415, 0.07938635, 0.01000703, -0.07719778, 80 | -0.12651891, -0.04318593, 0.06219772, 0.09163868, 0.05039065, 81 | -0.04922386, 0.21839413, -0.02394437, 0.06173781, 0.0292527 , 82 | 0.06160797, -0.15553983, -0.02440624, -0.17509389, -0.0630486 , 83 | 0.01428208, -0.03637431, 0.03971229, 0.13983178, -0.23006812, 84 | 0.04999552, 0.0108454 , -0.03970895, 0.02501768, 0.08157793, 85 | -0.03224047, -0.04502571, 0.0556995 , -0.24374914, 0.25514284, 86 | 0.24795187, 0.04060191, 0.17597422, 0.07966681, 0.01920104, 87 | -0.01194376, -0.02300822, -0.17204897, -0.0596558 , 0.05307484, 88 | 0.07417042, 0.07126575, 0.00209804] 89 | 90 | # Load the uploaded image file 91 | img = 
face_recognition.load_image_file(file_stream) 92 | # Get face encodings for any faces in the uploaded image 93 | unknown_face_encodings = face_recognition.face_encodings(img) 94 | 95 | face_found = False 96 | is_obama = False 97 | 98 | if len(unknown_face_encodings) > 0: 99 | face_found = True 100 | # See if the first face in the uploaded image matches the known face of Obama 101 | match_results = face_recognition.compare_faces([known_face_encoding], unknown_face_encodings[0]) 102 | if match_results[0]: 103 | is_obama = True 104 | 105 | # Return the result as json 106 | result = { 107 | "face_found_in_image": face_found, 108 | "is_picture_of_obama": is_obama 109 | } 110 | return jsonify(result) 111 | 112 | if __name__ == "__main__": 113 | app.run(host='0.0.0.0', port=5001, debug=True) 114 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext 23 | 24 | help: 25 | @echo "Please use \`make ' where is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " devhelp to make HTML files and a Devhelp project" 34 | @echo " epub to make an epub" 35 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 36 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 37 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 38 | @echo " text to make text files" 39 | @echo " man to make manual pages" 40 | @echo " texinfo to make Texinfo files" 41 | @echo " info to make Texinfo files and run them through makeinfo" 42 | @echo " gettext to make PO message catalogs" 43 | @echo " changes to make an overview of all changed/added/deprecated items" 44 | @echo " xml to make Docutils-native XML files" 45 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 46 | @echo " linkcheck to check all external links for integrity" 47 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 48 | 49 | clean: 50 | rm -rf $(BUILDDIR)/* 51 | 52 | html: 53 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 54 | @echo 55 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 56 | 57 | dirhtml: 58 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 59 | @echo 60 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 
61 | 62 | singlehtml: 63 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 64 | @echo 65 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 66 | 67 | pickle: 68 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 69 | @echo 70 | @echo "Build finished; now you can process the pickle files." 71 | 72 | json: 73 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 74 | @echo 75 | @echo "Build finished; now you can process the JSON files." 76 | 77 | htmlhelp: 78 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 79 | @echo 80 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 81 | ".hhp project file in $(BUILDDIR)/htmlhelp." 82 | 83 | qthelp: 84 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 85 | @echo 86 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 87 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 88 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/face_recognition.qhcp" 89 | @echo "To view the help file:" 90 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/face_recognition.qhc" 91 | 92 | devhelp: 93 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 94 | @echo 95 | @echo "Build finished." 96 | @echo "To view the help file:" 97 | @echo "# mkdir -p $$HOME/.local/share/devhelp/face_recognition" 98 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/face_recognition" 99 | @echo "# devhelp" 100 | 101 | epub: 102 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 103 | @echo 104 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 105 | 106 | latex: 107 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 108 | @echo 109 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 110 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 111 | "(use \`make latexpdf' here to do that automatically)." 
112 | 113 | latexpdf: 114 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 115 | @echo "Running LaTeX files through pdflatex..." 116 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 117 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 118 | 119 | latexpdfja: 120 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 121 | @echo "Running LaTeX files through platex and dvipdfmx..." 122 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 123 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 124 | 125 | text: 126 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 127 | @echo 128 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 129 | 130 | man: 131 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 132 | @echo 133 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 134 | 135 | texinfo: 136 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 137 | @echo 138 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 139 | @echo "Run \`make' in that directory to run these through makeinfo" \ 140 | "(use \`make info' here to do that automatically)." 141 | 142 | info: 143 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 144 | @echo "Running Texinfo files through makeinfo..." 145 | make -C $(BUILDDIR)/texinfo info 146 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 147 | 148 | gettext: 149 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 150 | @echo 151 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 152 | 153 | changes: 154 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 155 | @echo 156 | @echo "The overview file is in $(BUILDDIR)/changes." 157 | 158 | linkcheck: 159 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 160 | @echo 161 | @echo "Link check complete; look for any errors in the above output " \ 162 | "or in $(BUILDDIR)/linkcheck/output.txt." 
163 | 164 | doctest: 165 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 166 | @echo "Testing of doctests in the sources finished, look at the " \ 167 | "results in $(BUILDDIR)/doctest/output.txt." 168 | 169 | xml: 170 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 171 | @echo 172 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 173 | 174 | pseudoxml: 175 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 176 | @echo 177 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 178 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | set I18NSPHINXOPTS=%SPHINXOPTS% . 11 | if NOT "%PAPER%" == "" ( 12 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 13 | set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% 14 | ) 15 | 16 | if "%1" == "" goto help 17 | 18 | if "%1" == "help" ( 19 | :help 20 | echo.Please use `make ^` where ^ is one of 21 | echo. html to make standalone HTML files 22 | echo. dirhtml to make HTML files named index.html in directories 23 | echo. singlehtml to make a single large HTML file 24 | echo. pickle to make pickle files 25 | echo. json to make JSON files 26 | echo. htmlhelp to make HTML files and a HTML help project 27 | echo. qthelp to make HTML files and a qthelp project 28 | echo. devhelp to make HTML files and a Devhelp project 29 | echo. epub to make an epub 30 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 31 | echo. text to make text files 32 | echo. man to make manual pages 33 | echo. texinfo to make Texinfo files 34 | echo. 
gettext to make PO message catalogs 35 | echo. changes to make an overview over all changed/added/deprecated items 36 | echo. xml to make Docutils-native XML files 37 | echo. pseudoxml to make pseudoxml-XML files for display purposes 38 | echo. linkcheck to check all external links for integrity 39 | echo. doctest to run all doctests embedded in the documentation if enabled 40 | goto end 41 | ) 42 | 43 | if "%1" == "clean" ( 44 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 45 | del /q /s %BUILDDIR%\* 46 | goto end 47 | ) 48 | 49 | 50 | %SPHINXBUILD% 2> nul 51 | if errorlevel 9009 ( 52 | echo. 53 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 54 | echo.installed, then set the SPHINXBUILD environment variable to point 55 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 56 | echo.may add the Sphinx directory to PATH. 57 | echo. 58 | echo.If you don't have Sphinx installed, grab it from 59 | echo.http://sphinx-doc.org/ 60 | exit /b 1 61 | ) 62 | 63 | if "%1" == "html" ( 64 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 65 | if errorlevel 1 exit /b 1 66 | echo. 67 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 68 | goto end 69 | ) 70 | 71 | if "%1" == "dirhtml" ( 72 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 73 | if errorlevel 1 exit /b 1 74 | echo. 75 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 76 | goto end 77 | ) 78 | 79 | if "%1" == "singlehtml" ( 80 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 81 | if errorlevel 1 exit /b 1 82 | echo. 83 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 84 | goto end 85 | ) 86 | 87 | if "%1" == "pickle" ( 88 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 89 | if errorlevel 1 exit /b 1 90 | echo. 91 | echo.Build finished; now you can process the pickle files. 
92 | goto end 93 | ) 94 | 95 | if "%1" == "json" ( 96 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 97 | if errorlevel 1 exit /b 1 98 | echo. 99 | echo.Build finished; now you can process the JSON files. 100 | goto end 101 | ) 102 | 103 | if "%1" == "htmlhelp" ( 104 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 105 | if errorlevel 1 exit /b 1 106 | echo. 107 | echo.Build finished; now you can run HTML Help Workshop with the ^ 108 | .hhp project file in %BUILDDIR%/htmlhelp. 109 | goto end 110 | ) 111 | 112 | if "%1" == "qthelp" ( 113 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 114 | if errorlevel 1 exit /b 1 115 | echo. 116 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 117 | .qhcp project file in %BUILDDIR%/qthelp, like this: 118 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\face_recognition.qhcp 119 | echo.To view the help file: 120 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\face_recognition.qhc 121 | goto end 122 | ) 123 | 124 | if "%1" == "devhelp" ( 125 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 126 | if errorlevel 1 exit /b 1 127 | echo. 128 | echo.Build finished. 129 | goto end 130 | ) 131 | 132 | if "%1" == "epub" ( 133 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 134 | if errorlevel 1 exit /b 1 135 | echo. 136 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 137 | goto end 138 | ) 139 | 140 | if "%1" == "latex" ( 141 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 142 | if errorlevel 1 exit /b 1 143 | echo. 144 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 145 | goto end 146 | ) 147 | 148 | if "%1" == "latexpdf" ( 149 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 150 | cd %BUILDDIR%/latex 151 | make all-pdf 152 | cd %BUILDDIR%/.. 153 | echo. 154 | echo.Build finished; the PDF files are in %BUILDDIR%/latex.
155 | goto end 156 | ) 157 | 158 | if "%1" == "latexpdfja" ( 159 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 160 | cd %BUILDDIR%/latex 161 | make all-pdf-ja 162 | cd %BUILDDIR%/.. 163 | echo. 164 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 165 | goto end 166 | ) 167 | 168 | if "%1" == "text" ( 169 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 170 | if errorlevel 1 exit /b 1 171 | echo. 172 | echo.Build finished. The text files are in %BUILDDIR%/text. 173 | goto end 174 | ) 175 | 176 | if "%1" == "man" ( 177 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 178 | if errorlevel 1 exit /b 1 179 | echo. 180 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 181 | goto end 182 | ) 183 | 184 | if "%1" == "texinfo" ( 185 | %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo 186 | if errorlevel 1 exit /b 1 187 | echo. 188 | echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 189 | goto end 190 | ) 191 | 192 | if "%1" == "gettext" ( 193 | %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale 194 | if errorlevel 1 exit /b 1 195 | echo. 196 | echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 197 | goto end 198 | ) 199 | 200 | if "%1" == "changes" ( 201 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 202 | if errorlevel 1 exit /b 1 203 | echo. 204 | echo.The overview file is in %BUILDDIR%/changes. 205 | goto end 206 | ) 207 | 208 | if "%1" == "linkcheck" ( 209 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 210 | if errorlevel 1 exit /b 1 211 | echo. 212 | echo.Link check complete; look for any errors in the above output ^ 213 | or in %BUILDDIR%/linkcheck/output.txt. 214 | goto end 215 | ) 216 | 217 | if "%1" == "doctest" ( 218 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 219 | if errorlevel 1 exit /b 1 220 | echo. 
221 | echo.Testing of doctests in the sources finished, look at the ^ 222 | results in %BUILDDIR%/doctest/output.txt. 223 | goto end 224 | ) 225 | 226 | if "%1" == "xml" ( 227 | %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml 228 | if errorlevel 1 exit /b 1 229 | echo. 230 | echo.Build finished. The XML files are in %BUILDDIR%/xml. 231 | goto end 232 | ) 233 | 234 | if "%1" == "pseudoxml" ( 235 | %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml 236 | if errorlevel 1 exit /b 1 237 | echo. 238 | echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. 239 | goto end 240 | ) 241 | 242 | :end 243 | -------------------------------------------------------------------------------- /examples/face_recognition_knn.py: -------------------------------------------------------------------------------- 1 | """ 2 | This is an example of using the k-nearest-neighbors (KNN) algorithm for face recognition. 3 | 4 | When should I use this example? 5 | This example is useful when you wish to recognize a large set of known people, 6 | and make a prediction for an unknown person in a feasible computation time. 7 | 8 | Algorithm Description: 9 | The knn classifier is first trained on a set of labeled (known) faces and can then predict the person 10 | in an unknown image by finding the k most similar faces (images with the closest face features under euclidean distance) 11 | in its training set, and performing a majority vote (possibly weighted) on their label. 12 | 13 | For example, if k=3, and the three closest face images to the given image in the training set are one image of Biden 14 | and two images of Obama, the result would be 'Obama'. 15 | 16 | * This implementation uses a weighted vote, such that the votes of closer neighbors are weighted more heavily. 17 | 18 | Usage: 19 | 20 | 1. Prepare a set of images of the known people you want to recognize. Organize the images in a single directory 21 | with a sub-directory for each known person. 22 | 23 | 2. 
Then, call the 'train' function with the appropriate parameters. Make sure to pass in the 'model_save_path' if you 24 | want to save the model to disk so you can re-use the model without having to re-train it. 25 | 26 | 3. Call 'predict' and pass in your trained model to recognize the people in an unknown image. 27 | 28 | NOTE: This example requires scikit-learn to be installed! You can install it with pip: 29 | 30 | $ pip3 install scikit-learn 31 | 32 | """ 33 | 34 | import math 35 | from sklearn import neighbors 36 | import os 37 | import os.path 38 | import pickle 39 | from PIL import Image, ImageDraw 40 | import face_recognition 41 | from face_recognition.face_recognition_cli import image_files_in_folder 42 | 43 | ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'} 44 | 45 | 46 | def train(train_dir, model_save_path=None, n_neighbors=None, knn_algo='ball_tree', verbose=False): 47 | """ 48 | Trains a k-nearest neighbors classifier for face recognition. 49 | 50 | :param train_dir: directory that contains a sub-directory for each known person, with its name. 51 | 52 | (View in source code to see train_dir example tree structure) 53 | 54 | Structure: 55 | <train_dir>/ 56 | ├── <person1>/ 57 | │ ├── <somename1>.jpeg 58 | │ ├── <somename2>.jpeg 59 | │ ├── ... 60 | ├── <person2>/ 61 | │ ├── <somename1>.jpeg 62 | │ └── <somename2>.jpeg 63 | └── ... 64 | 65 | :param model_save_path: (optional) path to save the model on disk 66 | :param n_neighbors: (optional) number of neighbors to weigh in classification. Chosen automatically if not specified 67 | :param knn_algo: (optional) underlying data structure to support knn. Default is ball_tree 68 | :param verbose: verbosity of training 69 | :return: returns the knn classifier that was trained on the given data.
70 | """ 71 | X = [] 72 | y = [] 73 | 74 | # Loop through each person in the training set 75 | for class_dir in os.listdir(train_dir): 76 | if not os.path.isdir(os.path.join(train_dir, class_dir)): 77 | continue 78 | 79 | # Loop through each training image for the current person 80 | for img_path in image_files_in_folder(os.path.join(train_dir, class_dir)): 81 | image = face_recognition.load_image_file(img_path) 82 | face_bounding_boxes = face_recognition.face_locations(image) 83 | 84 | if len(face_bounding_boxes) != 1: 85 | # If there are no people (or too many people) in a training image, skip the image. 86 | if verbose: 87 | print("Image {} not suitable for training: {}".format(img_path, "Didn't find a face" if len(face_bounding_boxes) < 1 else "Found more than one face")) 88 | else: 89 | # Add face encoding for current image to the training set 90 | X.append(face_recognition.face_encodings(image, known_face_locations=face_bounding_boxes)[0]) 91 | y.append(class_dir) 92 | 93 | # Determine how many neighbors to use for weighting in the KNN classifier 94 | if n_neighbors is None: 95 | n_neighbors = int(round(math.sqrt(len(X)))) 96 | if verbose: 97 | print("Chose n_neighbors automatically:", n_neighbors) 98 | 99 | # Create and train the KNN classifier 100 | knn_clf = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors, algorithm=knn_algo, weights='distance') 101 | knn_clf.fit(X, y) 102 | 103 | # Save the trained KNN classifier 104 | if model_save_path is not None: 105 | with open(model_save_path, 'wb') as f: 106 | pickle.dump(knn_clf, f) 107 | 108 | return knn_clf 109 | 110 | 111 | def predict(X_img_path, knn_clf=None, model_path=None, distance_threshold=0.6): 112 | """ 113 | Recognizes faces in given image using a trained KNN classifier 114 | 115 | :param X_img_path: path to image to be recognized 116 | :param knn_clf: (optional) a knn classifier object. if not specified, model_save_path must be specified. 
117 | :param model_path: (optional) path to a pickled knn classifier. if not specified, knn_clf must be specified. 118 | :param distance_threshold: (optional) distance threshold for face classification. the larger it is, the more chance 119 | of mis-classifying an unknown person as a known one. 120 | :return: a list of names and face locations for the recognized faces in the image: [(name, bounding box), ...]. 121 | For faces of unrecognized persons, the name 'unknown' will be returned. 122 | """ 123 | if not os.path.isfile(X_img_path) or os.path.splitext(X_img_path)[1][1:] not in ALLOWED_EXTENSIONS: 124 | raise Exception("Invalid image path: {}".format(X_img_path)) 125 | 126 | if knn_clf is None and model_path is None: 127 | raise Exception("Must supply knn classifier either through knn_clf or model_path") 128 | 129 | # Load a trained KNN model (if one was passed in) 130 | if knn_clf is None: 131 | with open(model_path, 'rb') as f: 132 | knn_clf = pickle.load(f) 133 | 134 | # Load image file and find face locations 135 | X_img = face_recognition.load_image_file(X_img_path) 136 | X_face_locations = face_recognition.face_locations(X_img) 137 | 138 | # If no faces are found in the image, return an empty result. 
139 | if len(X_face_locations) == 0: 140 | return [] 141 | 142 | # Find encodings for faces in the test image 143 | faces_encodings = face_recognition.face_encodings(X_img, known_face_locations=X_face_locations) 144 | 145 | # Use the KNN model to find the best matches for the test face 146 | closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=1) 147 | are_matches = [closest_distances[0][i][0] <= distance_threshold for i in range(len(X_face_locations))] 148 | 149 | # Predict classes and remove classifications that aren't within the threshold 150 | return [(pred, loc) if rec else ("unknown", loc) for pred, loc, rec in zip(knn_clf.predict(faces_encodings), X_face_locations, are_matches)] 151 | 152 | 153 | def show_prediction_labels_on_image(img_path, predictions): 154 | """ 155 | Shows the face recognition results visually. 156 | 157 | :param img_path: path to image to be recognized 158 | :param predictions: results of the predict function 159 | :return: 160 | """ 161 | pil_image = Image.open(img_path).convert("RGB") 162 | draw = ImageDraw.Draw(pil_image) 163 | 164 | for name, (top, right, bottom, left) in predictions: 165 | # Draw a box around the face using the Pillow module 166 | draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255)) 167 | 168 | # There's a bug in Pillow where it blows up with non-UTF-8 text 169 | # when using the default bitmap font 170 | name = name.encode("UTF-8") 171 | 172 | # Draw a label with a name below the face 173 | text_width, text_height = draw.textsize(name) 174 | draw.rectangle(((left, bottom - text_height - 10), (right, bottom)), fill=(0, 0, 255), outline=(0, 0, 255)) 175 | draw.text((left + 6, bottom - text_height - 5), name, fill=(255, 255, 255, 255)) 176 | 177 | # Remove the drawing library from memory as per the Pillow docs 178 | del draw 179 | 180 | # Display the resulting image 181 | pil_image.show() 182 | 183 | 184 | if __name__ == "__main__": 185 | # STEP 1: Train the KNN classifier and save it to 
disk 186 | # Once the model is trained and saved, you can skip this step next time. 187 | print("Training KNN classifier...") 188 | classifier = train("knn_examples/train", model_save_path="trained_knn_model.clf", n_neighbors=2) 189 | print("Training complete!") 190 | 191 | # STEP 2: Using the trained classifier, make predictions for unknown images 192 | for image_file in os.listdir("knn_examples/test"): 193 | full_file_path = os.path.join("knn_examples/test", image_file) 194 | 195 | print("Looking for faces in {}".format(image_file)) 196 | 197 | # Find all people in the image using a trained classifier model 198 | # Note: You can pass in either a classifier file name or a classifier model instance 199 | predictions = predict(full_file_path, model_path="trained_knn_model.clf") 200 | 201 | # Print results on the console 202 | for name, (top, right, bottom, left) in predictions: 203 | print("- Found {} at ({}, {})".format(name, left, top)) 204 | 205 | # Display results overlaid on an image 206 | show_prediction_labels_on_image(os.path.join("knn_examples/test", image_file), predictions) 207 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # face_recognition documentation build configuration file, created by 5 | # sphinx-quickstart on Tue Jul 9 22:26:36 2013. 6 | # 7 | # This file is execfile()d with the current directory set to its 8 | # containing dir. 9 | # 10 | # Note that not all possible configuration values are present in this 11 | # autogenerated file. 12 | # 13 | # All configuration values have a default; values that are commented out 14 | # serve to show the default. 
15 | 16 | import sys 17 | import os 18 | from unittest.mock import MagicMock 19 | 20 | class Mock(MagicMock): 21 | @classmethod 22 | def __getattr__(cls, name): 23 | return MagicMock() 24 | 25 | MOCK_MODULES = ['face_recognition_models', 'Click', 'dlib', 'numpy', 'PIL'] 26 | sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES) 27 | 28 | # If extensions (or modules to document with autodoc) are in another 29 | # directory, add these directories to sys.path here. If the directory is 30 | # relative to the documentation root, use os.path.abspath to make it 31 | # absolute, like shown here. 32 | #sys.path.insert(0, os.path.abspath('.')) 33 | 34 | # Get the project root dir, which is the parent dir of this 35 | cwd = os.getcwd() 36 | project_root = os.path.dirname(cwd) 37 | 38 | # Insert the project root dir as the first element in the PYTHONPATH. 39 | # This lets us ensure that the source package is imported, and that its 40 | # version is used. 41 | sys.path.insert(0, project_root) 42 | 43 | import face_recognition 44 | 45 | # -- General configuration --------------------------------------------- 46 | 47 | # If your documentation needs a minimal Sphinx version, state it here. 48 | #needs_sphinx = '1.0' 49 | 50 | # Add any Sphinx extension module names here, as strings. They can be 51 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 52 | extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode'] 53 | 54 | # Add any paths that contain templates here, relative to this directory. 55 | templates_path = ['_templates'] 56 | 57 | # The suffix of source filenames. 58 | source_suffix = '.rst' 59 | 60 | # The encoding of source files. 61 | #source_encoding = 'utf-8-sig' 62 | 63 | # The master toctree document. 64 | master_doc = 'index' 65 | 66 | # General information about the project. 
67 | project = u'Face Recognition' 68 | copyright = u"2017, Adam Geitgey" 69 | 70 | # The version info for the project you're documenting, acts as replacement 71 | # for |version| and |release|, also used in various other places throughout 72 | # the built documents. 73 | # 74 | # The short X.Y version. 75 | version = face_recognition.__version__ 76 | # The full version, including alpha/beta/rc tags. 77 | release = face_recognition.__version__ 78 | 79 | # The language for content autogenerated by Sphinx. Refer to documentation 80 | # for a list of supported languages. 81 | #language = None 82 | 83 | # There are two options for replacing |today|: either, you set today to 84 | # some non-false value, then it is used: 85 | #today = '' 86 | # Else, today_fmt is used as the format for a strftime call. 87 | #today_fmt = '%B %d, %Y' 88 | 89 | # List of patterns, relative to source directory, that match files and 90 | # directories to ignore when looking for source files. 91 | exclude_patterns = ['_build'] 92 | 93 | # The reST default role (used for this markup: `text`) to use for all 94 | # documents. 95 | #default_role = None 96 | 97 | # If true, '()' will be appended to :func: etc. cross-reference text. 98 | #add_function_parentheses = True 99 | 100 | # If true, the current module name will be prepended to all description 101 | # unit titles (such as .. function::). 102 | #add_module_names = True 103 | 104 | # If true, sectionauthor and moduleauthor directives will be shown in the 105 | # output. They are ignored by default. 106 | #show_authors = False 107 | 108 | # The name of the Pygments (syntax highlighting) style to use. 109 | pygments_style = 'sphinx' 110 | 111 | # A list of ignored prefixes for module index sorting. 112 | #modindex_common_prefix = [] 113 | 114 | # If true, keep warnings as "system message" paragraphs in the built 115 | # documents. 
116 | #keep_warnings = False 117 | 118 | 119 | # -- Options for HTML output ------------------------------------------- 120 | 121 | # The theme to use for HTML and HTML Help pages. See the documentation for 122 | # a list of builtin themes. 123 | html_theme = 'default' 124 | 125 | # Theme options are theme-specific and customize the look and feel of a 126 | # theme further. For a list of options available for each theme, see the 127 | # documentation. 128 | #html_theme_options = {} 129 | 130 | # Add any paths that contain custom themes here, relative to this directory. 131 | #html_theme_path = [] 132 | 133 | # The name for this set of Sphinx documents. If None, it defaults to 134 | # " v documentation". 135 | #html_title = None 136 | 137 | # A shorter title for the navigation bar. Default is the same as 138 | # html_title. 139 | #html_short_title = None 140 | 141 | # The name of an image file (relative to this directory) to place at the 142 | # top of the sidebar. 143 | #html_logo = None 144 | 145 | # The name of an image file (within the static path) to use as favicon 146 | # of the docs. This file should be a Windows icon file (.ico) being 147 | # 16x16 or 32x32 pixels large. 148 | #html_favicon = None 149 | 150 | # Add any paths that contain custom static files (such as style sheets) 151 | # here, relative to this directory. They are copied after the builtin 152 | # static files, so a file named "default.css" will overwrite the builtin 153 | # "default.css". 154 | html_static_path = ['_static'] 155 | 156 | # If not '', a 'Last updated on:' timestamp is inserted at every page 157 | # bottom, using the given strftime format. 158 | #html_last_updated_fmt = '%b %d, %Y' 159 | 160 | # If true, SmartyPants will be used to convert quotes and dashes to 161 | # typographically correct entities. 162 | #html_use_smartypants = True 163 | 164 | # Custom sidebar templates, maps document names to template names. 
165 | #html_sidebars = {} 166 | 167 | # Additional templates that should be rendered to pages, maps page names 168 | # to template names. 169 | #html_additional_pages = {} 170 | 171 | # If false, no module index is generated. 172 | #html_domain_indices = True 173 | 174 | # If false, no index is generated. 175 | #html_use_index = True 176 | 177 | # If true, the index is split into individual pages for each letter. 178 | #html_split_index = False 179 | 180 | # If true, links to the reST sources are added to the pages. 181 | #html_show_sourcelink = True 182 | 183 | # If true, "Created using Sphinx" is shown in the HTML footer. 184 | # Default is True. 185 | #html_show_sphinx = True 186 | 187 | # If true, "(C) Copyright ..." is shown in the HTML footer. 188 | # Default is True. 189 | #html_show_copyright = True 190 | 191 | # If true, an OpenSearch description file will be output, and all pages 192 | # will contain a tag referring to it. The value of this option 193 | # must be the base URL from which the finished HTML is served. 194 | #html_use_opensearch = '' 195 | 196 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 197 | #html_file_suffix = None 198 | 199 | # Output file base name for HTML help builder. 200 | htmlhelp_basename = 'face_recognitiondoc' 201 | 202 | 203 | # -- Options for LaTeX output ------------------------------------------ 204 | 205 | latex_elements = { 206 | # The paper size ('letterpaper' or 'a4paper'). 207 | #'papersize': 'letterpaper', 208 | 209 | # The font size ('10pt', '11pt' or '12pt'). 210 | #'pointsize': '10pt', 211 | 212 | # Additional stuff for the LaTeX preamble. 213 | #'preamble': '', 214 | } 215 | 216 | # Grouping the document tree into LaTeX files. List of tuples 217 | # (source start file, target name, title, author, documentclass 218 | # [howto/manual]). 
219 | latex_documents = [ 220 | ('index', 'face_recognition.tex', 221 | u'Face Recognition Documentation', 222 | u'Adam Geitgey', 'manual'), 223 | ] 224 | 225 | # The name of an image file (relative to this directory) to place at 226 | # the top of the title page. 227 | #latex_logo = None 228 | 229 | # For "manual" documents, if this is true, then toplevel headings 230 | # are parts, not chapters. 231 | #latex_use_parts = False 232 | 233 | # If true, show page references after internal links. 234 | #latex_show_pagerefs = False 235 | 236 | # If true, show URL addresses after external links. 237 | #latex_show_urls = False 238 | 239 | # Documents to append as an appendix to all manuals. 240 | #latex_appendices = [] 241 | 242 | # If false, no module index is generated. 243 | #latex_domain_indices = True 244 | 245 | 246 | # -- Options for manual page output ------------------------------------ 247 | 248 | # One entry per manual page. List of tuples 249 | # (source start file, name, description, authors, manual section). 250 | man_pages = [ 251 | ('index', 'face_recognition', 252 | u'Face Recognition Documentation', 253 | [u'Adam Geitgey'], 1) 254 | ] 255 | 256 | # If true, show URL addresses after external links. 257 | #man_show_urls = False 258 | 259 | 260 | # -- Options for Texinfo output ---------------------------------------- 261 | 262 | # Grouping the document tree into Texinfo files. List of tuples 263 | # (source start file, target name, title, author, 264 | # dir menu entry, description, category) 265 | texinfo_documents = [ 266 | ('index', 'face_recognition', 267 | u'Face Recognition Documentation', 268 | u'Adam Geitgey', 269 | 'face_recognition', 270 | 'One line description of project.', 271 | 'Miscellaneous'), 272 | ] 273 | 274 | # Documents to append as an appendix to all manuals. 275 | #texinfo_appendices = [] 276 | 277 | # If false, no module index is generated. 
278 | #texinfo_domain_indices = True 279 | 280 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 281 | #texinfo_show_urls = 'footnote' 282 | 283 | # If true, do not generate a @detailmenu in the "Top" node's menu. 284 | #texinfo_no_detailmenu = False 285 | -------------------------------------------------------------------------------- /face_recognition/api.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import PIL.Image 4 | import dlib 5 | import numpy as np 6 | 7 | try: 8 | import face_recognition_models 9 | except Exception: 10 | print("Please install `face_recognition_models` with this command before using `face_recognition`:\n") 11 | print("pip install git+https://github.com/ageitgey/face_recognition_models") 12 | quit() 13 | 14 | face_detector = dlib.get_frontal_face_detector() 15 | 16 | predictor_68_point_model = face_recognition_models.pose_predictor_model_location() 17 | pose_predictor_68_point = dlib.shape_predictor(predictor_68_point_model) 18 | 19 | predictor_5_point_model = face_recognition_models.pose_predictor_five_point_model_location() 20 | pose_predictor_5_point = dlib.shape_predictor(predictor_5_point_model) 21 | 22 | cnn_face_detection_model = face_recognition_models.cnn_face_detector_model_location() 23 | cnn_face_detector = dlib.cnn_face_detection_model_v1(cnn_face_detection_model) 24 | 25 | face_recognition_model = face_recognition_models.face_recognition_model_location() 26 | face_encoder = dlib.face_recognition_model_v1(face_recognition_model) 27 | 28 | 29 | def _rect_to_css(rect): 30 | """ 31 | Convert a dlib 'rect' object to a plain tuple in (top, right, bottom, left) order 32 | 33 | :param rect: a dlib 'rect' object 34 | :return: a plain tuple representation of the rect in (top, right, bottom, left) order 35 | """ 36 | return rect.top(), rect.right(), rect.bottom(), rect.left() 37 | 38 | 39 | def _css_to_rect(css): 40 | """ 41 | Convert a tuple in 
(top, right, bottom, left) order to a dlib `rect` object 42 | 43 | :param css: plain tuple representation of the rect in (top, right, bottom, left) order 44 | :return: a dlib `rect` object 45 | """ 46 | return dlib.rectangle(css[3], css[0], css[1], css[2]) 47 | 48 | 49 | def _trim_css_to_bounds(css, image_shape): 50 | """ 51 | Make sure a tuple in (top, right, bottom, left) order is within the bounds of the image. 52 | 53 | :param css: plain tuple representation of the rect in (top, right, bottom, left) order 54 | :param image_shape: numpy shape of the image array 55 | :return: a trimmed plain tuple representation of the rect in (top, right, bottom, left) order 56 | """ 57 | return max(css[0], 0), min(css[1], image_shape[1]), min(css[2], image_shape[0]), max(css[3], 0) 58 | 59 | 60 | def face_distance(face_encodings, face_to_compare): 61 | """ 62 | Given a list of face encodings, compare them to a known face encoding and get a euclidean distance 63 | for each comparison face. The distance tells you how similar the faces are. 64 | 65 | :param face_encodings: List of face encodings to compare 66 | :param face_to_compare: A face encoding to compare against 67 | :return: A numpy ndarray with the distance for each face in the same order as the 'face_encodings' array 68 | """ 69 | if len(face_encodings) == 0: 70 | return np.empty((0)) 71 | 72 | return np.linalg.norm(face_encodings - face_to_compare, axis=1) 73 | 74 | 75 | def load_image_file(file, mode='RGB'): 76 | """ 77 | Loads an image file (.jpg, .png, etc) into a numpy array 78 | 79 | :param file: image file name or file object to load 80 | :param mode: format to convert the image to. Only 'RGB' (8-bit RGB, 3 channels) and 'L' (black and white) are supported. 
81 | :return: image contents as numpy array 82 | """ 83 | im = PIL.Image.open(file) 84 | if mode: 85 | im = im.convert(mode) 86 | return np.array(im) 87 | 88 | 89 | def _raw_face_locations(img, number_of_times_to_upsample=1, model="hog"): 90 | """ 91 | Returns an array of bounding boxes of human faces in an image 92 | 93 | :param img: An image (as a numpy array) 94 | :param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces. 95 | :param model: Which face detection model to use. "hog" is less accurate but faster on CPUs. "cnn" is a more accurate 96 | deep-learning model which is GPU/CUDA accelerated (if available). The default is "hog". 97 | :return: A list of dlib 'rect' objects of found face locations 98 | """ 99 | if model == "cnn": 100 | return cnn_face_detector(img, number_of_times_to_upsample) 101 | else: 102 | return face_detector(img, number_of_times_to_upsample) 103 | 104 | 105 | def face_locations(img, number_of_times_to_upsample=1, model="hog"): 106 | """ 107 | Returns an array of bounding boxes of human faces in an image 108 | 109 | :param img: An image (as a numpy array) 110 | :param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces. 111 | :param model: Which face detection model to use. "hog" is less accurate but faster on CPUs. "cnn" is a more accurate 112 | deep-learning model which is GPU/CUDA accelerated (if available). The default is "hog". 
113 | :return: A list of tuples of found face locations in css (top, right, bottom, left) order 114 | """ 115 | if model == "cnn": 116 | return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")] 117 | else: 118 | return [_trim_css_to_bounds(_rect_to_css(face), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, model)] 119 | 120 | 121 | def _raw_face_locations_batched(images, number_of_times_to_upsample=1, batch_size=128): 122 | """ 123 | Returns a 2d array of dlib rects of human faces in an image using the cnn face detector 124 | 125 | :param images: A list of images (each as a numpy array) 126 | :param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces. 127 | :return: A list of dlib 'rect' objects of found face locations 128 | """ 129 | return cnn_face_detector(images, number_of_times_to_upsample, batch_size=batch_size) 130 | 131 | 132 | def batch_face_locations(images, number_of_times_to_upsample=1, batch_size=128): 133 | """ 134 | Returns a 2d array of bounding boxes of human faces in an image using the cnn face detector. 135 | If you are using a GPU, this can give you much faster results since the GPU 136 | can process batches of images at once. If you aren't using a GPU, you don't need this function. 137 | 138 | :param images: A list of images (each as a numpy array) 139 | :param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces. 140 | :param batch_size: How many images to include in each GPU processing batch. 
141 | :return: A list of tuples of found face locations in css (top, right, bottom, left) order 142 | """ 143 | def convert_cnn_detections_to_css(detections): 144 | return [_trim_css_to_bounds(_rect_to_css(face.rect), images[0].shape) for face in detections] 145 | 146 | raw_detections_batched = _raw_face_locations_batched(images, number_of_times_to_upsample, batch_size) 147 | 148 | return list(map(convert_cnn_detections_to_css, raw_detections_batched)) 149 | 150 | 151 | def _raw_face_landmarks(face_image, face_locations=None, model="large"): 152 | if face_locations is None: 153 | face_locations = _raw_face_locations(face_image) 154 | else: 155 | face_locations = [_css_to_rect(face_location) for face_location in face_locations] 156 | 157 | pose_predictor = pose_predictor_68_point 158 | 159 | if model == "small": 160 | pose_predictor = pose_predictor_5_point 161 | 162 | return [pose_predictor(face_image, face_location) for face_location in face_locations] 163 | 164 | 165 | def face_landmarks(face_image, face_locations=None): 166 | """ 167 | Given an image, returns a dict of face feature locations (eyes, nose, etc) for each face in the image 168 | 169 | :param face_image: image to search 170 | :param face_locations: Optionally provide a list of face locations to check. 
171 | :return: A list of dicts of face feature locations (eyes, nose, etc) 172 | """ 173 | landmarks = _raw_face_landmarks(face_image, face_locations) 174 | landmarks_as_tuples = [[(p.x, p.y) for p in landmark.parts()] for landmark in landmarks] 175 | 176 | # For a definition of each point index, see https://cdn-images-1.medium.com/max/1600/1*AbEg31EgkbXSQehuNJBlWg.png 177 | return [{ 178 | "chin": points[0:17], 179 | "left_eyebrow": points[17:22], 180 | "right_eyebrow": points[22:27], 181 | "nose_bridge": points[27:31], 182 | "nose_tip": points[31:36], 183 | "left_eye": points[36:42], 184 | "right_eye": points[42:48], 185 | "top_lip": points[48:55] + [points[64]] + [points[63]] + [points[62]] + [points[61]] + [points[60]], 186 | "bottom_lip": points[54:60] + [points[48]] + [points[60]] + [points[67]] + [points[66]] + [points[65]] + [points[64]] 187 | } for points in landmarks_as_tuples] 188 | 189 | 190 | def face_encodings(face_image, known_face_locations=None, num_jitters=1): 191 | """ 192 | Given an image, return the 128-dimension face encoding for each face in the image. 193 | 194 | :param face_image: The image that contains one or more faces 195 | :param known_face_locations: Optional - the bounding boxes of each face if you already know them. 196 | :param num_jitters: How many times to re-sample the face when calculating encoding. Higher is more accurate, but slower (i.e. 100 is 100x slower) 197 | :return: A list of 128-dimensional face encodings (one for each face in the image) 198 | """ 199 | raw_landmarks = _raw_face_landmarks(face_image, known_face_locations, model="small") 200 | return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks] 201 | 202 | 203 | def compare_faces(known_face_encodings, face_encoding_to_check, tolerance=0.6): 204 | """ 205 | Compare a list of face encodings against a candidate encoding to see if they match. 
206 | 207 | :param known_face_encodings: A list of known face encodings 208 | :param face_encoding_to_check: A single face encoding to compare against the list 209 | :param tolerance: How much distance between faces to consider it a match. Lower is more strict. 0.6 is typical best performance. 210 | :return: A list of True/False values indicating which known_face_encodings match the face encoding to check 211 | """ 212 | return list(face_distance(known_face_encodings, face_encoding_to_check) <= tolerance) 213 | -------------------------------------------------------------------------------- /tests/test_face_recognition.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | test_face_recognition 6 | ---------------------------------- 7 | 8 | Tests for `face_recognition` module. 9 | """ 10 | 11 | 12 | import unittest 13 | import os 14 | import numpy as np 15 | from click.testing import CliRunner 16 | 17 | from face_recognition import api 18 | from face_recognition import face_recognition_cli 19 | from face_recognition import face_detection_cli 20 | 21 | 22 | class Test_face_recognition(unittest.TestCase): 23 | 24 | def test_load_image_file(self): 25 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 26 | self.assertEqual(img.shape, (1137, 910, 3)) 27 | 28 | def test_load_image_file_32bit(self): 29 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', '32bit.png')) 30 | self.assertEqual(img.shape, (1200, 626, 3)) 31 | 32 | def test_raw_face_locations(self): 33 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 34 | detected_faces = api._raw_face_locations(img) 35 | 36 | self.assertEqual(len(detected_faces), 1) 37 | self.assertEqual(detected_faces[0].top(), 142) 38 | self.assertEqual(detected_faces[0].bottom(), 409) 39 | 40 | def 
test_cnn_raw_face_locations(self): 41 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 42 | detected_faces = api._raw_face_locations(img, model="cnn") 43 | 44 | self.assertEqual(len(detected_faces), 1) 45 | self.assertAlmostEqual(detected_faces[0].rect.top(), 144, delta=25) 46 | self.assertAlmostEqual(detected_faces[0].rect.bottom(), 389, delta=25) 47 | 48 | def test_raw_face_locations_32bit_image(self): 49 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', '32bit.png')) 50 | detected_faces = api._raw_face_locations(img) 51 | 52 | self.assertEqual(len(detected_faces), 1) 53 | self.assertEqual(detected_faces[0].top(), 290) 54 | self.assertEqual(detected_faces[0].bottom(), 558) 55 | 56 | def test_cnn_raw_face_locations_32bit_image(self): 57 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', '32bit.png')) 58 | detected_faces = api._raw_face_locations(img, model="cnn") 59 | 60 | self.assertEqual(len(detected_faces), 1) 61 | self.assertAlmostEqual(detected_faces[0].rect.top(), 259, delta=25) 62 | self.assertAlmostEqual(detected_faces[0].rect.bottom(), 552, delta=25) 63 | 64 | def test_face_locations(self): 65 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 66 | detected_faces = api.face_locations(img) 67 | 68 | self.assertEqual(len(detected_faces), 1) 69 | self.assertEqual(detected_faces[0], (142, 617, 409, 349)) 70 | 71 | def test_cnn_face_locations(self): 72 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 73 | detected_faces = api.face_locations(img, model="cnn") 74 | 75 | self.assertEqual(len(detected_faces), 1) 76 | self.assertAlmostEqual(detected_faces[0][0], 144, delta=25) 77 | self.assertAlmostEqual(detected_faces[0][1], 608, delta=25) 78 | self.assertAlmostEqual(detected_faces[0][2], 389, delta=25) 79 | self.assertAlmostEqual(detected_faces[0][3], 363, 
delta=25) 80 | 81 | def test_partial_face_locations(self): 82 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama_partial_face.jpg')) 83 | detected_faces = api.face_locations(img) 84 | 85 | self.assertEqual(len(detected_faces), 1) 86 | self.assertEqual(detected_faces[0], (142, 191, 365, 0)) 87 | 88 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama_partial_face2.jpg')) 89 | detected_faces = api.face_locations(img) 90 | 91 | self.assertEqual(len(detected_faces), 1) 92 | self.assertEqual(detected_faces[0], (142, 551, 409, 349)) 93 | 94 | def test_raw_face_locations_batched(self): 95 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 96 | images = [img, img, img] 97 | batched_detected_faces = api._raw_face_locations_batched(images, number_of_times_to_upsample=0) 98 | 99 | for detected_faces in batched_detected_faces: 100 | self.assertEqual(len(detected_faces), 1) 101 | self.assertEqual(detected_faces[0].rect.top(), 154) 102 | self.assertEqual(detected_faces[0].rect.bottom(), 390) 103 | 104 | def test_batched_face_locations(self): 105 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 106 | images = [img, img, img] 107 | 108 | batched_detected_faces = api.batch_face_locations(images, number_of_times_to_upsample=0) 109 | 110 | for detected_faces in batched_detected_faces: 111 | self.assertEqual(len(detected_faces), 1) 112 | self.assertEqual(detected_faces[0], (154, 611, 390, 375)) 113 | 114 | def test_raw_face_landmarks(self): 115 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 116 | face_landmarks = api._raw_face_landmarks(img) 117 | example_landmark = face_landmarks[0].parts()[10] 118 | 119 | self.assertEqual(len(face_landmarks), 1) 120 | self.assertEqual(face_landmarks[0].num_parts, 68) 121 | self.assertEqual((example_landmark.x, example_landmark.y), 
(552, 399)) 122 | 123 | def test_face_landmarks(self): 124 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 125 | face_landmarks = api.face_landmarks(img) 126 | 127 | self.assertEqual( 128 | set(face_landmarks[0].keys()), 129 | set(['chin', 'left_eyebrow', 'right_eyebrow', 'nose_bridge', 130 | 'nose_tip', 'left_eye', 'right_eye', 'top_lip', 131 | 'bottom_lip'])) 132 | self.assertEqual( 133 | face_landmarks[0]['chin'], 134 | [(369, 220), (372, 254), (378, 289), (384, 322), (395, 353), 135 | (414, 382), (437, 407), (464, 424), (495, 428), (527, 420), 136 | (552, 399), (576, 372), (594, 344), (604, 314), (610, 282), 137 | (613, 250), (615, 219)]) 138 | 139 | def test_face_encodings(self): 140 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 141 | encodings = api.face_encodings(img) 142 | 143 | self.assertEqual(len(encodings), 1) 144 | self.assertEqual(len(encodings[0]), 128) 145 | 146 | def test_face_distance(self): 147 | img_a1 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 148 | img_a2 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama2.jpg')) 149 | img_a3 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama3.jpg')) 150 | 151 | img_b1 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'biden.jpg')) 152 | 153 | face_encoding_a1 = api.face_encodings(img_a1)[0] 154 | face_encoding_a2 = api.face_encodings(img_a2)[0] 155 | face_encoding_a3 = api.face_encodings(img_a3)[0] 156 | face_encoding_b1 = api.face_encodings(img_b1)[0] 157 | 158 | faces_to_compare = [ 159 | face_encoding_a2, 160 | face_encoding_a3, 161 | face_encoding_b1] 162 | 163 | distance_results = api.face_distance(faces_to_compare, face_encoding_a1) 164 | 165 | # 0.6 is the default face distance match threshold. 
So we'll spot-check that the numbers returned 166 | # are above or below that based on if they should match (since the exact numbers could vary). 167 | self.assertEqual(type(distance_results), np.ndarray) 168 | self.assertLessEqual(distance_results[0], 0.6) 169 | self.assertLessEqual(distance_results[1], 0.6) 170 | self.assertGreater(distance_results[2], 0.6) 171 | 172 | def test_face_distance_empty_lists(self): 173 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'biden.jpg')) 174 | face_encoding = api.face_encodings(img)[0] 175 | 176 | # empty python list 177 | faces_to_compare = [] 178 | 179 | distance_results = api.face_distance(faces_to_compare, face_encoding) 180 | self.assertEqual(type(distance_results), np.ndarray) 181 | self.assertEqual(len(distance_results), 0) 182 | 183 | # empty numpy list 184 | faces_to_compare = np.array([]) 185 | 186 | distance_results = api.face_distance(faces_to_compare, face_encoding) 187 | self.assertEqual(type(distance_results), np.ndarray) 188 | self.assertEqual(len(distance_results), 0) 189 | 190 | def test_compare_faces(self): 191 | img_a1 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg')) 192 | img_a2 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama2.jpg')) 193 | img_a3 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'obama3.jpg')) 194 | 195 | img_b1 = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'biden.jpg')) 196 | 197 | face_encoding_a1 = api.face_encodings(img_a1)[0] 198 | face_encoding_a2 = api.face_encodings(img_a2)[0] 199 | face_encoding_a3 = api.face_encodings(img_a3)[0] 200 | face_encoding_b1 = api.face_encodings(img_b1)[0] 201 | 202 | faces_to_compare = [ 203 | face_encoding_a2, 204 | face_encoding_a3, 205 | face_encoding_b1] 206 | 207 | match_results = api.compare_faces(faces_to_compare, face_encoding_a1) 208 | 209 | 
self.assertEqual(type(match_results), list) 210 | self.assertTrue(match_results[0]) 211 | self.assertTrue(match_results[1]) 212 | self.assertFalse(match_results[2]) 213 | 214 | def test_compare_faces_empty_lists(self): 215 | img = api.load_image_file(os.path.join(os.path.dirname(__file__), 'test_images', 'biden.jpg')) 216 | face_encoding = api.face_encodings(img)[0] 217 | 218 | # empty python list 219 | faces_to_compare = [] 220 | 221 | match_results = api.compare_faces(faces_to_compare, face_encoding) 222 | self.assertEqual(type(match_results), list) 223 | self.assertListEqual(match_results, []) 224 | 225 | # empty numpy list 226 | faces_to_compare = np.array([]) 227 | 228 | match_results = api.compare_faces(faces_to_compare, face_encoding) 229 | self.assertEqual(type(match_results), list) 230 | self.assertListEqual(match_results, []) 231 | 232 | def test_command_line_interface_options(self): 233 | target_string = 'Show this message and exit.' 234 | runner = CliRunner() 235 | help_result = runner.invoke(face_recognition_cli.main, ['--help']) 236 | self.assertEqual(help_result.exit_code, 0) 237 | self.assertTrue(target_string in help_result.output) 238 | 239 | def test_command_line_interface(self): 240 | target_string = 'obama.jpg,obama' 241 | runner = CliRunner() 242 | image_folder = os.path.join(os.path.dirname(__file__), 'test_images') 243 | image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg') 244 | 245 | result = runner.invoke(face_recognition_cli.main, args=[image_folder, image_file]) 246 | 247 | self.assertEqual(result.exit_code, 0) 248 | self.assertTrue(target_string in result.output) 249 | 250 | def test_command_line_interface_big_image(self): 251 | target_string = 'obama3.jpg,obama' 252 | runner = CliRunner() 253 | image_folder = os.path.join(os.path.dirname(__file__), 'test_images') 254 | image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama3.jpg') 255 | 256 | result = 
runner.invoke(face_recognition_cli.main, args=[image_folder, image_file]) 257 | 258 | self.assertEqual(result.exit_code, 0) 259 | self.assertTrue(target_string in result.output) 260 | 261 | def test_command_line_interface_tolerance(self): 262 | target_string = 'obama.jpg,obama' 263 | runner = CliRunner() 264 | image_folder = os.path.join(os.path.dirname(__file__), 'test_images') 265 | image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg') 266 | 267 | result = runner.invoke(face_recognition_cli.main, args=[image_folder, image_file, "--tolerance", "0.55"]) 268 | 269 | self.assertEqual(result.exit_code, 0) 270 | self.assertTrue(target_string in result.output) 271 | 272 | def test_command_line_interface_show_distance(self): 273 | target_string = 'obama.jpg,obama,0.0' 274 | runner = CliRunner() 275 | image_folder = os.path.join(os.path.dirname(__file__), 'test_images') 276 | image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg') 277 | 278 | result = runner.invoke(face_recognition_cli.main, args=[image_folder, image_file, "--show-distance", "1"]) 279 | 280 | self.assertEqual(result.exit_code, 0) 281 | self.assertTrue(target_string in result.output) 282 | 283 | def test_fd_command_line_interface_options(self): 284 | target_string = 'Show this message and exit.' 
285 | runner = CliRunner() 286 | help_result = runner.invoke(face_detection_cli.main, ['--help']) 287 | self.assertEqual(help_result.exit_code, 0) 288 | self.assertTrue(target_string in help_result.output) 289 | 290 | def test_fd_command_line_interface(self): 291 | runner = CliRunner() 292 | image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg') 293 | 294 | result = runner.invoke(face_detection_cli.main, args=[image_file]) 295 | self.assertEqual(result.exit_code, 0) 296 | parts = result.output.split(",") 297 | self.assertTrue("obama.jpg" in parts[0]) 298 | self.assertEqual(len(parts), 5) 299 | 300 | def test_fd_command_line_interface_folder(self): 301 | runner = CliRunner() 302 | image_file = os.path.join(os.path.dirname(__file__), 'test_images') 303 | 304 | result = runner.invoke(face_detection_cli.main, args=[image_file]) 305 | self.assertEqual(result.exit_code, 0) 306 | self.assertTrue("obama_partial_face2.jpg" in result.output) 307 | self.assertTrue("obama.jpg" in result.output) 308 | self.assertTrue("obama2.jpg" in result.output) 309 | self.assertTrue("obama3.jpg" in result.output) 310 | self.assertTrue("biden.jpg" in result.output) 311 | 312 | def test_fd_command_line_interface_hog_model(self): 313 | target_string = 'obama.jpg' 314 | runner = CliRunner() 315 | image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg') 316 | 317 | result = runner.invoke(face_detection_cli.main, args=[image_file, "--model", "hog"]) 318 | self.assertEqual(result.exit_code, 0) 319 | self.assertTrue(target_string in result.output) 320 | 321 | def test_fd_command_line_interface_cnn_model(self): 322 | target_string = 'obama.jpg' 323 | runner = CliRunner() 324 | image_file = os.path.join(os.path.dirname(__file__), 'test_images', 'obama.jpg') 325 | 326 | result = runner.invoke(face_detection_cli.main, args=[image_file, "--model", "cnn"]) 327 | self.assertEqual(result.exit_code, 0) 328 | self.assertTrue(target_string in result.output) 329 
| -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Face Recognition 2 | 3 | Recognize and manipulate faces from Python or from the command line with 4 | the world's simplest face recognition library. 5 | 6 | Built using [dlib](http://dlib.net/)'s state-of-the-art face recognition 7 | built with deep learning. The model has an accuracy of 99.38% on the 8 | [Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/) benchmark. 9 | 10 | This also provides a simple `face_recognition` command line tool that lets 11 | you do face recognition on a folder of images from the command line! 12 | 13 | 14 | [![PyPI](https://img.shields.io/pypi/v/face_recognition.svg)](https://pypi.python.org/pypi/face_recognition) 15 | [![Build Status](https://travis-ci.org/ageitgey/face_recognition.svg?branch=master)](https://travis-ci.org/ageitgey/face_recognition) 16 | [![Documentation Status](https://readthedocs.org/projects/face-recognition/badge/?version=latest)](http://face-recognition.readthedocs.io/en/latest/?badge=latest) 17 | 18 | ## Features 19 | 20 | #### Find faces in pictures 21 | 22 | Find all the faces that appear in a picture: 23 | 24 | ![](https://cloud.githubusercontent.com/assets/896692/23625227/42c65360-025d-11e7-94ea-b12f28cb34b4.png) 25 | 26 | ```python 27 | import face_recognition 28 | image = face_recognition.load_image_file("your_file.jpg") 29 | face_locations = face_recognition.face_locations(image) 30 | ``` 31 | 32 | #### Find and manipulate facial features in pictures 33 | 34 | Get the locations and outlines of each person's eyes, nose, mouth and chin. 
35 | 36 | ![](https://cloud.githubusercontent.com/assets/896692/23625282/7f2d79dc-025d-11e7-8728-d8924596f8fa.png) 37 | 38 | ```python 39 | import face_recognition 40 | image = face_recognition.load_image_file("your_file.jpg") 41 | face_landmarks_list = face_recognition.face_landmarks(image) 42 | ``` 43 | 44 | Finding facial features is super useful for lots of important stuff. But you can also use it for really stupid stuff 45 | like applying [digital make-up](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py) (think 'Meitu'): 46 | 47 | ![](https://cloud.githubusercontent.com/assets/896692/23625283/80638760-025d-11e7-80a2-1d2779f7ccab.png) 48 | 49 | #### Identify faces in pictures 50 | 51 | Recognize who appears in each photo. 52 | 53 | ![](https://cloud.githubusercontent.com/assets/896692/23625229/45e049b6-025d-11e7-89cc-8a71cf89e713.png) 54 | 55 | ```python 56 | import face_recognition 57 | known_image = face_recognition.load_image_file("biden.jpg") 58 | unknown_image = face_recognition.load_image_file("unknown.jpg") 59 | 60 | biden_encoding = face_recognition.face_encodings(known_image)[0] 61 | unknown_encoding = face_recognition.face_encodings(unknown_image)[0] 62 | 63 | results = face_recognition.compare_faces([biden_encoding], unknown_encoding) 64 | ``` 65 | 66 | You can even use this library with other Python libraries to do real-time face recognition: 67 | 68 | ![](https://cloud.githubusercontent.com/assets/896692/24430398/36f0e3f0-13cb-11e7-8258-4d0c9ce1e419.gif) 69 | 70 | See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py) for the code.
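Under the hood, a "match" is just a thresholded Euclidean distance between 128-dimensional face encodings: `compare_faces` checks whether `face_distance` is within a tolerance (0.6 by default). Here is a self-contained numpy sketch of that matching rule; the `_sketch` helpers and the toy 3-dimensional vectors are illustrative stand-ins, not part of the library:

```python
import numpy as np

def face_distance_sketch(known_encodings, encoding_to_check):
    # Euclidean distance between the candidate and each known encoding.
    if len(known_encodings) == 0:
        return np.empty(0)
    return np.linalg.norm(np.asarray(known_encodings) - encoding_to_check, axis=1)

def compare_faces_sketch(known_encodings, encoding_to_check, tolerance=0.6):
    # A known face "matches" when its distance to the candidate is within tolerance.
    distances = face_distance_sketch(known_encodings, encoding_to_check)
    return [bool(d <= tolerance) for d in distances]

# Toy 3-dimensional "encodings" standing in for real 128-dimensional ones.
known = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])]
candidate = np.array([0.1, 0.0, 0.0])
print(compare_faces_sketch(known, candidate))  # [True, False]
```

Lower tolerance values make matching stricter; 0.6 is the library's default.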
71 | 72 | ## Installation 73 | 74 | ### Requirements 75 | 76 | * Python 3.3+ or Python 2.7 77 | * macOS or Linux (Windows not officially supported, but might work) 78 | 79 | ### Installation Options: 80 | 81 | #### Installing on Mac or Linux 82 | 83 | First, make sure you have dlib already installed with Python bindings: 84 | 85 | * [How to install dlib from source on macOS or Ubuntu](https://gist.github.com/ageitgey/629d75c1baac34dfa5ca2a1928a7aeaf) 86 | 87 | Then, install this module from PyPI using `pip3` (or `pip2` for Python 2): 88 | 89 | ```bash 90 | pip3 install face_recognition 91 | ``` 92 | 93 | If you are having trouble with installation, you can also try out a 94 | [pre-configured VM](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b). 95 | 96 | #### Installing on Raspberry Pi 2+ 97 | 98 | * [Raspberry Pi 2+ installation instructions](https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65) 99 | 100 | #### Installing on Windows 101 | 102 | While Windows isn't officially supported, helpful users have posted instructions on how to install this library: 103 | 104 | * [@masoudr's Windows 10 installation guide (dlib + face_recognition)](https://github.com/ageitgey/face_recognition/issues/175#issue-257710508) 105 | 106 | #### Installing a pre-configured Virtual Machine image 107 | 108 | * [Download the pre-configured VM image](https://medium.com/@ageitgey/try-deep-learning-in-python-now-with-a-fully-pre-configured-vm-1d97d4c3e9b) (for VMware Player or VirtualBox). 109 | 110 | ## Usage 111 | 112 | ### Command-Line Interface 113 | 114 | When you install `face_recognition`, you get two simple command-line 115 | programs: 116 | 117 | * `face_recognition` - Recognize faces in a photograph or folder full of 118 | photographs. 119 | * `face_detection` - Find faces in a photograph or folder full of photographs.
120 | 121 | #### `face_recognition` command line tool 122 | 123 | The `face_recognition` command lets you recognize faces in a photograph or 124 | folder full of photographs. 125 | 126 | First, you need to provide a folder with one picture of each person you 127 | already know. There should be one image file for each person with the 128 | files named according to who is in the picture: 129 | 130 | ![known](https://cloud.githubusercontent.com/assets/896692/23582466/8324810e-00df-11e7-82cf-41515eba704d.png) 131 | 132 | Next, you need a second folder with the files you want to identify: 133 | 134 | ![unknown](https://cloud.githubusercontent.com/assets/896692/23582465/81f422f8-00df-11e7-8b0d-75364f641f58.png) 135 | 136 | Then you simply run the command `face_recognition`, passing in 137 | the folder of known people and the folder (or single image) with unknown 138 | people, and it tells you who is in each image: 139 | 140 | ```bash 141 | $ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ 142 | 143 | /unknown_pictures/unknown.jpg,Barack Obama 144 | /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person 145 | ``` 146 | 147 | There's one line in the output for each face. The data is comma-separated 148 | with the filename and the name of the person found. 149 | 150 | An `unknown_person` is a face in the image that didn't match anyone in 151 | your folder of known people. 152 | 153 | #### `face_detection` command line tool 154 | 155 | The `face_detection` command lets you find the location (pixel coordinates) 156 | of any faces in an image. 157 | 158 | Just run the command `face_detection`, passing in a folder of images 159 | to check (or a single image): 160 | 161 | ```bash 162 | $ face_detection ./folder_with_pictures/ 163 | 164 | examples/image1.jpg,65,215,169,112 165 | examples/image2.jpg,62,394,211,244 166 | examples/image2.jpg,95,941,244,792 167 | ``` 168 | 169 | It prints one line for each face that was detected.
The coordinates 170 | reported are the top, right, bottom and left coordinates of the face (in pixels). 171 | 172 | ##### Adjusting Tolerance / Sensitivity 173 | 174 | If you are getting multiple matches for the same person, it might be that 175 | the people in your photos look very similar and a lower tolerance value 176 | is needed to make face comparisons more strict. 177 | 178 | You can do that with the `--tolerance` parameter. The default tolerance 179 | value is 0.6 and lower numbers make face comparisons more strict: 180 | 181 | ```bash 182 | $ face_recognition --tolerance 0.54 ./pictures_of_people_i_know/ ./unknown_pictures/ 183 | 184 | /unknown_pictures/unknown.jpg,Barack Obama 185 | /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person 186 | ``` 187 | 188 | If you want to see the face distance calculated for each match in order 189 | to adjust the tolerance setting, you can use `--show-distance true`: 190 | 191 | ```bash 192 | $ face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/ 193 | 194 | /unknown_pictures/unknown.jpg,Barack Obama,0.378542298956785 195 | /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person,None 196 | ``` 197 | 198 | ##### More Examples 199 | 200 | If you simply want to know the names of the people in each photograph but don't 201 | care about file names, you could do this: 202 | 203 | ```bash 204 | $ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ | cut -d ',' -f2 205 | 206 | Barack Obama 207 | unknown_person 208 | ``` 209 | 210 | ##### Speeding up Face Recognition 211 | 212 | Face recognition can be done in parallel if you have a computer with 213 | multiple CPU cores. For example if your system has 4 CPU cores, you can 214 | process about 4 times as many images in the same amount of time by using 215 | all your CPU cores in parallel. 
216 | 217 | If you are using Python 3.4 or newer, pass in a `--cpus` parameter: 218 | 219 | ```bash 220 | $ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/ 221 | ``` 222 | 223 | You can also pass in `--cpus -1` to use all CPU cores in your system. 224 | 225 | #### Python Module 226 | 227 | You can import the `face_recognition` module and then easily manipulate 228 | faces with just a couple of lines of code. It's super easy! 229 | 230 | API Docs: [https://face-recognition.readthedocs.io](https://face-recognition.readthedocs.io/en/latest/face_recognition.html). 231 | 232 | ##### Automatically find all the faces in an image 233 | 234 | ```python 235 | import face_recognition 236 | 237 | image = face_recognition.load_image_file("my_picture.jpg") 238 | face_locations = face_recognition.face_locations(image) 239 | 240 | # face_locations is now an array listing the co-ordinates of each face! 241 | ``` 242 | 243 | See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py) 244 | to try it out. 245 | 246 | You can also opt in to a somewhat more accurate deep-learning-based face detection model. 247 | 248 | Note: GPU acceleration (via NVIDIA's CUDA library) is required for good 249 | performance with this model. You'll also want to enable CUDA support 250 | when compiling `dlib`. 251 | 252 | ```python 253 | import face_recognition 254 | 255 | image = face_recognition.load_image_file("my_picture.jpg") 256 | face_locations = face_recognition.face_locations(image, model="cnn") 257 | 258 | # face_locations is now an array listing the co-ordinates of each face! 259 | ``` 260 | 261 | See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py) 262 | to try it out. 263 | 264 | If you have a lot of images and a GPU, you can also 265 | [find faces in batches](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py).
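Conceptually, batch detection just groups your frames into fixed-size chunks (128 images per batch by default) and runs dlib's CNN detector on each chunk in a single GPU call. A minimal pure-Python sketch of the chunking step (`make_batches` is an illustrative helper, not a library function; the detector itself is omitted):

```python
def make_batches(images, batch_size=128):
    # Split a list of images into consecutive chunks of at most batch_size.
    return [images[i:i + batch_size] for i in range(0, len(images), batch_size)]

frames = list(range(10))  # stand-ins for decoded video frames
print([len(batch) for batch in make_batches(frames, batch_size=4)])  # [4, 4, 2]
```

Larger batches keep the GPU busier but use more video memory, which is why the batch size is tunable.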
266 | 267 | ##### Automatically locate the facial features of a person in an image 268 | 269 | ```python 270 | import face_recognition 271 | 272 | image = face_recognition.load_image_file("my_picture.jpg") 273 | face_landmarks_list = face_recognition.face_landmarks(image) 274 | 275 | # face_landmarks_list is now an array with the locations of each facial feature in each face. 276 | # face_landmarks_list[0]['left_eye'] would be the location and outline of the first person's left eye. 277 | ``` 278 | 279 | See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py) 280 | to try it out. 281 | 282 | ##### Recognize faces in images and identify who they are 283 | 284 | ```python 285 | import face_recognition 286 | 287 | picture_of_me = face_recognition.load_image_file("me.jpg") 288 | my_face_encoding = face_recognition.face_encodings(picture_of_me)[0] 289 | 290 | # my_face_encoding now contains a universal 'encoding' of my facial features that can be compared to any other picture of a face! 291 | 292 | unknown_picture = face_recognition.load_image_file("unknown.jpg") 293 | unknown_face_encoding = face_recognition.face_encodings(unknown_picture)[0] 294 | 295 | # Now we can see the two face encodings are of the same person with `compare_faces`! 296 | 297 | results = face_recognition.compare_faces([my_face_encoding], unknown_face_encoding) 298 | 299 | if results[0] == True: 300 | print("It's a picture of me!") 301 | else: 302 | print("It's not a picture of me!") 303 | ``` 304 | 305 | See [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py) 306 | to try it out. 307 | 308 | ## Python Code Examples 309 | 310 | All the examples are available [here](https://github.com/ageitgey/face_recognition/tree/master/examples). 
311 | 312 | 313 | #### Face Detection 314 | 315 | * [Find faces in a photograph](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture.py) 316 | * [Find faces in a photograph (using deep learning)](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_picture_cnn.py) 317 | * [Find faces in batches of images w/ GPU (using deep learning)](https://github.com/ageitgey/face_recognition/blob/master/examples/find_faces_in_batches.py) 318 | * [Blur all the faces in a live video using your webcam (Requires OpenCV to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/blur_faces_on_webcam.py) 319 | 320 | #### Facial Features 321 | 322 | * [Identify specific facial features in a photograph](https://github.com/ageitgey/face_recognition/blob/master/examples/find_facial_features_in_picture.py) 323 | * [Apply (horribly ugly) digital make-up](https://github.com/ageitgey/face_recognition/blob/master/examples/digital_makeup.py) 324 | 325 | #### Facial Recognition 326 | 327 | * [Find and recognize unknown faces in a photograph based on photographs of known people](https://github.com/ageitgey/face_recognition/blob/master/examples/recognize_faces_in_pictures.py) 328 | * [Identify and draw boxes around each person in a photo](https://github.com/ageitgey/face_recognition/blob/master/examples/identify_and_draw_boxes_on_faces.py) 329 | * [Compare faces by numeric face distance instead of only True/False matches](https://github.com/ageitgey/face_recognition/blob/master/examples/face_distance.py) 330 | * [Recognize faces in live video using your webcam - Simple / Slower Version (Requires OpenCV to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam.py) 331 | * [Recognize faces in live video using your webcam - Faster Version (Requires OpenCV to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py) 
332 | * [Recognize faces in a video file and write out new video file (Requires OpenCV to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_video_file.py) 333 | * [Recognize faces on a Raspberry Pi w/ camera](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_on_raspberry_pi.py) 334 | * [Run a web service to recognize faces via HTTP (Requires Flask to be installed)](https://github.com/ageitgey/face_recognition/blob/master/examples/web_service_example.py) 335 | * [Recognize faces with a K-nearest neighbors classifier](https://github.com/ageitgey/face_recognition/blob/master/examples/face_recognition_knn.py) 336 | 337 | ## How Face Recognition Works 338 | 339 | If you want to learn how face location and recognition work instead of 340 | depending on a black box library, [read my article](https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78). 341 | 342 | ## Caveats 343 | 344 | * The face recognition model is trained on adults and does not work very well on children. It tends to mix 345 | up children quite easily using the default comparison threshold of 0.6. 346 | * Accuracy may vary between ethnic groups. Please see [this wiki page](https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems#question-face-recognition-works-well-with-european-individuals-but-overall-accuracy-is-lower-with-asian-individuals) for more details. 347 | 348 | ## Deployment to Cloud Hosts (Heroku, AWS, etc) 349 | 350 | Since `face_recognition` depends on `dlib`, which is written in C++, it can be tricky to deploy an app 351 | using it to a cloud hosting provider like Heroku or AWS. 352 | 353 | To make things easier, there's an example Dockerfile in this repo that shows how to run an app built with 354 | `face_recognition` in a [Docker](https://www.docker.com/) container.
With that, you should be able to deploy 355 | to any service that supports Docker images. 356 | 357 | ## Having problems? 358 | 359 | If you run into problems, please read the [Common Errors](https://github.com/ageitgey/face_recognition/wiki/Common-Errors) section of the wiki before filing a GitHub issue. 360 | 361 | ## Thanks 362 | 363 | * Many, many thanks to [Davis King](https://github.com/davisking) ([@nulhom](https://twitter.com/nulhom)) 364 | for creating dlib and for providing the trained facial feature detection and face encoding models 365 | used in this library. For more information on the ResNet that powers the face encodings, check out 366 | his [blog post](http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html). 367 | * Thanks to everyone who works on all the awesome Python data science libraries like numpy, scipy, scikit-image, 368 | pillow, etc, etc that make this kind of stuff so easy and fun in Python. 369 | * Thanks to [Cookiecutter](https://github.com/audreyr/cookiecutter) and the 370 | [audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) project template 371 | for making Python project packaging way more tolerable. 372 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | Face Recognition 2 | ================ 3 | 4 | | Recognize and manipulate faces from Python or from the command line 5 | with 6 | | the world's simplest face recognition library. 7 | 8 | | Built using `dlib `__'s state-of-the-art face 9 | recognition 10 | | built with deep learning. The model has an accuracy of 99.38% on the 11 | | `Labeled Faces in the Wild `__ 12 | benchmark. 13 | 14 | | This also provides a simple ``face_recognition`` command line tool 15 | that lets 16 | | you do face recognition on a folder of images from the command line!
17 | 18 | | |PyPI| 19 | | |Build Status| 20 | | |Documentation Status| 21 | 22 | Features 23 | -------- 24 | 25 | Find faces in pictures 26 | ^^^^^^^^^^^^^^^^^^^^^^ 27 | 28 | Find all the faces that appear in a picture: 29 | 30 | |image3| 31 | 32 | .. code:: python 33 | 34 | import face_recognition 35 | image = face_recognition.load_image_file("your_file.jpg") 36 | face_locations = face_recognition.face_locations(image) 37 | 38 | Find and manipulate facial features in pictures 39 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 40 | 41 | Get the locations and outlines of each person's eyes, nose, mouth and 42 | chin. 43 | 44 | |image4| 45 | 46 | .. code:: python 47 | 48 | import face_recognition 49 | image = face_recognition.load_image_file("your_file.jpg") 50 | face_landmarks_list = face_recognition.face_landmarks(image) 51 | 52 | | Finding facial features is super useful for lots of important stuff. 53 | But you can also use it for really stupid stuff 54 | | like applying `digital 55 | make-up `__ 56 | (think 'Meitu'): 57 | 58 | |image5| 59 | 60 | Identify faces in pictures 61 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ 62 | 63 | Recognize who appears in each photo. 64 | 65 | |image6| 66 | 67 | .. code:: python 68 | 69 | import face_recognition 70 | known_image = face_recognition.load_image_file("biden.jpg") 71 | unknown_image = face_recognition.load_image_file("unknown.jpg") 72 | 73 | biden_encoding = face_recognition.face_encodings(known_image)[0] 74 | unknown_encoding = face_recognition.face_encodings(unknown_image)[0] 75 | 76 | results = face_recognition.compare_faces([biden_encoding], unknown_encoding) 77 | 78 | You can even use this library with other Python libraries to do 79 | real-time face recognition: 80 | 81 | |image7| 82 | 83 | See `this 84 | example `__ 85 | for the code.
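
The ``True``/``False`` results returned by ``compare_faces`` come from thresholding a numeric distance between 128-dimensional face encodings. As a minimal numpy sketch of that thresholding logic (the encoding vectors below are fabricated stand-ins, not real encodings, which would come from ``face_recognition.face_encodings``):

.. code:: python

    import numpy as np

    # Fabricated stand-ins for two 128-dimensional face encodings.
    rng = np.random.default_rng(0)
    known_encoding = rng.normal(size=128)
    unknown_encoding = known_encoding + rng.normal(scale=0.01, size=128)

    # face_distance computes the Euclidean (L2) distance between encodings.
    distance = np.linalg.norm(known_encoding - unknown_encoding)

    # compare_faces reports a match when the distance is at or below the
    # tolerance (0.6 by default; lower values are stricter).
    is_match = bool(distance <= 0.6)

Lower distances mean more similar faces; the real library exposes this same value through ``face_recognition.face_distance``.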
86 | 87 | Installation 88 | ------------ 89 | 90 | Requirements 91 | ^^^^^^^^^^^^ 92 | 93 | - Python 3.3+ or Python 2.7 94 | - macOS or Linux (Windows not officially supported, but might work) 95 | 96 | Installing on Mac or Linux 97 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ 98 | 99 | First, make sure you have dlib already installed with Python bindings: 100 | 101 | - `How to install dlib from source on macOS or 102 | Ubuntu `__ 103 | 104 | Then, install this module from pypi using ``pip3`` (or ``pip2`` for 105 | Python 2): 106 | 107 | .. code:: bash 108 | 109 | pip3 install face_recognition 110 | 111 | | If you are having trouble with installation, you can also try out a 112 | | `pre-configured 113 | VM `__. 114 | 115 | Installing on Raspberry Pi 2+ 116 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 117 | 118 | - `Raspberry Pi 2+ installation 119 | instructions `__ 120 | 121 | Installing on Windows 122 | ^^^^^^^^^^^^^^^^^^^^^ 123 | 124 | While Windows isn't officially supported, helpful users have posted 125 | instructions on how to install this library: 126 | 127 | - `@masoudr's Windows 10 installation guide (dlib + 128 | face\_recognition) `__ 129 | 130 | Installing a pre-configured Virtual Machine image 131 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 132 | 133 | - `Download the pre-configured VM 134 | image `__ 135 | (for VMware Player or VirtualBox). 136 | 137 | Usage 138 | ----- 139 | 140 | Command-Line Interface 141 | ^^^^^^^^^^^^^^^^^^^^^^ 142 | 143 | | When you install ``face_recognition``, you get a simple command-line 144 | program 145 | | called ``face_recognition`` that you can use to recognize faces in a 146 | | photograph or folder full of photographs. 147 | 148 | | First, you need to provide a folder with one picture of each person 149 | you 150 | | already know.
There should be one image file for each person with the 151 | | files named according to who is in the picture: 152 | 153 | |known| 154 | 155 | Next, you need a second folder with the files you want to identify: 156 | 157 | |unknown| 158 | 159 | | Then you simply run the command ``face_recognition``, passing in 160 | | the folder of known people and the folder (or single image) with 161 | unknown 162 | | people and it tells you who is in each image: 163 | 164 | .. code:: bash 165 | 166 | $ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ 167 | 168 | /unknown_pictures/unknown.jpg,Barack Obama 169 | /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person 170 | 171 | | There's one line in the output for each face. The data is 172 | comma-separated 173 | | with the filename and the name of the person found. 174 | 175 | | An ``unknown_person`` is a face in the image that didn't match anyone 176 | in 177 | | your folder of known people. 178 | 179 | Adjusting Tolerance / Sensitivity 180 | ''''''''''''''''''''''''''''''''' 181 | 182 | | If you are getting multiple matches for the same person, it might be 183 | that 184 | | the people in your photos look very similar and a lower tolerance 185 | value 186 | | is needed to make face comparisons more strict. 187 | 188 | | You can do that with the ``--tolerance`` parameter. The default 189 | tolerance 190 | | value is 0.6 and lower numbers make face comparisons more strict: 191 | 192 | .. code:: bash 193 | 194 | $ face_recognition --tolerance 0.54 ./pictures_of_people_i_know/ ./unknown_pictures/ 195 | 196 | /unknown_pictures/unknown.jpg,Barack Obama 197 | /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person 198 | 199 | | If you want to see the face distance calculated for each match in 200 | order 201 | | to adjust the tolerance setting, you can use ``--show-distance true``: 202 | 203 | ..
code:: bash 204 | 205 | $ face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/ 206 | 207 | /unknown_pictures/unknown.jpg,Barack Obama,0.378542298956785 208 | /face_recognition_test/unknown_pictures/unknown.jpg,unknown_person,None 209 | 210 | More Examples 211 | ''''''''''''' 212 | 213 | | If you simply want to know the names of the people in each photograph 214 | but don't 215 | | care about file names, you could do this: 216 | 217 | .. code:: bash 218 | 219 | $ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/ | cut -d ',' -f2 220 | 221 | Barack Obama 222 | unknown_person 223 | 224 | Speeding up Face Recognition 225 | '''''''''''''''''''''''''''' 226 | 227 | | Face recognition can be done in parallel if you have a computer with 228 | | multiple CPU cores. For example if your system has 4 CPU cores, you 229 | can 230 | | process about 4 times as many images in the same amount of time by 231 | using 232 | | all your CPU cores in parallel. 233 | 234 | If you are using Python 3.4 or newer, pass in a 235 | ``--cpus `` parameter: 236 | 237 | .. code:: bash 238 | 239 | $ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/ 240 | 241 | You can also pass in ``--cpus -1`` to use all CPU cores in your system. 242 | 243 | Python Module 244 | ^^^^^^^^^^^^^ 245 | 246 | | You can import the ``face_recognition`` module and then easily 247 | manipulate 248 | | faces with just a couple of lines of code. It's super easy! 249 | 250 | API Docs: 251 | `https://face-recognition.readthedocs.io `__. 252 | 253 | Automatically find all the faces in an image 254 | '''''''''''''''''''''''''''''''''''''''''''' 255 | 256 | .. code:: python 257 | 258 | import face_recognition 259 | 260 | image = face_recognition.load_image_file("my_picture.jpg") 261 | face_locations = face_recognition.face_locations(image) 262 | 263 | # face_locations is now an array listing the co-ordinates of each face! 
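
Each entry in ``face_locations`` is a ``(top, right, bottom, left)`` tuple of pixel coordinates, so a face is just a slice of the image array. A small illustrative sketch with a fabricated image and location (real code would call ``face_recognition.face_locations`` on an actual photo):

.. code:: python

    import numpy as np

    # A stand-in for the array returned by load_image_file():
    # height x width x RGB channels.
    image = np.zeros((120, 160, 3), dtype=np.uint8)

    # One fabricated face location in (top, right, bottom, left) order.
    face_locations = [(10, 90, 70, 30)]

    for top, right, bottom, left in face_locations:
        # Slice rows with top:bottom and columns with left:right
        # to crop out the face.
        face_image = image[top:bottom, left:right]

Here ``face_image`` ends up as a 60x60 pixel crop that you could save or display, for example with Pillow's ``Image.fromarray``.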
264 | 265 | | See `this 266 | example `__ 267 | | to try it out. 268 | 269 | You can also opt in to a somewhat more accurate deep-learning-based face 270 | detection model. 271 | 272 | | Note: GPU acceleration (via nvidia's CUDA library) is required for 273 | good 274 | | performance with this model. You'll also want to enable CUDA support 275 | | when compiling ``dlib``. 276 | 277 | .. code:: python 278 | 279 | import face_recognition 280 | 281 | image = face_recognition.load_image_file("my_picture.jpg") 282 | face_locations = face_recognition.face_locations(image, model="cnn") 283 | 284 | # face_locations is now an array listing the co-ordinates of each face! 285 | 286 | | See `this 287 | example `__ 288 | | to try it out. 289 | 290 | | If you have a lot of images and a GPU, you can also 291 | | `find faces in 292 | batches `__. 293 | 294 | Automatically locate the facial features of a person in an image 295 | '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' 296 | 297 | .. code:: python 298 | 299 | import face_recognition 300 | 301 | image = face_recognition.load_image_file("my_picture.jpg") 302 | face_landmarks_list = face_recognition.face_landmarks(image) 303 | 304 | # face_landmarks_list is now an array with the locations of each facial feature in each face. 305 | # face_landmarks_list[0]['left_eye'] would be the location and outline of the first person's left eye. 306 | 307 | | See `this 308 | example `__ 309 | | to try it out. 310 | 311 | Recognize faces in images and identify who they are 312 | ''''''''''''''''''''''''''''''''''''''''''''''''''' 313 | 314 | .. code:: python 315 | 316 | import face_recognition 317 | 318 | picture_of_me = face_recognition.load_image_file("me.jpg") 319 | my_face_encoding = face_recognition.face_encodings(picture_of_me)[0] 320 | 321 | # my_face_encoding now contains a universal 'encoding' of my facial features that can be compared to any other picture of a face!
322 | 323 | unknown_picture = face_recognition.load_image_file("unknown.jpg") 324 | unknown_face_encoding = face_recognition.face_encodings(unknown_picture)[0] 325 | 326 | # Now we can see the two face encodings are of the same person with `compare_faces`! 327 | 328 | results = face_recognition.compare_faces([my_face_encoding], unknown_face_encoding) 329 | 330 | if results[0] == True: 331 | print("It's a picture of me!") 332 | else: 333 | print("It's not a picture of me!") 334 | 335 | | See `this 336 | example `__ 337 | | to try it out. 338 | 339 | Python Code Examples 340 | -------------------- 341 | 342 | All the examples are available 343 | `here `__. 344 | 345 | Face Detection 346 | ^^^^^^^^^^^^^^ 347 | 348 | - `Find faces in a 349 | photograph `__ 350 | - `Find faces in a photograph (using deep 351 | learning) `__ 352 | - `Find faces in batches of images w/ GPU (using deep 353 | learning) `__ 354 | 355 | Facial Features 356 | ^^^^^^^^^^^^^^^ 357 | 358 | - `Identify specific facial features in a 359 | photograph `__ 360 | - `Apply (horribly ugly) digital 361 | make-up `__ 362 | 363 | Facial Recognition 364 | ^^^^^^^^^^^^^^^^^^ 365 | 366 | - `Find and recognize unknown faces in a photograph based on 367 | photographs of known 368 | people `__ 369 | - `Compare faces by numeric face distance instead of only True/False 370 | matches `__ 371 | - `Recognize faces in live video using your webcam - Simple / Slower 372 | Version (Requires OpenCV to be 373 | installed) `__ 374 | - `Recognize faces in live video using your webcam - Faster Version 375 | (Requires OpenCV to be 376 | installed) `__ 377 | - `Recognize faces in a video file and write out new video file 378 | (Requires OpenCV to be 379 | installed) `__ 380 | - `Recognize faces on a Raspberry Pi w/ 381 | camera `__ 382 | - `Run a web service to recognize faces via HTTP (Requires Flask to be 383 | installed) `__ 384 | - `Recognize faces with a K-nearest neighbors 385 | classifier `__ 386 | 387 | .. 
rubric:: How Face Recognition Works 388 | :name: how-face-recognition-works 389 | 390 | | If you want to learn how face location and recognition work instead of 391 | | depending on a black box library, `read my 392 | article `__. 393 | 394 | Caveats 395 | ------- 396 | 397 | - The face recognition model is trained on adults and does not work 398 | very well on children. It tends to mix 399 | up children quite easily using the default comparison threshold of 0.6. 400 | 401 | Deployment to Cloud Hosts (Heroku, AWS, etc) 402 | -------------------------------------------- 403 | 404 | | Since ``face_recognition`` depends on ``dlib``, which is written in 405 | C++, it can be tricky to deploy an app 406 | | using it to a cloud hosting provider like Heroku or AWS. 407 | 408 | | To make things easier, there's an example Dockerfile in this repo that 409 | shows how to run an app built with 410 | | ``face_recognition`` in a `Docker `__ 411 | container. With that, you should be able to deploy 412 | | to any service that supports Docker images. 413 | 414 | Common Issues 415 | ------------- 416 | 417 | Issue: ``Illegal instruction (core dumped)`` when using 418 | face\_recognition or running examples. 419 | 420 | | Solution: ``dlib`` is compiled with SSE4 or AVX support, but your CPU 421 | is too old and doesn't support that. 422 | | You'll need to recompile ``dlib`` after `making the code change 423 | outlined 424 | here `__. 425 | 426 | Issue: 427 | ``RuntimeError: Unsupported image type, must be 8bit gray or RGB image.`` 428 | when running the webcam examples. 429 | 430 | Solution: Your webcam probably isn't set up correctly with OpenCV. `Look 431 | here for 432 | more `__. 433 | 434 | Issue: ``MemoryError`` when running ``pip2 install face_recognition`` 435 | 436 | | Solution: The face\_recognition\_models file is too big for your 437 | available pip cache memory. Instead, 438 | | try ``pip2 --no-cache-dir install face_recognition`` to avoid the 439 | issue.
440 | 441 | Issue: 442 | ``AttributeError: 'module' object has no attribute 'face_recognition_model_v1'`` 443 | 444 | Solution: The version of ``dlib`` you have installed is too old. You 445 | need version 19.7 or newer. Upgrade ``dlib``. 446 | 447 | Issue: 448 | ``AttributeError: 'module' object has no attribute 'cnn_face_detection_model_v1'`` 449 | 450 | Solution: The version of ``dlib`` you have installed is too old. You 451 | need version 19.7 or newer. Upgrade ``dlib``. 452 | 453 | Issue: ``TypeError: imread() got an unexpected keyword argument 'mode'`` 454 | 455 | Solution: The version of ``scipy`` you have installed is too old. You 456 | need version 0.17 or newer. Upgrade ``scipy``. 457 | 458 | Thanks 459 | ------ 460 | 461 | - Many, many thanks to `Davis King `__ 462 | (`@nulhom `__) 463 | for creating dlib and for providing the trained facial feature 464 | detection and face encoding models 465 | used in this library. For more information on the ResNet that powers 466 | the face encodings, check out 467 | his `blog 468 | post `__. 469 | - Thanks to everyone who works on all the awesome Python data science 470 | libraries like numpy, scipy, scikit-image, 471 | pillow, etc, etc that make this kind of stuff so easy and fun in 472 | Python. 473 | - Thanks to `Cookiecutter `__ 474 | and the 475 | `audreyr/cookiecutter-pypackage `__ 476 | project template 477 | for making Python project packaging way more tolerable. 478 | 479 | .. |PyPI| image:: https://img.shields.io/pypi/v/face_recognition.svg 480 | :target: https://pypi.python.org/pypi/face_recognition 481 | .. |Build Status| image:: https://travis-ci.org/ageitgey/face_recognition.svg?branch=master 482 | :target: https://travis-ci.org/ageitgey/face_recognition 483 | .. |Documentation Status| image:: https://readthedocs.org/projects/face-recognition/badge/?version=latest 484 | :target: http://face-recognition.readthedocs.io/en/latest/?badge=latest 485 | ..
|image3| image:: https://cloud.githubusercontent.com/assets/896692/23625227/42c65360-025d-11e7-94ea-b12f28cb34b4.png 486 | .. |image4| image:: https://cloud.githubusercontent.com/assets/896692/23625282/7f2d79dc-025d-11e7-8728-d8924596f8fa.png 487 | .. |image5| image:: https://cloud.githubusercontent.com/assets/896692/23625283/80638760-025d-11e7-80a2-1d2779f7ccab.png 488 | .. |image6| image:: https://cloud.githubusercontent.com/assets/896692/23625229/45e049b6-025d-11e7-89cc-8a71cf89e713.png 489 | .. |image7| image:: https://cloud.githubusercontent.com/assets/896692/24430398/36f0e3f0-13cb-11e7-8258-4d0c9ce1e419.gif 490 | .. |known| image:: https://cloud.githubusercontent.com/assets/896692/23582466/8324810e-00df-11e7-82cf-41515eba704d.png 491 | .. |unknown| image:: https://cloud.githubusercontent.com/assets/896692/23582465/81f422f8-00df-11e7-8b0d-75364f641f58.png 492 | 493 | --------------------------------------------------------------------------------