├── .gitignore
├── LICENSE
├── Makefile
├── README.md
├── doc
│   ├── after_model_update.png
│   ├── after_update_info.png
│   ├── detection_ex1.png
│   ├── detection_mob.png
│   └── examples_capture.png
├── docker
│   ├── Dockerfile
│   ├── Dockerfile.gpu
│   ├── download.py
│   ├── download_vggace2.py
│   └── requirements.txt
├── notebooks
│   └── experiments_with_classification.ipynb
├── server
│   ├── server.py
│   ├── server.sh
│   └── static
│       ├── detect.html
│       ├── detect.js
│       └── local.js
├── tensorface
│   ├── __init__.py
│   ├── classifier.py
│   ├── const.py
│   ├── detection.py
│   ├── embedding.py
│   ├── model.py
│   ├── mtcnn.py
│   ├── recognition.py
│   └── recognition_sklearn.py
└── test
    ├── __init__.py
    ├── test_embedding.py
    ├── test_examples
    │   ├── faces
    │   │   ├── Bartek.png
    │   │   ├── CoverGirl.png
    │   │   ├── StudBoy.png
    │   │   ├── StudGirl.png
    │   │   ├── unknown.png
    │   │   └── unknown_1.png
    │   ├── test_1523794239.922389.json
    │   └── test_1523794239.922389.png
    └── train_examples
        ├── train_Bartek_160_10.png
        ├── train_CoverGirl_160_10.png
        ├── train_StudBoy_160_10.png
        └── train_StudGirl_160_10.png
/.gitignore:
--------------------------------------------------------------------------------
1 | .idea/
2 | conda_venv/
3 |
4 | # Rope project settings
5 | .ropeproject
6 |
7 | ### Vim ###
8 | # swap
9 | .sw[a-p]
10 | .*.sw[a-p]
11 | # session
12 | Session.vim
13 | # temporary
14 | .netrwhist
15 | *~
16 | # auto-generated tag files
17 | tags
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2018
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | NS := btwardow
2 | REPO := tf-face-recognition
3 | #VERSION := $(shell git describe)
4 | VERSION := 1.0.0
5 |
6 | .PHONY: build push
7 |
8 | build:
9 | docker build -t $(NS)/$(REPO):$(VERSION) -f docker/Dockerfile .
10 |
11 | push: build
12 | docker push $(NS)/$(REPO):$(VERSION)
13 |
14 | default: build
15 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | Real-Time Face Recognition and Detection using TensorFlow
2 | =================================================================
3 |
4 | The idea is to build an application for real-time face detection and
5 | recognition using TensorFlow and a laptop's webcam. The model for
6 | face prediction should be easy to update online to add new targets.
7 |
8 | ## Project assumptions
9 | - TensorFlow 1.7 and Python 3
10 | - **Everything should be dockerized and easy to reproduce!**
11 |
12 | ## How to run it?
13 |
14 | ### Run it right away from the pre-built image
15 |
16 | Just type:
17 |
18 | ```bash
19 | docker run -it --rm -p 5000:5000 btwardow/tf-face-recognition:1.0.0
20 | ```
21 |
22 | Then go to [https://localhost:5000/](https://localhost:5000/) or type
23 | it in your browser to get face detection (without recognition for now).
24 |
25 | _Note: HTTPS is required by many modern browsers to transfer video outside of localhost
26 | without enabling any unsafe settings in your browser._
27 |
28 |
29 | ### Building the Docker image
30 |
31 | Run the commands below from the project's root directory in order to:
32 |
33 | #### Create the Docker image
34 |
35 | Use the main target of the Makefile in the root directory:
36 |
37 | ```bash
38 | make
39 | ```
40 |
41 | #### Run the project
42 |
43 | ```bash
44 | docker run --rm -it -p 5000:5000 -v $(pwd):/workspace btwardow/tf-face-recognition:1.0.0
45 | ```
46 |
47 | This volume mapping is very convenient for development and testing purposes.
48 |
49 | To use GPU power, there is a dedicated [Dockerfile.gpu](./docker/Dockerfile.gpu).
50 |
51 | ### Run it without Docker (development)
52 |
53 | Running the application without Docker is useful for development. Below is a quick how-to for *nix environments.
54 |
55 | Create a virtual env (with Conda) and install the requirements:
56 |
57 | ```bash
58 | conda create -y -n face_recognition_36 python=3.6
59 | source activate face_recognition_36
60 | pip install -r docker/requirements.txt
61 | ```
62 |
63 | Download the pre-trained models:
64 | ```bash
65 | mkdir ~/pretrained_models
66 | cp docker/download*.py ~/pretrained_models
67 | cd ~/pretrained_models
68 | python download.py
69 | python download_vggace2.py
70 | ```
71 |
72 | The `~/pretrained_models` directory should look like this:
73 |
74 | ```bash
75 | (face_recognition_36) b.twardowski@172-16-170-27:~/pretrained_models » tree
76 | .
77 | ├── 20180402-114759
78 | │ ├── 20180402-114759.pb
79 | │ ├── model-20180402-114759.ckpt-275.data-00000-of-00001
80 | │ ├── model-20180402-114759.ckpt-275.index
81 | │ └── model-20180402-114759.meta
82 | ├── 20180402-114759.zip
83 | ├── det1.npy
84 | ├── det2.npy
85 | ├── det3.npy
86 | ├── download.py
87 | └── download_vggace2.py
88 | ```
89 |
90 | Then, to start the server, go to the `./server` directory and type:
91 | ```bash
92 | PYTHONPATH=".." python server.py
93 | ```
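
With the server up, you can also exercise the `/detect` endpoint directly from Python. This is a minimal sketch based on how `server/server.py` reads the request (a multipart `image` file plus an optional `threshold` form field); run it from the repo root so the bundled test image path resolves:

```python
import requests

# The server uses a self-signed ('adhoc') certificate, hence verify=False.
with open('test/test_examples/test_1523794239.922389.png', 'rb') as f:
    r = requests.post(
        'https://localhost:5000/detect',
        files={'image': f},
        data={'threshold': '0.5'},  # optional; the server defaults to 0.5
        verify=False,
    )
print(r.json())  # JSON list with one entry per detected face
```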
94 |
95 |
96 | ## Why make a web application for this?
97 |
98 | _Everything should be dockerized and easy to reproduce_. This makes things
99 | interesting even for a toy project from the computer vision area. Why?
100 |
101 | - building the model/playing around in Jupyter/Python - that's easy...
102 | - inference on data grabbed from the host's camera inside Docker - that's tricky!
103 |
104 | Why is it hard to grab data from a camera device inside Docker? You can read about it
105 | [here](https://apple.stackexchange.com/questions/265281/using-webcam-connected-to-macbook-inside-a-docker-container).
106 | The main reason: Docker is not built for such things, so it doesn't make life
107 | easier here. Of course, a few possibilities are mentioned, like streaming from
108 | the host MBP using `ffmpeg` or preparing a custom VirtualBox
109 | `boot2docker.iso` image and making the MBP [webcam pass
110 | through](https://www.virtualbox.org/manual/ch09.html#webcam-passthrough). But
111 | none of them sounds right. All require the additional effort of installing something
112 | from `brew` or configuring VirtualBox (assuming you have Docker installed on
113 | your macOS machine).
114 |
115 | The good side of having this as a web app is that **you can try it out on your mobile phone!**
116 | This is very convenient for testing and demos.
117 |
118 | 
119 |
120 |
121 | ## Face detection
122 |
123 | Face detection is done to find faces in the video and mark their boundaries. These are
124 | areas that can later be used for the face recognition task. To detect faces, a
125 | pre-trained MTCNN network is used.
126 |
127 | 
128 |
129 | ## Face recognition
130 |
131 | Face recognition uses embeddings from the VGGFace2 network plus a KNN model implemented in TensorFlow.
132 |
133 | In order to get your face recognized, a few examples have to be provided to the algorithm first (currently at least 10).
134 |
135 | When you see the application working and correctly detecting faces, just click the _**Capture Examples**_ button.
136 |
137 | **While capturing examples, there has to be a single face in the video!**
138 |
139 | 
140 |
141 | After 10 examples are collected, we can type the name of the person and upload them to the server.
142 |
143 | As a result, we see the current status of the classification examples:
144 |
145 | 
146 |
147 | 
148 |
149 | And from now on, the new person is recognized. In this example, it's _CoverGirl_.
150 |
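The `/train` endpoint can be scripted the same way. Here is a minimal sketch based on how `server/server.py` parses the request: it expects a sprite image together with the person's `name`, the number of examples `num`, and the crop `size` in pixels (as I read it, the sprite packs `num` face crops of `size` x `size`; the file below is one of the bundled training sprites):

```python
import requests

with open('test/train_examples/train_CoverGirl_160_10.png', 'rb') as f:
    r = requests.post(
        'https://localhost:5000/train',
        files={'image': f},
        data={'name': 'CoverGirl', 'num': '10', 'size': '160'},
        verify=False,  # self-signed certificate again
    )
print(r.json())  # current number of training examples per known person
```
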
151 | ### One more example
152 |
153 | 
154 |
155 |
156 | ### Running Jupyter Notebook and reproducing analysis
157 |
158 | If you are interested in the classification, please check out this [notebook](./notebooks/experiments_with_classification.ipynb), which explains in detail how it works (e.g. the threshold for recognition).
159 |
160 | You can run Jupyter Notebook from the Docker image, just type:
161 |
162 | ```bash
163 | docker run --rm -it -p 8888:8888 btwardow/tf-face-recognition:1.0.0 /run_jupyter.sh --allow-root
164 | ```
165 |
166 |
167 | ## TODOs
168 |
169 | - [x] face detection with a pre-trained MTCNN network
170 | - [x] training face recognition classifier (use pre-trained embedding + classifier) based on provided examples
171 | - [x] model updates directly from the browser
172 | - [ ] save & clear classification model from the browser
173 | - [ ] check if detection can be done faster and, if so, re-implement it (optimize MTCNN for inference?)
174 | - [ ] try porting it to TensorFlow.js (as skeptical as I am of crunching numbers in JavaScript...)
175 |
176 |
177 | ## Thanks
178 |
179 | Many thanks to the creators of the `facenet` project, which provides pre-trained models for VGGFace2. Great job!
180 |
--------------------------------------------------------------------------------
/doc/after_model_update.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/btwardow/tf-face-recognition/12c56c8a59cb9445508ad24448bc8e11a8cbc406/doc/after_model_update.png
--------------------------------------------------------------------------------
/doc/after_update_info.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/btwardow/tf-face-recognition/12c56c8a59cb9445508ad24448bc8e11a8cbc406/doc/after_update_info.png
--------------------------------------------------------------------------------
/doc/detection_ex1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/btwardow/tf-face-recognition/12c56c8a59cb9445508ad24448bc8e11a8cbc406/doc/detection_ex1.png
--------------------------------------------------------------------------------
/doc/detection_mob.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/btwardow/tf-face-recognition/12c56c8a59cb9445508ad24448bc8e11a8cbc406/doc/detection_mob.png
--------------------------------------------------------------------------------
/doc/examples_capture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/btwardow/tf-face-recognition/12c56c8a59cb9445508ad24448bc8e11a8cbc406/doc/examples_capture.png
--------------------------------------------------------------------------------
/docker/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM tensorflow/tensorflow:1.7.0-py3
2 |
3 | RUN apt-get update
4 | RUN apt-get install -y git libfontconfig1 libxrender1 libsm6 libxext6 apt-utils
5 | RUN apt-get clean
6 |
7 | RUN pip --version
8 | RUN pip install --upgrade pip
9 |
10 | COPY docker/requirements.txt /server-requirements.txt
11 | RUN pip install -r /server-requirements.txt
12 |
13 | RUN mkdir /root/pretrained_models
14 | COPY docker/download.py /root/pretrained_models/
15 | COPY docker/download_vggace2.py /root/pretrained_models/
16 | WORKDIR /root/pretrained_models
17 | RUN python download.py
18 | RUN python download_vggace2.py
19 | RUN ls -l
20 |
21 |
22 | COPY . /workspace
23 |
24 | WORKDIR /workspace/
25 | CMD cd server && ./server.sh
--------------------------------------------------------------------------------
/docker/Dockerfile.gpu:
--------------------------------------------------------------------------------
1 | FROM tensorflow/tensorflow:1.7.0-gpu-py3
2 |
3 | RUN apt-get update
4 | RUN apt-get install -y git libfontconfig1 libxrender1 libsm6 libxext6 apt-utils
5 | RUN apt-get clean
6 |
7 | RUN pip --version
8 | RUN pip install --upgrade pip
9 |
10 | COPY requirements.txt /server-requirements.txt
11 | RUN pip install -r /server-requirements.txt
12 |
13 | RUN mkdir /root/pretrained_models
14 | COPY download.py /root/pretrained_models/
15 | COPY download_vggace2.py /root/pretrained_models/
16 | WORKDIR /root/pretrained_models
17 | RUN python download.py
18 | RUN python download_vggace2.py
19 | RUN ls -l
20 |
21 | WORKDIR /workspace/
22 | CMD ./server/server.sh
23 |
--------------------------------------------------------------------------------
/docker/download.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 |
4 | if __name__ == '__main__':
5 |
6 | print("Downloading pretrained model for MTCNN...")
7 |
8 | for i in range(1, 4):
9 | f_name = 'det{}.npy'.format(i)
10 | print("Downloading: ", f_name)
11 | url = "https://github.com/davidsandberg/facenet/raw/" \
12 | "e9d4e8eca95829e5607236fa30a0556b40813f62/src/align/det{}.npy".format(i)
13 | session = requests.Session()
14 | response = session.get(url, stream=True)
15 |
16 | CHUNK_SIZE = 32768
17 |
18 | with open(f_name, "wb") as f:
19 | for chunk in response.iter_content(CHUNK_SIZE):
20 | if chunk: # filter out keep-alive new chunks
21 | f.write(chunk)
22 |
--------------------------------------------------------------------------------
/docker/download_vggace2.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import zipfile
3 | import os
4 |
5 | model_dict = {
6 | 'lfw-subset': '1B5BQUZuJO-paxdN8UclxeHAR1WnR_Tzi',
7 | '20170131-234652': '0B5MzpY9kBtDVSGM0RmVET2EwVEk',
8 | '20170216-091149': '0B5MzpY9kBtDVTGZjcWkzT3pldDA',
9 | '20170512-110547': '0B5MzpY9kBtDVZ2RpVDYwWmxoSUk',
10 | '20180402-114759': '1EXPBSXwTaqrSC0OhUdXNmKSh9qJUQ55-'
11 | }
12 |
13 |
14 | def download_and_extract_file(model_name, data_dir):
15 | file_id = model_dict[model_name]
16 | destination = os.path.join(data_dir, model_name + '.zip')
17 | if not os.path.exists(destination):
18 | print('Downloading file to %s' % destination)
19 | download_file_from_google_drive(file_id, destination)
20 | with zipfile.ZipFile(destination, 'r') as zip_ref:
21 | print('Extracting file to %s' % data_dir)
22 | zip_ref.extractall(data_dir)
23 |
24 |
25 | def download_file_from_google_drive(file_id, destination):
26 | URL = "https://drive.google.com/uc?export=download"
27 |
28 | session = requests.Session()
29 |
30 | response = session.get(URL, params={'id': file_id}, stream=True)
31 | token = get_confirm_token(response)
32 |
33 | if token:
34 | params = {'id': file_id, 'confirm': token}
35 | response = session.get(URL, params=params, stream=True)
36 |
37 | save_response_content(response, destination)
38 |
39 |
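# Google Drive responds with an HTML confirmation page instead of the file for
# large downloads; the 'download_warning' cookie carries the token needed to
# confirm the download on the second request.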
40 | def get_confirm_token(response):
41 | for key, value in response.cookies.items():
42 | if key.startswith('download_warning'):
43 | return value
44 |
45 | return None
46 |
47 |
48 | def save_response_content(response, destination):
49 | CHUNK_SIZE = 32768
50 |
51 | with open(destination, "wb") as f:
52 | for chunk in response.iter_content(CHUNK_SIZE):
53 | if chunk: # filter out keep-alive new chunks
54 | f.write(chunk)
55 |
56 |
57 | # download VGGFace2
58 | download_and_extract_file('20180402-114759', '.')
59 |
60 |
--------------------------------------------------------------------------------
/docker/requirements.txt:
--------------------------------------------------------------------------------
1 | #tensorflow==1.7.0
2 | #tensorflowjs
3 | #tensorflow-hub
4 | opencv-python
5 | matplotlib
6 | scikit-learn
7 |
8 | flask
9 | Pillow
10 | pyopenssl
11 | requests
12 |
13 | jupyter
--------------------------------------------------------------------------------
/server/server.py:
--------------------------------------------------------------------------------
1 | import json
2 | from time import time
3 |
4 | from PIL import Image
5 | from flask import Flask, request, Response
6 |
7 | # assuming that script is run from `server` dir
8 | import sys, os
9 | sys.path.append(os.path.realpath('..'))
10 |
11 | from tensorface import detection
12 | from tensorface.recognition import recognize, learn_from_examples
13 |
14 | # For test examples acquisition
15 | SAVE_DETECT_FILES = False
16 | SAVE_TRAIN_FILES = False
17 |
18 | app = Flask(__name__)
19 | app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0
20 |
21 |
22 | # for CORS
23 | @app.after_request
24 | def after_request(response):
25 | response.headers.add('Access-Control-Allow-Origin', '*')
26 | response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization')
27 | response.headers.add('Access-Control-Allow-Methods', 'GET,POST') # Put any other methods you need here
28 | return response
29 |
30 |
31 | @app.route('/')
32 | def index():
33 | return Response(open('./static/detect.html').read(), mimetype="text/html")
34 |
35 |
36 | @app.route('/detect', methods=['POST'])
37 | def detect():
38 | try:
39 | image_stream = request.files['image'] # get the image
40 | image = Image.open(image_stream)
41 |
42 | # Set an image confidence threshold value to limit returned data
43 | threshold = request.form.get('threshold')
44 | if threshold is None:
45 | threshold = 0.5
46 | else:
47 | threshold = float(threshold)
48 |
49 | faces = recognize(detection.get_faces(image, threshold))
50 |
51 | j = json.dumps([f.data() for f in faces])
52 | print("Result:", j)
53 |
54 | # save files
55 | if SAVE_DETECT_FILES and len(faces):
56 | id = time()
57 | with open('test_{}.json'.format(id), 'w') as f:
58 | f.write(j)
59 |
60 | image.save('test_{}.png'.format(id))
61 | for i, f in enumerate(faces):
62 | f.img.save('face_{}_{}.png'.format(id, i))
63 |
64 | return j
65 |
66 | except Exception as e:
67 | import traceback
68 | traceback.print_exc()
69 |         print('POST /detect error: %s' % e)
70 |         return str(e), 500
71 |
72 |
73 | @app.route('/train', methods=['POST'])
74 | def train():
75 | try:
76 | # image with sprites
77 | image_stream = request.files['image'] # get the image
78 | image_sprite = Image.open(image_stream)
79 |
80 | # forms data
81 | name = request.form.get('name')
82 | num = int(request.form.get('num'))
83 | size = int(request.form.get('size'))
84 |
85 | # save for debug purposes
86 | if SAVE_TRAIN_FILES:
87 | image_sprite.save('train_{}_{}_{}.png'.format(name, size, num))
88 |
89 | info = learn_from_examples(name, image_sprite, num, size)
90 | return json.dumps([{'name': n, 'train_examples': s} for n, s in info.items()])
91 |
92 | except Exception as e:
93 | import traceback
94 | traceback.print_exc()
95 |         print('POST /train error: %s' % e)
96 |         return str(e), 500
97 |
98 |
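# ssl_context='adhoc' serves the app over HTTPS with a generated self-signed
# certificate (this is why pyopenssl is in the requirements). Browsers require
# HTTPS to allow camera access from anywhere other than localhost.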
99 | if __name__ == '__main__':
100 | app.run(debug=False, host='0.0.0.0', ssl_context='adhoc')
101 | # app.run(host='0.0.0.0')
102 |
--------------------------------------------------------------------------------
/server/server.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | export PYTHONPATH='/workspace:/workspace/server'
4 | python server.py
5 |
--------------------------------------------------------------------------------
/server/static/detect.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |