├── .gitignore ├── README.md ├── Tensorflow - Verification.ipynb ├── app.py ├── in └── id-card.jpg ├── model ├── dsnt.py └── frozen_model.pb ├── ngrok ├── requirements.txt ├── results.png ├── static ├── dl.svg ├── script.js ├── selfie.svg └── style.css ├── templates └── index.html └── tmp ├── jupyter-cropped-warped.jpg ├── jupyter-final.jpg └── jupyter-results.jpg /.gitignore: -------------------------------------------------------------------------------- 1 | 2 | .DS_Store 3 | 4 | # Byte-compiled / optimized / DLL files 5 | __pycache__/ 6 | *.py[cod] 7 | *$py.class 8 | 9 | # C extensions 10 | *.so 11 | 12 | # Distribution / packaging 13 | .Python 14 | build/ 15 | develop-eggs/ 16 | dist/ 17 | downloads/ 18 | eggs/ 19 | .eggs/ 20 | lib/ 21 | lib64/ 22 | parts/ 23 | sdist/ 24 | var/ 25 | wheels/ 26 | pip-wheel-metadata/ 27 | share/python-wheels/ 28 | *.egg-info/ 29 | .installed.cfg 30 | *.egg 31 | MANIFEST 32 | 33 | # PyInstaller 34 | # Usually these files are written by a python script from a template 35 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 36 | *.manifest 37 | *.spec 38 | 39 | # Installer logs 40 | pip-log.txt 41 | pip-delete-this-directory.txt 42 | 43 | # Unit test / coverage reports 44 | htmlcov/ 45 | .tox/ 46 | .nox/ 47 | .coverage 48 | .coverage.* 49 | .cache 50 | nosetests.xml 51 | coverage.xml 52 | *.cover 53 | *.py,cover 54 | .hypothesis/ 55 | .pytest_cache/ 56 | cover/ 57 | 58 | # Translations 59 | *.mo 60 | *.pot 61 | 62 | # Django stuff: 63 | *.log 64 | local_settings.py 65 | db.sqlite3 66 | db.sqlite3-journal 67 | 68 | # Flask stuff: 69 | instance/ 70 | .webassets-cache 71 | 72 | # Scrapy stuff: 73 | .scrapy 74 | 75 | # Sphinx documentation 76 | docs/_build/ 77 | 78 | # PyBuilder 79 | target/ 80 | 81 | # Jupyter Notebook 82 | .ipynb_checkpoints 83 | 84 | # IPython 85 | profile_default/ 86 | ipython_config.py 87 | 88 | # pyenv 89 | # For a library or package, you might want to ignore these files since the code is 90 | # intended to run in multiple environments; otherwise, check them in: 91 | # .python-version 92 | 93 | # pipenv 94 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 95 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 96 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 97 | # install all needed dependencies. 98 | #Pipfile.lock 99 | 100 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow 101 | __pypackages__/ 102 | 103 | # Celery stuff 104 | celerybeat-schedule 105 | celerybeat.pid 106 | 107 | # SageMath parsed files 108 | *.sage.py 109 | 110 | # Environments 111 | .env 112 | .venv 113 | env/ 114 | venv/ 115 | ENV/ 116 | env.bak/ 117 | venv.bak/ 118 | 119 | # Spyder project settings 120 | .spyderproject 121 | .spyproject 122 | 123 | # Rope project settings 124 | .ropeproject 125 | 126 | # mkdocs documentation 127 | /site 128 | 129 | # mypy 130 | .mypy_cache/ 131 | .dmypy.json 132 | dmypy.json 133 | 134 | # Pyre type checker 135 | .pyre/ 136 | 137 | # pytype static type analyzer 138 | .pytype/ 139 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Machine Learning Driven Identity Verification 2 | 3 | This repo contains a Jupyter Notebook that utilizes a Tensorflow Model to identity and smartly crop an Identity Card, along with an accompanying Flask app. 
The model was trained using a specially formatted version of the MIDV-500 dataset in which the images were converted from TIF to JPG and the extraneous metadata was removed from the annotations. The model also works for isolating and cropping photos of paper receipts, and potentially other roughly rectangular documents. To learn how to train the model, see the [KYC-train-model](https://github.com/getcontrol/KYC-train-model/) repo. 4 | 5 | # Installation Instructions 6 | 7 | ### Jupyter Notebook 8 | 9 | 1. Create and activate a Python 3 virtual environment. 10 | 11 | ```python3 -m venv env``` 12 | 13 | ```source env/bin/activate``` 14 | 15 | 2. Install the requirements. 16 | 17 | ```pip install -r requirements.txt``` 18 | 19 | 3. Start Jupyter Notebook. 20 | 21 | ```ipython notebook --ip=127.0.0.1``` 22 | 23 | 4. Open & run [Tensorflow - Verification.ipynb](https://github.com/getcontrol/tensorflow-verification/blob/master/Tensorflow%20-%20Verification.ipynb) 24 | 25 | The mechanics of the Tensorflow model and the OpenCV transforms are documented inline. 26 | 27 | ### Flask App 28 | The Flask app demonstrates uploading the identity document and applying the relevant OpenCV pre-processing steps. The machine learning model is optimized for images acquired with an Android phone (e.g. a Samsung). Use ngrok to share the localhost URL for mobile browser testing. 29 | 30 | Notes: 31 | 32 | The Step 1 form and the Step 3 selfie are not fully functional yet. 33 | 34 | You must take a photo of the ID card in Step 2 and a selfie in Step 3, or the app will break. 35 | 36 | app.py behaves very similarly to the Jupyter Notebook, with a few exceptions documented as comments in the code. 37 | 38 | 1. Create and activate a Python 3 virtual environment. 39 | 40 | ```python3 -m venv env``` 41 | 42 | ```source env/bin/activate``` 43 | 44 | 2. Install the requirements. 45 | 46 | ```pip install -r requirements.txt``` 47 | 48 | 3. Start the Flask app. 49 | 50 | ```python app.py``` 51 | 52 | 4. In a separate terminal tab, start ngrok. 53 | 54 | ```./ngrok http 5000``` 55 | 56 | 5. Test the ngrok URL in a mobile browser. The final ID image is saved to tmp/app-FINAL.jpg. To exercise the upload endpoint without a phone, see the curl example below. 57 |
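For quick testing without a phone, you can also POST the two images directly to the `/uploader` route; the form field names `file` (ID card) and `photo` (selfie) come from `app.py`. The command below is only a sketch: `selfie.jpg` is a placeholder for any selfie image on disk, and the ID photo should be oriented like a portrait phone capture, since `app.py` rotates it 270 degrees before processing.

```
curl -X POST -F "file=@in/id-card.jpg" -F "photo=@selfie.jpg" http://127.0.0.1:5000/uploader
```

On success the route returns `OK` and writes the intermediate and final crops to the `tmp/` folder (e.g. `tmp/app-FINAL.jpg`).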
58 | ![Results](https://github.com/getcontrol/KYC-tensorflow/blob/master/results.png) 59 | 60 | # TODO 61 | 62 | 1. Connect Step 1 form to database 63 | 2. OCR ID Card and save JSON results to database 64 | 3. Compare ID Card image with selfie and save JSON results to database 65 | 4. Compare form submission and OCR results. 66 | 67 | 68 | ### Citation 69 | Please cite this paper if you use the MIDV-500 dataset; the link to the dataset is provided in the paper. 70 | 71 | @article{DBLP:journals/corr/abs-1807-05786, 72 | author = {Vladimir V. Arlazarov and 73 | Konstantin Bulatov and 74 | Timofey S. Chernov and 75 | Vladimir L. Arlazarov}, 76 | title = {{MIDV-500:} {A} Dataset for Identity Documents Analysis and Recognition 77 | on Mobile Devices in Video Stream}, 78 | journal = {CoRR}, 79 | volume = {abs/1807.05786}, 80 | year = {2018}, 81 | url = {http://arxiv.org/abs/1807.05786}, 82 | archivePrefix = {arXiv}, 83 | eprint = {1807.05786}, 84 | timestamp = {Mon, 13 Aug 2018 16:46:35 +0200}, 85 | biburl = {https://dblp.org/rec/bib/journals/corr/abs-1807-05786}, 86 | bibsource = {dblp computer science bibliography, https://dblp.org} 87 | } 88 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, render_template, request, redirect, url_for 2 | from PIL import Image 3 | from PIL import ImageOps 4 | import numpy as np 5 | from keras.models import load_model 6 | import tensorflow as tf 7 | from model.dsnt import dsnt 8 | from PIL import Image, ImageDraw 9 | import cv2 10 | import os 11 | import requests 12 | import json 13 | import io 14 | from matplotlib import pyplot as plt 15 | 16 | 17 | app = Flask(__name__, template_folder='templates') 18 | 19 | #load the frozen tensorflow graph from disk 20 | def _load_graph(frozen_graph_filename): 21 | """Load a frozen TensorFlow graph from a .pb file and return it as a tf.Graph.""" 22 | 23 | with tf.gfile.GFile(frozen_graph_filename, "rb") as f: 24 | graph_def = tf.GraphDef() 25 | graph_def.ParseFromString(f.read()) 26 | with tf.Graph().as_default() as graph: 27 | tf.import_graph_def(graph_def, name="") 28 | return graph 29 | 30 | def init(): 31 | global model 32 | # the frozen TensorFlow graph is loaded per request in upload_image_file below 33 | freeze_file_name = "model/frozen_model.pb" 34 | 35 | 36 | 37 | @app.route('/') 38 | def upload_file(): 39 | return render_template('index.html') 40 | 41 | 42 | 43 | @app.route('/uploader', methods = ['POST']) 44 | def upload_image_file(): 45 | if request.method == 'POST': 46 | #specify frozen model location and map of keypoints 47 | freeze_file_name = "model/frozen_model.pb" 48 | graph = _load_graph(freeze_file_name) 49 | sess = tf.Session(graph=graph) 50 | inputs = graph.get_tensor_by_name('input:0') 51 | activation_map = graph.get_tensor_by_name("heats_map_regression/pred_keypoints/BiasAdd:0") 52 | 53 | hm1, kp1 = dsnt(activation_map[...,0]) 54 | hm2, kp2 = dsnt(activation_map[...,1]) 55 | hm3, kp3 = dsnt(activation_map[...,2]) 56 | hm4, kp4 = dsnt(activation_map[...,3]) 57 | 58 | #read selfie image from the filestream, save to memory, decode for OpenCV processing 59 | photo = request.files['photo'] 60 | in_memory_file = io.BytesIO() 61 | photo.save(in_memory_file) 62 | data = np.frombuffer(in_memory_file.getvalue(), dtype=np.uint8) 63 | color_image_flag = 1 64 | #save decoded selfie image to tmp folder 65 | img_self = cv2.imdecode(data, color_image_flag) 66 | cv2.imwrite('tmp/app-img_self.jpg', img_self) 67 | 68 | #read id card image from the filestream, convert to numpy array, rotate 270 degrees to correct capture from phone 69 | img = np.array(Image.open(request.files['file'].stream).rotate(270,expand=True)) 70 | #save the uploaded, rotated image to tmp file 71 | cv2.imwrite('tmp/app-original.jpg', img) 72 | 73 | #read image and get the pre-resize factors 74 | img = cv2.imread('tmp/app-original.jpg') 75 | rw = img.shape[1] / 600 76 | rh = img.shape[0] / 800 77 | print(rw, rh) 78 | 79 | #read the same image, convert to numpy array, rotate 270 degrees, and this time resize to 600x800 to prepare for feeding into the TensorFlow model 80 | img_nd = np.array(Image.open(request.files['file'].stream).rotate(270,expand=True).resize((600,
800))) 81 | 82 | #map image to trained keypoints 83 | hm1_nd,hm2_nd,hm3_nd,hm4_nd,kp1_nd,kp2_nd,kp3_nd,kp4_nd = sess.run([hm1,hm2,hm3,hm4,kp1,kp2,kp3,kp4], feed_dict={inputs: np.expand_dims(img_nd, 0)}) 84 | 85 | keypoints_nd = np.array([kp1_nd[0],kp2_nd[0],kp3_nd[0],kp4_nd[0]]) 86 | keypoints_nd = ((keypoints_nd+1)/2 * np.array([600, 800])).astype('int') 87 | 88 | x1 = (keypoints_nd[0,0] + keypoints_nd[2,0])/2.0 89 | y1 = (keypoints_nd[0,1] + keypoints_nd[1,1])/2.0 90 | x2 = (keypoints_nd[1,0] + keypoints_nd[3,0])/2.0 91 | y2 = (keypoints_nd[2,1] + keypoints_nd[3,1])/2.0 92 | 93 | new_kp1, new_kp2, new_kp3, new_kp4 = np.array([x1,y1]),np.array([x2,y1]),np.array([x1,y2]),np.array([x2,y2]) 94 | 95 | src_pts = keypoints_nd.astype('float32') 96 | dst_pts = np.array([new_kp1, new_kp2, new_kp3, new_kp4]).astype('float32') 97 | 98 | # get the transform matrix 99 | img_nd_bgr = cv2.cvtColor(img_nd, cv2.COLOR_RGB2BGR) 100 | H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0) 101 | resize_factor = img_nd_bgr.shape[1] * 1.0 / (x2-x1) 102 | height, width = img_nd_bgr.shape[:2] 103 | new_height, new_width = int(height*resize_factor), int(width*resize_factor) 104 | transformed_image = cv2.warpPerspective(img_nd_bgr, H, (new_width, new_height))#tuple(reversed(img_nd_bgr.shape[:2])) 105 | 106 | cropped_image = transformed_image[int(y1):int(y2), int(x1):int(x2), :] 107 | 108 | # draw keypoints for smart crop 109 | cv2.line(img_nd_bgr, tuple(keypoints_nd[0]), tuple(keypoints_nd[1]), [0,255,0], 3) 110 | cv2.line(img_nd_bgr, tuple(keypoints_nd[1]), tuple(keypoints_nd[3]), [0,255,0], 3) 111 | cv2.line(img_nd_bgr, tuple(keypoints_nd[3]), tuple(keypoints_nd[2]), [0,255,0], 3) 112 | cv2.line(img_nd_bgr, tuple(keypoints_nd[2]), tuple(keypoints_nd[0]), [0,255,0], 3) 113 | 114 | hm = hm1_nd[0]+hm2_nd[0]+hm3_nd[0]+hm4_nd[0] 115 | hm *= 255 116 | hm = hm.astype('uint8') 117 | color_heatmap = cv2.applyColorMap(hm, cv2.COLORMAP_JET) 118 | color_heatmap = cv2.resize(color_heatmap, (600, 800)) 119 | 120 | #get smart cropped image dimensions and revert the resize factor applied previously 121 | cv2.imwrite('tmp/app-cropped-warped.jpg',cropped_image) 122 | print(cropped_image.shape) 123 | dim = (int(cropped_image.shape[1] * rw), int(cropped_image.shape[0] * rh)) 124 | final = cv2.resize(cropped_image, dim, interpolation = cv2.INTER_AREA) 125 | cv2.imwrite('tmp/app-FINAL.jpg',final) 126 | 127 | return 'OK' 128 | 129 | if __name__ == '__main__': 130 | print(("* Loading TensorFlow model and starting Flask server..." 131 | "please wait until the server has fully started")) 132 | init() 133 | app.run(debug = True) 134 | -------------------------------------------------------------------------------- /in/id-card.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/getcontrol/KYC-tensorflow/e0f59a07882a5661c2aa9d2cd927e81dface52f9/in/id-card.jpg -------------------------------------------------------------------------------- /model/dsnt.py: -------------------------------------------------------------------------------- 1 | ''' 2 | A Tensorflow implementation of the DSNT layer, as taken from the paper "Numerical Coordinate 3 | Regression with Convolutional Neural Networks" 4 | ''' 5 | 6 | import tensorflow as tf 7 | 8 | def dsnt(inputs, method='softmax'): 9 | ''' 10 | Differentiable Spatial to Numerical Transform, as taken from the paper "Numerical Coordinate 11 | Regression with Convolutional Neural Networks" 12 | Arguments: 13 | inputs - The learnt heatmap.
A 3d tensor of shape [batch, height, width] 14 | method - A string representing the normalisation method. See `_normalise_heatmap` for available methods 15 | Returns: 16 | norm_heatmap - The given heatmap with normalisation/rectification applied 17 | coords_zipped - A tensor of shape [batch, 2] containing the [x, y] coordinate pairs 18 | ''' 19 | # Rectify and reshape inputs 20 | norm_heatmap = _normalise_heatmap(inputs, method) 21 | 22 | batch_count = tf.shape(norm_heatmap)[0] 23 | height = tf.shape(norm_heatmap)[1] 24 | width = tf.shape(norm_heatmap)[2] 25 | 26 | # Build the DSNT x, y matrices 27 | dsnt_x = tf.tile([[(2 * tf.range(1, width+1) - (width + 1)) / width]], [batch_count, height, 1]) 28 | dsnt_x = tf.cast(dsnt_x, tf.float32) 29 | dsnt_y = tf.tile([[(2 * tf.range(1, height+1) - (height + 1)) / height]], [batch_count, width, 1]) 30 | dsnt_y = tf.cast(tf.transpose(dsnt_y, perm=[0, 2, 1]), tf.float32) 31 | 32 | # Compute the Frobenius inner product 33 | outputs_x = tf.reduce_sum(tf.multiply(norm_heatmap, dsnt_x), axis=[1, 2]) 34 | outputs_y = tf.reduce_sum(tf.multiply(norm_heatmap, dsnt_y), axis=[1, 2]) 35 | 36 | # Zip into [x, y] pairs 37 | coords_zipped = tf.stack([outputs_x, outputs_y], axis=1) 38 | return norm_heatmap,coords_zipped 39 | 40 | def js_reg_loss(heatmaps, centres, fwhm=1): 41 | ''' 42 | Calculates and returns the average Jensen-Shannon divergence between heatmaps and target Gaussians. 43 | Arguments: 44 | heatmaps - Heatmaps generated by the model 45 | centres - Centres of the target Gaussians (in normalized units) 46 | fwhm - Full-width-half-maximum for the drawn Gaussians, which can be thought of as a radius. 47 | ''' 48 | gauss = _make_gaussians(centres, tf.shape(heatmaps)[1], tf.shape(heatmaps)[2], fwhm) 49 | divergences = _js_2d(heatmaps, gauss) 50 | return tf.reduce_mean(divergences) 51 | 52 | 53 | def _normalise_heatmap(inputs, method='softmax'): 54 | ''' 55 | Applies the chosen normalisation/rectification method to the input tensor 56 | Arguments: 57 | inputs - A 4d tensor of shape [batch, height, width, 1] (the learnt heatmap) 58 | method - A string representing the normalisation method. 
One of those shown below 59 | ''' 60 | # Remove the final dimension as it's of size 1 61 | inputs = tf.reshape(inputs, tf.shape(inputs)[:3]) 62 | 63 | # Normalise the values such that the values sum to one for each heatmap 64 | normalise = lambda x: tf.div(x, tf.reshape(tf.reduce_sum(x, [1, 2]), [-1, 1, 1])) 65 | 66 | # Perform rectification 67 | if method == 'softmax': 68 | inputs = _softmax2d(inputs, axes=[1, 2]) 69 | elif method == 'abs': 70 | inputs = tf.abs(inputs) 71 | inputs = normalise(inputs) 72 | elif method == 'relu': 73 | inputs = tf.nn.relu(inputs) 74 | inputs = normalise(inputs) 75 | elif method == 'sigmoid': 76 | inputs = tf.nn.sigmoid(inputs) 77 | inputs = normalise(inputs) 78 | else: 79 | msg = "Unknown rectification method \"{}\"".format(method) 80 | raise ValueError(msg) 81 | return inputs 82 | 83 | def _kl_2d(p, q, eps=1e-24): 84 | unsummed_kl = p * (tf.log(p + eps) - tf.log(q + eps)) 85 | kl_values = tf.reduce_sum(unsummed_kl, [-1, -2]) 86 | return kl_values 87 | 88 | def _js_2d(p, q, eps=1e-24): 89 | m = 0.5 * (p + q) 90 | return 0.5 * _kl_2d(p, m, eps) + 0.5 * _kl_2d(q, m, eps) 91 | 92 | def _softmax2d(target, axes): 93 | ''' 94 | A softmax implementation which can operate across more than one axis - as this isn't 95 | provided by Tensorflow 96 | Arguments: 97 | target - The tensor on which to apply softmax 98 | axes - An integer or list of integers across which to apply softmax 99 | ''' 100 | max_axis = tf.reduce_max(target, axes, keepdims=True) 101 | target_exp = tf.exp(target-max_axis) 102 | normalize = tf.reduce_sum(target_exp, axes, keepdims=True) 103 | softmax = target_exp / normalize 104 | return softmax 105 | 106 | def _make_gaussian(size, centre, fwhm=1): 107 | ''' 108 | Makes a rectangular gaussian kernel. 109 | Arguments: 110 | size - A 2d tensor representing [height, width] 111 | centre - Pair of (normalised [0, 1]) x, y coordinates 112 | fwhm - Full-width-half-maximum, which can be thought of as a radius. 113 | ''' 114 | # Scale the normalised coordinates to be relative to the size of the frame 115 | centre = [centre[0] * tf.cast(size[1], tf.float32), 116 | centre[1] * tf.cast(size[0], tf.float32)] 117 | # Find the largest side, as we build a square and crop to desired size 118 | square_size = tf.cast(tf.reduce_max(size), tf.float32) 119 | 120 | x = tf.range(0, square_size, 1, dtype=tf.float32) 121 | y = x[:,tf.newaxis] 122 | x0 = centre[0] - 0.5 123 | y0 = centre[1] - 0.5 124 | unnorm = tf.exp(-4*tf.log(2.) * ((x-x0)**2 + (y-y0)**2) / fwhm**2)[:size[0],:size[1]] 125 | norm = unnorm / tf.reduce_sum(unnorm) 126 | return norm 127 | 128 | def _make_gaussians(centres_in, height, width, fwhm=1): 129 | ''' 130 | Makes a batch of gaussians. Size of images designated by height, width; number of images 131 | designated by length of the 1st dimension of centres_in 132 | Arguments: 133 | centres_in - The normalised coordinate centres of the gaussians of shape [batch, x, y] 134 | height - The desired height of the produced gaussian image 135 | width - The desired width of the produced gaussian image 136 | fwhm - Full-width-half-maximum, which can be thought of as a radius.
137 | ''' 138 | def cond(centres, heatmaps): 139 | return tf.greater(tf.shape(centres)[0], 0) 140 | 141 | def body(centres, heatmaps): 142 | curr = centres[0] 143 | centres = centres[1:] 144 | new_heatmap = _make_gaussian([height, width], curr, fwhm) 145 | new_heatmap = tf.reshape(new_heatmap, [-1]) 146 | 147 | heatmaps = tf.concat([heatmaps, new_heatmap], 0) 148 | return [centres, heatmaps] 149 | 150 | # Produce 1 heatmap per coordinate pair, build a list of heatmaps 151 | _, heatmaps_out = tf.while_loop(cond, 152 | body, 153 | [centres_in, tf.constant([])], 154 | shape_invariants=[tf.TensorShape([None, 2]), tf.TensorShape([None])]) 155 | heatmaps_out = tf.reshape(heatmaps_out, [-1, height, width]) 156 | return heatmaps_out 157 | -------------------------------------------------------------------------------- /model/frozen_model.pb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/getcontrol/KYC-tensorflow/e0f59a07882a5661c2aa9d2cd927e81dface52f9/model/frozen_model.pb -------------------------------------------------------------------------------- /ngrok: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/getcontrol/KYC-tensorflow/e0f59a07882a5661c2aa9d2cd927e81dface52f9/ngrok -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | absl-py==0.9.0 2 | astor==0.8.1 3 | cachetools==4.0.0 4 | certifi==2019.11.28 5 | chardet==3.0.4 6 | Click==7.0 7 | cycler==0.10.0 8 | Flask==1.0.3 9 | gast==0.2.2 10 | grpcio==1.26.0 11 | h5py==2.10.0 12 | idna==2.8 13 | itsdangerous==1.1.0 14 | Jinja2==2.10.3 15 | Keras==2.2.4 16 | Keras-Applications==1.0.8 17 | Keras-Preprocessing==1.1.0 18 | kiwisolver==1.1.0 19 | Markdown==3.1.1 20 | MarkupSafe==1.1.1 21 | matplotlib==3.1.2 22 | mock==3.0.5 23 | numpy==1.16.2 24 | oauthlib==3.1.0 25 | opencv-python==4.1.2.30 26 | opt-einsum==3.1.0 27 | Pillow==6.2.0 28 | protobuf==3.11.2 29 | pyasn1==0.4.8 30 | pyasn1-modules==0.2.7 31 | pyparsing==2.4.6 32 | python-dateutil==2.8.1 33 | PyYAML==5.2 34 | requests==2.22.0 35 | requests-oauthlib==1.3.0 36 | rsa==4.0 37 | scipy==1.4.1 38 | six==1.13.0 39 | tensorboard==1.14.0 40 | tensorflow==1.14.0 41 | tensorflow-estimator==1.14.0 42 | termcolor==1.1.0 43 | urllib3==1.25.7 44 | Werkzeug==0.16.0 45 | wrapt==1.11.2 46 | -------------------------------------------------------------------------------- /results.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/getcontrol/KYC-tensorflow/e0f59a07882a5661c2aa9d2cd927e81dface52f9/results.png -------------------------------------------------------------------------------- /static/dl.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | DL 5 | Created with Sketch. 
6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | DRIVER LICENSE 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | -------------------------------------------------------------------------------- /static/script.js: -------------------------------------------------------------------------------- 1 | //jQuery time 2 | var current_fs, next_fs, previous_fs; //fieldsets 3 | var left, opacity, scale; //fieldset properties which we will animate 4 | var animating; //flag to prevent quick multi-click glitches 5 | 6 | $(".next").click(function(){ 7 | if(animating) return false; 8 | animating = true; 9 | 10 | current_fs = $(this).parent(); 11 | next_fs = $(this).parent().next(); 12 | 13 | //activate next step on progressbar using the index of next_fs 14 | $("#progressbar li").eq($("fieldset").index(next_fs)).addClass("active"); 15 | 16 | //show the next fieldset 17 | next_fs.show(); 18 | //hide the current fieldset with style 19 | current_fs.animate({opacity: 0}, { 20 | step: function(now, mx) { 21 | //as the opacity of current_fs reduces to 0 - stored in "now" 22 | //1. scale current_fs down to 80% 23 | scale = 1 - (1 - now) * 0.2; 24 | //2. bring next_fs from the right(50%) 25 | left = (now * 50)+"%"; 26 | //3. increase opacity of next_fs to 1 as it moves in 27 | opacity = 1 - now; 28 | current_fs.css({ 29 | 'transform': 'scale('+scale+')', 30 | 'position': 'absolute' 31 | }); 32 | next_fs.css({'left': left, 'opacity': opacity}); 33 | }, 34 | duration: 800, 35 | complete: function(){ 36 | current_fs.hide(); 37 | animating = false; 38 | }, 39 | //this comes from the custom easing plugin 40 | easing: 'easeInOutBack' 41 | }); 42 | }); 43 | 44 | $(".previous").click(function(){ 45 | if(animating) return false; 46 | animating = true; 47 | 48 | current_fs = $(this).parent(); 49 | previous_fs = $(this).parent().prev(); 50 | 51 | 52 | $(document).ready(function(e) { 53 | $(".showonhover").click(function(){ 54 | $("#selectfile").trigger('click'); 55 | }); 56 | }); 57 | 58 | 59 | var input = document.querySelector('input[type=file]'); // see Example 4 60 | 61 | input.onchange = function () { 62 | var file = input.files[0]; 63 | 64 | drawOnCanvas(file); // see Example 6 65 | displayAsImage(file); // see Example 7 66 | }; 67 | 68 | function drawOnCanvas(file) { 69 | var reader = new FileReader(); 70 | 71 | reader.onload = function (e) { 72 | var dataURL = e.target.result, 73 | c = document.querySelector('canvas'), // see Example 4 74 | ctx = c.getContext('2d'), 75 | img = new Image(); 76 | 77 | img.onload = function() { 78 | c.width = img.width; 79 | c.height = img.height; 80 | ctx.drawImage(img, 0, 0); 81 | }; 82 | 83 | img.src = dataURL; 84 | }; 85 | 86 | reader.readAsDataURL(file); 87 | } 88 | 89 | function displayAsImage(file) { 90 | var imgURL = URL.createObjectURL(file), 91 | img = document.createElement('img'); 92 | 93 | img.onload = function() { 94 | URL.revokeObjectURL(imgURL); 95 | }; 96 | 97 | img.src = imgURL; 98 | document.body.appendChild(img); 99 | } 100 | 101 | $("#upfile1").click(function () { 102 | $("#file1").trigger('click'); 103 | }); 104 | $("#upfile2").click(function () { 105 | $("#file2").trigger('click'); 106 | }); 107 | $("#upfile3").click(function () { 108 | $("#file3").trigger('click'); 109 | }); 110 | 111 | //de-activate current step on progressbar 112 | $("#progressbar li").eq($("fieldset").index(current_fs)).removeClass("active"); 113 | 114 | //show the previous fieldset 115 | previous_fs.show(); 116 | //hide the current fieldset with style 117 | 
current_fs.animate({opacity: 0}, { 118 | step: function(now, mx) { 119 | //as the opacity of current_fs reduces to 0 - stored in "now" 120 | //1. scale previous_fs from 80% to 100% 121 | scale = 0.8 + (1 - now) * 0.2; 122 | //2. take current_fs to the right(50%) - from 0% 123 | left = ((1-now) * 50)+"%"; 124 | //3. increase opacity of previous_fs to 1 as it moves in 125 | opacity = 1 - now; 126 | current_fs.css({'left': left}); 127 | previous_fs.css({'transform': 'scale('+scale+')', 'opacity': opacity}); 128 | }, 129 | duration: 800, 130 | complete: function(){ 131 | current_fs.hide(); 132 | animating = false; 133 | }, 134 | //this comes from the custom easing plugin 135 | easing: 'easeInOutBack' 136 | }); 137 | }); 138 | 139 | $(".submit").click(function(){ 140 | return false; 141 | }) -------------------------------------------------------------------------------- /static/selfie.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Selfie 5 | Created with Sketch. 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | -------------------------------------------------------------------------------- /static/style.css: -------------------------------------------------------------------------------- 1 | html, 2 | body { 3 | height: 100%; 4 | margin: 0; 5 | padding: 0; 6 | } 7 | 8 | body { 9 | font-family: 'helvetica', sans-serif; 10 | font-weight: light; 11 | background: linear-gradient(to top, #4f6072, #8699aa); 12 | display: flex; 13 | justify-content: center; 14 | align-items: center; 15 | color: #7F8FA4; 16 | } 17 | 18 | .upload { 19 | position: relative; 20 | width: 350px; 21 | min-height: 500px; 22 | max-height: 500px; 23 | box-sizing: border-box; 24 | border-radius: 10px; 25 | box-shadow: 0 12px 28px 0 rgba(0,0,0,0.50); 26 | background: #101519; 27 | -webkit-animation: fadeup .5s .5s ease both; 28 | animation: fadeup .5s .5s ease both; 29 | -webkit-transform: translateY(20px); 30 | transform: translateY(20px); 31 | opacity: 0; 32 | } 33 | 34 | .upload .upload-files header { 35 | background: #000000; 36 | border-top-left-radius: 5px; 37 | border-top-right-radius: 5px; 38 | text-align: center; 39 | margin-top: 10; 40 | padding: 25px 0; 41 | } 42 | 43 | .upload .upload-files .body { 44 | text-align: center; 45 | padding: 50px 0; 46 | padding-bottom: 30px; 47 | } 48 | 49 | .upload .upload-files .body i { 50 | font-size: 65px; 51 | } 52 | 53 | .upload-btn-wrapper { 54 | position: relative; 55 | overflow: hidden; 56 | display: inline-block; 57 | } 58 | 59 | .btn { 60 | border: 2px solid #ffffff; 61 | color: #ffffff; 62 | background-color: #000000; 63 | padding: 20px 20px; 64 | border-radius: 70px; 65 | font-size: 75px; 66 | font-weight: bold; 67 | } 68 | 69 | .upload-btn-wrapper input[type=file] { 70 | font-size: 100px; 71 | position: absolute; 72 | left: 0; 73 | top: 0; 74 | opacity: 0; 75 | } 76 | 77 | .upload .upload-files .body p { 78 | font-size: 14px; 79 | padding-top: 15px; 80 | line-height: 1.4; 81 | bottom: 0; 82 | position: ab; 83 | color: #ffffff; 84 | } 85 | 86 | .upload .upload-files { 87 | background: #000000; 88 | border-bottom-left-radius: 5px; 89 | border-bottom-right-radius: 5px; 90 | padding: 20px 0; 91 | text-align: center; 92 | position: absolute; 93 | bottom: 0; 94 | width: 100%; 95 | height: 5.0rem; 96 | } 97 | 98 | imagediv { 99 | float: left; 100 | margin-top: 50px; 101 | } 102 | 103 | .imagediv .showonhover { 104 | background: red; 105 | padding: 20px; 106 | opacity: 0.9; 107 | color: white; 108 | width: 100%; 
109 | display: block; 110 | text-align: center; 111 | cursor: pointer; 112 | } 113 | 114 | /*progressbar*/ 115 | #progressbar { 116 | overflow: hidden; 117 | 118 | /*CSS counters to number the steps*/ 119 | counter-reset: step; 120 | } 121 | 122 | #progressbar li { 123 | margin-top: 20px; 124 | margin-bottom: 20px; 125 | list-style-type: none; 126 | color: #7f8fa4; 127 | text-transform: uppercase; 128 | font-size: 9px; 129 | width: 25%; 130 | float: left; 131 | position: relative; 132 | letter-spacing: 1px; 133 | } 134 | 135 | #progressbar li:before { 136 | content: counter(step); 137 | counter-increment: step; 138 | width: 24px; 139 | height: 24px; 140 | line-height: 26px; 141 | display: block; 142 | font-size: 12px; 143 | color: #333; 144 | background: white; 145 | border-radius: 25px; 146 | margin: 0 auto 10px auto; 147 | } 148 | 149 | /*progressbar connectors*/ 150 | #progressbar li:after { 151 | content: ''; 152 | width: 100%; 153 | height: 2px; 154 | background: white; 155 | position: absolute; 156 | left: -50%; 157 | top: 9px; 158 | z-index: -1; /*put it behind the numbers*/ 159 | } 160 | 161 | #progressbar li:first-child:after { 162 | /*connector not needed before the first step*/ 163 | content: none; 164 | } 165 | 166 | /*marking active/completed steps green*/ 167 | /*The number of the step and the connector before it = green*/ 168 | #progressbar li.active:before, 169 | #progressbar li.active:after { 170 | background: #ee0979; 171 | color: white; 172 | } 173 | 174 | /*form styles*/ 175 | #msform { 176 | text-align: center; 177 | position: relative; 178 | } 179 | 180 | #msform fieldset { 181 | background: rgba(0, 0, 0, 0); 182 | border: 0 none; 183 | border-radius: 0px; 184 | 185 | /*box-shadow: 0 0 15px 1px rgba(0, 0, 0, 0.4)*/ 186 | padding: 5px; 187 | box-sizing: border-box; 188 | width: 100%; 189 | 190 | /*stacking fieldsets above each other*/ 191 | position: relative; 192 | } 193 | 194 | /*Hide all except first fieldset*/ 195 | #msform fieldset:not(:first-of-type) { 196 | display: none; 197 | } 198 | 199 | /*inputs*/ 200 | #msform input, 201 | #msform textarea { 202 | padding: 10px; 203 | border: 1px solid #273142; 204 | border-radius: 5px; 205 | margin-bottom: 10px; 206 | width: 70%; 207 | box-sizing: border-box; 208 | font-family: arial; 209 | background-color: #222c3c; 210 | color: #7f8fa4; 211 | font-size: 11px; 212 | } 213 | 214 | #msform input:focus, 215 | #msform textarea:focus { 216 | -moz-box-shadow: none !important; 217 | -webkit-box-shadow: none !important; 218 | box-shadow: none !important; 219 | border: .5px solid #4eb8ea; 220 | outline-width: 0; 221 | transition: All 0.5s ease-in; 222 | -webkit-transition: All 0.5s ease-in; 223 | -moz-transition: All 0.5s ease-in; 224 | -o-transition: All 0.5s ease-in; 225 | } 226 | 227 | /*buttons*/ 228 | #msform .action-button { 229 | width: 100px; 230 | background: #ee0979; 231 | font-weight: bold; 232 | color: white; 233 | border: 0 none; 234 | border-radius: 10px; 235 | cursor: pointer; 236 | padding: 10px 5px; 237 | margin: 2px 5px; 238 | } 239 | 240 | #msform .action-button:hover, 241 | #msform .action-button:focus { 242 | box-shadow: 0 0 0 2px #c73363, 0 0 0 3px #c73363; 243 | } 244 | 245 | #msform .action-button-previous { 246 | width: 100px; 247 | background: rgba(34,44,60,0.20); 248 | font-weight: bold; 249 | color: white; 250 | border: 2px; 251 | border-radius: 10px; 252 | cursor: pointer; 253 | padding: 10px 5px; 254 | margin: 2px 5px; 255 | } 256 | 257 | #msform .action-button-previous:hover, 258 | #msform 
.action-button-previous:focus { 259 | box-shadow: 0 0 0 2px #222c3c, 0 0 0 3px #222c3c; 260 | } 261 | 262 | @keyframes fadeup { 263 | to { 264 | -webkit-transform: translateY(0); 265 | transform: translateY(0); 266 | opacity: 1; 267 | } 268 | } 269 | 270 | input, 271 | textarea { 272 | font: helvetica; 273 | } 274 | 275 | ::-webkit-input-placeholder { /* WebKit browsers */ 276 | color: #FFF; 277 | } 278 | 279 | :-moz-placeholder { /* Mozilla Firefox 4 to 18 */ 280 | color: #FFF; 281 | } 282 | 283 | ::-moz-placeholder { /* Mozilla Firefox 19+ */ 284 | color: #FFF; 285 | } 286 | 287 | :-ms-input-placeholder { /* Internet Explorer 10+ */ 288 | color: #FFF; 289 | } 290 | 291 | .footer { 292 | height: 50px; 293 | margin-top: -50px; 294 | } 295 | 296 | .foot { 297 | margin-top: 10px; 298 | } 299 | 300 | .fileContainer { 301 | overflow: hidden; 302 | position: relative; 303 | } 304 | 305 | .fileContainer [type=file] { 306 | cursor: inherit; 307 | display: block; 308 | font-size: 999px; 309 | filter: alpha(opacity=0); 310 | min-height: 100%; 311 | min-width: 100%; 312 | opacity: 0; 313 | position: absolute; 314 | right: 0; 315 | text-align: right; 316 | top: 0; 317 | } 318 | 319 | .fa-beat { 320 | -webkit-animation: fa-beat 1s infinite linear; 321 | animation: fa-beat 1s infinite linear; 322 | margin: 20px; 323 | } 324 | 325 | @-webkit-keyframes fa-beat { 326 | 0% { 327 | -webkit-transform: scale(1); 328 | transform: scale(1); 329 | } 330 | 331 | 50% { 332 | -webkit-transform: scale(1.1); 333 | transform: scale(1.1); 334 | } 335 | 336 | 100% { 337 | -webkit-transform: scale(1); 338 | transform: scale(1); 339 | } 340 | } 341 | 342 | @keyframes fa-beat { 343 | 0% { 344 | -webkit-transform: scale(1); 345 | transform: scale(1); 346 | } 347 | 348 | 50% { 349 | -webkit-transform: scale(1.1); 350 | transform: scale(1.1); 351 | } 352 | 353 | 100% { 354 | -webkit-transform: scale(1); 355 | transform: scale(1); 356 | } 357 | } 358 | 359 | .image-embed { 360 | margin: 20px; 361 | } 362 | 363 | .submit-button { 364 | width: 100px !important; 365 | background-color: #4EB8EA !important; 366 | font-weight: bold !important; 367 | color: white !important; 368 | border: 2px !important; 369 | border-radius: 10px !important; 370 | cursor: pointer !important; 371 | padding: 10px 5px !important; 372 | margin: 2px 5px !important; 373 | } 374 | 375 | .result { 376 | position: relative; 377 | width: 800px; 378 | min-height: 500px; 379 | box-sizing: border-box; 380 | border-radius: 10px; 381 | box-shadow: 0 12px 28px 0 rgba(0,0,0,0.50); 382 | background: #101519; 383 | -webkit-animation: fadeup .5s .5s ease both; 384 | animation: fadeup .5s .5s ease both; 385 | -webkit-transform: translateY(20px); 386 | transform: translateY(20px); 387 | opacity: 0; 388 | } -------------------------------------------------------------------------------- /templates/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Acquaint Instant ID Verification 8 | 9 | 10 | 11 | 12 | 13 | 14 |
-------------------------------------------------------------------------------- /tmp/jupyter-cropped-warped.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/getcontrol/KYC-tensorflow/e0f59a07882a5661c2aa9d2cd927e81dface52f9/tmp/jupyter-cropped-warped.jpg -------------------------------------------------------------------------------- /tmp/jupyter-final.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/getcontrol/KYC-tensorflow/e0f59a07882a5661c2aa9d2cd927e81dface52f9/tmp/jupyter-final.jpg -------------------------------------------------------------------------------- /tmp/jupyter-results.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/getcontrol/KYC-tensorflow/e0f59a07882a5661c2aa9d2cd927e81dface52f9/tmp/jupyter-results.jpg --------------------------------------------------------------------------------