├── LICENSE ├── README.md ├── dbface-infer.py ├── dbface-test.ipynb ├── ir-models ├── dbface-320x320-fp16.tar ├── dbface-4vga-fp16.tar └── dbface-vga-fp16.tar └── resources └── output.jpg /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 
34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | The objective of this project is to run DBFace, a real-time, single-stage face detector, on the Intel(r) Distribution of OpenVINO(tm) Toolkit. 3 | **Original DBFace GitHub site:** 4 | https://github.com/dlunion/DBFace 5 | 6 | The original developer used PyTorch to train the model, so you need to convert the PyTorch model to an OpenVINO IR model. Getting the final IR model takes two conversion steps:
7 | ```sh 8 | PyTorch (.pth) -> ONNX (.onnx) -> IR (.xml, .bin) 9 | ``` 10 | 20 | **Detection result with a 4VGA (1280x960) input picture** 21 | ![output](resources/output.jpg) 22 | 23 | ## 1. Prerequisites 24 | * [Intel Distribution of OpenVINO toolkit 2020.2 (or newer)](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) 25 | * Model Downloader / Model Converter setup 26 | [Model Downloader guide](https://docs.openvinotoolkit.org/latest/_tools_downloader_README.html) 27 | ```sh 28 | pip3 install -r ${INTEL_OPENVINO_DIR}/deployment_tools/tools/model_downloader/requirements.in 29 | pip3 install -r ${INTEL_OPENVINO_DIR}/deployment_tools/tools/model_downloader/requirements-pytorch.in 30 | ``` 31 | If the PyTorch installation fails, go to the official PyTorch web site and follow the `QUICK START LOCALLY` guide. PyTorch >= 1.4 is required. 32 | 33 | ## 2. Download DBFace PyTorch model and weight 34 | Download `DBFace.py` and `dbface.pth` from the [original developer's GitHub page](https://github.com/dlunion/DBFace/tree/master/model). 35 | 36 | 37 | ## 3. Convert PyTorch model into ONNX model 38 | 39 | Use `pytorch_to_onnx.py` in the `model_downloader` directory.
40 | ```sh 41 | python3 ${INTEL_OPENVINO_DIR}/deployment_tools/tools/model_downloader/pytorch_to_onnx.py \ 42 | --model-name DBFace \ 43 | --weights dbface.pth \ 44 | --import-module DBFace \ 45 | --input-shape 1,3,320,320 \ 46 | --output-file dbface.onnx \ 47 | --input-names x \ 48 | --output-names sigmoid_hm,tlrb,landmark 49 | ``` 50 | **Hint:** You can get the input and output node names from the model source code (in this case, `DBFace.py` in the `model` directory). 51 | ```Python 52 | def forward(self, x): 53 | out = self.hs1(self.bn1(self.conv1(x))) 54 | : 55 | return sigmoid_hm, tlrb, landmark 56 | ``` 57 | ## 4. Convert ONNX model into OpenVINO IR model 58 | 59 | Use the `Model Optimizer (MO)` to convert the ONNX model into an IR model. 60 | 61 | ```sh 62 | python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo.py \ 63 | --input_model dbface.onnx \ 64 | --mean_values [180,154,150] \ 65 | --scale_values [73.7,70.0,70.9] \ 66 | --input_shape [1,3,960,1280] \ 67 | --output_dir dbface-4vga \ 68 | --data_type FP16 69 | ``` 70 | 71 | **Hint:** You can change the input shape with the `--input_shape` option. 72 | **Hint:** You can derive the appropriate input preprocessing parameters (`--mean_values` and `--scale_values`) from the original source code (in this case, lines 36-40 of `main.py` on the original GitHub site). Note that MO expects them in the 0-255 pixel range; the `--scale_values` above are the `std` values multiplied by 255 (e.g. 0.289 * 255 ≈ 73.7). 73 | ```Python 74 | mean = [0.408, 0.447, 0.47] 75 | std = [0.289, 0.274, 0.278] 76 | 77 | image = common.pad(image) 78 | image = ((image / 255.0 - mean) / std).astype(np.float32) 79 | ``` 80 | 81 | ## 5. Run sample program 82 | 83 | ```sh 84 | python3 dbface-infer.py -m model.xml -i input_image 85 | ``` 86 | WebCam #0 will be used when you specify `cam` as the input file name. 87 | `output.jpg` will be generated in the current directory.
(image-file input only) 88 | 89 | **Command line example:** 90 | ```sh 91 | $ python3 dbface-infer.py -m dbface-4vga/dbface.xml -i image.jpg 92 | $ python3 dbface-infer.py -m dbface-vga/dbface.xml -i cam 93 | ``` 94 | 95 | ## 6. Test Environment 96 | - Ubuntu 18.04 / Windows 10 1909 97 | - OpenVINO 2020.3 LTS 98 | 99 | 100 | ## See Also 101 | * [GitHub, dlunion/DBFace](https://github.com/dlunion/DBFace) 102 | * [Intel(r) Distribution of OpenVINO(tm) Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) 103 | * [Converting an ONNX* Model](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html) 104 | * [Model Downloader and other automation tools](https://docs.openvinotoolkit.org/latest/_tools_downloader_README.html) 105 | * [PyTorch official web page](https://pytorch.org/) 106 | -------------------------------------------------------------------------------- /dbface-infer.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import math 4 | 5 | import numpy as np 6 | from numpy.lib.stride_tricks import as_strided 7 | 8 | import cv2 9 | 10 | import argparse 11 | 12 | from openvino.inference_engine import IECore 13 | 14 | def _exp(v): 15 | if isinstance(v, tuple) or isinstance(v, list): 16 | return [_exp(item) for item in v] 17 | elif isinstance(v, np.ndarray): 18 | return np.array([_exp(item) for item in v], v.dtype) 19 | 20 | gate = 1 21 | base = np.exp(1) 22 | if abs(v) < gate: 23 | return v * base 24 | 25 | if v > 0: 26 | return np.exp(v) 27 | else: 28 | return -np.exp(-v) 29 | 30 | 31 | def IOU(rec1, rec2): 32 | cx1, cy1, cx2, cy2 = rec1 33 | gx1, gy1, gx2, gy2 = rec2 34 | S_rec1 = (cx2 - cx1 + 1) * (cy2 - cy1 + 1) 35 | S_rec2 = (gx2 - gx1 + 1) * (gy2 - gy1 + 1) 36 | x1 = max(cx1, gx1) 37 | y1 = max(cy1, gy1) 38 | x2 = min(cx2, gx2) 39 | y2 = min(cy2, gy2) 40 | 41 | w = max(0, x2 - x1 + 1) 42 | h = max(0, y2 - y1 + 1) 43 | area = w * h 44 | iou = area / (S_rec1 + S_rec2 - area) 45 | return iou 46 | 47 | 48 | def NMS(objs, iou=0.5): 49 | if objs is None or len(objs) <= 1: 50 | return objs 51 | 52 | objs = sorted(objs, key=lambda obj: obj[1], reverse=True) 53 | keep = [] 54 | flags = [0] * len(objs) 55 | for index, obj in enumerate(objs): 56 | 57 | if flags[index] != 0: 58 | continue 59 | 60 | keep.append(obj) 61 | for j in range(index + 1, len(objs)): 62 | if flags[j] == 0 and IOU(obj[0], objs[j][0]) > iou: 63 | flags[j] = 1 64 | return keep 65 | 66 | 67 | def max_pooling(x, kernel_size, stride=1, padding=1): 68 | x = np.pad(x, padding, mode='constant') 69 | output_shape = ((x.shape[0] - kernel_size)//stride + 1, 70 | (x.shape[1] - kernel_size)//stride + 1) 71 | kernel_size = (kernel_size, kernel_size) 72 | x_w = as_strided(x, shape=output_shape + kernel_size, strides=(stride*x.strides[0], stride*x.strides[1]) + x.strides) 73 | x_w = x_w.reshape(-1, *kernel_size) 74 | 75 | return x_w.max(axis=(1, 2)).reshape(output_shape) 76 | 77 | 78 | def detect(hm, box, landmark, threshold=0.4, nms_iou=0.5): 79 | hm_pool = max_pooling(hm[0,0,:,:], 3, 1, 1) # 240,320 80 | interest_points = ((hm==hm_pool) * hm) # keep only local-maximum pixels 81 | flat = interest_points.ravel() # flatten 82 | indices = np.argsort(flat)[::-1] # sort indices by score, descending 83 | scores = np.array([ flat[idx] for idx in indices ]) 84 | 85 | hm_height, hm_width = hm.shape[2:] 86 | ys = indices // hm_width 87 | xs = indices % hm_width 88 | box = box.reshape(box.shape[1:]) # 4,240,320 89 | landmark = landmark.reshape(landmark.shape[1:]) # 10,240,320 90 | 91 | stride = 4 92 | objs = [] 93 | for cx, cy, score in zip(xs, ys, scores): 94 | if score < threshold: 95 | break 96 | x, y, r, b = box[:, cy, cx] 97 | xyrb = (np.array([cx, cy, cx, cy]) + [-x, -y, r, b]) * stride 98 | x5y5 = landmark[:, cy, cx] 99 | x5y5 = (_exp(x5y5 * 4) + ([cx]*5 + [cy]*5)) * stride 100 | box_landmark = list(zip(x5y5[:5], x5y5[5:])) 101 | objs.append([xyrb, score, box_landmark]) 102 | return NMS(objs, iou=nms_iou) 103 | 104 | 105 | def drawBBox(image, bbox, color=(0,255,0), thickness=2, textcolor=(0, 0, 0), landmarkcolor=(0, 0, 255)): 106 | 107 | text = f"{bbox[1]:.2f}" 108 | xyrb = bbox[0] 109 | x, y, r, b = int(xyrb[0]), int(xyrb[1]), int(xyrb[2]), int(xyrb[3]) 110 | w = r - x + 1 111 | h = b - y + 1 112 | 113 | cv2.rectangle(image, (x, y, r-x+1, b-y+1), color, thickness, 16) 114 | 115 | border = int(thickness / 2) 116 | pos = (x + 3, y - 5) 117 | cv2.rectangle(image, (x - border, y - 21, w + thickness, 21), color, -1, 16) 118 | cv2.putText(image, text, pos, 0, 0.5, textcolor, 1, 16) 119 | 120 | landmark = bbox[2] 121 | if len(landmark)>0: 122 | for i in range(len(landmark)): 123 | x, y = landmark[i][:2] 124 | cv2.circle(image, (int(x), int(y)), 3, landmarkcolor, -1, 16) 125 | 126 | 127 | def main(args): 128 | ie = IECore() 129 | 130 | base,ext = os.path.splitext(args.model) 131 | if ext != '.xml': 132 | print('The specified model is not an .xml file:', args.model) 133 | sys.exit(-1) 134 | net = ie.read_network(base+'.xml', base+'.bin') 135 | exenet = ie.load_network(net, 'CPU') 136 | 137 | inblobs = (list(net.inputs.keys())) 138 | outblobs = (list(net.outputs.keys())) 139 | print(inblobs, outblobs) 140 | # ['x'] ['Conv_525', 'Exp_527', 'Sigmoid_526'] 141 | 142 | inshapes = [ net.inputs [i].shape for i in inblobs ] 143 | outshapes = [ net.outputs[i].shape for i in outblobs ] 144 | print(inshapes, outshapes) 145 | # 4vga : [[1, 3, 960, 1280]] [[1, 10, 240, 320], [1, 4, 240, 320], [1, 1, 240, 320]] 146 | 147 | # Assign output node index by checking the number of channels 148 | for i,outblob in enumerate(outblobs): 149 | C = outshapes[i][1]
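# Note: the three outputs are matched by channel count rather than by name, because the MO-generated node names (e.g. 'Conv_525') are not stable across conversions: C==1 is the face-center heatmap, C==4 the box offsets (left, top, right, bottom), and C==10 the five landmark coordinates (x*5 followed by y*5).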
150 | if C==1: 151 | hm_idx=i 152 | if C==4: 153 | box_idx=i 154 | if C==10: 155 | lm_idx=i 156 | 157 | if args.input == 'cam': 158 | cap = cv2.VideoCapture(0) 159 | 160 | while True: 161 | if args.input == 'cam': 162 | ret, image = cap.read() 163 | else: 164 | image = cv2.imread(args.input) 165 | 166 | image = cv2.resize(image, (inshapes[0][3], inshapes[0][2])) 167 | img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) 168 | img = img.transpose((2,0,1)) 169 | 170 | res = exenet.infer({inblobs[0]:img}) 171 | lm = res[outblobs[lm_idx ]] # 1,10,h,w 172 | box = res[outblobs[box_idx]] # 1,4,h,w 173 | hm = res[outblobs[hm_idx ]] # 1,1,h,w 174 | 175 | objs = detect(hm=hm, box=box, landmark=lm, threshold=0.4, nms_iou=0.5) 176 | 177 | for obj in objs: 178 | drawBBox(image, obj) 179 | 180 | cv2.imshow('output', image) 181 | if args.input == 'cam': 182 | if cv2.waitKey(1) == 27: # ESC key 183 | return 184 | else: 185 | cv2.waitKey(0) 186 | cv2.imwrite('output.jpg', image) 187 | print('"output.jpg" is generated') 188 | return 189 | 190 | 191 | if __name__ == '__main__': 192 | parser = argparse.ArgumentParser() 193 | parser.add_argument('-i', '--input', type=str, default='selfie.jpg', help='input image file name (\'cam\' for webCam input)') 194 | parser.add_argument('-m', '--model', type=str, default='./dbface.xml', help='DBFace IR model file name (*.xml)') 195 | args = parser.parse_args() 196 | 197 | main(args) 198 | -------------------------------------------------------------------------------- /ir-models/dbface-320x320-fp16.tar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yas-sim/dbface-on-openvino/0893ec35de369f2a857d723f78af3a650581c420/ir-models/dbface-320x320-fp16.tar -------------------------------------------------------------------------------- /ir-models/dbface-4vga-fp16.tar: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/yas-sim/dbface-on-openvino/0893ec35de369f2a857d723f78af3a650581c420/ir-models/dbface-4vga-fp16.tar -------------------------------------------------------------------------------- /ir-models/dbface-vga-fp16.tar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yas-sim/dbface-on-openvino/0893ec35de369f2a857d723f78af3a650581c420/ir-models/dbface-vga-fp16.tar -------------------------------------------------------------------------------- /resources/output.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yas-sim/dbface-on-openvino/0893ec35de369f2a857d723f78af3a650581c420/resources/output.jpg --------------------------------------------------------------------------------
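For reference, the core box-decoding arithmetic and the IOU measure used in `dbface-infer.py` can be exercised in isolation, without the model or the OpenVINO runtime. This is a minimal sketch: the heatmap peak cell and the box offsets below are made-up illustration values, not real model output.

```python
import numpy as np

def iou(rec1, rec2):
    # Same inclusive-pixel IOU as in dbface-infer.py: boxes are (x1, y1, x2, y2)
    cx1, cy1, cx2, cy2 = rec1
    gx1, gy1, gx2, gy2 = rec2
    s1 = (cx2 - cx1 + 1) * (cy2 - cy1 + 1)
    s2 = (gx2 - gx1 + 1) * (gy2 - gy1 + 1)
    w = max(0, min(cx2, gx2) - max(cx1, gx1) + 1)
    h = max(0, min(cy2, gy2) - max(cy1, gy1) + 1)
    inter = w * h
    return inter / (s1 + s2 - inter)

stride = 4                                   # heatmap cell -> input pixel scale factor
cx, cy = 10, 8                               # hypothetical heatmap peak cell
left, top, right, bottom = 2.0, 3.0, 2.5, 3.5  # hypothetical box offsets

# Box decode, as done in detect(): offsets are relative to the peak cell,
# then scaled by the stride to get input-image pixel coordinates.
xyrb = (np.array([cx, cy, cx, cy]) + [-left, -top, right, bottom]) * stride
print(xyrb)  # [32. 20. 50. 46.]

# NMS at iou=0.5 would suppress the second of these two overlapping boxes.
box_a = (32, 20, 50, 46)
box_b = (34, 22, 52, 48)
print(iou(box_a, box_b) > 0.5)  # True
```

This only checks the decoding arithmetic; running actual face detection still requires the IR model files and the OpenVINO inference engine.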