├── .gitmodules ├── LICENSE ├── README.md ├── asserts ├── QR.png ├── api_matrix.png ├── app_matrix.png ├── endpoints.png ├── fas.jpg └── fr_mask.png ├── docs ├── 人脸检测.md ├── 人脸识别.md ├── 人脸跟踪.md ├── 口罩检测.md ├── 年龄估计.md ├── 性别估计.md ├── 特征点检测.md ├── 眼睛状态检测.md ├── 质量评估器.md └── 静默活体.md └── example └── qt ├── README.md └── seetaface_demo ├── default.png ├── face_resource.qrc ├── inputfilesprocessdialog.cpp ├── inputfilesprocessdialog.h ├── main.cpp ├── mainwindow.cpp ├── mainwindow.h ├── mainwindow.ui ├── resetmodelprocessdialog.cpp ├── resetmodelprocessdialog.h ├── seetaface_demo.pro ├── seetaface_demo.pro.user ├── seetatech_logo.png ├── videocapturethread.cpp ├── videocapturethread.h └── white.png /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "OpenRoleZoo"] 2 | path = OpenRoleZoo 3 | url = https://github.com/SeetaFace6Open/OpenRoleZoo 4 | branch = master 5 | [submodule "SeetaEyeStateDetector"] 6 | path = SeetaEyeStateDetector 7 | url = https://github.com/SeetaFace6Open/SeetaEyeStateDetector 8 | branch = master 9 | [submodule "SeetaAgePredictor"] 10 | path = SeetaAgePredictor 11 | url = https://github.com/SeetaFace6Open/SeetaAgePredictor 12 | branch = master 13 | [submodule "FaceAntiSpoofingX6"] 14 | path = FaceAntiSpoofingX6 15 | url = https://github.com/SeetaFace6Open/FaceAntiSpoofingX6 16 | branch = master 17 | [submodule "SeetaGenderPredictor"] 18 | path = SeetaGenderPredictor 19 | url = https://github.com/SeetaFace6Open/SeetaGenderPredictor 20 | branch = master 21 | [submodule "SeetaMaskDetector"] 22 | path = SeetaMaskDetector 23 | url = https://github.com/SeetaFace6Open/SeetaMaskDetector 24 | branch = master 25 | [submodule "FaceTracker6"] 26 | path = FaceTracker6 27 | url = https://github.com/SeetaFace6Open/FaceTracker6 28 | branch = master 29 | [submodule "FaceBoxes"] 30 | path = FaceBoxes 31 | url = https://github.com/SeetaFace6Open/FaceBoxes 32 | branch = master 33 | [submodule "Landmarker"] 34 | 
path = Landmarker 35 | url = https://github.com/SeetaFace6Open/Landmarker 36 | branch = master 37 | [submodule "FaceRecognizer6"] 38 | path = FaceRecognizer6 39 | url = https://github.com/SeetaFace6Open/FaceRecognizer6 40 | branch = master 41 | [submodule "PoseEstimator6"] 42 | path = PoseEstimator6 43 | url = https://github.com/SeetaFace6Open/PoseEstimator6 44 | branch = master 45 | [submodule "QualityAssessor3"] 46 | path = QualityAssessor3 47 | url = https://github.com/SeetaFace6Open/QualityAssessor3 48 | branch = master 49 | [submodule "TenniS"] 50 | path = TenniS 51 | url = https://github.com/TenniS-Open/TenniS 52 | branch = master 53 | [submodule "SeetaAuthorize"] 54 | path = SeetaAuthorize 55 | url = https://github.com/SeetaFace6Open/SeetaAuthorize 56 | branch = master 57 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2019, SeetaTech, 2 | Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 3 | All rights reserved. 4 | 5 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 8 | 9 | 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 10 | 11 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 12 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # **SeetaFace6** 2 | 3 | [![License](https://img.shields.io/badge/license-BSD-blue.svg)](LICENSE) 4 | 5 | [[中文]()] 6 | 7 | ## 开源模块 8 | 9 | `SeetaFace6`是中科视拓最新开源的商业正式版本。突破了之前社区版和企业版版本不同步发布的情况,这次开源的v6版本正式与商用版本同步。 10 | 11 |
12 | 13 |
14 | 15 | 此次开源包含了人脸识别一直以来的基本模块,如人脸检测、关键点定位、人脸识别,同时增加了活体检测、质量评估、年龄性别估计等能力,并且响应时事,开放了口罩检测以及戴口罩的人脸识别模型。 16 | 17 |
18 | 19 |
20 | 21 | 22 | 同时,此次我们开源了商用版最新的推理引擎 TenniS:ResNet-50 的推理速度从 SeetaFace2 在 I7 上的 8 FPS 提升到了 20 FPS。人脸识别训练集也大幅扩充,SeetaFace6 的人脸识别训练数据量增加到了上亿张图片。 23 | 24 | 为了应对不同级别的应用需求,SeetaFace6 将开放三个版本模型: 25 | 26 | 模型名称 | 网络结构 | 速度(I7-6700) | 速度(RK3399) | 特征长度 27 | -|-|-|-|- 28 | 通用人脸识别 | ResNet-50 | 57ms | 300ms | 1024 29 | 带口罩人脸识别 | ResNet-50 | 34ms | 150ms | 512 30 | 通用人脸识别(小) | Mobile FaceNet | 9ms | 70ms | 512 31 | 32 | 作为能力兼容升级,SeetaFace6 仍然能够为众多人脸识别应用提供业务能力。 33 | 34 |
35 | 36 |
37 | 38 | 同时,该套算法除了适用于高精度的服务器端部署外,也能够很好地适应在终端设备上运行。 39 | 40 |
41 | 42 |
43 | 44 |
45 | 46 |
47 | 48 | ## 编译 49 | 50 | ### 下载源码 51 | 52 | ``` 53 | git clone --recursive https://github.com/SeetaFace6Open/index.git 54 | ``` 55 | 56 | ### 编译依赖 57 | 58 | 1. 编译工具 59 | 2. For linux
60 | GNU Make 工具
61 | GCC 或者 Clang 编译器 62 | 3. For windows
63 | [MSVC](https://visualstudio.microsoft.com/zh-hans/) 或者 MinGW.
64 | [jom](https://wiki.qt.io/Jom) 65 | 4. [CMake](http://www.cmake.org/) 66 | 5. 依赖架构
67 | CPU 需支持 AVX 和 FMA 指令集(x86,可选),或支持 NEON 指令集(ARM) 68 | 69 | ### 编译顺序说明 70 | OpenRoleZoo 为常用操作的集合,SeetaAuthorize 为模型解析工程,TenniS 为前向计算框架。需要重点说明的是,此次 TenniS 同时放出了 **GPU** 计算源码,可以编译出 **GPU** 版本进行使用。上述三个模块为基础模块,各个 SDK 的编译均依赖上述模块,因此需要优先编译出 OpenRoleZoo、SeetaAuthorize 和 TenniS,然后再进行其他 SDK 模块的编译。 71 | 72 | ### 各平台编译 73 | 74 | #### linux 平台编译说明 75 | 76 | cd ./craft 77 | 运行脚本 build.linux.x64.sh(gpu 版本为 build.linux.x64_gpu.sh) 78 | 79 | #### windows 平台编译说明 80 | 81 | cd ./craft 82 | 执行脚本 build.win.vc14.all.cmd 编译各个版本的库(gpu 版本为 build.win.vc14.all_gpu.cmd) 83 | 84 | #### Android 平台编译说明 85 | + 安装 ndk 编译工具(推荐版本 **ndk-r16b**) 86 | - 从 https://developer.android.com/ndk/downloads 下载 ndk 并安装 87 | - 设置环境变量,导出 ndk-build 工具 88 | 89 | + 编译 90 | 各个模块均含有 android/jni/Android.mk 和 android/jni/Application.mk 两个编译脚本文件。 91 | 92 | cd 到各模块的 android/jni 目录 93 | 执行 ndk-build -j4 编译 94 | 95 | #### 其他 arm 等交叉编译平台 96 | 当前版本并未直接支持交叉编译平台,不过可参考文章 [cmake cross compile](https://zhuanlan.zhihu.com/p/100367053) 的说明进行 CMake 配置和对应平台的编译。 97 | # 下载地址 98 | 99 | ### 百度网盘 100 | 模型文件: 101 | Part I: [Download](https://pan.baidu.com/s/1LlXe2-YsUxQMe-MLzhQ2Aw) code: `ngne`, including: `age_predictor.csta`, `face_landmarker_pts5.csta`, `fas_first.csta`, `pose_estimation.csta`, `eye_state.csta`, `face_landmarker_pts68.csta`, `fas_second.csta`, `quality_lbn.csta`, `face_detector.csta`, `face_recognizer.csta`, `gender_predictor.csta`, `face_landmarker_mask_pts5.csta`, `face_recognizer_mask.csta`, `mask_detector.csta`. 102 | Part II: [Download](https://pan.baidu.com/s/1xjciq-lkzEBOZsTfVYAT9g) code: `t6j0`, including: `face_recognizer_light.csta`.
103 | 104 | ### Dropbox 105 | Model files: 106 | Part I: [Download](https://www.dropbox.com/s/julk1f16riu0dyp/sf6.0_models.zip?dl=0), including: `age_predictor.csta`, `face_landmarker_pts5.csta`, `fas_first.csta`, `pose_estimation.csta`, `eye_state.csta`, `face_landmarker_pts68.csta`, `fas_second.csta`, `quality_lbn.csta`, `face_detector.csta`, `face_recognizer.csta`, `gender_predictor.csta`, `face_landmarker_mask_pts5.csta`, `face_recognizer_mask.csta`, `mask_detector.csta`. 107 | Part II: [Download](https://www.dropbox.com/s/d296i7efnz5evbx/face_recognizer_light.csta?dl=0) ,including: `face_recognizer_light.csta`. 108 | 109 | # 使用入门 110 | 111 | 关于基本的接口使用,请参见教程: 112 | [《SeetaFace 入门教程》](http://leanote.com/blog/post/5e7d6cecab64412ae60016ef),github上有同步[文档源码](https://github.com/seetafaceengine/SeetaFaceTutorial)。 113 | 114 | 人脸识别的完整示例Demo见 [example/qt](./example/qt)。 115 | 116 | 在每个压缩包的文档中都包含了对应平台上的调用示例,请解压对应平台压缩包后分别获取。 117 | 118 | # 接口文档 119 | 120 | 各模块接口参见 [docs](./docs) 121 | 122 | # 开发者社区 123 | 124 | 欢迎开发者加入 SeetaFace 开发者社区,请先加 SeetaFace 小助手微信,经过审核后邀请入群。 125 | 126 | ![QR](./asserts/QR.png) 127 | 128 | # 联系我们 129 | 130 | `SeetaFace` 开源版可以免费用于商业和个人用途。如果需要更多的商业支持,请联系商务邮件 bd@seetatech.com。 131 | 132 | -------------------------------------------------------------------------------- /asserts/QR.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/asserts/QR.png -------------------------------------------------------------------------------- /asserts/api_matrix.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/asserts/api_matrix.png -------------------------------------------------------------------------------- /asserts/app_matrix.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/asserts/app_matrix.png -------------------------------------------------------------------------------- /asserts/endpoints.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/asserts/endpoints.png -------------------------------------------------------------------------------- /asserts/fas.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/asserts/fas.jpg -------------------------------------------------------------------------------- /asserts/fr_mask.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/asserts/fr_mask.png -------------------------------------------------------------------------------- /docs/人脸检测.md: -------------------------------------------------------------------------------- 1 | # 人脸检测器 2 | 3 | ## **1. 接口简介**
4 | 5 | 人脸检测器会对输入的彩色图片或者灰度图像进行人脸检测,并返回所有检测到的人脸位置。
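输入图像采用行优先、像素连续的存储方式,彩色图为 BGR888 三通道。下面是一个独立的 C++ 示意片段,其中 `SeetaImageData` 按下文 2.1 节的字段本地定义,仅用于演示像素下标的计算,并非 SDK 头文件:

```cpp
#include <cstdint>
#include <vector>

// 按文档字段本地定义的图像结构(示意用,非 SDK 头文件)
struct SeetaImageData {
    uint8_t *data;     // 像素数据,行优先,BGR888 彩色或单通道灰度
    int32_t width;     // 图像宽度
    int32_t height;    // 图像高度
    int32_t channels;  // 3 为彩色,1 为灰度
};

// 像素 (x, y) 第 c 通道在 data 中的下标:
// 行优先、像素连续存储,即 (y * width + x) * channels + c
inline int32_t pixel_index(const SeetaImageData &img, int32_t x, int32_t y, int32_t c) {
    return (y * img.width + x) * img.channels + c;
}

// 读取彩色图中 (x, y) 处的蓝色分量(BGR888 中 B 为通道 0)
inline uint8_t blue_at(const SeetaImageData &img, int32_t x, int32_t y) {
    return img.data[pixel_index(img, x, y, 0)];
}
```

从 OpenCV 等库取到的 BGR 连续 `Mat` 数据即满足该布局,可直接填充上述字段后送入检测接口。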
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaRect**
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|int32_t |人脸区域左上角横坐标| 24 | |y| int32_t | 人脸区域左上角纵坐标| 25 | |width| int32_t | 人脸区域宽度| 26 | |height| int32_t | 人脸区域高度| 27 | 28 | ### **2.3 struct SeetaFaceInfo**
29 | 30 | |名称 | 类型 | 说明| 31 | |---|---|---| 32 | |pos|SeetaRect|人脸位置| 33 | |score|float|人脸置信分数| 34 | 35 | ### **2.4 struct SeetaFaceInfoArray**
36 | 37 | |名称 | 类型 | 说明| 38 | |---|---|---| 39 | |data|const SeetaFaceInfo*|人脸信息数组| 40 | |size|int|人脸信息数组长度| 41 | 42 | ## 3 class FaceDetector 43 | 44 | 人脸检测器。 45 | 46 | ### 3.1 Enum SeetaDevice 47 | 48 | 模型运行的计算设备。
49 | 50 | |名称 |说明| 51 | |---|---| 52 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 53 | |SEETA_DEVICE_CPU|使用CPU计算| 54 | |SEETA_DEVICE_GPU|使用GPU计算| 55 | 56 | ### 3.2 struct SeetaModelSetting 57 | 58 | 构造人脸检测器需要传入的结构体参数。
59 | 60 | |参数 | 类型 |缺省值|说明| 61 | |---|---|---|---| 62 | |model|const char**| |检测器模型| 63 | |id|int| |GPU id| 64 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 65 | 66 | ### 3.3 构造函数 67 | 68 | #### FaceDetector 69 | 70 | |参数 | 类型 |缺省值|说明| 71 | |---|---|---|---| 72 | |setting|const SeetaModelSetting&| |检测器结构参数| 73 | 74 | ### 3.4 成员函数 75 | 76 | #### detect 77 | 78 | 输入彩色图像,检测其中的人脸。
79 | 80 | |参数 | 类型 |缺省值|说明| 81 | |---|---|---|---| 82 | |image|const SeetaImageData&| |输入的图像数据| 83 | |返回值|SeetaFaceInfoArray| |人脸信息数组| 84 | 85 | #### set 86 | 设置人脸检测器相关属性值。其中
87 | **PROPERTY_MIN_FACE_SIZE**: 表示人脸检测器可以检测到的最小人脸,该值越小,支持检测到的人脸尺寸越小,检测速度越慢,默认值为20;
88 | **PROPERTY_THRESHOLD**: 89 | 表示人脸检测器过滤阈值,默认为 0.90;
90 | **PROPERTY_MAX_IMAGE_WIDTH** 和 **PROPERTY_MAX_IMAGE_HEIGHT**: 91 | 分别表示支持输入的图像的最大宽度和高度;
92 | **PROPERTY_NUMBER_THREADS**: 93 | 表示人脸检测器计算线程数,默认为 4. 94 | 95 | |参数 | 类型 |缺省值|说明| 96 | |---|---|---|---| 97 | |property|Property||人脸检测器属性类别| 98 | |value|double||设置的属性值| 99 | |返回值|void| | | | 100 | 101 | #### get 102 | 获取人脸检测器相关属性值。
103 | 104 | |参数 | 类型 |缺省值|说明| 105 | |---|---|---|---| 106 | |property|Property||人脸检测器属性类别| 107 | |返回值|double||对应的人脸属性值| 108 | 109 | 110 | 111 | -------------------------------------------------------------------------------- /docs/人脸识别.md: -------------------------------------------------------------------------------- 1 | # 人脸识别器 2 | 3 | ## **1. 接口简介**
4 | 5 | 人脸识别器要求输入原始图像数据和人脸特征点(或者裁剪好的人脸数据),对输入的人脸提取特征值数组,根据提取的特征值数组对人脸进行相似度比较。
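文中的比对流程(先提取特征值数组,再计算相似度)可用下面的独立 C++ 片段示意。注意:`CalculateSimilarity` 的实际公式由 SDK 内部实现决定,此处的 `cosine_similarity` 为常见的余弦相似度写法,函数名与实现均为示意性假设:

```cpp
#include <cmath>
#include <cstddef>

// 余弦相似度示意实现(假设:SDK 内 CalculateSimilarity 的实际公式可能不同)。
// features1/features2 为两条人脸特征,size 为特征值数组长度
// (即 GetExtractFeatureSize() 返回的值)。
inline float cosine_similarity(const float *features1, const float *features2,
                               std::size_t size) {
    double dot = 0.0, n1 = 0.0, n2 = 0.0;
    for (std::size_t i = 0; i < size; ++i) {
        dot += static_cast<double>(features1[i]) * features2[i];
        n1  += static_cast<double>(features1[i]) * features1[i];
        n2  += static_cast<double>(features2[i]) * features2[i];
    }
    if (n1 == 0.0 || n2 == 0.0) return 0.0f;  // 空特征视为不相似
    return static_cast<float>(dot / (std::sqrt(n1) * std::sqrt(n2)));
}
```

两条完全相同的特征相似度为 1,正交特征相似度为 0;实际业务中通常再与一个经验阈值比较,判定是否为同一人。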
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaPointF**
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|double|人脸特征点横坐标| 24 | |y|double|人脸特征点纵坐标| 25 | 26 | ## 3 class FaceRecognizer 27 | 人脸识别器。 28 | 29 | ### 3.1 Enum SeetaDevice 30 | 31 | 模型运行的计算设备。 32 | 33 | |名称 |说明| 34 | |---|---| 35 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 36 | |SEETA_DEVICE_CPU|使用CPU计算| 37 | |SEETA_DEVICE_GPU|使用GPU计算| 38 | 39 | ### 3.2 struct SeetaModelSetting 40 | 41 | 构造人脸识别器需要传入的结构体参数。 42 | 43 | |参数 | 类型 |缺省值|说明| 44 | |---|---|---|---| 45 | |model|const char**| |识别器模型| 46 | |id|int| |GPU id| 47 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 48 | 49 | ### 3.3 构造函数 50 | #### FaceRecognizer 51 | 52 | |参数 | 类型 |缺省值|说明| 53 | |---|---|---|---| 54 | |setting|const SeetaModelSetting&| |识别器结构参数| 55 | 56 | ### 3.4 成员函数 57 | 58 | #### GetCropFaceWidth 59 | 获取裁剪人脸的宽度。 60 | 61 | |参数 | 类型 |缺省值|说明| 62 | |---|---|---|---| 63 | |返回值|int| |返回的人脸宽度| 64 | 65 | #### GetCropFaceHeight 66 | 获取裁剪的人脸高度。 67 | 68 | |参数 | 类型 |缺省值|说明| 69 | |---|---|---|---| 70 | |返回值|int| |返回的人脸高度| 71 | 72 | #### GetCropFaceChannels 73 | 获取裁剪的人脸数据通道数。 74 | 75 | |参数 | 类型 |缺省值|说明| 76 | |---|---|---|---| 77 | |返回值|int| |返回的人脸数据通道数| 78 | 79 | #### CropFace 80 | 裁剪人脸。 81 | 82 | |参数 | 类型 |缺省值|说明| 83 | |---|---|---|---| 84 | |image|const SeetaImageData&| |原始图像数据| 85 | |points|const SeetaPointF*| |人脸特征点数组| 86 | |face|SeetaImageData&| |返回的裁剪人脸| 87 | |返回值|bool| |true表示人脸裁剪成功| 88 | 89 | #### CropFace 90 | 裁剪人脸。 91 | 92 | |参数 | 类型 |缺省值|说明| 93 | |---|---|---|---| 94 | |image|const SeetaImageData&| |原始图像数据| 95 | |points|const SeetaPointF*| |人脸特征点数组| 96 | |返回值|seeta::ImageData| |返回的裁剪人脸| 97 | 98 | #### GetCropFaceWidthV2 99 | 获取裁剪人脸的宽度。 100 | 101 | |参数 | 类型 |缺省值|说明| 102 | |---|---|---|---| 103 | |返回值|int| |返回的人脸宽度| 104 | 105 | #### GetCropFaceHeightV2 106 | 获取裁剪的人脸高度。 107 | 108 | |参数 | 类型 |缺省值|说明| 109 | |---|---|---|---| 110 | |返回值|int| |返回的人脸高度| 111 | 112 | #### GetCropFaceChannelsV2 113 | 获取裁剪的人脸数据通道数。 114 | 115 | |参数 | 类型 |缺省值|说明| 116 | |---|---|---|---| 117 | |返回值|int| |返回的人脸数据通道数| 118 | 119 | #### 
CropFaceV2 120 | 裁剪人脸。 121 | 122 | |参数 | 类型 |缺省值|说明| 123 | |---|---|---|---| 124 | |image|const SeetaImageData&| |原始图像数据| 125 | |points|const SeetaPointF*| |人脸特征点数组| 126 | |face|SeetaImageData&| |返回的裁剪人脸| 127 | |返回值|bool| |true表示人脸裁剪成功| 128 | 129 | #### CropFaceV2 130 | 裁剪人脸。 131 | 132 | |参数 | 类型 |缺省值|说明| 133 | |---|---|---|---| 134 | |image|const SeetaImageData&| |原始图像数据| 135 | |points|const SeetaPointF*| |人脸特征点数组| 136 | |返回值|seeta::ImageData| |返回的裁剪人脸| 137 | 138 | #### GetExtractFeatureSize 139 | 获取特征值数组的长度。 140 | 141 | |参数 | 类型 |缺省值|说明| 142 | |---|---|---|---| 143 | |返回值|int| |特征值数组的长度| 144 | 145 | #### ExtractCroppedFace 146 | 输入裁剪后的人脸图像,提取人脸的特征值数组。 147 | 148 | |参数 | 类型 |缺省值|说明| 149 | |---|---|---|---| 150 | |face|const SeetaImageData&| |裁剪后的人脸图像数据| 151 | |features|float*| |返回的人脸特征值数组| 152 | |返回值|bool| |true表示提取特征成功| 153 | 154 | #### Extract 155 | 输入原始图像数据和人脸特征点数组,提取人脸的特征值数组。 156 | 157 | |参数 | 类型 |缺省值|说明| 158 | |---|---|---|---| 159 | |image|const SeetaImageData&| |原始的人脸图像数据| 160 | |points|const SeetaPointF*| |人脸的特征点数组| 161 | |features|float*| |返回的人脸特征值数组| 162 | |返回值|bool| |true表示提取特征成功| 163 | 164 | #### CalculateSimilarity 165 | 比较两人脸的特征值数据,获取人脸的相似度值。 166 | 167 | |参数 | 类型 |缺省值|说明| 168 | |---|---|---|---| 169 | |features1|const float*| |特征数组一| 170 | |features2|const float*| |特征数组二| 171 | |返回值|float| |相似度值| 172 | 173 | #### set 174 | 设置相关属性值。其中
175 | **PROPERTY_NUMBER_THREADS**: 176 | 表示计算线程数,默认为 4. 177 | 178 | |参数 | 类型 |缺省值|说明| 179 | |---|---|---|---| 180 | |property|Property||属性类别| 181 | |value|double||设置的属性值| 182 | |返回值|void| | | | 183 | 184 | #### get 185 | 获取相关属性值。 186 | 187 | |参数 | 类型 |缺省值|说明| 188 | |---|---|---|---| 189 | |property|Property||属性类别| 190 | |返回值|double||对应的属性值| -------------------------------------------------------------------------------- /docs/人脸跟踪.md: -------------------------------------------------------------------------------- 1 | # 人脸跟踪器 2 | 3 | ## **1. 接口简介**
4 | 5 | 人脸跟踪器会对输入的彩色图像或者灰度图像中的人脸进行跟踪,并返回所有跟踪到的人脸信息。
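跟踪器会为同一人脸在连续帧间维持同一 PID;实现上通常会参考相邻帧人脸框的重叠程度(IoU),具体策略以 SDK 内部实现为准。下面按文档中 SeetaRect 的字段本地定义结构体,给出一个独立的 IoU 计算示意:

```cpp
#include <algorithm>
#include <cstdint>

// 按文档字段本地定义的人脸框(示意用,非 SDK 头文件)
struct SeetaRect {
    int32_t x, y;           // 左上角坐标
    int32_t width, height;  // 人脸区域宽高
};

// 计算两个人脸框的交并比(IoU),常用于跨帧关联同一人脸
inline double rect_iou(const SeetaRect &a, const SeetaRect &b) {
    const int32_t ix  = std::max(a.x, b.x);
    const int32_t iy  = std::max(a.y, b.y);
    const int32_t ix2 = std::min(a.x + a.width,  b.x + b.width);
    const int32_t iy2 = std::min(a.y + a.height, b.y + b.height);
    const int32_t iw = std::max<int32_t>(0, ix2 - ix);  // 交集宽度
    const int32_t ih = std::max<int32_t>(0, iy2 - iy);  // 交集高度
    const double inter = static_cast<double>(iw) * ih;
    const double uni = static_cast<double>(a.width) * a.height
                     + static_cast<double>(b.width) * b.height - inter;
    return uni > 0.0 ? inter / uni : 0.0;
}
```

IoU 取值范围为 [0, 1],完全重合为 1;也可用它对不同帧的检测结果做简单关联。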
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaRect**
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|int32_t |人脸区域左上角横坐标| 24 | |y| int32_t | 人脸区域左上角纵坐标| 25 | |width| int32_t | 人脸区域宽度| 26 | |height| int32_t | 人脸区域高度| 27 | 28 | ### **2.3 struct SeetaTrackingFaceInfo**
29 | 30 | |名称 | 类型 | 说明| 31 | |---|---|---| 32 | |pos|SeetaRect|人脸位置| 33 | |score|float|人脸置信分数| 34 | |frame_no|int|视频帧的索引| 35 | |PID|int|跟踪的人脸标识id| 36 | 37 | ### **2.4 struct SeetaTrackingFaceInfoArray**
38 | 39 | |名称 | 类型 | 说明| 40 | |---|---|---| 41 | |data|const SeetaTrackingFaceInfo*|人脸信息数组| 42 | |size|int|人脸信息数组长度| 43 | 44 | ## 3 class FaceTracker 45 | 46 | 人脸跟踪器。 47 | 48 | ### 3.1 Enum SeetaDevice 49 | 50 | 模型运行的计算设备。 51 | 52 | |名称 |说明| 53 | |---|---| 54 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 55 | |SEETA_DEVICE_CPU|使用CPU计算| 56 | |SEETA_DEVICE_GPU|使用GPU计算| 57 | 58 | ### 3.2 struct SeetaModelSetting 59 | 60 | 构造人脸跟踪器需要传入的结构体参数。 61 | 62 | |参数 | 类型 |缺省值|说明| 63 | |---|---|---|---| 64 | |model|const char**| |跟踪器模型| 65 | |id|int| |GPU id| 66 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 67 | 68 | ### 3.3 构造函数 69 | 70 | #### FaceTracker 71 | 72 | |参数 | 类型 |缺省值|说明| 73 | |---|---|---|---| 74 | |setting|const SeetaModelSetting&| |跟踪器结构参数| 75 | |video_width|int| |视频的宽度| 76 | |video_height|int| |视频的高度| 77 | 78 | ### 3.4 成员函数 79 | 80 | #### SetSingleCalculationThreads 81 | 设置底层的计算线程数量。 82 | 83 | |参数 | 类型 |缺省值|说明| 84 | |---|---|---|---| 85 | |num|int| |线程数量| 86 | |返回值|void| || 87 | 88 | #### Track 89 | 对视频帧中的人脸进行跟踪。 90 | 91 | |参数 | 类型 |缺省值|说明| 92 | |---|---|---|---| 93 | |image|const SeetaImageData&| |原始图像数据| 94 | |返回值|SeetaTrackingFaceInfoArray| |跟踪到的人脸信息数组| 95 | 96 | #### Track 97 | 对视频帧中的人脸进行跟踪。 98 | 99 | |参数 | 类型 |缺省值|说明| 100 | |---|---|---|---| 101 | |image|const SeetaImageData&| |原始图像数据| 102 | |frame_no|int| |视频帧索引| 103 | |返回值|SeetaTrackingFaceInfoArray| |跟踪到的人脸信息数组| 104 | 105 | #### SetMinFaceSize 106 | 设置检测器的最小人脸大小。 107 | 108 | |参数 | 类型 |缺省值|说明| 109 | |---|---|---|---| 110 | |size|int32_t| |最小人脸大小| 111 | |返回值|void| || 112 | 说明:size 需保证大于等于 20;size 的值越小,能够检测到的人脸的尺寸越小, 113 | 检测速度越慢。 114 | 115 | #### GetMinFaceSize 116 | 获取最小人脸的大小。 117 | 118 | |参数 | 类型 |缺省值|说明| 119 | |---|---|---|---| 120 | |返回值|int32_t| |最小人脸大小| 121 | 122 | #### SetThreshold 123 | 设置检测器的检测阈值。 124 | 125 | |参数 | 类型 |缺省值|说明| 126 | |---|---|---|---| 127 | |thresh|float| |检测阈值| 128 | |返回值|void| || 129 | 130 | #### GetScoreThreshold 131 | 获取检测器检测阈值。 132 | 133 | |参数 | 类型 |缺省值|说明| 134 | |---|---|---|---| 135 |
|返回值|float| |检测阈值| 136 | 137 | #### SetVideoStable 138 | 设置以稳定模式输出人脸跟踪结果。 139 | 140 | |参数 | 类型 |缺省值|说明| 141 | |---|---|---|---| 142 | |stable|bool| |是否是稳定模式| 143 | |返回值|void| || 144 | 说明:只有在视频中连续跟踪时,才使用此方法。 145 | 146 | #### GetVideoStable 147 | 获取当前是否是稳定工作模式。 148 | 149 | |参数 | 类型 |缺省值|说明| 150 | |---|---|---|---| 151 | |返回值|bool| |是否是稳定模式| 152 | -------------------------------------------------------------------------------- /docs/口罩检测.md: -------------------------------------------------------------------------------- 1 | # 口罩检测器 2 | 3 | ## **1. 接口简介**
4 | 5 | 口罩检测器根据输入的图像数据、人脸位置,返回是否佩戴口罩的检测结果。
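下文 3.4 节的 detect 接口中,score 参数缺省为 nullptr,即调用方可以只取布尔结果而不取置信度。这种“可选输出参数”的调用约定可用如下独立片段示意(`report_mask` 为假设的演示函数,阈值与逻辑均非 SDK 真实实现):

```cpp
// 示意:detect 风格的接口约定 —— score 允许传 nullptr,
// 仅在调用方关心置信度时写出(独立示意代码,非 SDK 实现)
inline bool report_mask(float raw_score, float threshold, float *score) {
    if (score != nullptr) {
        *score = raw_score;        // 可选输出:戴口罩置信度
    }
    return raw_score >= threshold; // 达到阈值视为佩戴口罩
}
```

调用时,`report_mask(s, t, nullptr)` 与 `report_mask(s, t, &score)` 均合法,与文档中 detect 的 score 缺省值 nullptr 的用法一致。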
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaRect**
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|int32_t |人脸区域左上角横坐标| 24 | |y| int32_t | 人脸区域左上角纵坐标| 25 | |width| int32_t | 人脸区域宽度| 26 | |height| int32_t | 人脸区域高度| 27 | 28 | ## 3 class MaskDetector 29 | 口罩检测器。 30 | 31 | ### 3.1 Enum SeetaDevice 32 | 33 | 模型运行的计算设备。 34 | 35 | |名称 |说明| 36 | |---|---| 37 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 38 | |SEETA_DEVICE_CPU|使用CPU计算| 39 | |SEETA_DEVICE_GPU|使用GPU计算| 40 | 41 | ### 3.2 struct SeetaModelSetting 42 | 43 | 口罩检测器需要传入的结构体参数。 44 | 45 | |参数 | 类型 |缺省值|说明| 46 | |---|---|---|---| 47 | |model|const char**| |检测器模型| 48 | |id|int| |GPU id| 49 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 50 | 51 | ### 3.3 构造函数 52 | 53 | #### MaskDetector 54 | 构造检测器,需要在构造的时候传入检测器结构参数。 55 | 56 | |参数 | 类型 |缺省值|说明| 57 | |---|---|---|---| 58 | |setting|const SeetaModelSetting&| |识别器接口参数| 59 | 60 | ### 3.4 成员函数 61 | 62 | #### detect 63 | 输入图像数据、人脸位置,返回是否佩戴口罩的检测结果。 64 | 65 | |参数 | 类型 |缺省值|说明| 66 | |---|---|---|---| 67 | |image|const SeetaImageData&| |原始图像数据| 68 | |face|const SeetaRect&| |人脸位置| 69 | |score|float*|nullptr|戴口罩的置信度| 70 | |返回值|bool| |true为佩戴了口罩| 71 | -------------------------------------------------------------------------------- /docs/年龄估计.md: -------------------------------------------------------------------------------- 1 | # 年龄估计器 2 | 3 | ## **1. 接口简介**
4 | 5 | 年龄估计器要求输入原始图像数据和人脸特征点(或者裁剪好的人脸数据),对输入的人脸进行年龄估计。
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaPointF**
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|double|人脸特征点横坐标| 24 | |y|double|人脸特征点纵坐标| 25 | 26 | ## 3 class AgePredictor 27 | 年龄估计器。 28 | 29 | ### 3.1 Enum SeetaDevice 30 | 31 | 模型运行的计算设备。 32 | 33 | |名称 |说明| 34 | |---|---| 35 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 36 | |SEETA_DEVICE_CPU|使用CPU计算| 37 | |SEETA_DEVICE_GPU|使用GPU计算| 38 | 39 | ### 3.2 struct SeetaModelSetting 40 | 41 | 年龄估计器需要传入的结构体参数。 42 | 43 | |参数 | 类型 |缺省值|说明| 44 | |---|---|---|---| 45 | |model|const char**| |模型文件| 46 | |id|int| |GPU id| 47 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 48 | 49 | ### 3.3 构造函数 50 | #### AgePredictor 51 | 52 | |参数 | 类型 |缺省值|说明| 53 | |---|---|---|---| 54 | |setting|const SeetaModelSetting&| |结构参数| 55 | 56 | ### 3.4 成员函数 57 | 58 | #### GetCropFaceWidth 59 | 获取裁剪人脸的宽度。 60 | 61 | |参数 | 类型 |缺省值|说明| 62 | |---|---|---|---| 63 | |返回值|int| |返回的人脸宽度| 64 | 65 | #### GetCropFaceHeight 66 | 获取裁剪的人脸高度。 67 | 68 | |参数 | 类型 |缺省值|说明| 69 | |---|---|---|---| 70 | |返回值|int| |返回的人脸高度| 71 | 72 | #### GetCropFaceChannels 73 | 获取裁剪的人脸数据通道数。 74 | 75 | |参数 | 类型 |缺省值|说明| 76 | |---|---|---|---| 77 | |返回值|int| |返回的人脸数据通道数| 78 | 79 | #### CropFace 80 | 裁剪人脸。 81 | 82 | |参数 | 类型 |缺省值|说明| 83 | |---|---|---|---| 84 | |image|const SeetaImageData&| |原始图像数据| 85 | |points|const SeetaPointF*| |人脸特征点数组| 86 | |face|SeetaImageData&| |返回的裁剪人脸| 87 | |返回值|bool| |true表示人脸裁剪成功| 88 | 89 | #### PredictAge 90 | 输入裁剪好的人脸,返回估计的年龄。 91 | 92 | |参数 | 类型 |缺省值|说明| 93 | |---|---|---|---| 94 | |face|const SeetaImageData&| |裁剪好的人脸数据| 95 | |age|int&| |估计的年龄| 96 | |返回值|bool| |true表示估计成功| 97 | 98 | #### PredictAgeWithCrop 99 | 输入原始图像数据和人脸特征点,返回估计的年龄。 100 | 101 | |参数 | 类型 |缺省值|说明| 102 | |---|---|---|---| 103 | |image|const SeetaImageData&| |原始人脸数据| 104 | |points|const SeetaPointF*| |人脸特征点| 105 | |age|int&| |估计的年龄| 106 | |返回值|bool| |true表示估计成功| 107 | 108 | #### set 109 | 设置相关属性值。其中
110 | **PROPERTY_NUMBER_THREADS**: 111 | 表示计算线程数,默认为 4. 112 | 113 | |参数 | 类型 |缺省值|说明| 114 | |---|---|---|---| 115 | |property|Property||属性类别| 116 | |value|double||设置的属性值| 117 | |返回值|void| | | | 118 | 119 | #### get 120 | 获取相关属性值。 121 | 122 | |参数 | 类型 |缺省值|说明| 123 | |---|---|---|---| 124 | |property|Property||属性类别| 125 | |返回值|double||对应的属性值| -------------------------------------------------------------------------------- /docs/性别估计.md: -------------------------------------------------------------------------------- 1 | # 性别估计器 2 | 3 | ## **1. 接口简介**
4 | 5 | 性别估计器要求输入原始图像数据和人脸特征点(或者裁剪好的人脸数据),对输入的人脸进行性别估计。
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaPointF**
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|double|人脸特征点横坐标| 24 | |y|double|人脸特征点纵坐标| 25 | 26 | ## 3 class GenderPredictor 27 | 性别估计器。 28 | 29 | ### 3.1 Enum SeetaDevice 30 | 31 | 模型运行的计算设备。 32 | 33 | |名称 |说明| 34 | |---|---| 35 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 36 | |SEETA_DEVICE_CPU|使用CPU计算| 37 | |SEETA_DEVICE_GPU|使用GPU计算| 38 | 39 | ### 3.2 struct SeetaModelSetting 40 | 41 | 性别估计器需要传入的结构体参数。 42 | 43 | |参数 | 类型 |缺省值|说明| 44 | |---|---|---|---| 45 | |model|const char**| |模型文件| 46 | |id|int| |GPU id| 47 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 48 | 49 | ### 3.3 构造函数 50 | #### GenderPredictor 51 | 52 | |参数 | 类型 |缺省值|说明| 53 | |---|---|---|---| 54 | |setting|const SeetaModelSetting&| |结构参数| 55 | 56 | ### 3.4 成员函数 57 | 58 | #### GetCropFaceWidth 59 | 获取裁剪人脸的宽度。 60 | 61 | |参数 | 类型 |缺省值|说明| 62 | |---|---|---|---| 63 | |返回值|int| |返回的人脸宽度| 64 | 65 | #### GetCropFaceHeight 66 | 获取裁剪的人脸高度。 67 | 68 | |参数 | 类型 |缺省值|说明| 69 | |---|---|---|---| 70 | |返回值|int| |返回的人脸高度| 71 | 72 | #### GetCropFaceChannels 73 | 获取裁剪的人脸数据通道数。 74 | 75 | |参数 | 类型 |缺省值|说明| 76 | |---|---|---|---| 77 | |返回值|int| |返回的人脸数据通道数| 78 | 79 | #### CropFace 80 | 裁剪人脸。 81 | 82 | |参数 | 类型 |缺省值|说明| 83 | |---|---|---|---| 84 | |image|const SeetaImageData&| |原始图像数据| 85 | |points|const SeetaPointF*| |人脸特征点数组| 86 | |face|SeetaImageData&| |返回的裁剪人脸| 87 | |返回值|bool| |true表示人脸裁剪成功| 88 | 89 | #### PredictGender 90 | 输入裁剪好的人脸,返回估计的性别。 91 | 92 | |参数 | 类型 |缺省值|说明| 93 | |---|---|---|---| 94 | |face|const SeetaImageData&| |裁剪好的人脸数据| 95 | |gender|GENDER&| |估计的性别| 96 | |返回值|bool| |true表示估计成功| 97 | 说明:GENDER可取值MALE(男性)和FEMALE(女性)。 98 | 99 | #### PredictGenderWithCrop 100 | 输入原始图像数据和人脸特征点,返回估计的性别。 101 | 102 | |参数 | 类型 |缺省值|说明| 103 | |---|---|---|---| 104 | |image|const SeetaImageData&| |原始人脸数据| 105 | |points|const SeetaPointF*| |人脸特征点| 106 | |gender|GENDER&| |估计的性别| 107 | |返回值|bool| |true表示估计成功| 108 | 说明:GENDER可取值MALE(男性)和FEMALE(女性)。 109 | 110 | #### set 111 | 设置相关属性值。其中
112 | 113 | **PROPERTY_NUMBER_THREADS**: 114 | 表示计算线程数,默认为 4. 115 | 116 | |参数 | 类型 |缺省值|说明| 117 | |---|---|---|---| 118 | |property|Property||属性类别| 119 | |value|double||设置的属性值| 120 | |返回值|void| | | | 121 | 122 | #### get 123 | 获取相关属性值。 124 | 125 | |参数 | 类型 |缺省值|说明| 126 | |---|---|---|---| 127 | |property|Property||属性类别| 128 | |返回值|double||对应的属性值| -------------------------------------------------------------------------------- /docs/特征点检测.md: -------------------------------------------------------------------------------- 1 | # 人脸特征点检测器 2 | 3 | ## **1. 接口简介**
4 | 5 | 人脸特征点检测器要求输入原始图像数据和人脸位置,返回人脸 5 个或者其他数量的特征点的坐标(特征点的数量和加载的模型有关)。
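特征点以 SeetaPointF 坐标数组的形式返回,常见的后续处理是基于点间距做归一化(例如按双眼间距缩放)。下面按文档字段本地定义 SeetaPointF,给出点间欧氏距离的独立示意(结构体为演示用本地定义,非 SDK 头文件):

```cpp
#include <cmath>

// 按文档字段本地定义的特征点(示意用,非 SDK 头文件)
struct SeetaPointF {
    double x;  // 特征点横坐标
    double y;  // 特征点纵坐标
};

// 两特征点的欧氏距离;常用于估计双眼间距,
// 作为人脸尺寸归一化的参考量
inline double point_distance(const SeetaPointF &a, const SeetaPointF &b) {
    const double dx = a.x - b.x;
    const double dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}
```

注意:数组中各特征点的具体语义(哪个下标对应哪只眼睛等)取决于所加载的模型,使用前需对照模型说明确认点位顺序。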
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaRect**
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|int32_t |人脸区域左上角横坐标| 24 | |y| int32_t | 人脸区域左上角纵坐标| 25 | |width| int32_t | 人脸区域宽度| 26 | |height| int32_t | 人脸区域高度| 27 | 28 | ### **2.3 struct SeetaPointF**
29 | 30 | |名称 | 类型 | 说明| 31 | |---|---|---| 32 | |x|double|人脸特征点横坐标| 33 | |y|double|人脸特征点纵坐标| 34 | 35 | ## 3 class FaceLandmarker 36 | 37 | 人脸特征点检测器。 38 | 39 | ### 3.1 Enum SeetaDevice 40 | 41 | 模型运行的计算设备。 42 | 43 | |名称 |说明| 44 | |---|---| 45 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 46 | |SEETA_DEVICE_CPU|使用CPU计算| 47 | |SEETA_DEVICE_GPU|使用GPU计算| 48 | 49 | ### 3.2 struct SeetaModelSetting 50 | 51 | 构造人脸特征点检测器需要传入的结构体参数。 52 | 53 | |参数 | 类型 |缺省值|说明| 54 | |---|---|---|---| 55 | |model|const char**| |检测器模型| 56 | |id|int| |GPU id| 57 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 58 | 59 | ### 3.3 构造函数 60 | 61 | #### FaceLandmarker 62 | 63 | |参数 | 类型 |缺省值|说明| 64 | |---|---|---|---| 65 | |setting|const SeetaModelSetting&| |检测器结构参数| 66 | 67 | ### 3.4 成员函数 68 | 69 | #### number 70 | 获取模型对应的特征点数组长度。 71 | 72 | |参数 | 类型 |缺省值|说明| 73 | |---|---|---|---| 74 | |返回值|int| |模型特征点数组长度| 75 | 76 | #### mark 77 | 获取人脸特征点。 78 | 79 | |参数 | 类型 |缺省值|说明| 80 | |---|---|---|---| 81 | |image|const SeetaImageData&| |图像原始数据| 82 | |face|const SeetaRect&| |人脸位置| 83 | |points|SeetaPointF*| |获取的人脸特征点数组(需预分配好数组长度,长度为number()返回的值)| 84 | |返回值|void| | | 85 | 86 | #### mark 87 | 获取人脸特征点和遮挡信息。 88 | 89 | |参数 | 类型 |缺省值|说明| 90 | |---|---|---|---| 91 | |image|const SeetaImageData&| |图像原始数据| 92 | |face|const SeetaRect&| |人脸位置| 93 | |points|SeetaPointF*| |获取的人脸特征点数组(需预分配好数组长度,长度为number()返回的值)| 94 | |mask|int32_t*| |获取人脸特征点位置对应的遮挡信息数组(需预分配好数组长度,长度为number()返回的值), 其中值为1表示被遮挡,0表示未被遮挡| 95 | |返回值|void| | | 96 | 97 | #### mark 98 | 获取人脸特征点。 99 | 100 | |参数 | 类型 |缺省值|说明| 101 | |---|---|---|---| 102 | |image|const SeetaImageData&| |图像原始数据| 103 | |face|const SeetaRect&| |人脸位置| 104 | |返回值|std::vector| |获取的人脸特征点数组 | 105 | 106 | #### mark_v2 107 | 获取人脸特征点和遮挡信息。 108 | 109 | |参数 | 类型 |缺省值|说明| 110 | |---|---|---|---| 111 | |image|const SeetaImageData&| |图像原始数据| 112 | |face|const SeetaRect&| |人脸位置| 113 | |返回值|std::vector| |获取人脸特征点和是否遮挡数组| -------------------------------------------------------------------------------- /docs/眼睛状态检测.md: 
-------------------------------------------------------------------------------- 1 | # 眼睛状态检测器 2 | 3 | ## **1. 接口简介**
4 | 5 | 眼睛状态检测器要求输入原始图像数据和人脸特征点,返回左眼和右眼的状态。
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaPointF**
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|double|人脸特征点横坐标| 24 | |y|double|人脸特征点纵坐标| 25 | 26 | ## 3 class EyeStateDetector 27 | 眼睛状态检测器。 28 | 29 | ### 3.1 Enum SeetaDevice 30 | 31 | 模型运行的计算设备。 32 | 33 | |名称 |说明| 34 | |---|---| 35 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 36 | |SEETA_DEVICE_CPU|使用CPU计算| 37 | |SEETA_DEVICE_GPU|使用GPU计算| 38 | 39 | ### 3.2 struct SeetaModelSetting 40 | 41 | 构造眼睛状态检测器需要传入的结构体参数。 42 | 43 | |参数 | 类型 |缺省值|说明| 44 | |---|---|---|---| 45 | |model|const char**| |检测器模型| 46 | |id|int| |GPU id| 47 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 48 | 49 | ### 3.3 构造函数 50 | #### EyeStateDetector 51 | 52 | |参数 | 类型 |缺省值|说明| 53 | |---|---|---|---| 54 | |setting|const SeetaModelSetting&| |检测器结构参数| 55 | 56 | ### 3.4 成员函数 57 | 58 | #### Detect 59 | 输入原始图像数据和人脸特征点,返回左眼和右眼的状态。 60 | 61 | |参数 | 类型 |缺省值|说明| 62 | |---|---|---|---| 63 | |image|const SeetaImageData&| |原始图像数据| 64 | |points|const SeetaPointF*| |人脸特征点数组| 65 | |leftState|EYE_STATE| |返回的左眼状态| 66 | |rightState|EYE_STATE| |返回的右眼状态| 67 | 说明:EYE_STATE可取值为EYE_CLOSE(闭眼)、EYE_OPEN(睁眼)、EYE_RANDOM(非眼部区域)和EYE_UNKNOWN(未知状态)。 68 | 69 | #### set 70 | 设置相关属性值。其中
71 | **PROPERTY_NUMBER_THREADS**: 72 | 表示计算线程数,默认为 4。 73 | 74 | |参数 | 类型 |缺省值|说明| 75 | |---|---|---|---| 76 | |property|Property||属性类别| 77 | |value|double||设置的属性值| 78 | |返回值|void| | | 79 | 80 | #### get 81 | 获取相关属性值。 82 | 83 | |参数 | 类型 |缺省值|说明| 84 | |---|---|---|---| 85 | |property|Property||属性类别| 86 | |返回值|double||对应的属性值| 87 | -------------------------------------------------------------------------------- /docs/质量评估器.md: -------------------------------------------------------------------------------- 1 | # 质量评估器 2 | 3 | ## **1. 接口简介**<br>
4 | 5 | 质量评估器包含不同的质量评估模块,包括人脸亮度、人脸清晰度(非深度方法)、 6 | 人脸清晰度(深度方法)、人脸姿态(非深度方法)、人脸姿态(深度方法)、人脸分辨率和人脸完整度评估模块。
7 | 8 | ## **2. 类型说明**
9 | 10 | ### **2.1 struct SeetaImageData**
11 | 12 | |名称 | 类型 | 说明| 13 | |---|---|---| 14 | |data|uint8_t* |图像数据| 15 | |width | int32_t | 图像的宽度| 16 | |height | int32_t | 图像的高度| 17 | |channels | int32_t | 图像的通道数| 18 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 19 | 20 | ### **2.2 struct SeetaRect**<br>
21 | 22 | |名称 | 类型 | 说明| 23 | |---|---|---| 24 | |x|int32_t |人脸区域左上角横坐标| 25 | |y| int32_t | 人脸区域左上角纵坐标| 26 | |width| int32_t | 人脸区域宽度| 27 | |height| int32_t | 人脸区域高度| 28 | 29 | ### **2.3 struct SeetaPointF**
29 | 30 | |名称 | 类型 | 说明| 31 | |---|---|---| 32 | |x|double|人脸特征点横坐标| 33 | |y|double|人脸特征点纵坐标| 34 | 35 | 36 | ### 2.4 enum QualityLevel 37 | 38 | |名称 | 类型 | 说明| 39 | |---|---|---| 40 | |LOW| |表示人脸质量为低| 41 | |MEDIUM| |表示人脸质量为中| 42 | |HIGH| |表示人脸质量为高| 43 | 44 | ### 2.5 class QualityResult 45 | 46 | |名称 | 类型 | 说明| 47 | |---|---|---| 48 | |level|QualityLevel|人脸质量等级| 49 | |score|float|人脸质量分数| 50 | 51 | ## 3 class QualityOfBrightness 52 | 非深度的人脸亮度评估器。 53 | 54 | ### 3.1 构造函数 55 | 56 | #### QualityOfBrightness 57 | 人脸亮度评估器构造函数。 58 | 59 | |参数 | 类型 |缺省值|说明| 60 | |---|---|---|---| 61 | |void|| || 62 | 63 | #### QualityOfBrightness 64 | 人脸亮度评估器构造函数。 65 | 66 | |参数 | 类型 |缺省值|说明| 67 | |---|---|---|---| 68 | |v0|float| |分级参数一| 69 | |v1|float| |分级参数二| 70 | |v2|float| |分级参数三| 71 | |v3|float| |分级参数四| 72 | 说明:分类依据为[0, v0) and [v3, ~) => LOW;[v0, v1) and [v2, v3) => 73 | MEDIUM;[v1, v2) => HIGH。 74 | 75 | ### 3.2 成员函数 76 | 77 | #### check 78 | 检测人脸亮度。 79 | 80 | |参数 | 类型 |缺省值|说明| 81 | |---|---|---|---| 82 | |image|const SeetaImageData&| |原始图像数据| 83 | |face|const SeetaRect&| |人脸位置| 84 | |points|const SeetaPointF*| |人脸5个特征点数组| 85 | |N|const int32_t| |人脸特征点数组长度| 86 | |返回值|QualityResult| |人脸亮度检测结果| 87 | 88 | ## 4 class QualityOfClarity 89 | 非深度学习的人脸清晰度评估器。 90 | 91 | ### 4.1 构造函数 92 | 93 | #### QualityOfClarity 94 | 人脸清晰度评估器构造函数 95 | 96 | |参数 | 类型 |缺省值|说明| 97 | |---|---|---|---| 98 | |void|| || 99 | 100 | #### QualityOfClarity 101 | 人脸清晰度评估器构造函数 102 | 103 | |参数 | 类型 |缺省值|说明| 104 | |---|---|---|---| 105 | |low|float| |分级参数一| 106 | |high|float| |分级参数二| 107 | 说明:分类依据为[0, low)=> LOW; [low, high)=> MEDIUM; [high, ~)=> HIGH。 108 | 109 | ### 4.2 成员函数 110 | 111 | #### check 112 | 检测人脸清晰度。 113 | 114 | |参数 | 类型 |缺省值|说明| 115 | |---|---|---|---| 116 | |image|const SeetaImageData&| |原始图像数据| 117 | |face|const SeetaRect&| |人脸位置| 118 | |points|const SeetaPointF*| |人脸5个特征点数组| 119 | |N|const int32_t| |人脸特征点数组长度| 120 | |返回值|QualityResult| |人脸清晰度检测结果| 121 | 122 | ## 5 class QualityOfLBN 123 | 深度学习的人脸清晰度评估器。 124 | 
125 | ### 5.1 Enum SeetaDevice 126 | 127 | 模型运行的计算设备。 128 | 129 | |名称 |说明| 130 | |---|---| 131 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 132 | |SEETA_DEVICE_CPU|使用CPU计算| 133 | |SEETA_DEVICE_GPU|使用GPU计算| 134 | 135 | ### 5.2 struct SeetaModelSetting 136 | 137 | 构造评估器需要传入的结构体参数。 138 | 139 | |参数 | 类型 |缺省值|说明| 140 | |---|---|---|---| 141 | |model|const char**| |评估器模型| 142 | |id|int| |GPU id| 143 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 144 | 145 | ### 5.3 构造函数 146 | 人脸清晰度评估器构造函数。 147 | 148 | |参数 | 类型 |缺省值|说明| 149 | |---|---|---|---| 150 | |setting|const SeetaModelSetting&| |对象构造结构体参数| 151 | 152 | ### 5.4 成员函数 153 | 154 | #### Detect 155 | 检测人脸清晰度。 156 | 157 | |参数 | 类型 |缺省值|说明| 158 | |---|---|---|---| 159 | |image|const SeetaImageData&| |原始图像数据| 160 | |points|const SeetaPointF*| |人脸68个特征点数组| 161 | |light|int*| |亮度返回结果,暂不推荐使用该返回结果| 162 | |blur|int*| |模糊度返回结果| 163 | |noise|int*| |是否有噪声返回结果,暂不推荐使用该返回结果| 164 | |返回值|void| || 165 | 说明:blur 结果返回 0 说明人脸是清晰的,blur 为 1 说明人脸是模糊的。 166 | 167 | #### set 168 | 设置相关属性值。其中
169 | 170 | **PROPERTY_NUMBER_THREADS**: 171 | 表示计算线程数,默认为 4。
172 | **PROPERTY_ARM_CPU_MODE**:针对于移动端,表示设置的 cpu 计算模式。0 表示 173 | 大核计算模式,1 表示小核计算模式,2 表示平衡模式,为默认模式。
174 | **PROPERTY_BLUR_THRESH**:表示人脸模糊阈值,默认值为 0.80。 175 | 176 | |参数 | 类型 |缺省值|说明| 177 | |---|---|---|---| 178 | |property|Property||属性类别| 179 | |value|double||设置的属性值| 180 | |返回值|void| | | 181 | 182 | #### get 183 | 获取相关属性值。 184 | 185 | |参数 | 类型 |缺省值|说明| 186 | |---|---|---|---| 187 | |property|Property||属性类别| 188 | |返回值|double||对应的属性值| 189 | 190 | ## 6 class QualityOfPose 191 | 非深度学习的人脸姿态评估器。 192 | 193 | ### 6.1 构造函数 194 | 195 | #### QualityOfPose 196 | 人脸姿态评估器构造函数。 197 | 198 | |参数 | 类型 |缺省值|说明| 199 | |---|---|---|---| 200 | |void|| || 201 | 202 | ### 6.2 成员函数 203 | 204 | #### check 205 | 检测人脸姿态。 206 | 207 | |参数 | 类型 |缺省值|说明| 208 | |---|---|---|---| 209 | |image|const SeetaImageData&| |原始图像数据| 210 | |face|const SeetaRect&| |人脸位置| 211 | |points|const SeetaPointF*| |人脸5个特征点数组| 212 | |N|const int32_t| |人脸特征点数组长度| 213 | |返回值|QualityResult| |人脸姿态检测结果| 214 | 215 | ## 7 class QualityOfPoseEx 216 | 深度学习的人脸姿态评估器。 217 | 218 | ### 7.1 Enum SeetaDevice 219 | 220 | 模型运行的计算设备。 221 | 222 | |名称 |说明| 223 | |---|---| 224 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 225 | |SEETA_DEVICE_CPU|使用CPU计算| 226 | |SEETA_DEVICE_GPU|使用GPU计算| 227 | 228 | ### 7.2 struct SeetaModelSetting 229 | 230 | 构造评估器需要传入的结构体参数。 231 | 232 | |参数 | 类型 |缺省值|说明| 233 | |---|---|---|---| 234 | |model|const char**| |评估器模型| 235 | |id|int| |GPU id| 236 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 237 | 238 | ### 7.3 构造函数 239 | 240 | #### QualityOfPoseEx 241 | 人脸姿态评估器构造函数。 242 | 243 | |参数 | 类型 |缺省值|说明| 244 | |---|---|---|---| 245 | |setting|const SeetaModelSetting&| |对象结构体参数| 246 | 247 | ### 7.4 成员函数 248 | 249 | #### check 250 | 检测人脸姿态。 251 | 252 | |参数 | 类型 |缺省值|说明| 253 | |---|---|---|---| 254 | |image|const SeetaImageData&| |原始图像数据| 255 | |face|const SeetaRect&| |人脸位置| 256 | |points|const SeetaPointF*| |人脸5个特征点数组| 257 | |N|const int32_t| |人脸特征点数组长度| 258 | |返回值|QualityResult| |人脸姿态检测结果| 259 | 260 | #### check 261 | 检测人脸姿态,返回具体姿态角度。 262 | 263 | |参数 | 类型 |缺省值|说明| 264 | |---|---|---|---| 265 | |image|const SeetaImageData&| |原始图像数据| 266 | |face|const SeetaRect&| |人脸位置| 267 | |points|const SeetaPointF*| |人脸5个特征点数组| 268 | |N|const int32_t| |人脸特征点数组长度| 269 | |yaw|float&| |yaw方向角度| 270 | |pitch|float&| |pitch方向角度| 271 | |roll|float&| |roll方向角度| 272 | |返回值|bool| |true为检测成功| 273 | 274 | #### set 275 | 设置相关属性值。其中<br>
276 | **YAW_HIGH_THRESHOLD**: 277 | yaw方向的分级参数一。
278 | **YAW_LOW_THRESHOLD**: 279 | yaw方向的分级参数二。
280 | **PITCH_HIGH_THRESHOLD**: 281 | pitch方向的分级参数一。
282 | **PITCH_LOW_THRESHOLD**: 283 | pitch方向的分级参数二。
284 | **ROLL_HIGH_THRESHOLD**: 285 | roll方向的分级参数一。
286 | **ROLL_LOW_THRESHOLD**: 287 | roll方向的分级参数二。
288 | 289 | |参数 | 类型 |缺省值|说明| 290 | |---|---|---|---| 291 | |property|Property||属性类别| 292 | |value|double||设置的属性值| 293 | |返回值|void| | | 294 | 295 | #### get 296 | 获取相关属性值。 297 | 298 | |参数 | 类型 |缺省值|说明| 299 | |---|---|---|---| 300 | |property|Property||属性类别| 301 | |返回值|double||对应的属性值| 302 | 303 | ## 8 class QualityOfResolution 304 | 非深度学习的人脸尺寸评估器。 305 | 306 | ### 8.1 构造函数 307 | 308 | #### QualityOfResolution 309 | 人脸尺寸评估器构造函数。 310 | 311 | |参数 | 类型 |缺省值|说明| 312 | |---|---|---|---| 313 | |void|| || 314 | 315 | #### QualityOfResolution 316 | 人脸尺寸评估器构造函数。 317 | 318 | |参数 | 类型 |缺省值|说明| 319 | |---|---|---|---| 320 | |low|float| |分级参数一| 321 | |high|float| |分级参数二| 322 | 323 | ### 8.2 成员函数 324 | 325 | #### check 326 | 评估人脸尺寸。 327 | 328 | |参数 | 类型 |缺省值|说明| 329 | |---|---|---|---| 330 | |image|const SeetaImageData&| |原始图像数据| 331 | |face|const SeetaRect&| |人脸位置| 332 | |points|const SeetaPointF*| |人脸5个特征点数组| 333 | |N|const int32_t| |人脸特征点数组长度| 334 | |返回值|QualityResult| |人脸尺寸评估结果| 335 | 336 | ## 9 class QualityOfIntegrity 337 | 非深度学习的人脸完整度评估器,评估人脸靠近图像边缘的程度。 338 | 339 | ### 9.1 构造函数 340 | 341 | #### QualityOfIntegrity 342 | 人脸完整度评估器构造函数。 343 | 344 | |参数 | 类型 |缺省值|说明| 345 | |---|---|---|---| 346 | |void|| || 347 | 348 | #### QualityOfIntegrity 349 | 人脸完整度评估器构造函数。 350 | 351 | |参数 | 类型 |缺省值|说明| 352 | |---|---|---|---| 353 | |low|float| |分级参数一| 354 | |high|float| |分级参数二| 355 | 356 | 说明:low 和 high 主要用来控制人脸位置靠近图像边缘的接受程度。 357 | 358 | ### 9.2 成员函数 359 | 360 | #### check 361 | 评估人脸完整度。 362 | 363 | |参数 | 类型 |缺省值|说明| 364 | |---|---|---|---| 365 | |image|const SeetaImageData&| |原始图像数据| 366 | |face|const SeetaRect&| |人脸位置| 367 | |points|const SeetaPointF*| |人脸5个特征点数组| 368 | |N|const int32_t| |人脸特征点数组长度| 369 | |返回值|QualityResult| |人脸完整度评估结果| -------------------------------------------------------------------------------- /docs/静默活体.md: -------------------------------------------------------------------------------- 1 | # 静默活体识别器 2 | 3 | ## **1. 接口简介**<br>
4 | 5 | 静默活体识别根据输入的图像数据、人脸位置和人脸特征点,对输入人脸进行活体的判断,并返回人脸活体的状态。
6 | 7 | ## **2. 类型说明**
8 | 9 | ### **2.1 struct SeetaImageData**
10 | 11 | |名称 | 类型 | 说明| 12 | |---|---|---| 13 | |data|uint8_t* |图像数据| 14 | |width | int32_t | 图像的宽度| 15 | |height | int32_t | 图像的高度| 16 | |channels | int32_t | 图像的通道数| 17 | 说明:存储彩色(三通道)或灰度(单通道)图像,像素连续存储,行优先,采用 BGR888 格式存放彩色图像,单字节灰度值存放灰度图像。 18 | 19 | ### **2.2 struct SeetaRect**<br>
20 | 21 | |名称 | 类型 | 说明| 22 | |---|---|---| 23 | |x|int32_t |人脸区域左上角横坐标| 24 | |y| int32_t | 人脸区域左上角纵坐标| 25 | |width| int32_t | 人脸区域宽度| 26 | |height| int32_t | 人脸区域高度| 27 | 28 | ### **2.3 struct SeetaPointF**
29 | 30 | |名称 | 类型 | 说明| 31 | |---|---|---| 32 | |x|double|人脸特征点横坐标| 33 | |y|double|人脸特征点纵坐标| 34 | 35 | ## 3 class FaceAntiSpoofing 36 | 活体识别器。 37 | 38 | ### 3.1 Enum SeetaDevice 39 | 40 | 模型运行的计算设备。 41 | 42 | |名称 |说明| 43 | |---|---| 44 | |SEETA_DEVICE_AUTO|自动检测,会优先使用 GPU| 45 | |SEETA_DEVICE_CPU|使用CPU计算| 46 | |SEETA_DEVICE_GPU|使用GPU计算| 47 | 48 | ### 3.2 struct SeetaModelSetting 49 | 50 | 构造活体识别器需要传入的结构体参数。 51 | 52 | |参数 | 类型 |缺省值|说明| 53 | |---|---|---|---| 54 | |model|const char**| |识别器模型| 55 | |id|int| |GPU id| 56 | |device|SeetaDevice|AUTO |计算设备(CPU 或者 GPU)| 57 | 58 | ### 3.3 构造函数 59 | 60 | #### FaceAntiSpoofing 61 | 构造活体识别器,需要在构造的时候传入识别器结构参数。 62 | 63 | |参数 | 类型 |缺省值|说明| 64 | |---|---|---|---| 65 | |setting|const SeetaModelSetting&| |识别器接口参数| 66 | 说明:创建活体识别器时可以传入一个模型文件(局部活体模型)或两个模型文件(局部活体模型和全局活体模型,顺序不可颠倒)。只传入一个模型文件时识别速度更快,但识别精度低于传入两个模型文件的情况。 67 | 68 | ### 3.4 成员函数 69 | 70 | #### Predict 71 | 基于单帧图像对人脸是否为活体进行判断。 72 | 73 | |参数 | 类型 |缺省值|说明| 74 | |---|---|---|---| 75 | |image|const SeetaImageData&| |原始图像数据| 76 | |face|const SeetaRect&| |人脸位置| 77 | |points|const SeetaPointF*| |人脸特征点数组| 78 | |返回值|Status| |人脸活体的状态| 79 | 说明:Status 活体状态可取值为REAL(真人)、SPOOF(假体)、FUZZY(由于图像质量问题造成的无法判断)和 DETECTING(正在检测),DETECTING 状态针对 PredictVideo 模式。 80 | 81 | #### PredictVideo 82 | 基于连续视频序列对人脸是否为活体进行判断。 83 | 84 | |参数 | 类型 |缺省值|说明| 85 | |---|---|---|---| 86 | |image|const SeetaImageData&| |原始图像数据| 87 | |face|const SeetaRect&| |人脸位置| 88 | |points|const SeetaPointF*| |人脸特征点数组| 89 | |返回值|Status| |人脸活体的状态| 90 | 说明:Status 活体状态可取值为REAL(真人)、SPOOF(假体)、FUZZY(由于图像质量问题造成的无法判断)和 DETECTING(正在检测),DETECTING 状态针对 PredictVideo 模式。 91 | 92 | #### ResetVideo 93 | 重置活体识别结果,开始下一次 PredictVideo 识别过程。 94 | 95 | |参数 | 类型 |缺省值|说明| 96 | |---|---|---|---| 97 | |返回值|void| || 98 | 99 | #### GetPreFrameScore 100 | 获取活体检测内部分数。 101 | 102 | |参数 | 类型 |缺省值|说明| 103 | |---|---|---|---| 104 | |clarity|float*| |人脸清晰度分数| 105 | |reality|float*| |人脸活体分数| 106 | |返回值|void| || 107 | 108 | 
109 | 设置 Video 模式中识别视频帧数,当输入帧数为该值以后才会有活体的 110 | 真假结果。 111 | 112 | |参数 | 类型 |缺省值|说明| 113 | |---|---|---|---| 114 | |number|int32_t| |video模式下活体需求帧数| 115 | |返回值|void| || 116 | 117 | #### GetVideoFrameCount 118 | 获取video模式下活体需求帧数。 119 | 120 | |参数 | 类型 |缺省值|说明| 121 | |---|---|---|---| 122 | |返回值|int| || 123 | 124 | #### SetThreshold 125 | 设置阈值。 126 | 127 | |参数 | 类型 |缺省值|说明| 128 | |---|---|---|---| 129 | |clarity|float| |人脸清晰度阈值| 130 | |reality|float| |人脸活体阈值| 131 | |返回值|void| || 132 | 说明:人脸清晰度阈值默认为0.3,人脸活体阈值为0.8。 133 | 134 | #### GetThreshold 135 | 获取阈值。 136 | 137 | |参数 | 类型 |缺省值|说明| 138 | |---|---|---|---| 139 | |clarity|float*| |人脸清晰度阈值| 140 | |reality|float*| |人脸活体阈值| 141 | 142 | #### set 143 | 设置相关属性值。其中
144 | **PROPERTY_NUMBER_THREADS**: 145 | 表示计算线程数,默认为 4。 146 | 147 | |参数 | 类型 |缺省值|说明| 148 | |---|---|---|---| 149 | |property|Property||属性类别| 150 | |value|double||设置的属性值| 151 | |返回值|void| | | 152 | 153 | #### get 154 | 获取相关属性值。 155 | 156 | |参数 | 类型 |缺省值|说明| 157 | |---|---|---|---| 158 | |property|Property||属性类别| 159 | |返回值|double||对应的属性值| -------------------------------------------------------------------------------- /example/qt/README.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/example/qt/README.md -------------------------------------------------------------------------------- /example/qt/seetaface_demo/default.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/example/qt/seetaface_demo/default.png -------------------------------------------------------------------------------- /example/qt/seetaface_demo/face_resource.qrc: -------------------------------------------------------------------------------- 1 | <RCC> 2 | <qresource prefix="/new/prefix1"> 3 | <file>default.png</file> 4 | <file>white.png</file> 5 | <file>seetatech_logo.png</file> 6 | </qresource> 7 | </RCC> 8 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/inputfilesprocessdialog.cpp: -------------------------------------------------------------------------------- 1 | #include <QLabel> 2 | #include <QProgressBar> 3 | #include <QPushButton> 4 | #include <QBoxLayout> 5 | #include <QDebug> 6 | #include "inputfilesprocessdialog.h" 7 | 8 | #include "videocapturethread.h" 9 | 10 | 11 | InputFilesProcessDlg::InputFilesProcessDlg(QWidget *parent, InputFilesThread * thread) 12 | : QDialog(parent) 13 | { 14 | m_exited = false; 15 | workthread = thread; 16 | qDebug() << "------------dlg input----------------"; 17 | //初始化控件对象 18 | //tr是把当前字符串翻译成为其他语言的标记 19 | //&后面的字母是用快捷键来激活控件的标记,例如可以用Alt+w激活Find &what这个控件 20 | label = new QLabel("", this); 21 
| 22 | progressbar = new QProgressBar(this); 23 | progressbar->setOrientation(Qt::Horizontal); 24 | progressbar->setMinimum(0); 25 | progressbar->setMaximum(100); 26 | progressbar->setValue(5); 27 | progressbar->setFormat(tr("current progress:%1%").arg(QString::number(5, 'f',1))); 28 | progressbar->setAlignment(Qt::AlignLeft| Qt::AlignVCenter); 29 | 30 | cancelButton = new QPushButton(tr("&Cancel")); 31 | cancelButton->setEnabled(true); 32 | 33 | //closeButton = new QPushButton(tr("&Close")); 34 | 35 | 36 | //连接信号和槽 37 | //connect(edit1, SIGNAL(textChanged()), this, SLOT(enableOkButton())); 38 | //connect(okButton, SIGNAL(clicked()), this, SLOT(okClicked())); 39 | //connect(closeButton, SIGNAL(clicked()), this, SLOT(close())); 40 | connect(workthread, SIGNAL(sigprogress(float)), this, SLOT(setprogressvalue(float))); 41 | connect(workthread, SIGNAL(sigInputFilesEnd()), this, SLOT(setinputfileend())); 42 | 43 | 44 | 45 | QHBoxLayout *bottomLayout = new QHBoxLayout; 46 | bottomLayout->addStretch(); 47 | bottomLayout->addWidget(cancelButton); 48 | //bottomLayout->addWidget(closeButton); 49 | bottomLayout->addStretch(); 50 | 51 | QVBoxLayout *mainLayout = new QVBoxLayout; 52 | mainLayout->addWidget(label); 53 | mainLayout->addWidget(progressbar); 54 | mainLayout->addStretch(); 55 | mainLayout->addLayout(bottomLayout); 56 | 57 | this->setLayout(mainLayout); 58 | 59 | setWindowTitle(tr("Input Files Progress")); 60 | 61 | //cancelButton->setEnabled(true); 62 | setFixedSize(400,160); 63 | } 64 | 65 | void InputFilesProcessDlg::closeEvent(QCloseEvent *event) 66 | { 67 | if(!m_exited) 68 | { 69 | workthread->m_exited = true; 70 | event->ignore(); 71 | }else 72 | { 73 | event->accept(); 74 | } 75 | 76 | } 77 | 78 | void InputFilesProcessDlg::cancelClicked() 79 | { 80 | workthread->m_exited = true; 81 | } 82 | 83 | 84 | InputFilesProcessDlg::~InputFilesProcessDlg() 85 | { 86 | 87 | } 88 | void InputFilesProcessDlg::setinputfileend() 89 | { 90 | hide(); 91 | m_exited = true; 92 
| close(); 93 | } 94 | 95 | 96 | void InputFilesProcessDlg::setprogressvalue(float value) 97 | { 98 | QString str = QString("%1%").arg(QString::number(value, 'f',1)); 99 | progressbar->setValue(value); 100 | progressbar->setFormat(str); 101 | } 102 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/inputfilesprocessdialog.h: -------------------------------------------------------------------------------- 1 | #ifndef INPUTFILESPROCESSDIALOG_H 2 | #define INPUTFILESPROCESSDIALOG_H 3 | 4 | 5 | #include <QDialog> 6 | 7 | 8 | class QLabel; 9 | class QProgressBar; 10 | class QPushButton; 11 | class InputFilesThread; 12 | 13 | class InputFilesProcessDlg :public QDialog{ 14 | 15 | //如果需要在对话框类中自定义信号和槽,则需要在类内添加Q_OBJECT 16 | Q_OBJECT 17 | public: 18 | //构造函数,析构函数 19 | InputFilesProcessDlg(QWidget *parent, InputFilesThread * thread); 20 | ~InputFilesProcessDlg(); 21 | protected: 22 | void closeEvent(QCloseEvent *event); 23 | 24 | //在signal和slots中定义这个对话框所需要的信号。 25 | signals: 26 | //signals修饰的函数不需要本类实现。它描述了本类对象可以发送哪些求助信号 27 | 28 | //slots必须用private修饰 29 | private slots: 30 | void cancelClicked(); 31 | void setprogressvalue(float value); 32 | void setinputfileend(); 33 | //声明这个对话框需要哪些组件 34 | private: 35 | QLabel *label; 36 | 37 | QProgressBar *progressbar; 38 | //QLabel *label2; 39 | 40 | QPushButton *cancelButton;//, *closeButton; 41 | 42 | InputFilesThread * workthread; 43 | bool m_exited; 44 | }; 45 | 46 | 47 | 48 | #endif // INPUTFILESPROCESSDIALOG_H 49 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/main.cpp: -------------------------------------------------------------------------------- 1 | #include "mainwindow.h" 2 | #include <QApplication> 3 | #include <QDebug> 4 | 5 | 6 | int main(int argc, char *argv[]) 7 | { 8 | QApplication a(argc, argv); 9 | 10 | 11 | //QTextCodec::setCodecForCStrings(QTextCodec::codecForName("GBK")); 12 | 
//QTextCodec::setCodecForCStrings(QTextCodec::codecForName("UTF-8")) 13 | MainWindow w; 14 | w.setWindowTitle("SeetaFace Demo"); 15 | w.setWindowIcon(QIcon(":/new/prefix1/seetatech_logo.png")); 16 | w.show(); 17 | 18 | QString str("乱码"); 19 | 20 | qDebug() << str; 21 | return a.exec(); 22 | } 23 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/mainwindow.cpp: -------------------------------------------------------------------------------- 1 | #include "mainwindow.h" 2 | #include "ui_mainwindow.h" 3 | 4 | #include "QDir" 5 | #include "QFileDialog" 6 | #include "QDebug" 7 | 8 | #include "qsqlquery.h" 9 | #include "qmessagebox.h" 10 | #include "qsqlerror.h" 11 | 12 | #include "qitemselectionmodel.h" 13 | #include 14 | 15 | //#include "faceinputdialog.h" 16 | 17 | #include "inputfilesprocessdialog.h" 18 | #include "resetmodelprocessdialog.h" 19 | 20 | #include 21 | #include 22 | #include 23 | #include 24 | 25 | //#include "Common/CStruct.h" 26 | #include 27 | using namespace std::chrono; 28 | 29 | 30 | 31 | ////////////////////////////////// 32 | 33 | 34 | const QString gcrop_prefix("crop_"); 35 | Config_Paramter gparamters; 36 | std::string gmodelpath; 37 | 38 | ///////////////////////////////////// 39 | MainWindow::MainWindow(QWidget *parent) : 40 | QMainWindow(parent), 41 | ui(new Ui::MainWindow) 42 | { 43 | m_currenttab = -1; 44 | ui->setupUi(this); 45 | 46 | 47 | QIntValidator * vfdminfacesize = new QIntValidator(20, 1000); 48 | ui->fdminfacesize->setValidator(vfdminfacesize); 49 | 50 | QDoubleValidator *vfdthreshold = new QDoubleValidator(0.0,1.0, 2); 51 | ui->fdthreshold->setValidator(vfdthreshold); 52 | 53 | QDoubleValidator *vantispoofclarity = new QDoubleValidator(0.0,1.0, 2); 54 | ui->antispoofclarity->setValidator(vantispoofclarity); 55 | 56 | QDoubleValidator *vantispoofreality = new QDoubleValidator(0.0,1.0, 2); 57 | ui->antispoofreality->setValidator(vantispoofreality); 58 | 59 | 
QDoubleValidator *vyawhigh = new QDoubleValidator(0.0,90, 2); 60 | ui->yawhighthreshold->setValidator(vyawhigh); 61 | 62 | QDoubleValidator *vyawlow = new QDoubleValidator(0.0,90, 2); 63 | ui->yawlowthreshold->setValidator(vyawlow); 64 | 65 | QDoubleValidator *vpitchlow = new QDoubleValidator(0.0,90, 2); 66 | ui->pitchlowthreshold->setValidator(vpitchlow); 67 | 68 | QDoubleValidator *vpitchhigh = new QDoubleValidator(0.0,90, 2); 69 | ui->pitchhighthreshold->setValidator(vpitchhigh); 70 | 71 | QDoubleValidator *vfrthreshold = new QDoubleValidator(0.0,1.0, 2); 72 | ui->fr_threshold->setValidator(vfrthreshold); 73 | 74 | gparamters.MinFaceSize = 100; 75 | gparamters.Fd_Threshold = 0.80; 76 | gparamters.VideoWidth = 400; 77 | gparamters.VideoHeight = 400; 78 | gparamters.AntiSpoofClarity = 0.30; 79 | gparamters.AntiSpoofReality = 0.80; 80 | gparamters.PitchLowThreshold = 20; 81 | gparamters.PitchHighThreshold = 10; 82 | gparamters.YawLowThreshold = 20; 83 | gparamters.YawHighThreshold = 10; 84 | gparamters.Fr_Threshold = 0.6; 85 | gparamters.Fr_ModelPath = "face_recognizer.csta"; 86 | 87 | m_type.type = 0; 88 | m_type.filename = ""; 89 | m_type.title = "Open Camera 0"; 90 | 91 | ui->recognize_label->setText(m_type.title); 92 | 93 | int width = this->width(); 94 | int height = this->height(); 95 | this->setFixedSize(width, height); 96 | 97 | ui->db_editpicture->setStyleSheet("border-image:url(:/new/prefix1/default.png)"); 98 | ui->db_editcrop->setStyleSheet("border-image:url(:/new/prefix1/default.png)"); 99 | 100 | ///////////////////////// 101 | 102 | 103 | m_database = QSqlDatabase::addDatabase("QSQLITE"); 104 | QString exepath = QCoreApplication::applicationDirPath(); 105 | QString strdb = exepath + /*QDir::separator()*/ + "/seetaface_demo.db"; 106 | 107 | m_image_tmp_path = exepath + /*QDir::separator()*/ + "/tmp/";// + QDir::separator(); 108 | m_image_path = exepath + /*QDir::separator()*/ + "/images/";// + QDir::separator(); 109 | //m_model_path = exepath + 
/*QDir::separator()*/ + "/models/";// + QDir::separator(); 110 | gmodelpath = (exepath + /*QDir::separator()*/ + "/models/"/* + QDir::separator()*/).toStdString(); 111 | 112 | QDir dir; 113 | dir.mkpath(m_image_tmp_path); 114 | dir.mkpath(m_image_path); 115 | 116 | m_database.setDatabaseName(strdb); 117 | 118 | if(!m_database.open()) 119 | { 120 | QMessageBox::critical(NULL, "critical", tr("open database failed, exited!"), QMessageBox::Yes); 121 | exit(-1); 122 | } 123 | 124 | QStringList tables = m_database.tables(); 125 | m_table = "face_tab"; 126 | m_config_table = "setting_tab";//"paramter_tab"; 127 | 128 | 129 | 130 | bool bfind = false; 131 | bool bconfigfind = false; 132 | int i =0; 133 | for( i=0; ifdminfacesize->setText(QString::number(gparamters.MinFaceSize)); 186 | ui->fdthreshold->setText(QString::number(gparamters.Fd_Threshold)); 187 | ui->antispoofclarity->setText(QString::number(gparamters.AntiSpoofClarity)); 188 | ui->antispoofreality->setText(QString::number(gparamters.AntiSpoofReality)); 189 | ui->yawlowthreshold->setText(QString::number(gparamters.YawLowThreshold)); 190 | ui->yawhighthreshold->setText(QString::number(gparamters.YawHighThreshold)); 191 | ui->pitchlowthreshold->setText(QString::number(gparamters.PitchLowThreshold)); 192 | ui->pitchhighthreshold->setText(QString::number(gparamters.PitchHighThreshold)); 193 | ui->fr_threshold->setText(QString::number(gparamters.Fr_Threshold)); 194 | ui->fr_modelpath->setText(gparamters.Fr_ModelPath); 195 | qDebug() << "create config table ok!"; 196 | 197 | } 198 | 199 | ui->dbtableview->setSelectionBehavior(QAbstractItemView::SelectRows); 200 | ui->dbtableview->setEditTriggers(QAbstractItemView::NoEditTriggers); 201 | ui->dbtableview->verticalHeader()->setDefaultSectionSize(80); 202 | ui->dbtableview->verticalHeader()->hide(); 203 | 204 | connect(ui->dbtableview, SIGNAL(clicked(QModelIndex)), this, SLOT(showfaceinfo())); 205 | 206 | m_model = new QStandardItemModel(this); 207 | QStringList 
columsTitles; 208 | columsTitles << "ID" << "Name" << "Image" << /*"edit" << */" "; 209 | m_model->setHorizontalHeaderLabels(columsTitles); 210 | ui->dbtableview->setModel(m_model); 211 | ui->dbtableview->setColumnWidth(0, 120); 212 | ui->dbtableview->setColumnWidth(1, 200); 213 | ui->dbtableview->setColumnWidth(2, 104); 214 | ui->dbtableview->setColumnWidth(3, 100); 215 | //ui->dbtableview->setColumnWidth(4, 100); 216 | getdatas(); 217 | /// /////////////////////////// 218 | 219 | gparamters.VideoWidth = ui->previewlabel->width(); 220 | gparamters.VideoHeight = ui->previewlabel->height(); 221 | 222 | if(bconfigfind) 223 | { 224 | //fd_minfacesize, fd_threshold, antispoof_clarity, antispoof_reality, qa_yawlow, qa_yawhigh, qa_pitchlow, qa_pitchhigh 225 | QSqlQuery q("select * from " + m_config_table); 226 | while(q.next()) 227 | { 228 | gparamters.MinFaceSize = q.value("fd_minfacesize").toInt(); 229 | ui->fdminfacesize->setText(QString::number(q.value("fd_minfacesize").toInt())); 230 | 231 | gparamters.Fd_Threshold = q.value("fd_threshold").toFloat(); 232 | ui->fdthreshold->setText(QString::number(q.value("fd_threshold").toFloat())); 233 | 234 | gparamters.AntiSpoofClarity = q.value("antispoof_clarity").toFloat(); 235 | ui->antispoofclarity->setText(QString::number(q.value("antispoof_clarity").toFloat())); 236 | 237 | gparamters.AntiSpoofReality = q.value("antispoof_reality").toFloat(); 238 | ui->antispoofreality->setText(QString::number(q.value("antispoof_reality").toFloat())); 239 | 240 | gparamters.YawLowThreshold = q.value("qa_yawlow").toFloat(); 241 | ui->yawlowthreshold ->setText(QString::number(q.value("qa_yawlow").toFloat())); 242 | 243 | gparamters.YawHighThreshold = q.value("qa_yawhigh").toFloat(); 244 | ui->yawhighthreshold ->setText(QString::number(q.value("qa_yawhigh").toFloat())); 245 | 246 | gparamters.PitchLowThreshold = q.value("qa_pitchlow").toFloat(); 247 | ui->pitchlowthreshold ->setText(QString::number(q.value("qa_pitchlow").toFloat())); 248 | 
249 | gparamters.PitchHighThreshold = q.value("qa_pitchhigh").toFloat(); 250 | ui->pitchhighthreshold ->setText(QString::number(q.value("qa_pitchhigh").toFloat())); 251 | 252 | gparamters.Fr_Threshold = q.value("fr_threshold").toFloat(); 253 | gparamters.Fr_ModelPath = q.value("fr_modelpath").toString(); 254 | 255 | ui->fr_threshold->setText(QString::number(gparamters.Fr_Threshold)); 256 | ui->fr_modelpath->setText(gparamters.Fr_ModelPath); 257 | 258 | } 259 | 260 | } 261 | 262 | 263 | //////////////////////////// 264 | ui->previewtableview->setSelectionBehavior(QAbstractItemView::SelectRows); 265 | ui->previewtableview->setEditTriggers(QAbstractItemView::NoEditTriggers); 266 | ui->previewtableview->verticalHeader()->setDefaultSectionSize(80); 267 | ui->previewtableview->verticalHeader()->hide(); 268 | 269 | //connect(ui->tableView, SIGNAL(clicked(QModelIndex)), this, SLOT(showfaceinfo())); 270 | 271 | m_videomodel = new QStandardItemModel(this); 272 | columsTitles.clear(); 273 | columsTitles << "Name" << "Score" << "Gallery" << "Snapshot" << "PID"; 274 | m_videomodel->setHorizontalHeaderLabels(columsTitles); 275 | ui->previewtableview->setModel(m_videomodel); 276 | ui->previewtableview->setColumnWidth(0, 140); 277 | ui->previewtableview->setColumnWidth(1, 80); 278 | ui->previewtableview->setColumnWidth(2, 84); 279 | ui->previewtableview->setColumnWidth(3, 84); 280 | ui->previewtableview->setColumnWidth(4, 2); 281 | ui->previewtableview->hideColumn(4); 282 | 283 | ///////////////////////// 284 | m_videothread = new VideoCaptureThread(&m_datalst, ui->previewlabel->width(), ui->previewlabel->height()); 285 | m_videothread->setparamter(); 286 | //m_videothread->setMinFaceSize(ui->fdminfacesize->text().toInt()); 287 | connect(m_videothread, SIGNAL(sigUpdateUI(const QImage &)), this, SLOT(onupdateui(const QImage &))); 288 | connect(m_videothread, SIGNAL(sigEnd(int)), this, SLOT(onvideothreadend(int))); 289 | connect(m_videothread->m_workthread, SIGNAL(sigRecognize(int, 
const QString &, const QString &, float, const QImage &, const QRect &)), this, 290 | SLOT(onrecognize(int, const QString &, const QString &, float, const QImage &, const QRect &))); 291 | //m_videothread->start(); 292 | 293 | m_inputfilesthread = new InputFilesThread(m_videothread, m_image_path, m_image_tmp_path); 294 | m_resetmodelthread = new ResetModelThread( m_image_path, m_image_tmp_path); 295 | 296 | connect(m_inputfilesthread, SIGNAL(sigInputFilesUpdateUI(std::vector*)), this, SLOT(oninputfilesupdateui(std::vector *)), Qt::BlockingQueuedConnection); 297 | 298 | ui->dbsavebtn->setEnabled(true); 299 | ui->previewrunbtn->setEnabled(true); 300 | ui->previewstopbtn->setEnabled(false); 301 | 302 | //ui->pushButton_6->setEnabled(false); 303 | /////////////////////// 304 | /////////////////////// 305 | //ui->label->setStyleSheet("QLabel{background-color:rgb(255,255,255);}"); 306 | //ui->label->setStyleSheet("border-image:url(:/new/prefix1/white.png)"); 307 | int a = ui->previewlabel->width(); 308 | int b = ui->previewlabel->height(); 309 | QImage image(":/new/prefix1/white.png"); 310 | QImage ime = image.scaled(a,b); 311 | ui->previewlabel->setPixmap(QPixmap::fromImage(ime)); 312 | 313 | ui->tabWidget->setCurrentIndex(0); 314 | m_currenttab = ui->tabWidget->currentIndex(); 315 | 316 | 317 | if(m_model->rowCount() > 0) 318 | { 319 | ui->dbtableview->scrollToBottom(); 320 | ui->dbtableview->selectRow(m_model->rowCount() - 1); 321 | emit ui->dbtableview->clicked(m_model->index(m_model->rowCount() - 1, 1)); 322 | } 323 | } 324 | 325 | MainWindow::~MainWindow() 326 | { 327 | 328 | delete ui; 329 | cleardata(); 330 | } 331 | 332 | void MainWindow::cleardata() 333 | { 334 | std::map::iterator iter = m_datalst.begin(); 335 | for(; iter != m_datalst.end(); ++iter) 336 | { 337 | if(iter->second) 338 | { 339 | delete iter->second; 340 | iter->second = NULL; 341 | } 342 | } 343 | m_datalst.clear(); 344 | } 345 | 346 | void MainWindow::getdatas() 347 | { 348 | int i = 0; 349 | 
QSqlQuery q("select * from " + m_table + " order by id asc"); 350 | while(q.next()) 351 | { 352 | //qDebug() << q.value("id").toInt() << "-----" << q.value("name").toString() << "----" << q.value("image_path").toString(); 353 | QByteArray data1 = q.value("feature_data").toByteArray(); 354 | float * ptr = (float *)data1.data(); 355 | //qDebug() << ptr[0] << "," << ptr[1] << "," << ptr[2] << "," << ptr[3] ; 356 | 357 | ////////////////////////////////////////////////// 358 | m_model->setItem(i, 0, new QStandardItem(QString::number(q.value("id").toInt()))); 359 | m_model->setItem(i, 1, new QStandardItem(q.value("name").toString())); 360 | // m_model->setItem(i, 2, new QStandardItem(q.value("image_path").toString())); 361 | 362 | QLabel *label = new QLabel(""); 363 | label->setFixedSize(100,80); 364 | label->setStyleSheet("border-image:url(" + m_image_path + q.value("image_path").toString() + ")"); 365 | ui->dbtableview->setIndexWidget(m_model->index(m_model->rowCount() - 1, 2), label); 366 | 367 | /* 368 | QPushButton *button = new QPushButton("edit"); 369 | button->setProperty("id", q.value("id").toInt()); 370 | button->setFixedSize(80, 40); 371 | connect(button, SIGNAL(clicked()), this, SLOT(editrecord())); 372 | ui->dbtableview->setIndexWidget(m_model->index(m_model->rowCount() - 1, 3), button); 373 | */ 374 | 375 | 376 | QPushButton *button2 = new QPushButton("delete"); 377 | button2->setProperty("id", q.value("id").toInt()); 378 | button2->setFixedSize(80, 40); 379 | connect(button2, SIGNAL(clicked()), this, SLOT(deleterecord())); 380 | 381 | QWidget *widget = new QWidget(); 382 | QHBoxLayout *layout = new QHBoxLayout; 383 | layout->addStretch(); 384 | layout->addWidget(button2); 385 | layout->addStretch(); 386 | widget->setLayout(layout); 387 | 388 | ui->dbtableview->setIndexWidget(m_model->index(m_model->rowCount() - 1, 3), widget); 389 | 390 | //ui->dbtableview->setIndexWidget(m_model->index(m_model->rowCount() - 1, 3), button2); 391 | 392 | DataInfo * info = 
new DataInfo; 393 | info->id = q.value("id").toInt(); 394 | info->name = q.value("name").toString(); 395 | info->image_path = q.value("image_path").toString(); 396 | memcpy(info->features, ptr, 1024 * sizeof(float)); 397 | info->x = q.value("facex").toInt(); 398 | info->y = q.value("facey").toInt(); 399 | info->width = q.value("facewidth").toInt(); 400 | info->height = q.value("faceheight").toInt(); 401 | m_datalst.insert(std::map<int, DataInfo *>::value_type(info->id, info)); 402 | i++; 403 | } 404 | } 405 | 406 | 407 | 408 | void MainWindow::editrecord() 409 | { 410 | //QPushButton *button = (QPushButton *)sender(); 411 | //qDebug() << button->property("id").toInt() << ", edit"; 412 | } 413 | 414 | void MainWindow::deleterecord() 415 | { 416 | QPushButton *button = (QPushButton *)sender(); 417 | qDebug() << button->property("id").toInt() << ",del"; 418 | QMessageBox::StandardButton reply = QMessageBox::question(NULL, "delete", tr("Are you sure you want to delete this record?"), QMessageBox::Yes | QMessageBox::No); 419 | if(reply == QMessageBox::No) 420 | return; 421 | 422 | QModelIndex modelindex = ui->dbtableview->indexAt(button->pos()); 423 | 424 | int id = button->property("id").toInt(); 425 | QStandardItemModel * model = (QStandardItemModel *)ui->dbtableview->model(); 426 | 427 | QSqlQuery query("delete from " + m_table + " where id=" + QString::number(id)); 428 | //qDebug() << "delete from " + m_table + " where id=" + QString::number(id); 429 | if(!query.exec()) 430 | { 431 | QMessageBox::warning(NULL, "warning", tr("failed to delete the record!"), QMessageBox::Yes); 432 | return; 433 | } 434 | 435 | int nrows = modelindex.row(); 436 | model->removeRow(modelindex.row()); 437 | std::map<int, DataInfo *>::iterator iter = m_datalst.find(id); 438 | if(iter != m_datalst.end()) 439 | { 440 | QFile file(m_image_path + iter->second->image_path); 441 | file.remove(); 442 | delete iter->second; 443 | m_datalst.erase(iter); 444 | } 445 | 446 | if(m_model->rowCount() > 0) 447 | { 448 | nrows--; 449 | if(nrows < 0) 
450 | { 451 | nrows = 0; 452 | } 453 | //qDebug() << "delete------------row:" << nrows; 454 | ui->dbtableview->selectRow(nrows); 455 | emit ui->dbtableview->clicked(m_model->index(nrows, 1)); 456 | }else 457 | { 458 | ui->db_editname->setText(""); 459 | ui->db_editid->setText(""); 460 | ui->db_editpicture->setStyleSheet("border-image:url(:/new/prefix1/default.png)"); 461 | ui->db_editcrop->setStyleSheet("border-image:url(:/new/prefix1/default.png)"); 462 | } 463 | } 464 | 465 | void MainWindow::showfaceinfo() 466 | { 467 | int row = ui->dbtableview->currentIndex().row(); 468 | //qDebug() << "showfaceinfo:" << row ; 469 | if(row >= 0) 470 | { 471 | QModelIndex index = m_model->index(row, 0); 472 | int id = ui->db_editid->text().toInt(); 473 | int curid = m_model->data(index).toInt(); 474 | if(id == curid) 475 | return; 476 | 477 | 478 | ui->db_editid->setText(QString::number(m_model->data(index).toInt())); 479 | std::map<int, DataInfo *>::iterator iter = m_datalst.find(m_model->data(index).toInt()); 480 | if(iter == m_datalst.end()) 481 | return; 482 | 483 | index = m_model->index(row, 1); 484 | ui->db_editname->setText(m_model->data(index).toString()); 485 | 486 | QString strimage = iter->second->image_path; 487 | //qDebug() << "showfaceinfo:" << strimage; 488 | ui->db_editpicture->setStyleSheet("border-image:url(" + m_image_path + strimage + ")"); 489 | 490 | 491 | //qDebug() << "showfaceinfo:" << strimage; 492 | ui->db_editcrop->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + strimage + ")"); 493 | 494 | 495 | iter = m_datalst.find(id); 496 | if(iter == m_datalst.end()) 497 | return; 498 | QFile::remove(m_image_tmp_path + iter->second->image_path); 499 | } 500 | } 501 | 502 | void MainWindow::onrecognize(int pid, const QString & name, const QString & imagepath, float score, const QImage &image, const QRect &rc) 503 | { 504 | int nrows = m_videomodel->rowCount(); 505 | 506 | if(nrows > 1000) 507 | { 508 | ui->previewtableview->setUpdatesEnabled(false); 509 | 
m_videomodel->removeRows(0, 200); 510 | ui->previewtableview->setUpdatesEnabled(true); 511 | } 512 | 513 | nrows = m_videomodel->rowCount(); 514 | int i = 0; 515 | for(; i < nrows; i++) 516 | { 517 | if(m_videomodel->item(i, 4)->text().toInt() == pid) 518 | { 519 | break; 520 | } 521 | } 522 | 523 | nrows = i; 524 | 525 | m_videomodel->setItem(nrows, 0, new QStandardItem(name)); 526 | //m_videomodel->setItem(nrows, 1, new QStandardItem(QString::number(score, 'f', 3))); 527 | 528 | QLabel *label = new QLabel(""); 529 | label->setFixedSize(80,80); 530 | if(name.isEmpty()) 531 | { 532 | m_videomodel->setItem(nrows, 1, new QStandardItem("")); 533 | label->setText(imagepath); 534 | }else 535 | { 536 | m_videomodel->setItem(nrows, 1, new QStandardItem(QString::number(score, 'f', 3))); 537 | //QLabel *label = new QLabel(""); 538 | //qDebug() << "rows:" << nrows << ",imagepath:" << imagepath << "," << m_image_path + gcrop_prefix + imagepath ; 539 | //label->setFixedSize(80,80); 540 | 541 | QImage srcimage; 542 | srcimage.load( m_image_path + imagepath); 543 | srcimage = srcimage.copy(rc.x(),rc.y(),rc.width(),rc.height()); 544 | srcimage = srcimage.scaled(80,80); 545 | label->setPixmap(QPixmap::fromImage(srcimage)); 546 | //label->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + imagepath + ")"); 547 | //ui->previewtableview->setIndexWidget(m_videomodel->index(nrows, 2), label); 548 | } 549 | 550 | ui->previewtableview->setIndexWidget(m_videomodel->index(nrows, 2), label); 551 | 552 | /* 553 | QLabel *label = new QLabel(""); 554 | qDebug() << "rows:" << nrows << ",imagepath:" << imagepath << "," << m_image_path + gcrop_prefix + imagepath ; 555 | label->setFixedSize(80,80); 556 | 557 | QImage srcimage; 558 | srcimage.load( m_image_path + imagepath); 559 | srcimage = srcimage.copy(rc.x(),rc.y(),rc.width(),rc.height()); 560 | srcimage = srcimage.scaled(80,80); 561 | label->setPixmap(QPixmap::fromImage(srcimage)); 562 | //label->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + imagepath + 
")"); 563 | ui->previewtableview->setIndexWidget(m_videomodel->index(nrows, 2), label); 564 | */ 565 | 566 | QLabel *label2 = new QLabel(""); 567 | label2->setFixedSize(80,80); 568 | QImage img = image.scaled(80,80); 569 | label2->setPixmap(QPixmap::fromImage(img)); 570 | //label2->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + imagepath + ")"); 571 | ui->previewtableview->setIndexWidget(m_videomodel->index(nrows, 3), label2); 572 | 573 | m_videomodel->setItem(nrows, 4, new QStandardItem(QString::number(pid))); 574 | ui->previewtableview->scrollToBottom(); 575 | 576 | } 577 | 578 | void MainWindow::onupdateui(const QImage & image) 579 | { 580 | int a = ui->previewlabel->width(); 581 | int b = ui->previewlabel->height(); 582 | QImage ime = image.scaled(a,b); 583 | ui->previewlabel->setPixmap(QPixmap::fromImage(ime)); 584 | ui->previewlabel->show(); 585 | } 586 | 587 | void MainWindow::onvideothreadend(int value) 588 | { 589 | qDebug() << "onvideothreadend:" << value; 590 | //ui->label->setStyleSheet("border-image:url(:/new/prefix1/white.png)"); 591 | 592 | if(m_type.type != 2) 593 | { 594 | int a = ui->previewlabel->width(); 595 | int b = ui->previewlabel->height(); 596 | QImage image(":/new/prefix1/white.png"); 597 | QImage ime = image.scaled(a,b); 598 | ui->previewlabel->setPixmap(QPixmap::fromImage(ime)); 599 | ui->previewlabel->show(); 600 | } 601 | 602 | ui->previewrunbtn->setEnabled(true); 603 | ui->previewstopbtn->setEnabled(false); 604 | } 605 | 606 | void MainWindow::on_dbsavebtn_clicked() 607 | { 608 | //input image to database 609 | //phuckDlg *dialog = new phuckDlg(this); 610 | //dialog->setModal(true); 611 | //dialog->show(); 612 | 613 | //qDebug() << "----begin---update"; 614 | if(ui->db_editname->text().isEmpty()) 615 | { 616 | QMessageBox::critical(NULL, "critical", tr("name is empty!"), QMessageBox::Yes); 617 | return; 618 | } 619 | 620 | if(ui->db_editname->text().length() > 64) 621 | { 622 | QMessageBox::critical(NULL, 
"critical", tr("name is longer than 64 characters!"), QMessageBox::Yes); 623 | return; 624 | } 625 | 626 | int index = 1; 627 | index = ui->db_editid->text().toInt(); 628 | 629 | //qDebug() << "----begin---update---index:" << index; 630 | std::map<int, DataInfo *>::iterator iter = m_datalst.find(index); 631 | if(iter == m_datalst.end()) 632 | { 633 | return; 634 | } 635 | 636 | QString str = m_image_tmp_path + iter->second->image_path; 637 | QFileInfo fileinfo(str); 638 | bool imageupdate = false; 639 | float features[1024]; 640 | SeetaRect rect; 641 | 642 | if(fileinfo.isFile()) 643 | { 644 | //imageupdate = true; 645 | QString cropfile = m_image_tmp_path + gcrop_prefix + iter->second->image_path; 646 | 647 | 648 | int nret = m_videothread->checkimage(str, cropfile, features, rect); 649 | QString strerror; 650 | 651 | if(nret == -2) 652 | { 653 | strerror = "do not find face!"; 654 | }else if(nret == -1) 655 | { 656 | strerror = str + " is invalid!"; 657 | }else if(nret == 1) 658 | { 659 | strerror = "find more than one face!"; 660 | }else if(nret == 2) 661 | { 662 | strerror = "quality check failed!"; 663 | } 664 | 665 | if(!strerror.isEmpty()) 666 | { 667 | QFile::remove(str); 668 | QMessageBox::critical(NULL,"critical", strerror, QMessageBox::Yes); 669 | return; 670 | } 671 | } 672 | 673 | //qDebug() << "---1-begin---update---index:" << index; 674 | 675 | QSqlQuery query; 676 | 677 | if(imageupdate) 678 | { 679 | query.prepare("update " + m_table + " set name = :name, feature_data=:feature_data, facex=:facex,facey=:facey,facewidth=:facewidth,faceheight=:faceheight where id=" + QString::number(index)); 680 | QByteArray bytearray; 681 | bytearray.resize(1024 * sizeof(float)); 682 | memcpy(bytearray.data(), features, 1024 * sizeof(float)); 683 | query.bindValue(":feature_data", QVariant(bytearray)); 684 | query.bindValue(":facex", rect.x); 685 | query.bindValue(":facey", rect.y); 686 | query.bindValue(":facewidth", rect.width); 687 | query.bindValue(":faceheight", 
rect.height); 688 | 689 | }else 690 | { 691 | query.prepare("update " + m_table + " set name = :name where id=" + QString::number(index)); 692 | } 693 | query.bindValue(":name", ui->db_editname->text());//fileinfo.fileName());//strfile); 694 | 695 | if(!query.exec()) 696 | { 697 | if(imageupdate) 698 | { 699 | QFile::remove(str); 700 | QFile::remove(m_image_tmp_path + gcrop_prefix + iter->second->image_path); 701 | } 702 | 703 | //QFile::remove() 704 | //qDebug() << "failed to update table:" << query.lastError(); 705 | QMessageBox::critical(NULL, "critical", tr("failed to update the database!"), QMessageBox::Yes); 706 | return; 707 | } 708 | 709 | //qDebug() << "---ddd-begin---update---index:" << index; 710 | iter->second->name = ui->db_editname->text(); 711 | 712 | 713 | if(imageupdate) 714 | { 715 | memcpy(iter->second->features, features, 1024 * sizeof(float)); 716 | //qDebug() << "---image-begin---update---index:" << index << ",image:" << str; 717 | QFile::remove(m_image_path + iter->second->image_path); 718 | QFile::remove(m_image_path + gcrop_prefix + iter->second->image_path); 719 | QFile::copy(str, m_image_path + iter->second->image_path); 720 | QFile::copy(m_image_tmp_path + gcrop_prefix + iter->second->image_path, m_image_path + gcrop_prefix + iter->second->image_path); 721 | QFile::remove(str); 722 | QFile::remove(m_image_tmp_path + gcrop_prefix + iter->second->image_path); 723 | } 724 | 725 | int row = ui->dbtableview->currentIndex().row(); 726 | //qDebug() << "showfaceinfo:" << row ; 727 | if(row >= 0) 728 | { 729 | QModelIndex index = m_model->index(row, 1); 730 | m_model->itemFromIndex(index)->setText(ui->db_editname->text()); 731 | 732 | //qDebug() << "---image-begin---update---index:" << index << ",image:" << str; 733 | if(imageupdate) 734 | { 735 | index = m_model->index(row, 2); 736 | ui->dbtableview->indexWidget(index)->setStyleSheet("border-image:url(" + m_image_path + iter->second->image_path + ")"); 737 | 
ui->db_editcrop->setStyleSheet("border-image:url(" + m_image_path + gcrop_prefix + iter->second->image_path + ")"); 738 | } 739 | } 740 | QMessageBox::information(NULL, "info", tr("name updated in the database successfully!"), QMessageBox::Yes); 741 | } 742 | 743 | void MainWindow::on_previewrunbtn_clicked() 744 | { 745 | m_videothread->m_exited = false; 746 | m_videothread->start(m_type); 747 | ui->previewrunbtn->setEnabled(false); 748 | ui->previewstopbtn->setEnabled(true); 749 | } 750 | 751 | void MainWindow::on_previewstopbtn_clicked() 752 | { 753 | m_videothread->m_exited = true; 754 | } 755 | 756 | void MainWindow::on_settingsavebtn_clicked() 757 | { 758 | /* 759 | ResetModelProcessDlg dialog(this, m_resetmodelthread); 760 | //m_resetmodelthread->start(&m_datalst, m_table, fr); 761 | int nret = dialog.exec(); 762 | 763 | qDebug() << "ResetModelProcessDlg:" << nret; 764 | 765 | if(nret != QDialog::Accepted) 766 | { 767 | 768 | QMessageBox::critical(NULL, "critical", "reset face recognizer model failed!", QMessageBox::Yes); 769 | return; 770 | } 771 | return; 772 | */ 773 | ////////////////////////////////// 774 | 775 | int size = ui->fdminfacesize->text().toInt(); 776 | if(size < 20 || size > 1000) 777 | { 778 | QMessageBox::warning(NULL, "warn", "Face Detector Min Face Size is invalid!", QMessageBox::Yes); 779 | return; 780 | } 781 | 782 | float value = ui->fdthreshold->text().toFloat(); 783 | if(value >= 1.0 || value < 0.0) 784 | { 785 | QMessageBox::warning(NULL, "warn", "Face Detector Threshold is invalid!", QMessageBox::Yes); 786 | return; 787 | } 788 | 789 | value = ui->antispoofclarity->text().toFloat(); 790 | if(value >= 1.0 || value < 0.0) 791 | { 792 | QMessageBox::warning(NULL, "warn", "Anti Spoofing Clarity is invalid!", QMessageBox::Yes); 793 | return; 794 | } 795 | 796 | value = ui->antispoofreality->text().toFloat(); 797 | if(value >= 1.0 || value < 0.0) 798 | { 799 | QMessageBox::warning(NULL, "warn", "Anti Spoofing Reality is invalid!", 
QMessageBox::Yes); 800 | return; 801 | } 802 | 803 | value = ui->yawlowthreshold->text().toFloat(); 804 | if(value >= 90 || value < 0.0) 805 | { 806 | QMessageBox::warning(NULL, "warn", "Quality Yaw Low Threshold is invalid!", QMessageBox::Yes); 807 | return; 808 | } 809 | value = ui->yawhighthreshold->text().toFloat(); 810 | if(value >= 90 || value < 0.0) 811 | { 812 | QMessageBox::warning(NULL, "warn", "Quality Yaw High Threshold is invalid!", QMessageBox::Yes); 813 | return; 814 | } 815 | 816 | value = ui->pitchlowthreshold->text().toFloat(); 817 | if(value >= 90 || value < 0.0) 818 | { 819 | QMessageBox::warning(NULL, "warn", "Quality Pitch Low Threshold is invalid!", QMessageBox::Yes); 820 | return; 821 | } 822 | value = ui->pitchhighthreshold->text().toFloat(); 823 | if(value >= 90 || value < 0.0) 824 | { 825 | QMessageBox::warning(NULL, "warn", "Quality Pitch High Threshold is invalid!", QMessageBox::Yes); 826 | return; 827 | } 828 | 829 | value = ui->fr_threshold->text().toFloat(); 830 | if(value >= 1.0 || value < 0.0) 831 | { 832 | QMessageBox::warning(NULL, "warn", "Face Recognizer Threshold is invalid!", QMessageBox::Yes); 833 | return; 834 | } 835 | 836 | QString strmodel = ui->fr_modelpath->text().trimmed(); 837 | QFileInfo fileinfo(gmodelpath.c_str() + strmodel); 838 | if(QString::compare(fileinfo.suffix(), "csta", Qt::CaseInsensitive) != 0) 839 | { 840 | QMessageBox::warning(NULL, "warn", "Face Recognizer model file is invalid!", QMessageBox::Yes); 841 | return; 842 | } 843 | 844 | QMessageBox::StandardButton result; 845 | if(QString::compare(gparamters.Fr_ModelPath, ui->fr_modelpath->text().trimmed()) != 0) 846 | { 847 | result = QMessageBox::warning(NULL, "warning", "Setting a new face recognizer model requires resetting all features. Are you sure?", QMessageBox::Yes | QMessageBox::No); 848 | if(result == QMessageBox::No) 849 | { 850 | return; 851 | } 852 | 853 | seeta::FaceRecognizer * fr = m_videothread->CreateFaceRecognizer(ui->fr_modelpath->text().trimmed()); 
854 | ResetModelProcessDlg dialog(this, m_resetmodelthread); 855 | m_resetmodelthread->start(&m_datalst, m_table, fr); 856 | int nret = dialog.exec(); 857 | 858 | qDebug() << "ResetModelProcessDlg:" << nret; 859 | 860 | if(nret != QDialog::Accepted) 861 | { 862 | delete fr; 863 | QMessageBox::critical(NULL, "critical", "reset face recognizer model failed!", QMessageBox::Yes); 864 | return; 865 | } 866 | m_videothread->set_fr(fr); 867 | } 868 | 869 | 870 | QString sql("update " + m_config_table + " set fd_minfacesize=%1, fd_threshold=%2, antispoof_clarity=%3, antispoof_reality=%4,"); 871 | sql += "qa_yawlow=%5, qa_yawhigh=%6, qa_pitchlow=%7, qa_pitchhigh=%8, fr_threshold=%9,fr_modelpath=\"%10\""; 872 | sql = QString(sql).arg(ui->fdminfacesize->text()).arg(ui->fdthreshold->text()).arg(ui->antispoofclarity->text()).arg(ui->antispoofreality->text()). 873 | arg(ui->yawlowthreshold->text()).arg(ui->yawhighthreshold->text()).arg(ui->pitchlowthreshold->text()).arg(ui->pitchhighthreshold->text()). 
874 | arg(ui->fr_threshold->text()).arg(ui->fr_modelpath->text().trimmed()); 875 | QSqlQuery q(sql); 876 | //qDebug() << sql; 877 | //QSqlQuery q("update " + m_config_table + " set min_face_size =" + ui->fdminfacesize->text() ); 878 | if(!q.exec()) 879 | { 880 | QMessageBox::critical(NULL, "critical", "update setting failed!", QMessageBox::Yes); 881 | return; 882 | } 883 | 884 | 885 | 886 | gparamters.MinFaceSize = ui->fdminfacesize->text().toInt(); 887 | gparamters.Fd_Threshold = ui->fdthreshold->text().toFloat(); 888 | gparamters.AntiSpoofClarity = ui->antispoofclarity->text().toFloat(); 889 | gparamters.AntiSpoofReality = ui->antispoofreality->text().toFloat(); 890 | gparamters.YawLowThreshold = ui->yawlowthreshold->text().toFloat(); 891 | gparamters.YawHighThreshold = ui->yawhighthreshold->text().toFloat(); 892 | gparamters.PitchLowThreshold = ui->pitchlowthreshold->text().toFloat(); 893 | gparamters.PitchHighThreshold = ui->pitchhighthreshold->text().toFloat(); 894 | gparamters.Fr_Threshold = ui->fr_threshold->text().toFloat(); 895 | gparamters.Fr_ModelPath = ui->fr_modelpath->text().trimmed(); 896 | 897 | m_videothread->setparamter(); 898 | 899 | QMessageBox::information(NULL, "info", "update setting ok!", QMessageBox::Yes); 900 | 901 | } 902 | 903 | void MainWindow::on_rotatebtn_clicked() 904 | { 905 | QMatrix matrix; 906 | matrix.rotate(90); 907 | 908 | int id = ui->db_editid->text().toInt(); 909 | 910 | std::map<int, DataInfo *>::iterator iter = m_datalst.find(id); 911 | if(iter == m_datalst.end()) 912 | { 913 | return; 914 | } 915 | 916 | //QFile::remove(m_image_tmp_path + iter->second->image_path); 917 | if(!QFile::exists(m_image_tmp_path + iter->second->image_path)) 918 | { 919 | QFile::copy(m_image_path + iter->second->image_path, m_image_tmp_path + iter->second->image_path); 920 | } 921 | 922 | if(!QFile::exists(m_image_tmp_path + gcrop_prefix + iter->second->image_path)) 923 | { 924 | QFile::copy(m_image_path + gcrop_prefix + iter->second->image_path, 
m_image_tmp_path + gcrop_prefix + iter->second->image_path); 925 | } 926 | //QFile::copy(m_image_path + iter->second->image_path, m_image_tmp_path + iter->second->image_path); 927 | 928 | QImage image(m_image_tmp_path + iter->second->image_path); 929 | if(image.isNull()) 930 | return; 931 | 932 | image = image.transformed(matrix, Qt::FastTransformation); 933 | image.save(m_image_tmp_path + iter->second->image_path); 934 | 935 | ui->db_editpicture->setStyleSheet("border-image:url(" + m_image_tmp_path + iter->second->image_path + ")"); 936 | 937 | /////////////////////// 938 | //QMatrix cropmatrix; 939 | matrix.reset(); 940 | matrix.rotate(90); 941 | QImage cropimage(m_image_tmp_path + gcrop_prefix + iter->second->image_path); 942 | if(cropimage.isNull()) 943 | return; 944 | 945 | cropimage = cropimage.transformed(matrix, Qt::FastTransformation); 946 | cropimage.save(m_image_tmp_path + gcrop_prefix + iter->second->image_path); 947 | 948 | ui->db_editcrop->setStyleSheet("border-image:url(" + m_image_tmp_path + gcrop_prefix + iter->second->image_path + ")"); 949 | 950 | } 951 | 952 | 953 | 954 | void MainWindow::on_tabWidget_currentChanged(int index) 955 | { 956 | //qDebug() << "cur:" << ui->tabWidget->tabText(index) << ",old:" << ui->tabWidget->tabText(m_currenttab) ; 957 | if(m_currenttab != index) 958 | { 959 | if(m_currenttab == 2) 960 | { 961 | on_previewstopbtn_clicked(); 962 | m_videothread->wait(); 963 | } 964 | m_currenttab = index; 965 | } 966 | //qDebug() << "tab:" << ui->tabWidget->tabText(index) << ",cur:" << index << ",old:" << ui->tabWidget->currentIndex(); 967 | } 968 | 969 | void MainWindow::on_addimagebtn_clicked() 970 | { 971 | QString fileName = QFileDialog::getOpenFileName(this, tr("open image file"), 972 | "./" , 973 | "JPEG Files(*.jpg *.jpeg);;PNG Files(*.png);;BMP Files(*.bmp)"); 974 | //qDebug() << "image:" << fileName; 975 | 976 | QImage image(fileName); 977 | if(image.isNull()) 978 | return; 979 | 980 | QFile file(fileName); 981 | QFileInfo 
fileinfo(fileName); 982 | 983 | ////////////////////////////// 984 | QSqlQuery query; 985 | query.prepare("insert into " + m_table + " (id, name, image_path, feature_data, facex,facey,facewidth,faceheight) values (:id, :name, :image_path, :feature_data,:facex,:facey,:facewidth,:faceheight)"); 986 | 987 | int index = 1; 988 | if(m_model->rowCount() > 0) 989 | { 990 | index = m_model->item(m_model->rowCount() - 1, 0)->text().toInt() + 1; 991 | } 992 | 993 | 994 | QString strfile = QString::number(index) + "_" + fileinfo.fileName();//m_image_path + QString::number(index) + "_" + m_currentimagefile;//fileinfo.fileName(); 995 | 996 | QString cropfile = m_image_path + gcrop_prefix + strfile; 997 | 998 | float features[1024]; 999 | SeetaRect rect; 1000 | int nret = m_videothread->checkimage(fileName, cropfile, features, rect); 1001 | QString strerror; 1002 | 1003 | if(nret == -2) 1004 | { 1005 | strerror = "do not find face!"; 1006 | }else if(nret == -1) 1007 | { 1008 | strerror = fileName + " is invalid!"; 1009 | }else if(nret == 1) 1010 | { 1011 | strerror = "find more than one face!"; 1012 | }else if(nret == 2) 1013 | { 1014 | strerror = "quality check failed!"; 1015 | } 1016 | 1017 | if(!strerror.isEmpty()) 1018 | { 1019 | QMessageBox::critical(NULL,"critical", strerror, QMessageBox::Yes); 1020 | return; 1021 | } 1022 | 1023 | QString name = fileinfo.completeBaseName();//fileName(); 1024 | int n = name.indexOf("_"); 1025 | 1026 | if(n >= 1) 1027 | { 1028 | name = name.left(n); 1029 | } 1030 | 1031 | query.bindValue(0, index); 1032 | query.bindValue(1,name); 1033 | 1034 | //query.bindValue(2, "/wqy/Downloads/ap.jpeg"); 1035 | query.bindValue(2, strfile);//fileinfo.fileName());//strfile); 1036 | 1037 | //float data[4] = {0.56,0.223,0.5671,-0.785}; 1038 | QByteArray bytearray; 1039 | bytearray.resize(1024 * sizeof(float)); 1040 | memcpy(bytearray.data(), features, 1024 * sizeof(float)); 1041 | 1042 | query.bindValue(3, QVariant(bytearray)); 1043 | query.bindValue(4, 
rect.x); 1044 | query.bindValue(5, rect.y); 1045 | query.bindValue(6, rect.width); 1046 | query.bindValue(7, rect.height); 1047 | 1048 | 1049 | if(!query.exec()) 1050 | { 1051 | QFile::remove(cropfile); 1052 | qDebug() << "failed to insert table:" << query.lastError(); 1053 | QMessageBox::critical(NULL, "critical", tr("save face data to database failed!"), QMessageBox::Yes); 1054 | return; 1055 | } 1056 | 1057 | file.copy(m_image_path + strfile); 1058 | 1059 | 1060 | DataInfo * info = new DataInfo(); 1061 | info->id = index; 1062 | info->name = name; 1063 | info->image_path = strfile; 1064 | memcpy(info->features, features, 1024 * sizeof(float)); 1065 | info->x = rect.x; 1066 | info->y = rect.y; 1067 | info->width = rect.width; 1068 | info->height = rect.height; 1069 | m_datalst.insert(std::map<int, DataInfo *>::value_type(index, info)); 1070 | 1071 | //////////////////////////////////////////////////////////// 1072 | int rows = m_model->rowCount(); 1073 | //qDebug() << "rows:" << rows; 1074 | 1075 | m_model->setItem(rows, 0, new QStandardItem(QString::number(index))); 1076 | m_model->setItem(rows, 1, new QStandardItem(info->name)); 1077 | 1078 | QLabel *label = new QLabel(""); 1079 | 1080 | label->setStyleSheet("border-image:url(" + m_image_path + strfile + ")"); 1081 | ui->dbtableview->setIndexWidget(m_model->index(rows, 2), label); 1082 | 1083 | QPushButton *button2 = new QPushButton("delete"); 1084 | button2->setProperty("id", index); 1085 | button2->setFixedSize(80, 40); 1086 | connect(button2, SIGNAL(clicked()), this, SLOT(deleterecord())); 1087 | 1088 | QWidget *widget = new QWidget(); 1089 | QHBoxLayout *layout = new QHBoxLayout; 1090 | layout->addStretch(); 1091 | layout->addWidget(button2); 1092 | layout->addStretch(); 1093 | widget->setLayout(layout); 1094 | 1095 | ui->dbtableview->setIndexWidget(m_model->index(rows, 3), widget); 1096 | ui->dbtableview->scrollToBottom(); 1097 | ui->dbtableview->selectRow(rows); 1098 | 1099 | emit 
ui->dbtableview->clicked(m_model->index(rows, 1)); 1100 | //QMessageBox::information(NULL, "info", tr("add face operator success!"), QMessageBox::Yes); 1101 | 1102 | } 1103 | 1104 | void MainWindow::on_menufacedbbtn_clicked() 1105 | { 1106 | ui->tabWidget->setCurrentIndex(1); 1107 | } 1108 | 1109 | 1110 | 1111 | void MainWindow::on_menusettingbtn_clicked() 1112 | { 1113 | 1114 | ui->tabWidget->setCurrentIndex(3); 1115 | } 1116 | 1117 | void MainWindow::on_previewclearbtn_clicked() 1118 | { 1119 | ui->previewtableview->setUpdatesEnabled(false); 1120 | m_videomodel->removeRows(0, m_videomodel->rowCount()); 1121 | //m_videomodel->clear(); 1122 | ui->previewtableview->setUpdatesEnabled(true); 1123 | } 1124 | 1125 | void MainWindow::on_menuopenvideofile_clicked() 1126 | { 1127 | QString fileName = QFileDialog::getOpenFileName(this, tr("open video file"), 1128 | "./" , 1129 | "MP4 Files(*.mp4 *.MP4);;AVI Files(*.avi);;FLV Files(*.flv);;h265 Files(*.h265);;h263 Files(*.h263)"); 1130 | //qDebug() << "image:" << fileName; 1131 | m_type.type = 1; 1132 | m_type.filename = fileName; 1133 | m_type.title = "Open Video: " + fileName; 1134 | ui->recognize_label->setText(m_type.title); 1135 | ui->tabWidget->setCurrentIndex(2); 1136 | emit ui->previewrunbtn->clicked(); 1137 | } 1138 | 1139 | void MainWindow::on_menuopenpicturefile_clicked() 1140 | { 1141 | QString fileName = QFileDialog::getOpenFileName(this, tr("open image file"), 1142 | "./" , 1143 | "JPEG Files(*.jpg *.jpeg);;PNG Files(*.png);;BMP Files(*.bmp)"); 1144 | //qDebug() << "image:" << fileName; 1145 | m_type.type = 2; 1146 | m_type.filename = fileName; 1147 | m_type.title = "Open Image: " + fileName; 1148 | ui->recognize_label->setText(m_type.title); 1149 | ui->tabWidget->setCurrentIndex(2); 1150 | emit ui->previewrunbtn->clicked(); 1151 | } 1152 | 1153 | void MainWindow::on_menuopencamera_clicked() 1154 | { 1155 | m_type.type = 0; 1156 | m_type.filename = ""; 1157 | m_type.title = "Open Camera: 0"; 1158 | 
ui->recognize_label->setText(m_type.title); 1159 | ui->tabWidget->setCurrentIndex(2); 1160 | emit ui->previewrunbtn->clicked(); 1161 | } 1162 | 1163 | static void FindFile(const QString & path, QStringList &files) 1164 | { 1165 | QDir dir(path); 1166 | if(!dir.exists()) 1167 | return; 1168 | 1169 | dir.setFilter(QDir::Dirs | QDir::Files | QDir::NoDotAndDotDot | QDir::NoSymLinks); 1170 | dir.setSorting(QDir::DirsFirst); 1171 | 1172 | QFileInfoList list = dir.entryInfoList(); 1173 | int i = 0; 1174 | while(i < list.size()) 1175 | { 1176 | QFileInfo info = list.at(i); 1177 | //qDebug() << info.absoluteFilePath(); 1178 | if(info.isDir()) 1179 | { 1180 | FindFile(info.absoluteFilePath(), files); 1181 | }else 1182 | { 1183 | QString str = info.suffix(); 1184 | if(str.compare("png", Qt::CaseInsensitive) == 0 || str.compare("jpg", Qt::CaseInsensitive) == 0 || str.compare("jpeg", Qt::CaseInsensitive) == 0 || str.compare("bmp", Qt::CaseInsensitive) == 0) 1185 | { 1186 | files.append(info.absoluteFilePath()); 1187 | } 1188 | } 1189 | i++; 1190 | } 1191 | return; 1192 | } 1193 | 1194 | void MainWindow::on_addfilesbtn_clicked() 1195 | { 1196 | QString fileName = QFileDialog::getExistingDirectory(this, tr("Select Directory"), "."); 1197 | if(fileName.isEmpty()) 1198 | { 1199 | return; 1200 | } 1201 | 1202 | qDebug() << fileName; 1203 | QStringList files; 1204 | FindFile(fileName, files); 1205 | qDebug() << files.size(); 1206 | if(files.size() <= 0) 1207 | return; 1208 | 1209 | for(int i=0; i < files.size(); i++) 1210 | { 1211 | qDebug() << files[i]; 1212 | } 1213 | 1214 | 1215 | int index = 0; 1216 | 1217 | if(m_model->rowCount() > 0) 1218 | { 1219 | index = m_model->item(m_model->rowCount() - 1, 0)->text().toInt(); 1220 | } 1221 | 1222 | InputFilesProcessDlg dialog(this, m_inputfilesthread); 1223 | 1224 | 1225 | m_inputfilesthread->start(&files, index, m_table); 1226 | dialog.exec(); 1227 | 1228 | 1229 | //qDebug() << "------on_addfilesbtn_clicked---end"; 1230 | } 1231 | 1232 | void MainWindow::oninputfilesupdateui(std::vector<DataInfo *> * datas) 1233 | { 1234 | DataInfo * info = NULL; 1235 | //qDebug() << 
"----oninputfilesupdateui--" << datas->size(); 1236 | if(datas->size() > 0) 1237 | { 1238 | ui->dbtableview->setUpdatesEnabled(false); 1239 | } 1240 | 1241 | int rows = 0; 1242 | for(int i=0; i < (int)datas->size(); i++) 1243 | { 1244 | rows = m_model->rowCount(); 1245 | //qDebug() << "rows:" << rows; 1246 | info = (*datas)[i]; 1247 | m_datalst.insert(std::map<int, DataInfo *>::value_type(info->id, info)); 1248 | m_model->setItem(rows, 0, new QStandardItem(QString::number(info->id))); 1249 | m_model->setItem(rows, 1, new QStandardItem(info->name)); 1250 | 1251 | QLabel *label = new QLabel(""); 1252 | 1253 | label->setStyleSheet("border-image:url(" + m_image_path + info->image_path + ")"); 1254 | ui->dbtableview->setIndexWidget(m_model->index(rows, 2), label); 1255 | 1256 | QPushButton *button2 = new QPushButton("delete"); 1257 | button2->setProperty("id", info->id); 1258 | button2->setFixedSize(80, 40); 1259 | connect(button2, SIGNAL(clicked()), this, SLOT(deleterecord())); 1260 | ui->dbtableview->setIndexWidget(m_model->index(rows, 3), button2); 1261 | //ui->dbtableview->scrollToBottom(); 1262 | //ui->dbtableview->selectRow(rows); 1263 | } 1264 | if(datas->size() > 0) 1265 | { 1266 | ui->dbtableview->setUpdatesEnabled(true); 1267 | ui->dbtableview->scrollToBottom(); 1268 | ui->dbtableview->selectRow(rows); 1269 | emit ui->dbtableview->clicked(m_model->index(rows, 1)); 1270 | } 1271 | 1272 | } 1273 | 1274 | void MainWindow::on_settingselectmodelbtn_clicked() 1275 | { 1276 | QString fileName = QFileDialog::getOpenFileName(this, tr("open model file"), 1277 | "./" , 1278 | "CSTA Files(*.csta)"); 1279 | QFileInfo fileinfo(fileName); 1280 | QString modelfile = fileinfo.fileName(); 1281 | 1282 | QString str = gmodelpath.c_str() + modelfile; 1283 | 1284 | qDebug() << "------str:" << str; 1285 | qDebug() << "fileName:" << fileName; 1286 | 1287 | if(QString::compare(fileName, str) == 0) 1288 | { 1289 | ui->fr_modelpath->setText(modelfile); 1290 | return; 1291 | } 1292 | //QFile file(fileName); 1293 | 
if(!QFile::copy(fileName, str)) 1294 | { 1295 | QMessageBox::critical(NULL, "critical", "Copy model file: " + fileName + " to " + gmodelpath.c_str() + " failed, file already exists!", QMessageBox::Yes); 1296 | return; 1297 | } 1298 | 1299 | ui->fr_modelpath->setText(modelfile); 1300 | 1301 | //m_videothread->reset_fr_model(modelfile); 1302 | //qDebug() << "image:" << fileName; 1303 | } 1304 | 1305 | void MainWindow::closeEvent(QCloseEvent *event) 1306 | { 1307 | m_videothread->m_exited = true; 1308 | m_videothread->wait(); 1309 | QWidget::closeEvent(event); 1310 | } 1311 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/mainwindow.h: -------------------------------------------------------------------------------- 1 | #ifndef MAINWINDOW_H 2 | #define MAINWINDOW_H 3 | 4 | #include <QMainWindow> 5 | //#include 6 | 7 | /* 8 | #include 9 | #include 10 | #include 11 | 12 | #include "seeta/FaceLandmarker.h" 13 | #include "seeta/FaceDetector.h" 14 | #include "seeta/FaceAntiSpoofing.h" 15 | #include "seeta/Common/Struct.h" 16 | */ 17 | 18 | #include "videocapturethread.h" 19 | 20 | #include "qsqldatabase.h" 21 | #include "qsqltablemodel.h" 22 | #include "qstandarditemmodel.h" 23 | 24 | #include <map> 25 | 26 | 27 | namespace Ui { 28 | class MainWindow; 29 | } 30 | 31 | class MainWindow : public QMainWindow 32 | { 33 | Q_OBJECT 34 | 35 | public: 36 | explicit MainWindow(QWidget *parent = 0); 37 | ~MainWindow(); 38 | 39 | void getdatas(); 40 | void cleardata(); 41 | 42 | protected: 43 | void closeEvent(QCloseEvent *event); 44 | 45 | private slots: 46 | //void on_pushButton_clicked(); 47 | 48 | void editrecord(); 49 | void deleterecord(); 50 | void onupdateui(const QImage & image); 51 | void onrecognize(int pid, const QString & name, const QString & imagepath, float score, const QImage &image, const QRect &rc); 52 | 53 | void onvideothreadend(int value); 54 | void on_dbsavebtn_clicked(); 55 | 56 | void on_previewrunbtn_clicked(); 57 | 58 
| void on_previewstopbtn_clicked(); 59 | 60 | void on_settingsavebtn_clicked(); 61 | 62 | void on_rotatebtn_clicked(); 63 | 64 | 65 | 66 | void showfaceinfo(); 67 | 68 | void on_tabWidget_currentChanged(int index); 69 | 70 | void on_addimagebtn_clicked(); 71 | 72 | void on_menufacedbbtn_clicked(); 73 | 74 | //void on_pushButton_8_clicked(); 75 | 76 | void on_menusettingbtn_clicked(); 77 | 78 | void on_previewclearbtn_clicked(); 79 | 80 | void on_menuopenvideofile_clicked(); 81 | 82 | void on_menuopenpicturefile_clicked(); 83 | 84 | void on_menuopencamera_clicked(); 85 | 86 | void on_addfilesbtn_clicked(); 87 | 88 | void oninputfilesupdateui(std::vector *); 89 | 90 | void on_settingselectmodelbtn_clicked(); 91 | 92 | private: 93 | Ui::MainWindow *ui; 94 | 95 | /* 96 | QTimer *m_timer; 97 | cv::VideoCapture * m_capture; 98 | 99 | seeta::FaceDetector * m_fd; 100 | seeta::FaceLandmarker * m_pd; 101 | seeta::FaceAntiSpoofing * m_spoof; 102 | */ 103 | 104 | VideoCaptureThread * m_videothread; 105 | 106 | QSqlDatabase m_database; 107 | 108 | // QSqlTableModel * m_model; 109 | QString m_table; 110 | QString m_config_table; 111 | QStandardItemModel * m_model; 112 | 113 | QPixmap m_default_image; 114 | 115 | //QString m_currentimagefile; 116 | QString m_image_path; 117 | QString m_image_tmp_path; 118 | //QString m_model_path; 119 | 120 | std::map m_datalst; 121 | 122 | int m_currenttab; 123 | 124 | QStandardItemModel * m_videomodel; 125 | 126 | RecognizeType m_type; 127 | 128 | InputFilesThread *m_inputfilesthread; 129 | ResetModelThread *m_resetmodelthread; 130 | 131 | 132 | }; 133 | 134 | #endif // MAINWINDOW_H 135 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/mainwindow.ui: -------------------------------------------------------------------------------- 1 | 2 | 3 | MainWindow 4 | 5 | 6 | 7 | 0 8 | 0 9 | 1230 10 | 718 11 | 12 | 13 | 14 | MainWindow 15 | 16 | 17 | 18 | 19 | 20 | 0 21 | 0 22 | 1221 23 | 751 24 | 
25 | 26 | 27 | 3 28 | 29 | 30 | 31 | Menu 32 | 33 | 34 | 35 | 36 | 190 37 | 220 38 | 171 39 | 81 40 | 41 | 42 | 43 | &Face Database 44 | 45 | 46 | 47 | 48 | 49 | 190 50 | 380 51 | 171 52 | 81 53 | 54 | 55 | 56 | Open &Camera 57 | 58 | 59 | 60 | 61 | 62 | 701 63 | 220 64 | 171 65 | 81 66 | 67 | 68 | 69 | &Setting 70 | 71 | 72 | 73 | 74 | 75 | 440 76 | 383 77 | 171 78 | 81 79 | 80 | 81 | 82 | Open &Video 83 | 84 | 85 | 86 | 87 | 88 | 706 89 | 383 90 | 171 91 | 81 92 | 93 | 94 | 95 | Open &Image 96 | 97 | 98 | 99 | 100 | 101 | Face Database 102 | 103 | 104 | 105 | 106 | 0 107 | 10 108 | 641 109 | 631 110 | 111 | 112 | 113 | 114 | 115 | 116 | 660 117 | 10 118 | 521 119 | 121 120 | 121 | 122 | 123 | Register 124 | 125 | 126 | 127 | 128 | 61 129 | 60 130 | 131 131 | 25 132 | 133 | 134 | 135 | &Image 136 | 137 | 138 | 139 | 140 | 141 | 309 142 | 60 143 | 151 144 | 25 145 | 146 | 147 | 148 | &Directory 149 | 150 | 151 | 152 | 153 | 154 | 155 | 660 156 | 230 157 | 521 158 | 391 159 | 160 | 161 | 162 | Edit 163 | 164 | 165 | 166 | 167 | 45 168 | 40 169 | 21 170 | 16 171 | 172 | 173 | 174 | ID: 175 | 176 | 177 | 178 | 179 | false 180 | 181 | 182 | 183 | 70 184 | 40 185 | 171 186 | 25 187 | 188 | 189 | 190 | 191 | 192 | 193 | 274 194 | 40 195 | 53 196 | 16 197 | 198 | 199 | 200 | Name: 201 | 202 | 203 | 204 | 205 | 206 | 320 207 | 40 208 | 171 209 | 25 210 | 211 | 212 | 213 | 214 | 215 | 216 | 10 217 | 90 218 | 53 219 | 16 220 | 221 | 222 | 223 | Picture: 224 | 225 | 226 | 227 | 228 | 229 | 70 230 | 90 231 | 171 232 | 161 233 | 234 | 235 | 236 | QFrame::Box 237 | 238 | 239 | QFrame::Plain 240 | 241 | 242 | 243 | 244 | 245 | 246 | 247 | 248 | 283 249 | 90 250 | 41 251 | 20 252 | 253 | 254 | 255 | Face: 256 | 257 | 258 | 259 | 260 | 261 | 320 262 | 90 263 | 111 264 | 101 265 | 266 | 267 | 268 | QFrame::Box 269 | 270 | 271 | QFrame::Plain 272 | 273 | 274 | 275 | 276 | 277 | 278 | 279 | 280 | 130 281 | 260 282 | 71 283 | 25 284 | 285 | 286 | 287 | &Rotate 288 | 289 | 290 | 291 | 
292 | 293 | 170 294 | 340 295 | 161 296 | 25 297 | 298 | 299 | 300 | &Save 301 | 302 | 303 | 304 | 305 | 306 | 307 | Preview 308 | 309 | 310 | 311 | 312 | 6 313 | 26 314 | 800 315 | 600 316 | 317 | 318 | 319 | 320 | 800 321 | 600 322 | 323 | 324 | 325 | QFrame::Box 326 | 327 | 328 | QFrame::Plain 329 | 330 | 331 | 332 | 333 | 334 | 335 | 336 | 337 | 831 338 | 603 339 | 101 340 | 25 341 | 342 | 343 | 344 | &Run 345 | 346 | 347 | 348 | 349 | 350 | 955 351 | 603 352 | 101 353 | 25 354 | 355 | 356 | 357 | &Stop 358 | 359 | 360 | 361 | 362 | 363 | 816 364 | 25 365 | 71 366 | 17 367 | 368 | 369 | 370 | Record: 371 | 372 | 373 | 374 | 375 | 376 | 9 377 | 4 378 | 761 379 | 17 380 | 381 | 382 | 383 | open camera: 0 384 | 385 | 386 | 387 | 388 | 389 | 814 390 | 46 391 | 391 392 | 551 393 | 394 | 395 | 396 | 397 | 398 | 399 | 1076 400 | 603 401 | 101 402 | 25 403 | 404 | 405 | 406 | &Clear 407 | 408 | 409 | 410 | 411 | 412 | Setting 413 | 414 | 415 | 416 | 417 | 880 418 | 470 419 | 121 420 | 51 421 | 422 | 423 | 424 | &Save 425 | 426 | 427 | 428 | 429 | 430 | 110 431 | 40 432 | 661 433 | 80 434 | 435 | 436 | 437 | Face Detector 438 | 439 | 440 | 441 | 442 | 81 443 | 40 444 | 91 445 | 20 446 | 447 | 448 | 449 | Min Face Size: 450 | 451 | 452 | 453 | 454 | 455 | 180 456 | 40 457 | 113 458 | 25 459 | 460 | 461 | 462 | 463 | 464 | 465 | 381 466 | 40 467 | 121 468 | 20 469 | 470 | 471 | 472 | Score Threshold: 473 | 474 | 475 | 476 | 477 | 478 | 500 479 | 40 480 | 113 481 | 25 482 | 483 | 484 | 485 | 486 | 487 | 488 | 489 | 110 490 | 140 491 | 661 492 | 80 493 | 494 | 495 | 496 | Anti-Spoofing 497 | 498 | 499 | 500 | 501 | 500 502 | 40 503 | 113 504 | 25 505 | 506 | 507 | 508 | 509 | 510 | 511 | 49 512 | 40 513 | 131 514 | 20 515 | 516 | 517 | 518 | Clarity Threshold: 519 | 520 | 521 | 522 | 523 | 524 | 370 525 | 40 526 | 131 527 | 20 528 | 529 | 530 | 531 | Reality Threshold: 532 | 533 | 534 | 535 | 536 | 537 | 180 538 | 40 539 | 113 540 | 25 541 | 542 | 543 | 544 | 545 | 546 | 
547 | 548 | 110 549 | 250 550 | 661 551 | 151 552 | 553 | 554 | 555 | Quality Assessor 556 | 557 | 558 | 559 | 560 | 180 561 | 40 562 | 61 563 | 25 564 | 565 | 566 | 567 | 568 | 569 | 570 | 37 571 | 39 572 | 141 573 | 20 574 | 575 | 576 | 577 | Yaw Score Between: 578 | 579 | 580 | 581 | 582 | 583 | 297 584 | 40 585 | 61 586 | 25 587 | 588 | 589 | 590 | 591 | 592 | 593 | 180 594 | 90 595 | 61 596 | 25 597 | 598 | 599 | 600 | 601 | 602 | 603 | 31 604 | 90 605 | 151 606 | 20 607 | 608 | 609 | 610 | Pitch Score Between: 611 | 612 | 613 | 614 | 615 | 616 | 298 617 | 90 618 | 61 619 | 25 620 | 621 | 622 | 623 | 624 | 625 | 626 | 362 627 | 40 628 | 51 629 | 17 630 | 631 | 632 | 633 | ° 634 | 635 | 636 | 637 | 638 | 639 | 363 640 | 89 641 | 51 642 | 17 643 | 644 | 645 | 646 | ° 647 | 648 | 649 | 650 | 651 | 652 | 245 653 | 40 654 | 51 655 | 17 656 | 657 | 658 | 659 | ° And 660 | 661 | 662 | 663 | 664 | 665 | 244 666 | 90 667 | 51 668 | 17 669 | 670 | 671 | 672 | ° And 673 | 674 | 675 | 676 | 677 | 678 | 679 | 110 680 | 430 681 | 661 682 | 91 683 | 684 | 685 | 686 | Face Recognizer 687 | 688 | 689 | 690 | 691 | 140 692 | 30 693 | 113 694 | 25 695 | 696 | 697 | 698 | 699 | 700 | 701 | 56 702 | 60 703 | 81 704 | 20 705 | 706 | 707 | 708 | Model File: 709 | 710 | 711 | 712 | 713 | 714 | 140 715 | 60 716 | 471 717 | 25 718 | 719 | 720 | 721 | 722 | 723 | 724 | 20 725 | 30 726 | 121 727 | 20 728 | 729 | 730 | 731 | Score Threshold: 732 | 733 | 734 | 735 | 736 | 737 | 620 738 | 60 739 | 21 740 | 25 741 | 742 | 743 | 744 | ... 
745 | 746 | 747 | 748 | 749 | 750 | 751 | 752 | 753 | 754 | 0 755 | 0 756 | 1230 757 | 23 758 | 759 | 760 | 761 | 762 | TopToolBarArea 763 | 764 | 765 | false 766 | 767 | 768 | 769 | 770 | 771 | 772 | 773 | 774 | 775 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/resetmodelprocessdialog.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include "resetmodelprocessdialog.h" 7 | 8 | #include "videocapturethread.h" 9 | 10 | 11 | ResetModelProcessDlg::ResetModelProcessDlg(QWidget *parent, ResetModelThread * thread) 12 | : QDialog(parent) 13 | { 14 | m_exited = false; 15 | workthread = thread; 16 | qDebug() << "------------dlg input----------------"; 17 | //Create the child widgets 18 | //tr() marks the string for translation into other languages 19 | //The letter after & defines the shortcut that activates the widget, e.g. Alt+W activates the "Find &what" control 20 | label = new QLabel("", this); 21 | 22 | progressbar = new QProgressBar(this); 23 | progressbar->setOrientation(Qt::Horizontal); 24 | progressbar->setMinimum(0); 25 | progressbar->setMaximum(100); 26 | progressbar->setValue(5); 27 | progressbar->setFormat(tr("current progress:%1%").arg(QString::number(5, 'f',1))); 28 | progressbar->setAlignment(Qt::AlignLeft| Qt::AlignVCenter); 29 | 30 | cancelButton = new QPushButton(tr("&Cancel")); 31 | //cancelButton->setEnabled(true); 32 | 33 | //closeButton = new QPushButton(tr("&Close")); 34 | 35 | 36 | //Connect signals and slots 37 | connect(cancelButton, SIGNAL(clicked()), this, SLOT(cancelClicked())); 38 | //connect(okButton, SIGNAL(clicked()), this, SLOT(okClicked())); 39 | //connect(closeButton, SIGNAL(clicked()), this, SLOT(close())); 40 | connect(workthread, SIGNAL(sigprogress(float)), this, SLOT(setProgressValue(float))); 41 | connect(workthread, SIGNAL(sigResetModelEnd(int)), this, SLOT(setResetModelEnd(int))); 42 | 43 | 44 | 45 | QHBoxLayout *bottomLayout = new QHBoxLayout; 46 | bottomLayout->addStretch(); 47 |
bottomLayout->addWidget(cancelButton); 48 | //bottomLayout->addWidget(closeButton); 49 | bottomLayout->addStretch(); 50 | 51 | QVBoxLayout *mainLayout = new QVBoxLayout; 52 | mainLayout->addWidget(label); 53 | mainLayout->addWidget(progressbar); 54 | mainLayout->addStretch(); 55 | mainLayout->addLayout(bottomLayout); 56 | 57 | this->setLayout(mainLayout); 58 | 59 | setWindowTitle(tr("Reset Face Recognizer Model Progress")); 60 | 61 | //cancelButton->setEnabled(true); 62 | setFixedSize(400,160); 63 | } 64 | 65 | void ResetModelProcessDlg::closeEvent(QCloseEvent *event) 66 | { 67 | if(!m_exited) 68 | { 69 | workthread->m_exited = true; 70 | event->ignore(); 71 | }else 72 | { 73 | event->accept(); 74 | } 75 | } 76 | 77 | void ResetModelProcessDlg::cancelClicked() 78 | { 79 | qDebug() << "ResetModelProcessDlg cancelclicked"; 80 | workthread->m_exited = true; 81 | } 82 | 83 | 84 | ResetModelProcessDlg::~ResetModelProcessDlg() 85 | { 86 | qDebug() << "ResetModelProcessDlg ~ResetModelProcessDlg"; 87 | } 88 | void ResetModelProcessDlg::setResetModelEnd(int value) 89 | { 90 | m_exited = true; 91 | this->hide(); 92 | qDebug() << "setResetModelEnd:" << value; 93 | if(value == 0) 94 | { 95 | accept(); 96 | }else 97 | { 98 | reject(); 99 | } 100 | 101 | } 102 | 103 | 104 | void ResetModelProcessDlg::setProgressValue(float value) 105 | { 106 | QString str = QString("%1%").arg(QString::number(value, 'f',1)); 107 | progressbar->setValue(value); 108 | progressbar->setFormat(str); 109 | } 110 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/resetmodelprocessdialog.h: -------------------------------------------------------------------------------- 1 | #ifndef RESETMODELPROCESSDIALOG_H 2 | #define RESETMODELPROCESSDIALOG_H 3 | 4 | 5 | 6 | #include 7 | 8 | 9 | class QLabel; 10 | class QProgressBar; 11 | class QPushButton; 12 | class ResetModelThread; 13 | 14 | class ResetModelProcessDlg :public QDialog{ 15 | 16 | 
//Q_OBJECT is required when a dialog class declares its own signals and slots 17 | Q_OBJECT 18 | public: 19 | //Constructor and destructor 20 | ResetModelProcessDlg(QWidget *parent, ResetModelThread * thread); 21 | ~ResetModelProcessDlg(); 22 | 23 | protected: 24 | void closeEvent(QCloseEvent *event); 25 | //Declare the signals and slots this dialog needs in the sections below. 26 | signals: 27 | //Functions declared under signals need no implementation in this class; they describe the signals its objects can emit 28 | 29 | //Slots used only internally are declared private 30 | private slots: 31 | void cancelClicked(); 32 | void setProgressValue(float value); 33 | void setResetModelEnd(int); 34 | //Declare the widgets this dialog uses 35 | private: 36 | QLabel *label; 37 | 38 | QProgressBar *progressbar; 39 | //QLabel *label2; 40 | 41 | QPushButton *cancelButton;//, *closeButton; 42 | 43 | ResetModelThread * workthread; 44 | bool m_exited; 45 | }; 46 | 47 | 48 | 49 | 50 | #endif // RESETMODELPROCESSDIALOG_H 51 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/seetaface_demo.pro: -------------------------------------------------------------------------------- 1 | #------------------------------------------------- 2 | # 3 | # Project created by QtCreator 2020-03-16T14:40:38 4 | # 5 | #------------------------------------------------- 6 | 7 | QT += core gui sql 8 | 9 | greaterThan(QT_MAJOR_VERSION, 4): QT += widgets 10 | 11 | TARGET = seetaface_demo 12 | TEMPLATE = app 13 | 14 | # The following define makes your compiler emit warnings if you use 15 | # any feature of Qt which has been marked as deprecated (the exact warnings 16 | # depend on your compiler). Please consult the documentation of the 17 | # deprecated API in order to know how to port your code away from it. 18 | DEFINES += QT_DEPRECATED_WARNINGS 19 | 20 | # You can also make your code fail to compile if you use deprecated APIs. 21 | # In order to do so, uncomment the following line. 22 | # You can also select to disable deprecated APIs only up to a certain version of Qt.
23 | #DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000 # disables all the APIs deprecated before Qt 6.0.0 24 | 25 | 26 | SOURCES += \ 27 | main.cpp \ 28 | mainwindow.cpp \ 29 | videocapturethread.cpp \ 30 | inputfilesprocessdialog.cpp \ 31 | resetmodelprocessdialog.cpp 32 | 33 | HEADERS += \ 34 | mainwindow.h \ 35 | videocapturethread.h \ 36 | inputfilesprocessdialog.h \ 37 | resetmodelprocessdialog.h 38 | 39 | FORMS += \ 40 | mainwindow.ui 41 | 42 | #windows adm64: 43 | 44 | #INCLUDEPATH += C:/thirdparty/opencv4.2/build/include \ 45 | # C:/study/SF3.0/sf3.0_windows/sf3.0_windows/include 46 | 47 | 48 | #CONFIG(debug, debug|release) { 49 | #LIBS += -LC:/thirdparty/opencv4.2/build/x64/vc14/lib -lopencv_world420d \ 50 | # -LC:/study/SF3.0/sf3.0_windows/sf3.0_windows/lib/x64 -lSeetaFaceDetector600d -lSeetaFaceLandmarker600d \ 51 | # -lSeetaFaceAntiSpoofingX600d -lSeetaFaceTracking600d -lSeetaFaceRecognizer610d \ 52 | # -lSeetaQualityAssessor300d -lSeetaPoseEstimation600d 53 | 54 | #} else { 55 | #LIBS += -LC:/thirdparty/opencv4.2/build/x64/vc14/lib -lopencv_world420 \ 56 | # -LC:/study/SF3.0/sf3.0_windows/sf3.0_windows/lib/x64 -lSeetaFaceDetector600 -lSeetaFaceLandmarker600 \ 57 | # -lSeetaFaceAntiSpoofingX600 -lSeetaFaceTracking600 -lSeetaFaceRecognizer610 \ 58 | # -lSeetaQualityAssessor300 -lSeetaPoseEstimation600 59 | #} 60 | 61 | #linux: 62 | INCLUDEPATH += /wqy/tools/opencv4_home/include/opencv4 \ 63 | /wqy/seeta_sdk/SF3/libs/SF3.0_v1/include 64 | 65 | LIBS += -L/wqy/tools/opencv4_home/lib -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs \ 66 | -L/wqy/seeta_sdk/SF3/libs/SF3.0_v1/lib64 -lSeetaFaceDetector600 -lSeetaFaceLandmarker600 \ 67 | -lSeetaFaceAntiSpoofingX600 -lSeetaFaceTracking600 -lSeetaFaceRecognizer610 \ 68 | -lSeetaQualityAssessor300 -lSeetaPoseEstimation600 -lSeetaAuthorize -ltennis 69 | 70 | RESOURCES += \ 71 | face_resource.qrc 72 | -------------------------------------------------------------------------------- 
/example/qt/seetaface_demo/seetaface_demo.pro.user: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | EnvironmentId 7 | {1a9b2863-cf78-4bf8-bcd9-520cf75bdafe} 8 | 9 | 10 | ProjectExplorer.Project.ActiveTarget 11 | 0 12 | 13 | 14 | ProjectExplorer.Project.EditorSettings 15 | 16 | true 17 | false 18 | true 19 | 20 | Cpp 21 | 22 | CppGlobal 23 | 24 | 25 | 26 | QmlJS 27 | 28 | QmlJSGlobal 29 | 30 | 31 | 2 32 | UTF-8 33 | false 34 | 4 35 | false 36 | 80 37 | true 38 | true 39 | 1 40 | true 41 | false 42 | 0 43 | true 44 | true 45 | 0 46 | 8 47 | true 48 | 1 49 | true 50 | true 51 | true 52 | false 53 | 54 | 55 | 56 | ProjectExplorer.Project.PluginSettings 57 | 58 | 59 | 60 | ProjectExplorer.Project.Target.0 61 | 62 | Desktop Qt 5.9.2 GCC 64bit 63 | Desktop Qt 5.9.2 GCC 64bit 64 | qt.592.gcc_64_kit 65 | 0 66 | 0 67 | 0 68 | 69 | /wqy/test/qtproject/build-seetaface_demo-Desktop_Qt_5_9_2_GCC_64bit-Debug 70 | 71 | 72 | true 73 | qmake 74 | 75 | QtProjectManager.QMakeBuildStep 76 | true 77 | 78 | false 79 | false 80 | false 81 | 82 | 83 | true 84 | Make 85 | 86 | Qt4ProjectManager.MakeStep 87 | 88 | -w 89 | -r 90 | 91 | false 92 | 93 | 94 | 95 | 2 96 | Build 97 | 98 | ProjectExplorer.BuildSteps.Build 99 | 100 | 101 | 102 | true 103 | Make 104 | 105 | Qt4ProjectManager.MakeStep 106 | 107 | -w 108 | -r 109 | 110 | true 111 | clean 112 | 113 | 114 | 1 115 | Clean 116 | 117 | ProjectExplorer.BuildSteps.Clean 118 | 119 | 2 120 | false 121 | 122 | Debug 123 | 124 | Qt4ProjectManager.Qt4BuildConfiguration 125 | 2 126 | true 127 | 128 | 129 | /wqy/test/qtproject/build-seetaface_demo-Desktop_Qt_5_9_2_GCC_64bit-Release 130 | 131 | 132 | true 133 | qmake 134 | 135 | QtProjectManager.QMakeBuildStep 136 | false 137 | 138 | false 139 | false 140 | false 141 | 142 | 143 | true 144 | Make 145 | 146 | Qt4ProjectManager.MakeStep 147 | 148 | -w 149 | -r 150 | 151 | false 152 | 153 | 154 | 155 | 2 156 | Build 157 | 158 | 
ProjectExplorer.BuildSteps.Build 159 | 160 | 161 | 162 | true 163 | Make 164 | 165 | Qt4ProjectManager.MakeStep 166 | 167 | -w 168 | -r 169 | 170 | true 171 | clean 172 | 173 | 174 | 1 175 | Clean 176 | 177 | ProjectExplorer.BuildSteps.Clean 178 | 179 | 2 180 | false 181 | 182 | Release 183 | 184 | Qt4ProjectManager.Qt4BuildConfiguration 185 | 0 186 | true 187 | 188 | 189 | /wqy/test/qtproject/build-seetaface_demo-Desktop_Qt_5_9_2_GCC_64bit-Profile 190 | 191 | 192 | true 193 | qmake 194 | 195 | QtProjectManager.QMakeBuildStep 196 | true 197 | 198 | false 199 | true 200 | false 201 | 202 | 203 | true 204 | Make 205 | 206 | Qt4ProjectManager.MakeStep 207 | 208 | -w 209 | -r 210 | 211 | false 212 | 213 | 214 | 215 | 2 216 | Build 217 | 218 | ProjectExplorer.BuildSteps.Build 219 | 220 | 221 | 222 | true 223 | Make 224 | 225 | Qt4ProjectManager.MakeStep 226 | 227 | -w 228 | -r 229 | 230 | true 231 | clean 232 | 233 | 234 | 1 235 | Clean 236 | 237 | ProjectExplorer.BuildSteps.Clean 238 | 239 | 2 240 | false 241 | 242 | Profile 243 | 244 | Qt4ProjectManager.Qt4BuildConfiguration 245 | 0 246 | true 247 | 248 | 3 249 | 250 | 251 | 0 252 | Deploy 253 | 254 | ProjectExplorer.BuildSteps.Deploy 255 | 256 | 1 257 | Deploy locally 258 | 259 | ProjectExplorer.DefaultDeployConfiguration 260 | 261 | 1 262 | 263 | 264 | false 265 | false 266 | 1000 267 | 268 | true 269 | 270 | false 271 | false 272 | false 273 | false 274 | true 275 | 0.01 276 | 10 277 | true 278 | 1 279 | 25 280 | 281 | 1 282 | true 283 | false 284 | true 285 | valgrind 286 | 287 | 0 288 | 1 289 | 2 290 | 3 291 | 4 292 | 5 293 | 6 294 | 7 295 | 8 296 | 9 297 | 10 298 | 11 299 | 12 300 | 13 301 | 14 302 | 303 | 2 304 | 305 | seetaface_demo 306 | 307 | Qt4ProjectManager.Qt4RunConfiguration:/wqy/test/qtproject/seetaface_demo/seetaface_demo.pro 308 | true 309 | 310 | seetaface_demo.pro 311 | false 312 | 313 | /wqy/test/qtproject/build-seetaface_demo-Desktop_Qt_5_9_2_GCC_64bit-Debug 314 | 3768 315 | false 316 | true 317 
| false 318 | false 319 | true 320 | 321 | 1 322 | 323 | 324 | 325 | ProjectExplorer.Project.TargetCount 326 | 1 327 | 328 | 329 | ProjectExplorer.Project.Updater.FileVersion 330 | 18 331 | 332 | 333 | Version 334 | 18 335 | 336 | 337 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/seetatech_logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/example/qt/seetaface_demo/seetatech_logo.png -------------------------------------------------------------------------------- /example/qt/seetaface_demo/videocapturethread.cpp: -------------------------------------------------------------------------------- 1 | #include "videocapturethread.h" 2 | 3 | 4 | 5 | #include "seeta/QualityOfPoseEx.h" 6 | #include "seeta/Struct.h" 7 | #include 8 | #include 9 | #include 10 | #include 11 | #include 12 | #include "QDebug" 13 | 14 | 15 | using namespace std::chrono; 16 | 17 | extern const QString gcrop_prefix; 18 | extern Config_Paramter gparamters; 19 | extern std::string gmodelpath;// = "/wqy/seeta_sdk/SF3/libs/SF3.0_v1/models/"; 20 | 21 | void clone_image( const SeetaImageData &src, SeetaImageData &dst) 22 | { 23 | if(src.width != dst.width || src.height != dst.height || src.channels != dst.channels) 24 | { 25 | if(dst.data) 26 | { 27 | delete [] dst.data; 28 | dst.data = nullptr; 29 | } 30 | dst.width = src.width; 31 | dst.height = src.height; 32 | dst.channels = src.channels; 33 | dst.data = new unsigned char[src.width * src.height * src.channels]; 34 | } 35 | 36 | memcpy(dst.data, src.data, src.width * src.height * src.channels); 37 | } 38 | 39 | ////////////////////////////// 40 | WorkThread::WorkThread(VideoCaptureThread * main) 41 | { 42 | m_mainthread = main; 43 | 44 | } 45 | 46 | WorkThread::~WorkThread() 47 | { 48 | qDebug() << "WorkThread exited"; 49 | } 50 | 51 | int 
WorkThread::recognize(const SeetaTrackingFaceInfo & faceinfo)//, std::vector & datas) 52 | { 53 | auto points = m_mainthread->m_pd->mark(*m_mainthread->m_mainImage, faceinfo.pos); 54 | 55 | m_mainthread->m_qa->feed(*(m_mainthread->m_mainImage), faceinfo.pos, points.data(), 5); 56 | auto result1 = m_mainthread->m_qa->query(seeta::BRIGHTNESS); 57 | auto result2 = m_mainthread->m_qa->query(seeta::RESOLUTION); 58 | auto result3 = m_mainthread->m_qa->query(seeta::CLARITY); 59 | auto result4 = m_mainthread->m_qa->query(seeta::INTEGRITY); 60 | auto result = m_mainthread->m_qa->query(seeta::POSE_EX); 61 | 62 | qDebug() << "PID:" << faceinfo.PID; 63 | if(result.level == 0 || result1.level == 0 || result2.level == 0 || result3.level == 0 || result4.level == 0 ) 64 | { 65 | qDebug() << "Quality check failed!"; 66 | return -1; 67 | } 68 | 69 | auto status = m_mainthread->m_spoof->Predict( *m_mainthread->m_mainImage, faceinfo.pos, points.data() ); 70 | 71 | if( status != seeta::FaceAntiSpoofing::REAL) 72 | { 73 | qDebug() << "antispoofing check failed!"; 74 | return -2; 75 | } 76 | seeta::ImageData cropface = m_mainthread->m_fr->CropFaceV2(*m_mainthread->m_mainImage, points.data() ); 77 | float features[1024]; 78 | memset(features, 0, 1024 * sizeof(float)); 79 | m_mainthread->m_fr->ExtractCroppedFace(cropface, features); 80 | std::map::iterator iter = m_mainthread->m_datalst->begin(); 81 | //std::vector datas; 82 | 83 | for(; iter != m_mainthread->m_datalst->end(); ++iter) 84 | { 85 | if(m_mainthread->m_exited) 86 | { 87 | return -3; 88 | } 89 | float score = m_mainthread->m_fr->CalculateSimilarity(features, iter->second->features); 90 | qDebug() << "PID:" << faceinfo.PID << ", score:" << score; 91 | if(score >= gparamters.Fr_Threshold) 92 | { 93 | //datas.push_back(faceinfo.PID); 94 | //m_lastpids.push_back(faceinfo.PID); 95 | 96 | int x = faceinfo.pos.x - faceinfo.pos.width / 2; 97 | if((x) < 0) 98 | x = 0; 99 | int y = faceinfo.pos.y - faceinfo.pos.height / 2; 100 | if(y < 
0) 101 | y = 0; 102 | 103 | int x2 = faceinfo.pos.x + faceinfo.pos.width * 1.5; 104 | if(x2 >= m_mainthread->m_mainImage->width) 105 | { 106 | x2 = m_mainthread->m_mainImage->width -1; 107 | } 108 | 109 | int y2 = faceinfo.pos.y + faceinfo.pos.height * 1.5; 110 | if(y2 >= m_mainthread->m_mainImage->height) 111 | { 112 | y2 = m_mainthread->m_mainImage->height -1; 113 | } 114 | 115 | //qDebug() << "----x:" << faceinfo.pos.x << ",y:" << faceinfo.pos.y << ",w:" << faceinfo.pos.width << ",h:" << faceinfo.pos.height; 116 | cv::Rect rect(x, y, x2-x, y2 - y); 117 | //qDebug() << "x:" << x << ",y:" << y << ",w:" << x2-x << ",h:" << y2-y; 118 | //cv::Rect rect(faceinfo.pos.x, faceinfo.pos.y, faceinfo.pos.width, faceinfo.pos.height); 119 | 120 | cv::Mat mat = m_mainthread->m_mainmat(rect).clone(); 121 | //cv::imwrite("/tmp/ddd.png",mat); 122 | //qDebug() << "----mat---"; 123 | QImage image((const unsigned char *)mat.data, mat.cols,mat.rows,mat.step, QImage::Format_RGB888); 124 | //image.save("/tmp/wwww.png"); 125 | //qDebug() << "PID:" << faceinfo.PID << ", score:" << score; 126 | QRect rc(iter->second->x, iter->second->y, iter->second->width, iter->second->height); 127 | 128 | emit sigRecognize(faceinfo.PID, iter->second->name, iter->second->image_path, score, image, rc); 129 | return 0; 130 | } 131 | } 132 | 133 | //m_lastpids.clear(); 134 | //m_lastpids.resize(datas.size()); 135 | //std::copy(datas.begin(), datas.end(), m_lastpids.begin()); 136 | 137 | return -3; 138 | } 139 | 140 | void WorkThread::run() 141 | { 142 | //m_begin = system_clock::now(); 143 | m_lastpids.clear(); 144 | m_lasterrorpids.clear(); 145 | bool bfind = false; 146 | std::vector datas; 147 | std::vector errordatas; 148 | int nret = 0; 149 | while(!m_mainthread->m_exited) 150 | { 151 | if(!m_mainthread->m_readimage) 152 | { 153 | QThread::msleep(1); 154 | continue; 155 | } 156 | 157 | auto end = system_clock::now(); 158 | //auto duration = duration_cast(end - m_begin); 159 | //int spent = 
duration.count(); 160 | //if(spent > 10) 161 | // m_lastpids.clear(); 162 | 163 | datas.clear(); 164 | errordatas.clear(); 165 | for(int i=0; im_mainfaceinfos.size(); i++) 166 | { 167 | if(m_mainthread->m_exited) 168 | { 169 | return; 170 | } 171 | 172 | bfind = false; 173 | for(int k=0; km_mainfaceinfos[i].PID == m_lastpids[k]) 176 | { 177 | datas.push_back(m_lastpids[k]); 178 | bfind = true; 179 | break; 180 | } 181 | } 182 | if(!bfind) 183 | { 184 | SeetaTrackingFaceInfo & faceinfo = m_mainthread->m_mainfaceinfos[i]; 185 | nret = recognize(faceinfo);//(m_mainthread->m_mainfaceinfos[i]);//, datas); 186 | if(nret < 0) 187 | { 188 | Fr_DataInfo info; 189 | info.pid = faceinfo.PID; 190 | info.state = nret; 191 | errordatas.push_back(info); 192 | bool bsend = true; 193 | for(int k=0; k= m_mainthread->m_mainImage->width) 217 | { 218 | x2 = m_mainthread->m_mainImage->width -1; 219 | } 220 | 221 | int y2 = faceinfo.pos.y + faceinfo.pos.height * 1.5; 222 | if(y2 >= m_mainthread->m_mainImage->height) 223 | { 224 | y2 = m_mainthread->m_mainImage->height -1; 225 | } 226 | 227 | //qDebug() << "----x:" << faceinfo.pos.x << ",y:" << faceinfo.pos.y << ",w:" << faceinfo.pos.width << ",h:" << faceinfo.pos.height; 228 | cv::Rect rect(x, y, x2-x, y2 - y); 229 | //qDebug() << "x:" << x << ",y:" << y << ",w:" << x2-x << ",h:" << y2-y; 230 | //cv::Rect rect(faceinfo.pos.x, faceinfo.pos.y, faceinfo.pos.width, faceinfo.pos.height); 231 | 232 | cv::Mat mat = m_mainthread->m_mainmat(rect).clone(); 233 | //cv::imwrite("/tmp/ddd.png",mat); 234 | //qDebug() << "----mat---"; 235 | QImage image((const unsigned char *)mat.data, mat.cols,mat.rows,mat.step, QImage::Format_RGB888); 236 | 237 | QString str; 238 | if(info.state == -1) 239 | { 240 | str = "QA ERROR"; 241 | }else if(info.state == -2) 242 | { 243 | str = "SPOOFING"; 244 | }else if(info.state == -3) 245 | { 246 | str = "MISS"; 247 | } 248 | emit sigRecognize(info.pid, "", str, 0.0, image, QRect(0,0,0,0)); 249 | } 250 | }else 251 | { 252 
| datas.push_back(m_mainthread->m_mainfaceinfos[i].PID); 253 | } 254 | } 255 | 256 | } 257 | 258 | m_lastpids.clear(); 259 | m_lastpids.resize(datas.size()); 260 | std::copy(datas.begin(), datas.end(), m_lastpids.begin()); 261 | 262 | m_lasterrorpids.clear(); 263 | m_lasterrorpids.resize(errordatas.size()); 264 | std::copy(errordatas.begin(), errordatas.end(), m_lasterrorpids.begin()); 265 | 266 | auto end2 = system_clock::now(); 267 | auto duration2= duration_cast(end2 - end); 268 | int spent2 = duration2.count(); 269 | //qDebug() << "----spent:" << spent2; 270 | m_mainthread->m_mutex.lock(); 271 | m_mainthread->m_readimage = false; 272 | m_mainthread->m_mutex.unlock(); 273 | 274 | } 275 | 276 | } 277 | 278 | 279 | ///////////////////////////////// 280 | ResetModelThread::ResetModelThread(const QString &imagepath, const QString & tmpimagepath) 281 | { 282 | //m_mainthread = main; 283 | m_image_path = imagepath; 284 | m_image_tmp_path = tmpimagepath; 285 | m_exited = false; 286 | } 287 | 288 | ResetModelThread::~ResetModelThread() 289 | { 290 | qDebug() << "ResetModelThread exited"; 291 | } 292 | 293 | void ResetModelThread::start(std::map *datalst, const QString & table, seeta::FaceRecognizer * fr) 294 | { 295 | m_table = table; 296 | m_datalst = datalst; 297 | m_fr = fr; 298 | m_exited = false; 299 | 300 | QThread::start(); 301 | } 302 | 303 | typedef struct DataInfoTmp 304 | { 305 | int id; 306 | float features[ 1024]; 307 | }DataInfoTmp; 308 | 309 | void ResetModelThread::run() 310 | { 311 | int num = m_datalst->size(); 312 | QString fileName; 313 | 314 | float lastvalue = 0.0; 315 | float value = 0.0; 316 | 317 | 318 | ////////////////////////////////// 319 | 320 | QSqlQuery query; 321 | query.exec("drop table " + m_table + "_tmp"); 322 | if(!query.exec("create table " + m_table + "_tmp (id int primary key, name varchar(64), image_path varchar(256), feature_data blob)")) 323 | { 324 | qDebug() << "failed to create table:" + m_table + "_tmp"<< 
query.lastError(); 325 | emit sigResetModelEnd(-1); 326 | return; 327 | } 328 | 329 | 330 | //////////////////////////////// 331 | float features[1024]; 332 | 333 | std::vector<DataInfoTmp*> vecs; 334 | 335 | std::map<int, DataInfo*>::iterator iter = m_datalst->begin(); 336 | //std::vector datas; 337 | int i=0; 338 | 339 | for(; iter != m_datalst->end(); ++iter,i++) 340 | { 341 | if(m_exited) 342 | { 343 | break; 344 | } 345 | 346 | value = (float)(i + 1) / num; // cast first: integer division would always yield 0 or 1 347 | value = value * 90; 348 | if(value - lastvalue >= 1.0) 349 | { 350 | emit sigprogress(value); 351 | lastvalue = value; 352 | } 353 | //QString str = QString("current progress : %1%").arg(QString::number(value, 'f',1)); 354 | //emit sigprogress(value); // redundant: would bypass the 1% throttle above 355 | 356 | fileName = m_image_path + "crop_" + iter->second->image_path; 357 | cv::Mat mat = cv::imread(fileName.toStdString().c_str()); 358 | if(mat.data == NULL) 359 | { 360 | continue; 361 | } 362 | 363 | SeetaImageData image; 364 | image.height = mat.rows; 365 | image.width = mat.cols; 366 | image.channels = mat.channels(); 367 | image.data = mat.data; 368 | memset(features, 0, 1024 * sizeof(float)); 369 | m_fr->ExtractCroppedFace(image, features); 370 | 371 | 372 | 373 | //////////////////////////////////////////////////////// 374 | /* 375 | /// 376 | QSqlQuery query; 377 | query.prepare("update " + m_table + " set feature_data = :feature_data where id=:id"); 378 | 379 | query.bindValue(":id", iter->second->id); 380 | 381 | QByteArray bytearray; 382 | bytearray.resize(1024 * sizeof(float)); 383 | memcpy(bytearray.data(), features, 1024 * sizeof(float)); 384 | query.bindValue(":feature_data", QVariant(bytearray)); 385 | if(!query.exec()) 386 | { 387 | //vecs.push_back(iter->second->id); 388 | qDebug() << "failed to update table:" << query.lastError(); 389 | continue; 390 | } 391 | */ 392 | ////////////////////////////////////////////////////// 393 | QSqlQuery query2; 394 | query2.prepare("insert into " + m_table + "_tmp (id, name, image_path, feature_data) values (:id, :name,
:image_path, :feature_data)"); 395 | 396 | query2.bindValue(":id", iter->second->id); 397 | query2.bindValue(":name",iter->second->name); 398 | query2.bindValue(":image_path", iter->second->image_path); 399 | 400 | QByteArray bytearray; 401 | bytearray.resize(1024 * sizeof(float)); 402 | memcpy(bytearray.data(), features, 1024 * sizeof(float)); 403 | 404 | query2.bindValue(":feature_data", QVariant(bytearray)); 405 | if(!query2.exec()) 406 | { 407 | qDebug() << "failed to insert into table:" << query2.lastError(); 408 | continue; 409 | 410 | } 411 | 412 | 413 | /////////////////////////////////////////////// 414 | 415 | 416 | DataInfoTmp * info = new DataInfoTmp; 417 | info->id = iter->second->id; 418 | memcpy(info->features, features, 1024 * sizeof(float)); 419 | vecs.push_back(info); 420 | memcpy(iter->second->features, features, 1024 * sizeof(float)); 421 | } 422 | 423 | if(i < m_datalst->size()) 424 | { 425 | 426 | QSqlQuery deltable("drop table " + m_table + "_tmp"); 427 | deltable.exec(); 428 | for(int k=0; kfind(vecs[k]->id); 462 | if(iter != m_datalst->end()) 463 | { 464 | memcpy(iter->second->features, vecs[k]->features, 1024 * sizeof(float)); 465 | delete vecs[k]; 466 | } 467 | } 468 | vecs.clear(); 469 | 470 | } 471 | emit sigprogress(100.0); 472 | qDebug() << "------ResetModelThread---ok:"; 473 | emit sigResetModelEnd(0); 474 | } 475 | /// 476 | 477 | 478 | ///////////////////////////////// 479 | InputFilesThread::InputFilesThread(VideoCaptureThread * main, const QString &imagepath, const QString & tmpimagepath) 480 | { 481 | m_mainthread = main; 482 | m_image_path = imagepath; 483 | m_image_tmp_path = tmpimagepath; 484 | m_exited = false; 485 | } 486 | 487 | InputFilesThread::~InputFilesThread() 488 | { 489 | qDebug() << "InputFilesThread exited"; 490 | } 491 | 492 | void InputFilesThread::start(const QStringList * files, unsigned int id, const QString & table) 493 | { 494 | m_table = table; 495 | m_files = files; 496 | m_id = id; 497 | m_exited =
false; 498 | QThread::start(); 499 | } 500 | 501 | void InputFilesThread::run() 502 | { 503 | int num = m_files->size(); 504 | float features[1024]; 505 | QString strerror; 506 | int nret; 507 | QString fileName; 508 | int index; 509 | 510 | float lastvalue = 0.0; 511 | float value = 0.0; 512 | SeetaRect rect; 513 | std::vector<DataInfo *> datalst; 514 | 515 | for(int i=0; i<m_files->size(); i++) 516 | { 517 | if(m_exited) 518 | break; 519 | value = (i + 1) / (float)num; 520 | value = value * 100 * 0.8; 521 | if(value - lastvalue >= 1.0) 522 | { 523 | emit sigprogress(value); 524 | lastvalue = value; 525 | } 526 | 527 | 528 | 529 | fileName = m_files->at(i); 530 | 531 | QImage image(fileName); 532 | if(image.isNull()) 533 | continue; 534 | 535 | QFile file(fileName); 536 | QFileInfo fileinfo(fileName); 537 | 538 | ////////////////////////////// 539 | QSqlQuery query; 540 | query.prepare("insert into " + m_table + " (id, name, image_path, feature_data, facex,facey,facewidth,faceheight) values (:id, :name, :image_path, :feature_data,:facex,:facey,:facewidth,:faceheight)"); 541 | 542 | index = m_id + 1; 543 | 544 | QString strfile = QString::number(index) + "_" + fileinfo.fileName(); 545 | QString cropfile = m_image_path + "crop_" + strfile; 546 | 547 | memset(features, 0, sizeof(float) * 1024); 548 | nret = m_mainthread->checkimage(fileName, cropfile, features, rect); 549 | strerror = ""; 550 | 551 | if(nret == -2) 552 | { 553 | strerror = "no face found!"; 554 | }else if(nret == -1) 555 | { 556 | strerror = fileName + " is invalid!"; 557 | }else if(nret == 1) 558 | { 559 | strerror = "found more than one face!"; 560 | }else if(nret == 2) 561 | { 562 | strerror = "quality check failed!"; 563 | } 564 | 565 | if(!strerror.isEmpty()) 566 | { 567 | //QMessageBox::critical(NULL,"critical", strerror, QMessageBox::Yes); 568 | continue; 569 | } 570 | 571 | QString name =
fileinfo.completeBaseName();//fileName(); 572 | int n = name.indexOf("_"); 573 | 574 | if(n >= 1) 575 | { 576 | name = name.left(n); 577 | } 578 | 579 | query.bindValue(0, index); 580 | query.bindValue(1,name); 581 | query.bindValue(2, strfile); 582 | 583 | QByteArray bytearray; 584 | bytearray.resize(1024 * sizeof(float)); 585 | memcpy(bytearray.data(), features, 1024 * sizeof(float)); 586 | 587 | query.bindValue(3, QVariant(bytearray)); 588 | query.bindValue(4, rect.x); 589 | query.bindValue(5, rect.y); 590 | query.bindValue(6, rect.width); 591 | query.bindValue(7, rect.height); 592 | if(!query.exec()) 593 | { 594 | QFile::remove(cropfile); 595 | qDebug() << "failed to insert table:" << query.lastError(); 596 | //QMessageBox::critical(NULL, "critical", tr("save face data to database failed!"), QMessageBox::Yes); 597 | continue; 598 | } 599 | 600 | file.copy(m_image_path + strfile); 601 | 602 | 603 | DataInfo * info = new DataInfo(); 604 | info->id = index; 605 | info->name = name; 606 | info->image_path = strfile; 607 | memcpy(info->features, features, 1024 * sizeof(float)); 608 | info->x = rect.x; 609 | info->y = rect.y; 610 | info->width = rect.width; 611 | info->height = rect.height; 612 | datalst.push_back(info); 613 | 614 | m_id++; 615 | } 616 | 617 | if(datalst.size() > 0) 618 | { 619 | emit sigInputFilesUpdateUI( &datalst); 620 | } 621 | 622 | emit sigprogress(100.0); 623 | 624 | datalst.clear(); 625 | emit sigInputFilesEnd(); 626 | } 627 | /// 628 | 629 | VideoCaptureThread::VideoCaptureThread(std::map * datalst, int videowidth, int videoheight) 630 | { 631 | m_exited = false; 632 | //m_haveimage = false; 633 | 634 | m_datalst = datalst; 635 | //m_width = 800; 636 | //m_height = 600; 637 | qDebug() << "video width:" << videowidth << "," << videoheight; 638 | 639 | //std::string modelpath = "/wqy/seeta_sdk/SF3/libs/SF3.0_v1/models/"; 640 | seeta::ModelSetting fd_model; 641 | fd_model.append(gmodelpath + "face_detector.csta"); 642 | fd_model.set_device( 
seeta::ModelSetting::CPU ); 643 | fd_model.set_id(0); 644 | m_fd = new seeta::FaceDetector(fd_model); 645 | m_fd->set(seeta::FaceDetector::PROPERTY_MIN_FACE_SIZE, 100); 646 | 647 | m_tracker = new seeta::FaceTracker(fd_model, videowidth,videoheight); 648 | m_tracker->SetMinFaceSize(100); //set(seeta::FaceTracker::PROPERTY_MIN_FACE_SIZE, 100); 649 | 650 | seeta::ModelSetting pd_model; 651 | pd_model.append(gmodelpath + "face_landmarker_pts5.csta"); 652 | pd_model.set_device( seeta::ModelSetting::CPU ); 653 | pd_model.set_id(0); 654 | m_pd = new seeta::FaceLandmarker(pd_model); 655 | 656 | 657 | seeta::ModelSetting spoof_model; 658 | spoof_model.append(gmodelpath + "fas_first.csta"); 659 | spoof_model.append(gmodelpath + "fas_second.csta"); 660 | spoof_model.set_device( seeta::ModelSetting::CPU ); 661 | spoof_model.set_id(0); 662 | m_spoof = new seeta::FaceAntiSpoofing(spoof_model); 663 | m_spoof->SetThreshold(0.30, 0.80); 664 | 665 | seeta::ModelSetting fr_model; 666 | fr_model.append(gmodelpath + "face_recognizer.csta"); 667 | fr_model.set_device( seeta::ModelSetting::CPU ); 668 | fr_model.set_id(0); 669 | m_fr = new seeta::FaceRecognizer(fr_model); 670 | 671 | 672 | 673 | /////////////////////////////// 674 | seeta::ModelSetting setting68; 675 | setting68.set_id(0); 676 | setting68.set_device( SEETA_DEVICE_CPU ); 677 | setting68.append(gmodelpath + "face_landmarker_pts68.csta" ); 678 | m_pd68 = new seeta::FaceLandmarker( setting68 ); 679 | 680 | seeta::ModelSetting posemodel; 681 | posemodel.set_device(SEETA_DEVICE_CPU); 682 | posemodel.set_id(0); 683 | posemodel.append(gmodelpath + "pose_estimation.csta"); 684 | m_poseex = new seeta::QualityOfPoseEx(posemodel); 685 | m_poseex->set(seeta::QualityOfPoseEx::YAW_LOW_THRESHOLD, 20); 686 | m_poseex->set(seeta::QualityOfPoseEx::YAW_HIGH_THRESHOLD, 10); 687 | m_poseex->set(seeta::QualityOfPoseEx::PITCH_LOW_THRESHOLD, 20); 688 | m_poseex->set(seeta::QualityOfPoseEx::PITCH_HIGH_THRESHOLD, 10); 689 | 690 | 
seeta::ModelSetting lbnmodel; 691 | lbnmodel.set_device(SEETA_DEVICE_CPU); 692 | lbnmodel.set_id(0); 693 | lbnmodel.append(gmodelpath + "quality_lbn.csta"); 694 | m_lbn = new seeta::QualityOfLBN(lbnmodel); 695 | m_lbn->set(seeta::QualityOfLBN::PROPERTY_BLUR_THRESH, 0.80); 696 | 697 | m_qa = new seeta::QualityAssessor(); 698 | m_qa->add_rule(seeta::INTEGRITY); 699 | m_qa->add_rule(seeta::RESOLUTION); 700 | m_qa->add_rule(seeta::BRIGHTNESS); 701 | m_qa->add_rule(seeta::CLARITY); 702 | m_qa->add_rule(seeta::POSE_EX, m_poseex, true); 703 | 704 | ////////////////////// 705 | 706 | 707 | //m_capture = new cv::VideoCapture(0); 708 | m_capture = NULL;//new cv::VideoCapture; 709 | //m_capture->set( cv::CAP_PROP_FRAME_WIDTH, videowidth ); 710 | //m_capture->set( cv::CAP_PROP_FRAME_HEIGHT, videoheight ); 711 | //int videow = vc.get( CV_CAP_PROP_FRAME_WIDTH ); 712 | //int videoh = vc.get( CV_CAP_PROP_FRAME_HEIGHT ); 713 | 714 | m_workthread = new WorkThread(this); 715 | 716 | m_mainImage = new SeetaImageData(); 717 | //m_curImage = new SeetaImageData(); 718 | m_mainImage->width = m_mainImage->height = m_mainImage->channels= 0; 719 | m_mainImage->data = NULL; 720 | 721 | //m_curImage->width = m_curImage->height = m_curImage->channels= 0; 722 | //m_curImage->data = NULL; 723 | } 724 | 725 | VideoCaptureThread::~VideoCaptureThread() 726 | { 727 | m_exited = true; 728 | while(!isFinished()) 729 | { 730 | QThread::msleep(1); 731 | } 732 | qDebug() << "VideoCaptureThread exited"; 733 | if( m_capture) 734 | delete m_capture; 735 | delete m_fd; 736 | delete m_pd; 737 | delete m_spoof; 738 | delete m_tracker; 739 | delete m_lbn; 740 | delete m_qa; 741 | delete m_pd68; delete m_poseex; delete m_fr; // also free the 68-point landmarker, pose estimator and recognizer 742 | delete m_workthread; 743 | 744 | } 745 | 746 | void VideoCaptureThread::setparamter() 747 | { 748 | /* 749 | qDebug() << gparamters.MinFaceSize << ", " << gparamters.Fd_Threshold; 750 | qDebug() << gparamters.VideoWidth << ", " << gparamters.VideoHeight; 751 | qDebug() << gparamters.AntiSpoofClarity << ", " <<
gparamters.AntiSpoofReality; 752 | qDebug() << gparamters.YawLowThreshold << ", " << gparamters.YawHighThreshold; 753 | qDebug() << gparamters.PitchLowThreshold << ", " << gparamters.PitchHighThreshold; 754 | */ 755 | m_fd->set(seeta::FaceDetector::PROPERTY_MIN_FACE_SIZE, gparamters.MinFaceSize); 756 | m_fd->set(seeta::FaceDetector::PROPERTY_THRESHOLD, gparamters.Fd_Threshold); 757 | 758 | m_tracker->SetMinFaceSize(gparamters.MinFaceSize); 759 | m_tracker->SetThreshold(gparamters.Fd_Threshold); 760 | m_tracker->SetVideoSize(gparamters.VideoWidth, gparamters.VideoHeight); 761 | 762 | m_spoof->SetThreshold(gparamters.AntiSpoofClarity, gparamters.AntiSpoofReality); 763 | 764 | m_poseex->set(seeta::QualityOfPoseEx::YAW_LOW_THRESHOLD, gparamters.YawLowThreshold); 765 | m_poseex->set(seeta::QualityOfPoseEx::YAW_HIGH_THRESHOLD, gparamters.YawHighThreshold); 766 | m_poseex->set(seeta::QualityOfPoseEx::PITCH_LOW_THRESHOLD, gparamters.PitchLowThreshold); 767 | m_poseex->set(seeta::QualityOfPoseEx::PITCH_HIGH_THRESHOLD, gparamters.PitchHighThreshold); 768 | 769 | } 770 | 771 | seeta::FaceRecognizer * VideoCaptureThread::CreateFaceRecognizer(const QString & modelfile) 772 | { 773 | 774 | seeta::ModelSetting fr_model; 775 | fr_model.append(gmodelpath + modelfile.toStdString()); 776 | fr_model.set_device( seeta::ModelSetting::CPU ); 777 | fr_model.set_id(0); 778 | seeta::FaceRecognizer * fr = new seeta::FaceRecognizer(fr_model); 779 | return fr; 780 | } 781 | 782 | void VideoCaptureThread::set_fr(seeta::FaceRecognizer * fr) 783 | { 784 | if(m_fr != NULL) 785 | { 786 | delete m_fr; 787 | } 788 | m_fr = fr; 789 | } 790 | 791 | void VideoCaptureThread::start(const RecognizeType &type) 792 | { 793 | m_type.type = type.type; 794 | m_type.filename = type.filename; 795 | m_exited = false; QThread::start(); // reset the exit flag so the thread can be restarted, as InputFilesThread::start does 796 | } 797 | 798 | void VideoCaptureThread::run() 799 | { 800 | int nret = 0; 801 | 802 | 803 | if(m_type.type == 0) 804 | { 805 | m_capture = new cv::VideoCapture; 806 |
m_capture->open(m_type.type); 807 | m_capture->set( cv::CAP_PROP_FRAME_WIDTH, gparamters.VideoWidth ); 808 | m_capture->set( cv::CAP_PROP_FRAME_HEIGHT, gparamters.VideoHeight ); 809 | 810 | }else if(m_type.type == 1) 811 | { 812 | m_capture = new cv::VideoCapture; 813 | m_capture->open(m_type.filename.toStdString().c_str()); 814 | m_capture->set( cv::CAP_PROP_FRAME_WIDTH, gparamters.VideoWidth ); 815 | m_capture->set( cv::CAP_PROP_FRAME_HEIGHT, gparamters.VideoHeight ); 816 | } 817 | 818 | //m_capture->open("/tmp/test.avi"); 819 | //m_capture->open(0); 820 | //m_capture->set( cv::CAP_PROP_FRAME_WIDTH, gparamters.VideoWidth ); 821 | //m_capture->set( cv::CAP_PROP_FRAME_HEIGHT, gparamters.VideoHeight ); 822 | 823 | 824 | if((m_capture != NULL) && (!m_capture->isOpened())) 825 | { 826 | m_capture->release(); 827 | emit sigEnd(-1); 828 | return; 829 | } 830 | 831 | cv::Mat mat, mat2; 832 | cv::Scalar color; 833 | color = CV_RGB( 0, 255, 0 ); 834 | 835 | m_workthread->start(); 836 | 837 | /* 838 | //mp4,h263,flv 839 | cv::VideoWriter outputvideo; 840 | cv::Size s(800,600); 841 | int codec = outputvideo.fourcc('M', 'P', '4', '2'); 842 | outputvideo.open("/tmp/test.avi", codec, 50.0, s, true); 843 | if(!outputvideo.isOpened()) 844 | { 845 | qDebug() << " write video failed"; 846 | } 847 | */ 848 | 849 | while(!m_exited) 850 | { 851 | if(m_type.type == 2) 852 | { 853 | mat = cv::imread(m_type.filename.toStdString().c_str()); 854 | if(mat.data == NULL) 855 | { 856 | qDebug() << "VideoCapture read failed"; 857 | m_exited = true; 858 | nret = -2; 859 | break; 860 | } 861 | }else 862 | { 863 | if(!m_capture->read(mat)) 864 | { 865 | qDebug() << "VideoCapture read failed"; 866 | m_exited = true; 867 | nret = -2; 868 | break; 869 | } 870 | } 871 | 872 | //(*m_capture) >> mat; 873 | 874 | //cv::imwrite("/tmp/www_test.png",mat); 875 | auto start = system_clock::now(); 876 | if(m_type.type == 1) 877 | { 878 | cv::flip(mat, mat, 1); 879 | }else 880 | { 881 | cv::Size size 
(gparamters.VideoWidth, gparamters.VideoHeight); 882 | cv::resize(mat, mat2, size, 0, 0, cv::INTER_CUBIC); 883 | mat = mat2.clone(); 884 | } 885 | 886 | if(mat.channels() == 4) 887 | { 888 | cv::cvtColor(mat, mat, cv::COLOR_BGRA2BGR); 889 | } 890 | 891 | SeetaImageData image; 892 | image.height = mat.rows; 893 | image.width = mat.cols; 894 | image.channels = mat.channels(); 895 | image.data = mat.data; 896 | 897 | cv::cvtColor(mat, mat2, cv::COLOR_BGR2RGB); 898 | 899 | auto faces = m_tracker->Track(image); 900 | //qDebug() << "-----track size:" << faces.size; 901 | if( faces.size > 0 ) 902 | { 903 | m_mutex.lock(); 904 | if(!m_readimage) 905 | { 906 | clone_image(image, *m_mainImage); 907 | //cv::Mat tmpmat; 908 | //cv::cvtColor(mat, tmpmat, cv::COLOR_BGR2RGB); 909 | m_mainmat = mat2.clone();//tmpmat.clone(); 910 | m_mainfaceinfos.clear(); 911 | for(int i=0; i<faces.size; i++) 941 | auto duration = duration_cast<microseconds>(end - start); 942 | int spent = duration.count() / 1000; 943 | if(spent < 50) 944 | { 945 | QThread::msleep(50 - spent); 946 | } 947 | 948 | if(m_type.type == 2) 949 | { 950 | nret = -2; 951 | m_exited = true; 952 | break; 953 | } 954 | } 955 | 956 | if(m_capture != NULL) 957 | { 958 | m_capture->release(); 959 | } 960 | 961 | while(!m_workthread->isFinished()) 962 | { 963 | QThread::msleep(1); 964 | } 965 | 966 | emit sigEnd(nret); 967 | } 968 | 969 | //return 0: success, -1: source image is invalid, -2: no face found, 1: more than one face found, 2: quality check failed 970 | int VideoCaptureThread::checkimage(const QString & image, const QString & crop, float * features, SeetaRect &rect) 971 | { 972 | std::string strimage = image.toStdString(); 973 | std::string strcrop = crop.toStdString(); 974 | 975 | cv::Mat mat = cv::imread(strimage.c_str()); 976 | if(mat.empty()) 977 | return -1; 978 | 979 | SeetaImageData img; 980 | img.width = mat.cols; 981 | img.height = mat.rows; 982 | img.channels = mat.channels(); 983 | img.data = mat.data; 984 | 985 | auto face_array = m_fd->detect(img); 986 | 987 |
if(face_array.size <= 0) 988 | { 989 | return -2; 990 | }else if(face_array.size > 1) 991 | { 992 | return 1; 993 | } 994 | 995 | SeetaRect& face = face_array.data[0].pos; 996 | SeetaPointF points[5]; 997 | 998 | m_pd->mark(img, face, points); 999 | 1000 | m_qa->feed(img, face, points, 5); 1001 | auto result1 = m_qa->query(seeta::BRIGHTNESS); 1002 | auto result2 = m_qa->query(seeta::RESOLUTION); 1003 | auto result3 = m_qa->query(seeta::CLARITY); 1004 | auto result4 = m_qa->query(seeta::INTEGRITY); 1005 | //auto result5 = m_qa->query(seeta::POSE); 1006 | auto result = m_qa->query(seeta::POSE_EX); 1007 | 1008 | if(result.level == 0 || result1.level == 0 || result2.level == 0 || result3.level == 0 || result4.level == 0 ) 1009 | { 1010 | return 2; 1011 | } 1012 | 1013 | /* 1014 | SeetaPointF points68[68]; 1015 | memset( points68, 0, sizeof( SeetaPointF ) * 68 ); 1016 | 1017 | m_pd68->mark(img, face,points68); 1018 | int light, blur, noise; 1019 | light = blur = noise = -1; 1020 | 1021 | m_lbn->Detect( img, points68, &light, &blur, &noise ); 1022 | */ 1023 | //std::cout << "light:" << light << ", blur:" << blur << ", noise:" << noise << std::endl; 1024 | 1025 | seeta::ImageData cropface = m_fr->CropFaceV2(img, points); 1026 | cv::Mat imgmat(cropface.height, cropface.width, CV_8UC(cropface.channels), cropface.data); 1027 | 1028 | m_fr->ExtractCroppedFace(cropface, features); 1029 | 1030 | cv::imwrite(strcrop.c_str(), imgmat); 1031 | 1032 | /////////////////////////////////////////////// 1033 | int x = face.x - face.width / 2; 1034 | if((x) < 0) 1035 | x = 0; 1036 | int y = face.y - face.height / 2; 1037 | if(y < 0) 1038 | y = 0; 1039 | 1040 | int x2 = face.x + face.width * 1.5; 1041 | if(x2 >= img.width) 1042 | { 1043 | x2 = img.width -1; 1044 | } 1045 | 1046 | int y2 = face.y + face.height * 1.5; 1047 | if(y2 >= img.height) 1048 | { 1049 | y2 = img.height -1; 1050 | } 1051 | 1052 | rect.x = x; 1053 | rect.y = y; 1054 | rect.width = x2 - x; 1055 | rect.height = y2 - y; 
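// Worked example of the rect padding above (an illustrative walk-through, not
// code from the SDK): a 100x100 face at (200, 150) in a 640x480 image gives
// x = 200 - 50 = 150, y = 150 - 50 = 100, x2 = 200 + 150 = 350 and
// y2 = 150 + 150 = 300, so rect becomes (150, 100, 200x200): the detected
// box grown by half its size on each side, clamped to the image bounds.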
1056 | 1057 | return 0; 1058 | } 1059 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/videocapturethread.h: -------------------------------------------------------------------------------- 1 | #ifndef VIDEOCAPTURETHREAD_H 2 | #define VIDEOCAPTURETHREAD_H 3 | 4 | #include 5 | #include 6 | #include 7 | 8 | #include "seeta/FaceLandmarker.h" 9 | #include "seeta/FaceDetector.h" 10 | #include "seeta/FaceAntiSpoofing.h" 11 | #include "seeta/Common/Struct.h" 12 | #include "seeta/CTrackingFaceInfo.h" 13 | #include "seeta/FaceTracker.h" 14 | #include "seeta/FaceRecognizer.h" 15 | #include "seeta/QualityAssessor.h" 16 | #include "seeta/QualityOfPoseEx.h" 17 | #include "seeta/QualityOfLBN.h" 18 | 19 | 20 | #include 21 | #include 22 | #include 23 | #include 24 | 25 | #include 26 | 27 | typedef struct RecognizeType 28 | { 29 | int type; //0: open camera, 1:open video file, 2:open image file 30 | QString filename; //when type is 1 or 2, video file name or image file name 31 | QString title; //windows title 32 | }RecognizeType; 33 | 34 | typedef struct DataInfo 35 | { 36 | int id; 37 | int x; 38 | int y; 39 | int width; 40 | int height; 41 | QString name; 42 | QString image_path; 43 | float features[1024]; 44 | }DataInfo; 45 | 46 | 47 | typedef struct Config_Paramter 48 | { 49 | int MinFaceSize; 50 | float Fd_Threshold; 51 | int VideoWidth; 52 | int VideoHeight; 53 | 54 | float YawLowThreshold; 55 | float YawHighThreshold; 56 | float PitchLowThreshold; 57 | float PitchHighThreshold; 58 | 59 | float AntiSpoofClarity; 60 | float AntiSpoofReality; 61 | 62 | float Fr_Threshold; 63 | QString Fr_ModelPath; 64 | } Config_Paramter; 65 | 66 | 67 | typedef struct Fr_DataInfo 68 | { 69 | int pid; 70 | int state; 71 | }Fr_DataInfo; 72 | 73 | class VideoCaptureThread; 74 | 75 | class WorkThread : public QThread 76 | { 77 | Q_OBJECT 78 | public: 79 | WorkThread(VideoCaptureThread * main); 80 | ~WorkThread(); 81 | 82 | protected: 83 | 
void run(); 84 | 85 | 86 | signals: 87 | void sigRecognize(int, const QString &, const QString &, float, const QImage &, const QRect &); 88 | 89 | private: 90 | 91 | int recognize(const SeetaTrackingFaceInfo & faceinfo);//, std::vector & datas); 92 | 93 | public: 94 | 95 | VideoCaptureThread * m_mainthread; 96 | std::vector<int> m_lastpids; 97 | std::vector<int> m_lasterrorpids; 98 | }; 99 | 100 | 101 | class ResetModelThread : public QThread 102 | { 103 | Q_OBJECT 104 | public: 105 | ResetModelThread( const QString &imagepath, const QString & tmpimagepath); 106 | ~ResetModelThread(); 107 | 108 | void start(std::map<int, DataInfo *> *datalst, const QString & table, seeta::FaceRecognizer * fr); 109 | protected: 110 | void run(); 111 | 112 | 113 | signals: 114 | //void sigResetModelUpdateUI(std::vector *); 115 | void sigResetModelEnd(int); 116 | void sigprogress(float); 117 | 118 | public: 119 | 120 | seeta::FaceRecognizer * m_fr; 121 | //VideoCaptureThread * m_mainthread; 122 | 123 | std::map<int, DataInfo *> * m_datalst; 124 | 125 | QString m_table; 126 | QString m_image_path; 127 | QString m_image_tmp_path; 128 | 129 | bool m_exited; 130 | }; 131 | 132 | 133 | class InputFilesThread : public QThread 134 | { 135 | Q_OBJECT 136 | public: 137 | InputFilesThread(VideoCaptureThread * main, const QString &imagepath, const QString & tmpimagepath); 138 | ~InputFilesThread(); 139 | 140 | void start(const QStringList * files, unsigned int id, const QString & table); 141 | protected: 142 | void run(); 143 | 144 | 145 | signals: 146 | void sigInputFilesUpdateUI(std::vector<DataInfo *> *); 147 | void sigInputFilesEnd(); 148 | void sigprogress(float); 149 | 150 | public: 151 | 152 | VideoCaptureThread * m_mainthread; 153 | 154 | const QStringList * m_files; 155 | unsigned int m_id; 156 | QString m_table; 157 | QString m_image_path; 158 | QString m_image_tmp_path; 159 | 160 | bool m_exited; 161 | }; 162 | 163 | 164 | class VideoCaptureThread : public QThread 165 | { 166 | Q_OBJECT 167 | public: 168 | VideoCaptureThread(std::map<int, DataInfo *> *
datalst, int videowidth, int videoheight); 169 | ~VideoCaptureThread(); 170 | //void setMinFaceSize(int size); 171 | 172 | void setparamter(); 173 | int checkimage(const QString & image, const QString & crop, float * features, SeetaRect &rect); 174 | 175 | void start(const RecognizeType &type); 176 | 177 | seeta::FaceRecognizer * CreateFaceRecognizer(const QString & modelfile); 178 | void set_fr(seeta::FaceRecognizer * fr); 179 | protected: 180 | void run(); 181 | 182 | 183 | signals: 184 | void sigUpdateUI(const QImage & image); 185 | void sigEnd(int); 186 | 187 | private: 188 | 189 | cv::VideoCapture * m_capture; 190 | 191 | public: 192 | seeta::FaceDetector * m_fd; 193 | seeta::FaceLandmarker * m_pd; 194 | seeta::FaceLandmarker * m_pd68; 195 | seeta::FaceAntiSpoofing * m_spoof; 196 | seeta::FaceRecognizer * m_fr; 197 | seeta::FaceTracker * m_tracker; 198 | seeta::QualityAssessor * m_qa; 199 | seeta::QualityOfLBN * m_lbn; 200 | seeta::QualityOfPoseEx * m_poseex; 201 | 202 | public: 203 | bool m_isrun; 204 | bool m_exited; 205 | 206 | 207 | 208 | std::map<int, DataInfo *> *m_datalst; 209 | 210 | bool m_readimage; 211 | SeetaImageData *m_mainImage; 212 | cv::Mat m_mainmat; 213 | 214 | std::vector<SeetaTrackingFaceInfo> m_mainfaceinfos; 215 | 216 | WorkThread * m_workthread; 217 | QMutex m_mutex; 218 | 219 | RecognizeType m_type; 220 | }; 221 | 222 | #endif // VIDEOCAPTURETHREAD_H 223 | -------------------------------------------------------------------------------- /example/qt/seetaface_demo/white.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeetaFace6Open/index/a32e2faa0694c0f841ace4df9ead0407b78363c6/example/qt/seetaface_demo/white.png --------------------------------------------------------------------------------