[vision] Add AdaFace model support (PaddlePaddle#301)
* Add AdaFace model

* Add AdaFace model Python code

* Add AdaFace model example code

* Remove unused imports

* update

* Fix errors in the faceid docs

* Fix errors in the faceid docs

* Remove unused files

* Add AdaFace Paddle Inference code; model files committed now for easier testing, to be removed later

* Add AdaFace Paddle Inference code; model files committed now for easier testing, to be removed later

* Revise as requested and get the cpp example running

* Test the python example

* Python CPU test passed; docs updated

* Fix docs and replace the model download URL

* Fix docs

* Fix docs

Co-authored-by: DefTruth <[email protected]>
Zheng-Bicheng and DefTruth authored Oct 11, 2022
1 parent a6847c5 commit 9c3ac8f
Showing 18 changed files with 804 additions and 31 deletions.
13 changes: 7 additions & 6 deletions examples/vision/faceid/README.md
@@ -3,9 +3,10 @@

FastDeploy currently supports deployment of the following face recognition models:

| Model | Description | Format | Version |
|:---------------------------------------|:------------------------|:-----------|:------------------------------------------------------------------------------|
| [deepinsight/ArcFace](./insightface) | ArcFace series models | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) |
| [deepinsight/CosFace](./insightface) | CosFace series models | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) |
| [deepinsight/PartialFC](./insightface) | PartialFC series models | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) |
| [deepinsight/VPL](./insightface) | VPL series models | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) |
| [paddleclas/AdaFace](./adaface) | AdaFace series models | PADDLE | [v2.4.0](https://github.com/PaddlePaddle/PaddleClas/tree/v2.4.0) |
32 changes: 32 additions & 0 deletions examples/vision/faceid/adaface/README.md
@@ -0,0 +1,32 @@
# AdaFace Deployment Model Preparation

- [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/)
- Models trained in the [official repository](https://github.com/PaddlePaddle/PaddleClas/) can be deployed after being exported as Paddle static-graph models.

## Introduction

Face recognition on low-quality images has always been challenging, because facial attributes in such images are blurred and degraded, and models classify them poorly. Moreover, face recognition pipelines routinely rectify faces with OpenCV affine transforms, which itself degrades image quality, so handling low-quality inputs is a pain point when putting models into production.

In AdaFace, the authors introduce an additional factor, image quality, into the loss function. They argue that the strategy of emphasizing misclassified samples should be adjusted according to image quality: the relative importance of easy versus hard samples should be set based on each sample's quality. On this basis they propose a new loss that uses image quality to weight the importance of different hard samples.

AdaFace thus mitigates the accuracy drop on low-quality inputs and is well suited to production face recognition.
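The quality-adaptive margin described above can be written out. As a sketch based on the AdaFace paper (Kim et al., CVPR 2022 — the formulas below are taken from that paper, not from this repository), the feature norm serves as an image-quality proxy, normalized per batch and clipped:

```latex
\widehat{\lVert z_i \rVert} = \operatorname{clip}\!\left(
    \frac{\lVert z_i \rVert - \mu_z}{\sigma_z / h},\; -1,\; 1\right)
```

and the margin applied to the ground-truth logit adapts with it:

```latex
f(\theta_{y_i}) = s\left(\cos(\theta_{y_i} + g_{\text{angle}}) - g_{\text{add}}\right),
\qquad
g_{\text{angle}} = -m \cdot \widehat{\lVert z_i \rVert},
\qquad
g_{\text{add}} = m \cdot \widehat{\lVert z_i \rVert} + m
```

where $m$ is the base margin, $s$ the scale, and $\mu_z,\sigma_z$ the batch statistics of the feature norm, so that the margin interpolates between ArcFace-like and CosFace-like behavior depending on image quality.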


## Export a Paddle Static-Graph Model

Taking AdaFace as an example, see [AIStudio](https://aistudio.baidu.com/aistudio/projectdetail/4479879?contributionType=1) for the training and export code.


## Download Pretrained Paddle Static-Graph Models

For developers' convenience, the converted models below are provided for direct download. (The accuracy figures come from each model's description on AIStudio.)

| Model | Size | Accuracy (AgeDB_30) |
|:----------------------------------------------------------------------------------------------|:------|:--------------------|
| [AdaFace-MobileFacenet](https://bj.bcebos.com/paddlehub/fastdeploy/mobilefacenet_adaface.tgz) | 3.2MB | 95.5 |

## Detailed Deployment Docs

- [Python deployment](python)
- [C++ deployment](cpp)
13 changes: 13 additions & 0 deletions examples/vision/faceid/adaface/cpp/CMakeLists.txt
@@ -0,0 +1,13 @@
CMAKE_MINIMUM_REQUIRED(VERSION 3.12)
PROJECT(infer_demo C CXX)

# Path to the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
111 changes: 111 additions & 0 deletions examples/vision/faceid/adaface/cpp/README.md
@@ -0,0 +1,111 @@
# AdaFace C++ Deployment Example

This directory provides `infer.cc`, a demo that quickly deploys AdaFace on CPU/GPU, and on GPU with TensorRT acceleration.

Before deploying, confirm the following two steps:

- 1. The hardware and software environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/environment.md)
- 2. Download the prebuilt deployment library and sample code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/quick_start)

Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo:

```bash
# If the prebuilt library does not include this model, build the SDK from the latest source code
mkdir build
cd build
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-0.2.1.tgz
tar xvf fastdeploy-linux-x64-0.2.1.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.1
make -j

# Download test images
wget https://bj.bcebos.com/paddlehub/test_samples/test_lite_focal_arcface_0.JPG
wget https://bj.bcebos.com/paddlehub/test_samples/test_lite_focal_arcface_1.JPG
wget https://bj.bcebos.com/paddlehub/test_samples/test_lite_focal_arcface_2.JPG

# For a Paddle-format model, run the following
wget https://bj.bcebos.com/paddlehub/fastdeploy/mobilefacenet_adaface.tgz
tar zxvf mobilefacenet_adaface.tgz -C ./
# CPU inference
./infer_demo mobilefacenet_adaface/mobilefacenet_adaface.pdmodel \
mobilefacenet_adaface/mobilefacenet_adaface.pdiparams \
test_lite_focal_arcface_0.JPG \
test_lite_focal_arcface_1.JPG \
test_lite_focal_arcface_2.JPG \
0

# GPU inference
./infer_demo mobilefacenet_adaface/mobilefacenet_adaface.pdmodel \
mobilefacenet_adaface/mobilefacenet_adaface.pdiparams \
test_lite_focal_arcface_0.JPG \
test_lite_focal_arcface_1.JPG \
test_lite_focal_arcface_2.JPG \
1

# TensorRT inference on GPU
./infer_demo mobilefacenet_adaface/mobilefacenet_adaface.pdmodel \
mobilefacenet_adaface/mobilefacenet_adaface.pdiparams \
test_lite_focal_arcface_0.JPG \
test_lite_focal_arcface_1.JPG \
test_lite_focal_arcface_2.JPG \
2

```

The visualized results are shown below:

<div width="700">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184321537-860bf857-0101-4e92-a74c-48e8658d838c.JPG">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184322004-a551e6e4-6f47-454e-95d6-f8ba2f47b516.JPG">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184321622-d9a494c3-72f3-47f1-97c5-8a2372de491f.JPG">
</div>

The commands above apply only to Linux and macOS. For using the SDK on Windows, see:
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/compile/how_to_use_sdk_on_windows.md)

## AdaFace C++ Interface

### AdaFace Class

```c++
fastdeploy::vision::faceid::AdaFace(
      const std::string& model_file,
      const std::string& params_file = "",
      const RuntimeOption& runtime_option = RuntimeOption(),
      const ModelFormat& model_format = ModelFormat::PADDLE)
```

Loads and initializes an AdaFace model. For Paddle Inference, `model_file` and `params_file` are the Paddle Inference model files;
for ONNX Runtime, `model_file` is an ONNX model and `params_file` is left empty.
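As a sketch, constructing the model in either format might look like this (a hypothetical snippet written against the interface documented above; the file paths are placeholders):

```cpp
#include "fastdeploy/vision.h"

int main() {
  auto option = fastdeploy::RuntimeOption();

  // Paddle Inference format: both model and params files are required.
  auto paddle_model = fastdeploy::vision::faceid::AdaFace(
      "mobilefacenet_adaface/mobilefacenet_adaface.pdmodel",
      "mobilefacenet_adaface/mobilefacenet_adaface.pdiparams",
      option, fastdeploy::ModelFormat::PADDLE);

  // ONNX format: params_file stays empty.
  auto onnx_model = fastdeploy::vision::faceid::AdaFace(
      "mobilefacenet_adaface.onnx", "", option,
      fastdeploy::ModelFormat::ONNX);

  return paddle_model.Initialized() && onnx_model.Initialized() ? 0 : 1;
}
```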
#### Predict Function

> ```c++
> AdaFace::Predict(cv::Mat* im, FaceRecognitionResult* result)
> ```
>
> Model prediction interface: takes an input image and produces the recognition result.
>
> **Parameters**
>
> > * **im**: input image; note it must be in HWC layout, BGR format
> > * **result**: recognition result containing the face embedding; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for details of FaceRecognitionResult
### Class Member Variables

#### Preprocessing Parameters

Users may modify the following preprocessing parameters to suit their needs, which affects the final inference result:

> > * **size**(vector&lt;int&gt;): target size of the resize step as [width, height]; default [112, 112]
> > * **alpha**(vector&lt;float&gt;): normalization alpha in `x'=x*alpha+beta`; default [1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f]
> > * **beta**(vector&lt;float&gt;): normalization beta in `x'=x*alpha+beta`; default [-1.f, -1.f, -1.f]
> > * **swap_rb**(bool): whether to convert BGR to RGB during preprocessing; default true
> > * **l2_normalize**(bool): whether to L2-normalize the face embedding before output; default false
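For example, these parameters could be adjusted after constructing the model (a hypothetical fragment; the member names are those documented above):

```cpp
auto model = fastdeploy::vision::faceid::AdaFace(model_file, params_file);
// Resize to 112x112 (the default), keep the BGR->RGB swap on,
// and request L2-normalized embeddings so that cosine similarity
// reduces to a plain dot product.
model.size = {112, 112};
model.swap_rb = true;
model.l2_normalize = true;
```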
- [模型介绍](../../)
- [Python部署](../python)
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
- [如何切换模型推理后端引擎](../../../../../docs/runtime/how_to_change_backend.md)
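The demo compares embeddings with `fastdeploy::vision::utils::CosineSimilarity`. Assuming the standard dot-product-over-norms definition (an assumption; the SDK source may differ in details), the computation can be sketched as:

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// Standard cosine similarity between two embedding vectors.
// If the model ran with l2_normalize=true, the vectors already have
// unit length and the dot product alone suffices.
float CosineSimilarity(const std::vector<float>& a,
                       const std::vector<float>& b,
                       bool normalized = false) {
  float dot = std::inner_product(a.begin(), a.end(), b.begin(), 0.0f);
  if (normalized) return dot;
  float na = std::sqrt(std::inner_product(a.begin(), a.end(), a.begin(), 0.0f));
  float nb = std::sqrt(std::inner_product(b.begin(), b.end(), b.begin(), 0.0f));
  return dot / (na * nb);
}
```

A similarity near 1 means the two faces are likely the same identity; near 0, unrelated.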
152 changes: 152 additions & 0 deletions examples/vision/faceid/adaface/cpp/infer.cc
@@ -0,0 +1,152 @@
/***************************************************************************
*
* Copyright (c) 2021 Baidu.com, Inc. All Rights Reserved
*
**************************************************************************/

/**
* @author Baidu
* @brief demo_image_inference
*
**/
#include "fastdeploy/vision.h"

void CpuInfer(const std::string &model_file, const std::string &params_file,
const std::vector<std::string> &image_file) {
  auto model = fastdeploy::vision::faceid::AdaFace(model_file, params_file);
if (!model.Initialized()) {
std::cerr << "Failed to initialize." << std::endl;
return;
}

cv::Mat face0 = cv::imread(image_file[0]);
cv::Mat face1 = cv::imread(image_file[1]);
cv::Mat face2 = cv::imread(image_file[2]);

fastdeploy::vision::FaceRecognitionResult res0;
fastdeploy::vision::FaceRecognitionResult res1;
fastdeploy::vision::FaceRecognitionResult res2;

  if ((!model.Predict(&face0, &res0)) || (!model.Predict(&face1, &res1)) ||
      (!model.Predict(&face2, &res2))) {
    std::cerr << "Prediction Failed." << std::endl;
    return;
  }

std::cout << "Prediction Done!" << std::endl;

std::cout << "--- [Face 0]:" << res0.Str();
std::cout << "--- [Face 1]:" << res1.Str();
std::cout << "--- [Face 2]:" << res2.Str();

float cosine01 = fastdeploy::vision::utils::CosineSimilarity(
res0.embedding, res1.embedding, model.l2_normalize);
float cosine02 = fastdeploy::vision::utils::CosineSimilarity(
res0.embedding, res2.embedding, model.l2_normalize);
std::cout << "Detect Done! Cosine 01: " << cosine01
<< ", Cosine 02:" << cosine02 << std::endl;
}

void GpuInfer(const std::string &model_file, const std::string &params_file,
const std::vector<std::string> &image_file) {
auto option = fastdeploy::RuntimeOption();
option.UseGpu();
auto model =
fastdeploy::vision::faceid::AdaFace(model_file, params_file, option);
if (!model.Initialized()) {
std::cerr << "Failed to initialize." << std::endl;
return;
}

cv::Mat face0 = cv::imread(image_file[0]);
cv::Mat face1 = cv::imread(image_file[1]);
cv::Mat face2 = cv::imread(image_file[2]);

fastdeploy::vision::FaceRecognitionResult res0;
fastdeploy::vision::FaceRecognitionResult res1;
fastdeploy::vision::FaceRecognitionResult res2;

  if ((!model.Predict(&face0, &res0)) || (!model.Predict(&face1, &res1)) ||
      (!model.Predict(&face2, &res2))) {
    std::cerr << "Prediction Failed." << std::endl;
    return;
  }

std::cout << "Prediction Done!" << std::endl;

std::cout << "--- [Face 0]:" << res0.Str();
std::cout << "--- [Face 1]:" << res1.Str();
std::cout << "--- [Face 2]:" << res2.Str();

float cosine01 = fastdeploy::vision::utils::CosineSimilarity(
res0.embedding, res1.embedding, model.l2_normalize);
float cosine02 = fastdeploy::vision::utils::CosineSimilarity(
res0.embedding, res2.embedding, model.l2_normalize);
std::cout << "Detect Done! Cosine 01: " << cosine01
<< ", Cosine 02:" << cosine02 << std::endl;
}

void TrtInfer(const std::string &model_file, const std::string &params_file,
const std::vector<std::string> &image_file) {
auto option = fastdeploy::RuntimeOption();
option.UseGpu();
option.UseTrtBackend();
option.SetTrtInputShape("data", {1, 3, 112, 112});
auto model =
fastdeploy::vision::faceid::AdaFace(model_file, params_file, option);
if (!model.Initialized()) {
std::cerr << "Failed to initialize." << std::endl;
return;
}

cv::Mat face0 = cv::imread(image_file[0]);
cv::Mat face1 = cv::imread(image_file[1]);
cv::Mat face2 = cv::imread(image_file[2]);

fastdeploy::vision::FaceRecognitionResult res0;
fastdeploy::vision::FaceRecognitionResult res1;
fastdeploy::vision::FaceRecognitionResult res2;

  if ((!model.Predict(&face0, &res0)) || (!model.Predict(&face1, &res1)) ||
      (!model.Predict(&face2, &res2))) {
    std::cerr << "Prediction Failed." << std::endl;
    return;
  }

std::cout << "Prediction Done!" << std::endl;

std::cout << "--- [Face 0]:" << res0.Str();
std::cout << "--- [Face 1]:" << res1.Str();
std::cout << "--- [Face 2]:" << res2.Str();

float cosine01 = fastdeploy::vision::utils::CosineSimilarity(
res0.embedding, res1.embedding, model.l2_normalize);
float cosine02 = fastdeploy::vision::utils::CosineSimilarity(
res0.embedding, res2.embedding, model.l2_normalize);
std::cout << "Detect Done! Cosine 01: " << cosine01
<< ", Cosine 02:" << cosine02 << std::endl;
}

int main(int argc, char *argv[]) {
  if (argc < 7) {
    std::cout << "Usage: infer_demo path/to/model path/to/params "
                 "image0 image1 image2 run_option, "
                 "e.g. ./infer_demo mobilefacenet_adaface.pdmodel "
                 "mobilefacenet_adaface.pdiparams "
                 "test_lite_focal_arcface_0.JPG test_lite_focal_arcface_1.JPG "
                 "test_lite_focal_arcface_2.JPG 0"
              << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend."
              << std::endl;
    return -1;
  }

std::vector<std::string> image_files = {argv[3], argv[4], argv[5]};
if (std::atoi(argv[6]) == 0) {
std::cout << "use CpuInfer" << std::endl;
CpuInfer(argv[1], argv[2], image_files);
} else if (std::atoi(argv[6]) == 1) {
GpuInfer(argv[1], argv[2], image_files);
} else if (std::atoi(argv[6]) == 2) {
TrtInfer(argv[1], argv[2], image_files);
}
return 0;
}