[Model] Support BlazeFace Model (PaddlePaddle#1172)
* fit yolov7face file path
* TODO: add the yolov7face Python Predict interface
* resolve yolov7face.py
* resolve yolov7face.py
* resolve yolov7face.py
* add yolov7face example readme file
* [Doc] fix yolov7face example readme file
* [Doc] fix yolov7face example readme file
* support BlazeFace
* add blazeface readme file
* fix review problem
* fix code style error
* fix review problem
* fix review problem
* fix head file problem
* fix review problem
* fix review problem
* fix readme file problem
* add English readme file
* fix English readme file
Showing 21 changed files with 1,518 additions and 0 deletions.
@@ -0,0 +1,34 @@
English | [简体中文](README_CN.md)

# BlazeFace Ready-to-deploy Model

- The BlazeFace deployment model comes from [BlazeFace](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection), together with the [pre-trained models based on WiderFace](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection).
  - (1) The *.pdparams files provided in the [official repository](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/tools) can be deployed after running [export_model.py](#export-paddle-model);
  - (2) Developers can train a BlazeFace model on their own data, export it with [export_model.py](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/tools/export_model.py), and then complete the deployment.

## Export PADDLE model

Visit the [BlazeFace](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection) GitHub repository, download and install it following the instructions there, download the `.yml` config and `.pdparams` model parameters, and use `export_model.py` to obtain the `paddle` model files `.yml`, `.pdiparams`, `.pdmodel`.

* Download the BlazeFace model parameter files

| Network structure | Input size | Images/GPU | Learning rate schedule | Easy/Medium/Hard Set | Prediction latency (SD855) | Model size (MB) | Download | Config file |
|:------------:|:--------:|:----:|:-------:|:-------:|:---------:|:----------:|:---------:|:--------:|
| BlazeFace | 640 | 8 | 1000e | 0.885 / 0.855 / 0.731 | - | 0.472 | [Download link](https://paddledet.bj.bcebos.com/models/blazeface_1000e.pdparams) | [Config file](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection/blazeface_1000e.yml) |
| BlazeFace-FPN-SSH | 640 | 8 | 1000e | 0.907 / 0.883 / 0.793 | - | 0.479 | [Download link](https://paddledet.bj.bcebos.com/models/blazeface_fpn_ssh_1000e.pdparams) | [Config file](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection/blazeface_fpn_ssh_1000e.yml) |

* Export the paddle-format files
```bash
python tools/export_model.py -c configs/face_detection/blazeface_1000e.yml -o weights=blazeface_1000e.pdparams --export_serving_model=True
```

## Detailed Deployment Tutorials

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Note

- This tutorial and related code are written based on [BlazeFace](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection)
@@ -0,0 +1,31 @@
# BlazeFace Ready-to-deploy Model

- The BlazeFace deployment model comes from [BlazeFace](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection), together with the [pre-trained models based on WiderFace](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection).
  - (1) The *.pdparams files provided in the [official repository](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/tools) can be deployed after running [export_model.py](#export-paddle-model);
  - (2) Developers can train a BlazeFace model on their own data, export it with [export_model.py](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/tools/export_model.py), and then complete the deployment.

## Export PADDLE model

Visit the [BlazeFace](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection) GitHub repository, download and install it following the instructions there, download the `.yml` config and `.pdparams` model parameters, and use `export_model.py` to obtain the `paddle` model files `.yml`, `.pdiparams`, `.pdmodel`.

* Download the BlazeFace model parameter files

| Network structure | Input size | Images/GPU | Learning rate schedule | Easy/Medium/Hard Set | Prediction latency (SD855) | Model size (MB) | Download | Config file |
|:------------:|:--------:|:----:|:-------:|:-------:|:---------:|:----------:|:---------:|:--------:|
| BlazeFace | 640 | 8 | 1000e | 0.885 / 0.855 / 0.731 | - | 0.472 | [Download link](https://paddledet.bj.bcebos.com/models/blazeface_1000e.pdparams) | [Config file](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection/blazeface_1000e.yml) |
| BlazeFace-FPN-SSH | 640 | 8 | 1000e | 0.907 / 0.883 / 0.793 | - | 0.479 | [Download link](https://paddledet.bj.bcebos.com/models/blazeface_fpn_ssh_1000e.pdparams) | [Config file](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection/blazeface_fpn_ssh_1000e.yml) |

* Export the paddle-format files
```bash
python tools/export_model.py -c configs/face_detection/blazeface_1000e.yml -o weights=blazeface_1000e.pdparams --export_serving_model=True
```

## Detailed Deployment Tutorials

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Note

- This document and code are written based on [BlazeFace](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/face_detection)
@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

# Specify the path to the FastDeploy library after downloading it
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link against the FastDeploy library
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
@@ -0,0 +1,78 @@
English | [简体中文](README_CN.md)

# BlazeFace C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes the deployment of BlazeFace on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz  # x.x.x >= 1.0.4
tar xvf fastdeploy-linux-x64-x.x.x.tgz  # x.x.x >= 1.0.4
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x  # x.x.x >= 1.0.4
make -j

# Download the officially converted BlazeFace model files and a test image
wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/lite/resources/test_lite_face_detector_3.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/blzeface-1000e.tgz

# Use the blazeface-1000e model
# CPU inference
./infer_demo blazeface-1000e/ test_lite_face_detector_3.jpg 0
# GPU inference
./infer_demo blazeface-1000e/ test_lite_face_detector_3.jpg 1
```

The visualized result after running is as follows:

<img width="640" src="https://user-images.githubusercontent.com/49013063/206170111-843febb6-67d6-4c46-a121-d87d003bba21.jpg">

The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## BlazeFace C++ Interface

### BlazeFace Class

```c++
fastdeploy::vision::facedet::BlazeFace(
  const string& model_file,
  const string& params_file = "",
  const string& config_file = "",
  const RuntimeOption& runtime_option = RuntimeOption(),
  const ModelFormat& model_format = ModelFormat::PADDLE)
```

Loads and initializes the BlazeFace model, where `model_file` is the exported PADDLE-format model.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string only when the model is in ONNX format
> * **config_file**(str): Config file path. Pass an empty string only when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. PADDLE format by default

#### Predict Function

> ```c++
> BlazeFace::Predict(cv::Mat& im, FaceDetectionResult* result)
> ```
>
> Model prediction interface: takes an input image and directly outputs the detection results.
>
> **Parameters**
>
> > * **im**: Input image in HWC, BGR format
> > * **result**: Detection results, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of FaceDetectionResult
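
Putting the constructor and `Predict` together, the following is a minimal usage sketch based only on the interfaces documented above and on `infer.cc` in this directory; the model directory and image name are placeholders taken from the commands earlier in this README.

```c++
#include <iostream>
#include <string>

#include "fastdeploy/vision.h"

int main() {
  // Placeholder paths: point these at the exported BlazeFace files and a test image.
  std::string model_dir = "blazeface-1000e";
  auto model = fastdeploy::vision::facedet::BlazeFace(
      model_dir + "/model.pdmodel", model_dir + "/model.pdiparams",
      model_dir + "/infer_cfg.yml");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }

  auto im = cv::imread("test_lite_face_detector_3.jpg");
  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;  // prints the detected boxes and scores
  return 0;
}
```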
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
@@ -0,0 +1,77 @@
[English](README.md) | 简体中文

# BlazeFace C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes the deployment of BlazeFace on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz  # x.x.x >= 1.0.4
tar xvf fastdeploy-linux-x64-x.x.x.tgz  # x.x.x >= 1.0.4
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x  # x.x.x >= 1.0.4
make -j

# Download the officially converted BlazeFace model files and a test image
wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/lite/resources/test_lite_face_detector_3.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/blzeface-1000e.tgz

# Use the blazeface-1000e model
# CPU inference
./infer_demo blazeface-1000e/ test_lite_face_detector_3.jpg 0
# GPU inference
./infer_demo blazeface-1000e/ test_lite_face_detector_3.jpg 1
```

The visualized result after running is as follows:

<img width="640" src="https://user-images.githubusercontent.com/49013063/206170111-843febb6-67d6-4c46-a121-d87d003bba21.jpg">

The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## BlazeFace C++ Interface

### BlazeFace Class

```c++
fastdeploy::vision::facedet::BlazeFace(
  const string& model_file,
  const string& params_file = "",
  const string& config_file = "",
  const RuntimeOption& runtime_option = RuntimeOption(),
  const ModelFormat& model_format = ModelFormat::PADDLE)
```

Loads and initializes the BlazeFace model, where `model_file` is the exported PADDLE-format model.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string only when the model is in ONNX format
> * **config_file**(str): Config file path. Pass an empty string only when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. PADDLE format by default

#### Predict Function

> ```c++
> BlazeFace::Predict(cv::Mat& im, FaceDetectionResult* result)
> ```
>
> Model prediction interface: takes an input image and directly outputs the detection results.
>
> **Parameters**
>
> > * **im**: Input image, which must be in HWC, BGR format
> > * **result**: Detection results, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of FaceDetectionResult
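
As a complement to the interface description above, here is a short sketch, adapted from `infer.cc` in this directory, that selects the GPU backend through `RuntimeOption` and saves a visualization of the prediction; the file paths are placeholders matching the commands earlier in this README.

```c++
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  // Select the GPU backend; drop UseGpu() to stay on CPU.
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();

  // Placeholder paths for the exported BlazeFace model files.
  auto model = fastdeploy::vision::facedet::BlazeFace(
      "blazeface-1000e/model.pdmodel", "blazeface-1000e/model.pdiparams",
      "blazeface-1000e/infer_cfg.yml", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }

  auto im = cv::imread("test_lite_face_detector_3.jpg");
  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }

  // Draw the detected faces and save the visualization.
  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  return 0;
}
```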
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
@@ -0,0 +1,94 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include <iostream>

#include "fastdeploy/vision.h"

#ifdef WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif

void CpuInfer(const std::string& model_dir, const std::string& image_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "infer_cfg.yml";
  auto option = fastdeploy::RuntimeOption();
  option.UseCpu();
  auto model = fastdeploy::vision::facedet::BlazeFace(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  // Visualize the detection result and save it to disk.
  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

void GpuInfer(const std::string& model_dir, const std::string& image_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "infer_cfg.yml";
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  auto model = fastdeploy::vision::facedet::BlazeFace(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  // Visualize the detection result and save it to disk.
  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
                 "e.g. ./infer_demo ./blazeface-1000e ./test.jpeg 0"
              << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend."
              << std::endl;
    return -1;
  }

  if (std::atoi(argv[3]) == 0) {
    CpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 1) {
    GpuInfer(argv[1], argv[2]);
  }
  return 0;
}
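
The usage message above mentions run_option 2 (GPU with the TensorRT backend), but `main` only dispatches on 0 and 1. Below is a hedged sketch of what a `TrtInfer` variant, added alongside `CpuInfer`/`GpuInfer` in the file above, could look like; it assumes the `RuntimeOption::UseTrtBackend()` helper used in other FastDeploy examples and is an illustration rather than part of this commit.

```c++
// Hedged sketch: a possible TensorRT path for run_option 2, mirroring GpuInfer.
// Assumes RuntimeOption::UseTrtBackend() as used in other FastDeploy examples.
void TrtInfer(const std::string& model_dir, const std::string& image_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "infer_cfg.yml";
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();  // switch the GPU backend to TensorRT
  auto model = fastdeploy::vision::facedet::BlazeFace(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

// main would then add one more branch:
//   } else if (std::atoi(argv[3]) == 2) {
//     TrtInfer(argv[1], argv[2]);
//   }
```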