[Model] Add picodet for RV1126 and A311D (PaddlePaddle#1549)
* add infer for picodet

* update code

* update lite lib

---------

Co-authored-by: DefTruth <[email protected]>
yeliang2258 and DefTruth authored Apr 10, 2023
1 parent cc4bbf2 commit e2f5a9c
Showing 16 changed files with 209 additions and 50 deletions.
4 changes: 2 additions & 2 deletions cmake/toolchain.cmake
@@ -14,7 +14,7 @@ if (DEFINED TARGET_ABI)
set(OPENCV_URL "https://bj.bcebos.com/fastdeploy/third_libs/opencv-linux-armv7hf-4.6.0.tgz")
set(OPENCV_FILENAME "opencv-linux-armv7hf-4.6.0")
if(WITH_TIMVX)
set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux-armhf-timvx-20221229.tgz")
set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux-armhf-timvx-20230316.tgz")
else()
message(STATUS "PADDLELITE_URL will be configured if WITH_TIMVX=ON.")
endif()
@@ -33,7 +33,7 @@ if (DEFINED TARGET_ABI)
set(OPENCV_URL "https://bj.bcebos.com/fastdeploy/third_libs/opencv-linux-aarch64-4.6.0.tgz")
set(OPENCV_FILENAME "opencv-linux-aarch64-4.6.0")
if(WITH_TIMVX)
set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux-aarch64-timvx-20221209.tgz")
set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux-aarch64-timvx-20230316.tgz")
else()
set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux-arm64-20221209.tgz")
endif()
4 changes: 2 additions & 2 deletions examples/vision/detection/paddledetection/a311d/README.md
@@ -1,12 +1,12 @@
English | [简体中文](README_CN.md)
# Deploy PP-YOLOE Quantification Model on A311D
Now FastDeploy supports the deployment of PP-YOLOE quantification model to A311D on Paddle Lite.
Now FastDeploy supports the deployment of PP-YOLOE and PicoDet quantification model to A311D on Paddle Lite.

For model quantification and download, refer to [Model Quantification](../quantize/README.md)


## Detailed Deployment Tutorials

Only C++ deployment is supported on A311D
Only C++ deployment is supported on A311D

- [C++ deployment](cpp)
2 changes: 1 addition & 1 deletion examples/vision/detection/paddledetection/a311d/README_CN.md
100644 → 100755
@@ -1,6 +1,6 @@
[English](README.md) | 简体中文
# PP-YOLOE 量化模型在 A311D 上的部署
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 量化模型到 A311D 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 和 PicoDet 量化模型到 A311D 上。

模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)

examples/vision/detection/paddledetection/a311d/cpp/CMakeLists.txt
@@ -10,13 +10,16 @@ include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
include_directories(${FASTDEPLOY_INCS})
include_directories(${FastDeploy_INCLUDE_DIRS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer_ppyoloe.cc)
add_executable(ppyoloe_infer_demo ${PROJECT_SOURCE_DIR}/infer_ppyoloe.cc)
add_executable(picodet_infer_demo ${PROJECT_SOURCE_DIR}/infer_picodet.cc)
# 添加FastDeploy库依赖
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
target_link_libraries(ppyoloe_infer_demo ${FASTDEPLOY_LIBS})
target_link_libraries(picodet_infer_demo ${FASTDEPLOY_LIBS})

set(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/build/install)

install(TARGETS infer_demo DESTINATION ./)
install(TARGETS ppyoloe_infer_demo DESTINATION ./)
install(TARGETS picodet_infer_demo DESTINATION ./)

install(DIRECTORY models DESTINATION ./)
install(DIRECTORY images DESTINATION ./)
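The two executables registered above share one inference flow; the only difference is which FastDeploy detection class each demo constructs. A minimal sketch of that common pattern, assuming the `PPYOLOE` class used by `infer_ppyoloe.cc` (only partially visible in this diff); `PicoDet` appears in the new `infer_picodet.cc` further down:

```cpp
#include <cassert>
#include <iostream>
#include <string>

#include "fastdeploy/vision.h"

// Shared structure of the two demos: build a TimVX runtime option, point it at
// the subgraph partition file, then construct either PPYOLOE or PicoDet.
template <typename DetModel>
void RunDemo(const std::string& model_dir, const std::string& image_file) {
  fastdeploy::RuntimeOption option;
  option.UseTimVX();  // run the quantized subgraphs on the NPU via Paddle Lite
  option.SetLiteSubgraphPartitionPath(model_dir + "/subgraph.txt");

  DetModel model(model_dir + "/model.pdmodel", model_dir + "/model.pdiparams",
                 model_dir + "/infer_cfg.yml", option);
  assert(model.Initialized());

  auto im = cv::imread(image_file);
  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;
}

// RunDemo<fastdeploy::vision::detection::PPYOLOE>(dir, img);  // ppyoloe_infer_demo flow
// RunDemo<fastdeploy::vision::detection::PicoDet>(dir, img);  // picodet_infer_demo flow
```

Keeping the demos as two separate binaries means `run_with_adb.sh` stays unchanged: its first argument simply selects which demo to push and run.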
24 changes: 15 additions & 9 deletions examples/vision/detection/paddledetection/a311d/cpp/README.md
@@ -1,23 +1,23 @@
English | [简体中文](README_CN.md)
# PP-YOLOE Quantitative Model C++ Deployment Example
# PP-YOLOE Quantitative Model C++ Deployment Example

`infer.cc` in this directory can help you quickly complete the inference acceleration of PP-YOLOE quantization model deployment on A311D.
`infer_ppyoloe.cc` and `infer_picodet.cc` in this directory can help you quickly complete the inference acceleration of PP-YOLOE and PicoDet quantization model deployment on A311D.

## Deployment Preparations
### FastDeploy Cross-compile Environment Preparations
1. For the software and hardware environment, and the cross-compile environment, please refer to [FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction)
1. For the software and hardware environment, and the cross-compile environment, please refer to [Preparations for FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).

### Model Preparations
1. You can directly use the quantized model provided by FastDeploy for deployment.
2. You can use PaddleDetection to export Float32 models, note that you need to set the parameter when exporting model: use_shared_conv=False. For more information: [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe).
2. You can use PaddleDetection to export Float32 models, note that you need to set the parameter when exporting PP-YOLOE model: use_shared_conv=False. For more information: [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe). For more information when exporting PicoDet model: [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet).
3. You can use [one-click automatical compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize model by yourself, and use the generated quantized model for deployment.(Note: The quantized classification model still needs the infer_cfg.yml file in the FP32 model folder. Self-quantized model folder does not contain this yaml file, you can copy it from the FP32 model folder to the quantized model folder.)
4. The model requires heterogeneous computation. Please refer to: [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the model is already provided, you can test the heterogeneous file we provide first to verify whether the accuracy meets the requirements.

For more information, please refer to [Model Quantization](../../quantize/README.md)
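
As a note on step 4 above, the heterogeneous computation file shipped next to the quantized model (`subgraph.txt`) is consumed through the runtime option before the model object is created. A minimal sketch of that call sequence, following the pattern of the demos in this commit (`model_dir` is a placeholder):

```cpp
// Sketch only: point Paddle Lite at the subgraph partition file so the listed
// operators fall back to the CPU while the rest run on the TIM-VX NPU.
fastdeploy::RuntimeOption option;
option.UseTimVX();
option.SetLiteSubgraphPartitionPath(model_dir + "/subgraph.txt");  // heterogeneous computation file
```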

## Deploying the Quantized PP-YOLOE Detection model on A311D
Please follow these steps to complete the deployment of the PP-YOLOE quantization model on A311D.
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)
## Deploying the Quantized PP-YOLOE and PicoDet Detection model on A311D
Please follow these steps to complete the deployment of the PP-YOLOE and PicoDet quantization model on A311D.
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)

2. Copy the compiled library to the current directory. You can run this line:
```bash
@@ -28,9 +28,14 @@ cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/pa
```bash
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
mkdir models && mkdir images
# download PP-YOLOE model
wget https://bj.bcebos.com/fastdeploy/models/ppyoloe_noshare_qat.tar.gz
tar -xvf ppyoloe_noshare_qat.tar.gz
cp -r ppyoloe_noshare_qat models
# download PicoDet model
wget https://bj.bcebos.com/fastdeploy/models/picodet_withNMS_quant_qat.tar.gz
tar -xvf picodet_withNMS_quant_qat.tar.gz
cp -r picodet_withNMS_quant_qat models
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp -r 000000014439.jpg images
```
@@ -45,12 +50,13 @@ make install
# After success, an install folder will be created with a running demo and libraries required for deployment.
```

5. Deploy the PP-YOLOE detection model to A311D based on adb.
5. Deploy the PP-YOLOE and PicoDet detection model to A311D based on adb. You can run the following lines:
```bash
# Go to the install directory.
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp/build/install/
# The following line represents: bash run_with_adb.sh, demo needed to run, model path, image path, DEVICE ID.
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
bash run_with_adb.sh ppyoloe_infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
bash run_with_adb.sh picodet_infer_demo picodet_withNMS_quant_qat 000000014439.jpg $DEVICE_ID
```

The output is:
19 changes: 13 additions & 6 deletions examples/vision/detection/paddledetection/a311d/cpp/README_CN.md
100644 → 100755
@@ -1,22 +1,22 @@
[English](README.md) | 简体中文
# PP-YOLOE 量化模型 C++ 部署示例

本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 A311D 上的部署推理加速。
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 和 PicoDet 量化模型在 A311D 上的部署推理加速。

## 部署准备
### FastDeploy 交叉编译环境准备
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)

### 模型准备
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出 PP-YOLOE 模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe),导出 PicoDet 请参考:[PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。

更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)

## 在 A311D 上部署量化后的 PP-YOLOE 检测模型
请按照以下步骤完成在 A311D 上部署 PP-YOLOE 量化模型:
## 在 A311D 上部署量化后的 PP-YOLOE 和 PicoDet 检测模型
请按照以下步骤完成在 A311D 上部署 PP-YOLOE 和 PicoDet 量化模型:
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)

2. 将编译后的库拷贝到当前目录,可使用如下命令:
@@ -28,9 +28,15 @@ cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/pa
```bash
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
mkdir models && mkdir images
# 下载 FastDeploy 准备的 PP-YOLOE 模型
wget https://bj.bcebos.com/fastdeploy/models/ppyoloe_noshare_qat.tar.gz
tar -xvf ppyoloe_noshare_qat.tar.gz
cp -r ppyoloe_noshare_qat models
# 下载 FastDeploy 准备的 PicoDet 模型
wget https://bj.bcebos.com/fastdeploy/models/picodet_withNMS_quant_qat.tar.gz
tar -xvf picodet_withNMS_quant_qat.tar.gz
cp -r picodet_withNMS_quant_qat models
# 下载 FastDeploy 准备的测试图片
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp -r 000000014439.jpg images
```
@@ -45,12 +51,13 @@ make install
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
```

5. 基于 adb 工具部署 PP-YOLOE 检测模型到晶晨 A311D
5. 基于 adb 工具部署 PP-YOLOE 和 PicoDet 检测模型到 A311D,可使用如下命令:
```bash
# 进入 install 目录
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp/build/install/
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
bash run_with_adb.sh ppyoloe_infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
bash run_with_adb.sh picodet_infer_demo picodet_withNMS_quant_qat 000000014439.jpg $DEVICE_ID
```

部署成功后运行结果如下:
examples/vision/detection/paddledetection/a311d/cpp/infer_picodet.cc
@@ -0,0 +1,64 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision.h"
#ifdef WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif

void InitAndInfer(const std::string& model_dir, const std::string& image_file) {
auto model_file = model_dir + sep + "model.pdmodel";
auto params_file = model_dir + sep + "model.pdiparams";
auto config_file = model_dir + sep + "infer_cfg.yml";
auto subgraph_file = model_dir + sep + "subgraph.txt";
fastdeploy::vision::EnableFlyCV();
fastdeploy::RuntimeOption option;
option.UseTimVX();
option.SetLiteSubgraphPartitionPath(subgraph_file);

auto model = fastdeploy::vision::detection::PicoDet(model_file, params_file,
config_file, option);
assert(model.Initialized());

auto im = cv::imread(image_file);

fastdeploy::vision::DetectionResult res;
if (!model.Predict(im, &res)) {
std::cerr << "Failed to predict." << std::endl;
return;
}

std::cout << res.Str() << std::endl;

auto vis_im = fastdeploy::vision::VisDetection(im, res, 0.3);
cv::imwrite("vis_result.jpg", vis_im);
std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

int main(int argc, char* argv[]) {
if (argc < 3) {
std::cout << "Usage: picodet_infer_demo path/to/quant_model "
"path/to/image "
"e.g ./picodet_infer_demo ./picodet_ptq ./test.jpeg"
<< std::endl;
return -1;
}

std::string model_dir = argv[1];
std::string test_image = argv[2];
InitAndInfer(model_dir, test_image);
return 0;
}
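
A small follow-on sketch (not part of the file above): if the demo needed to process several images, the `PicoDet` object created in `InitAndInfer` could be reused across `Predict` calls without reloading the quantized model or the subgraph file. The snippet below assumes it replaces the single-image block inside `InitAndInfer` (plus an `#include <vector>`); the extra file name is hypothetical:

```cpp
// Hypothetical multi-image variant of the single-image block in InitAndInfer.
std::vector<std::string> image_files = {image_file, "extra_test.jpg"};  // "extra_test.jpg" is made up
for (const auto& file : image_files) {
  auto im = cv::imread(file);
  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict " << file << std::endl;
    continue;
  }
  auto vis_im = fastdeploy::vision::VisDetection(im, res, 0.3);
  cv::imwrite("vis_" + file, vis_im);
}
```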
examples/vision/detection/paddledetection/a311d/cpp/infer_ppyoloe.cc
@@ -24,7 +24,7 @@ void InitAndInfer(const std::string& model_dir, const std::string& image_file) {
auto params_file = model_dir + sep + "model.pdiparams";
auto config_file = model_dir + sep + "infer_cfg.yml";
auto subgraph_file = model_dir + sep + "subgraph.txt";
fastdeploy::vision::EnableFlyCV();
fastdeploy::vision::EnableFlyCV();
fastdeploy::RuntimeOption option;
option.UseTimVX();
option.SetLiteSubgraphPartitionPath(subgraph_file);
@@ -46,14 +46,13 @@ void InitAndInfer(const std::string& model_dir, const std::string& image_file) {
auto vis_im = fastdeploy::vision::VisDetection(im, res, 0.5);
cv::imwrite("vis_result.jpg", vis_im);
std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;

}

int main(int argc, char* argv[]) {
if (argc < 3) {
std::cout << "Usage: infer_demo path/to/quant_model "
std::cout << "Usage: ppyoloe_infer_demo path/to/quant_model "
"path/to/image "
"e.g ./infer_demo ./PPYOLOE_L_quant ./test.jpeg"
"e.g ./ppyoloe_infer_demo ./PPYOLOE_L_quant ./test.jpeg"
<< std::endl;
return -1;
}
4 changes: 2 additions & 2 deletions examples/vision/detection/paddledetection/rv1126/README.md
@@ -1,6 +1,6 @@
English | [简体中文](README_CN.md)
# Deploy PP-YOLOE Quantification Model on RV1126
Now FastDeploy supports the deployment of PP-YOLOE quantification model to RV1126 based on Paddle Lite.
# Deploy PP-YOLOE and PicoDet Quantification Model on RV1126
Now FastDeploy supports the deployment of PP-YOLOE and PicoDet quantification model to RV1126 based on Paddle Lite.

For model quantification and download, refer to [Model Quantification](../quantize/README.md)

4 changes: 2 additions & 2 deletions examples/vision/detection/paddledetection/rv1126/README_CN.md
100644 → 100755
@@ -1,6 +1,6 @@
[English](README.md) | 简体中文
# PP-YOLOE 量化模型在 RV1126 上的部署
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 量化模型到 RV1126 上。
# PP-YOLOE 和 PicoDet 量化模型在 RV1126 上的部署
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 和 PicoDet 量化模型到 RV1126 上。

模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)

examples/vision/detection/paddledetection/rv1126/cpp/CMakeLists.txt
@@ -10,13 +10,16 @@ include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
include_directories(${FASTDEPLOY_INCS})
include_directories(${FastDeploy_INCLUDE_DIRS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer_ppyoloe.cc)
add_executable(ppyoloe_infer_demo ${PROJECT_SOURCE_DIR}/infer_ppyoloe.cc)
add_executable(picodet_infer_demo ${PROJECT_SOURCE_DIR}/infer_picodet.cc)
# 添加FastDeploy库依赖
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
target_link_libraries(ppyoloe_infer_demo ${FASTDEPLOY_LIBS})
target_link_libraries(picodet_infer_demo ${FASTDEPLOY_LIBS})

set(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/build/install)

install(TARGETS infer_demo DESTINATION ./)
install(TARGETS ppyoloe_infer_demo DESTINATION ./)
install(TARGETS picodet_infer_demo DESTINATION ./)

install(DIRECTORY models DESTINATION ./)
install(DIRECTORY images DESTINATION ./)