[Backend And DOC] Improve the PPSeg docs + add multi-input model support to the RKNPU2 backend (PaddlePaddle#491)
* 11-02/14:35
* Add error checking for the input data format
* Optimize the inference path to reduce the number of memory allocations
* Support multi-input RKNN models
* When an RKNN model's output shape is 3-D, RKNN force-aligns it to 4-D. The dimension RKNN adds is now stripped, so models whose post-processing inspects the output shape can run their post-processing correctly (see the illustrative sketch after this list).

* 11-03/17:25
* Support exporting multi-input RKNN models
* Update the related documentation
* Switch the PPSeg example to convert models provided by FastDeploy

* 11-03/17:25
* Add open-source license headers

* 11-03/21:48
* Remove unused debug code and add comments
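The shape note in the last bullet of the 11-02 batch is easier to see in code. The following is an illustrative sketch only, not the backend implementation; the helper name and the example shapes are invented, and which axis RKNN actually pads may differ.

```python
# Illustrative sketch: drop the padding axis RKNN adds when it aligns a
# logically 3-D output to 4-D, so shape-sensitive post-processing sees
# the rank the original ONNX model declared.
import numpy as np

def strip_rknn_padding(output: np.ndarray, expected_ndim: int) -> np.ndarray:
    """Remove size-1 axes until the output matches the expected rank."""
    while output.ndim > expected_ndim:
        pad_axes = [i for i, d in enumerate(output.shape) if d == 1]
        if not pad_axes:
            break  # nothing safe to strip
        output = np.squeeze(output, axis=pad_axes[-1])
    return output

# Example: RKNN returns (1, 512, 512, 1) for an output declared as (1, 512, 512).
aligned = np.zeros((1, 512, 512, 1), dtype=np.float32)
print(strip_rknn_padding(aligned, expected_ndim=3).shape)  # -> (1, 512, 512)
```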
Zheng-Bicheng authored Nov 4, 2022
1 parent a36d49a commit ce828ec
Showing 12 changed files with 302 additions and 174 deletions.
4 changes: 3 additions & 1 deletion docs/cn/faq/rknpu2/rknpu2.md
@@ -1,5 +1,8 @@
# RKNPU2 model deployment

## Environment setup
RKNN model export is only supported on x86 Linux. For the installation procedure, see the [RKNPU2 model export environment setup guide](./install_rknn_toolkit2.md).

## Converting ONNX models to RKNN models
An ONNX model cannot directly use the NPU on Rockchip SoCs; it must first be converted to an RKNN model. See the [conversion guide](./export.md) for the detailed procedure.
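As a rough picture of what that conversion involves, here is a minimal sketch using the rknn-toolkit2 Python API (FastDeploy's tools/rknpu2/export.py presumably drives a similar sequence; the file paths, mean/std values, and target platform below are placeholder assumptions, and the calls should be checked against your installed toolkit version):

```python
# Illustrative ONNX -> RKNN conversion sketch with rknn-toolkit2.
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Normalization is folded into the RKNN model at conversion time,
# which is why FastDeploy disables host-side normalization later on.
rknn.config(mean_values=[[127.5, 127.5, 127.5]],
            std_values=[[127.5, 127.5, 127.5]],
            target_platform="rk3588")

assert rknn.load_onnx(model="model.onnx") == 0
assert rknn.build(do_quantization=False) == 0
assert rknn.export_rknn("model.rknn") == 0
rknn.release()
```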

@@ -61,4 +64,3 @@ int infer_scrfd_npu() {
- [RKNPU2 board environment setup](../../build_and_install/rknpu2.md)
- [rknn_toolkit2 installation guide](./install_rknn_toolkit2.md)
- [ONNX to RKNN conversion guide](./export.md)

132 changes: 93 additions & 39 deletions examples/vision/segmentation/paddleseg/rknpu2/README.md
@@ -4,49 +4,103 @@

- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)

FastDeploy currently supports deployment of the following models
FastDeploy currently supports deploying the following PPSeg models through the RKNPU2 backend:

- [U-Net series models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/unet/README.md)
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
- [PP-HumanSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/README.md)
- [FCN series models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/fcn/README.md)
- [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/deeplabv3/README.md)

[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, see [Matting model deployment](../../matting).
| Model | Parameter file size | Input shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:------|:--------------------|:------------|:-----|:------------|:---------------|
| [Unet-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
| [PP-HumanSegV1-Lite (general human segmentation)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
| [PP-HumanSegV2-Lite (general human segmentation)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
| [PP-HumanSegV2-Mobile (general human segmentation)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
| [PP-HumanSegV1-Server (general human segmentation)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
| [Portrait-PP-HumanSegV2_Lite (portrait segmentation)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6MB | 256x144 | 96.63% | - | - |
| [FCN-HRNet-W18-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |

## Preparing and converting the PaddleSeg deployment model
Before deploying on the RKNPU, the Paddle model must be converted to an RKNN model. The steps are:
* Convert the Paddle dygraph model to an ONNX model; see the [PaddleSeg model export guide](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg).
* Convert the ONNX model to an RKNN model; see the [conversion guide](../../../../../docs/cn/faq/rknpu2/export.md).

## Model conversion example

The following walks through converting a PPSeg model to an RKNN model, using Portrait-PP-HumanSegV2_Lite (a portrait segmentation model) as the example.
```bash
# Clone the Paddle2ONNX repository
git clone https://github.com/PaddlePaddle/Paddle2ONNX

# Download the Paddle static graph model and fix its input shape
## Enter the directory of the input-shape fixing tool
cd Paddle2ONNX/tools/paddle
## Download and extract the Paddle static graph model
wget https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz
tar xvf Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz
python paddle_infer_shape.py --model_dir Portrait_PP_HumanSegV2_Lite_256x144_infer/ \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_dir Portrait_PP_HumanSegV2_Lite_256x144_infer \
--input_shape_dict="{'x':[1,3,144,256]}"

# Convert the static graph model to ONNX. Note: save_file must match the archive name
paddle2onnx --model_dir Portrait_PP_HumanSegV2_Lite_256x144_infer \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_file Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer.onnx \
--enable_dev_version True

# Convert the ONNX model to an RKNN model
# Copy the ONNX model directory to the FastDeploy root directory
cp -r ./Portrait_PP_HumanSegV2_Lite_256x144_infer /path/to/Fastdeploy
# Convert the model; it will be generated under the Portrait_PP_HumanSegV2_Lite_256x144_infer directory
python tools/rknpu2/export.py --config_path tools/rknpu2/config/RK3588/Portrait_PP_HumanSegV2_Lite_256x144_infer.yaml
```

## Modifying the yaml configuration file

In the **model conversion example** above, the model's input shape was fixed, so the corresponding yaml file must be modified to match (a programmatic alternative is sketched after the modified yaml file below):

**Original yaml file**
```yaml
Deploy:
input_shape:
- -1
- 3
- -1
- -1
model: model.pdmodel
output_dtype: float32
output_op: none
params: model.pdiparams
transforms:
- target_size:
- 256
- 144
type: Resize
- type: Normalize
```
Before deploying on the RKNPU, the model must be converted to an RKNN model. The process can generally be simplified to the following steps:
* Paddle dygraph model -> ONNX model -> RKNN model.
* For converting the Paddle dygraph model to an ONNX model, see the [PaddleSeg model export guide](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg).
* For converting the ONNX model to an RKNN model, see the [conversion guide](../../../../../docs/cn/faq/rknpu2/export.md).
Taking PP-HumanSeg as an example, once the ONNX model is available, the steps to convert it for RK3588 are:
* Write a config.yaml file
```yaml
model_path: ./portrait_pp_humansegv2_lite_256x144_pretrained.onnx
output_folder: ./
target_platform: RK3588
normalize:
mean: [0.5,0.5,0.5]
std: [0.5,0.5,0.5]
outputs: None
```
* Run the conversion script
```bash
python /path/to/fastDeploy/toosl/export.py --config_path=/path/to/fastdeploy/tools/rknpu2/config/ppset_config.yaml
```

## Downloading pretrained models

For convenient testing, some models exported from PaddleSeg are provided below (exported with `--input_shape`, `--output_op none`, and `--without_argmax` specified); developers can download and use them directly.

| Task | Model | Model version (tested) | Size | ONNX/RKNN supported | ONNX/RKNN latency (ms) |
|------|-------|------------------------|------|---------------------|------------------------|
| Segmentation | PP-LiteSeg | [PP_LiteSeg_T_STDC1_cityscapes](https://bj.bcebos.com/fastdeploy/models/rknn2/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_3588.tgz) | - | True/True | 6634/5598 |
| Segmentation | PP-HumanSegV2Lite | [portrait](https://bj.bcebos.com/fastdeploy/models/rknn2/portrait_pp_humansegv2_lite_256x144_inference_model_without_softmax_3588.tgz) | - | True/True | 456/266 |
| Segmentation | PP-HumanSegV2Lite | [human](https://bj.bcebos.com/fastdeploy/models/rknn2/human_pp_humansegv2_lite_192x192_pretrained_3588.tgz) | - | True/True | 496/256 |
**Modified yaml file**
```yaml
Deploy:
input_shape:
- 1
- 3
- 144
- 256
model: model.pdmodel
output_dtype: float32
output_op: none
params: model.pdiparams
transforms:
- target_size:
- 256
- 144
type: Resize
- type: Normalize
```
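As referenced above, the same edit can be applied programmatically rather than by hand. The following is a minimal sketch using PyYAML; the deploy.yaml path and the target shape are assumptions taken from this example.

```python
# Illustrative helper, not part of FastDeploy: rewrite the fixed input
# shape in the exported deploy.yaml so it matches the RKNN model.
import yaml  # pip install pyyaml

config_path = "Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml"  # assumed location

with open(config_path) as f:
    cfg = yaml.safe_load(f)

# Replace the dynamic [-1, 3, -1, -1] shape with the fixed NCHW shape.
cfg["Deploy"]["input_shape"] = [1, 3, 144, 256]

with open(config_path, "w") as f:
    yaml.safe_dump(cfg, f, default_flow_style=False)
```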
## Detailed deployment docs
- [RKNN deployment overview](../../../../../docs/cn/faq/rknpu2.md)
- [RKNN deployment overview](../../../../../docs/cn/faq/rknpu2/rknpu2.md)
- [C++ deployment](cpp)
- [Python deployment](python)
- [Python deployment](python)
10 changes: 2 additions & 8 deletions examples/vision/segmentation/paddleseg/rknpu2/cpp/README.md
@@ -41,13 +41,7 @@ the fastdeploy-0.0.3 directory; please move it into the thirdpartys directory.

### Copy the model and configuration files into the model folder
During the Paddle dygraph model -> Paddle static graph model -> ONNX model conversion, an ONNX file and the corresponding yaml configuration file are generated; place the configuration file in the model folder.
The converted RKNN model file must also be copied into model. A pre-converted file is provided; run the following commands to download it (the model targets RK3588; RK3568 requires [re-converting the PPSeg RKNN model](../README.md)).
```bash
cd model
wget https://bj.bcebos.com/fastdeploy/models/rknn2/human_pp_humansegv2_lite_192x192_pretrained_3588.tgz
tar xvf human_pp_humansegv2_lite_192x192_pretrained_3588.tgz
cp -r ./human_pp_humansegv2_lite_192x192_pretrained_3588 ./model
```
The converted RKNN model file must also be copied into model; download it with the command below (the model targets RK3588; RK3568 requires [re-converting the PPSeg RKNN model](../README.md)).

### Prepare a test image in the image folder
```bash
@@ -81,4 +75,4 @@ RKNPU requires model inputs in NHWC format, and the image normalization

- [Model description](../../)
- [Python deployment](../python)
- [PPSeg RKNN model conversion guide](../README.md)
- [PPSeg RKNN model conversion guide](../README.md)
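The NHWC/normalization note in the hunk above is why the examples call disable_normalize_and_permute: mean/std normalization and the HWC-to-CHW permute are baked into the RKNN model at conversion time, so host-side preprocessing must be switched off. A minimal sketch using the FastDeploy Python API that appears later in this commit (the file layout and the use_rknpu2 option call are assumptions; check build_option in infer.py for the exact option setup):

```python
# Minimal sketch: load the converted RKNN model with FastDeploy and
# disable host-side normalize/permute, which the RKNN model already performs.
import cv2
import fastdeploy as fd

model_dir = "./model/Portrait_PP_HumanSegV2_Lite_256x144_infer"  # assumed layout
option = fd.RuntimeOption()
option.use_rknpu2()  # assumed helper; see build_option() in infer.py for the exact call

model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn",
    "",  # RKNN models need no separate params file
    model_dir + "/deploy.yaml",
    runtime_option=option,
    model_format=fd.ModelFormat.RKNN)

# Preprocessing (normalize + HWC->CHW permute) is already inside the RKNN model.
model.disable_normalize_and_permute()

im = cv2.imread("images/portrait_heng.jpg")
result = model.predict(im)
print(result)
```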
19 changes: 16 additions & 3 deletions examples/vision/segmentation/paddleseg/rknpu2/cpp/infer.cc
@@ -1,3 +1,16 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <iostream>
#include <string>
#include "fastdeploy/vision.h"
@@ -40,11 +53,11 @@ std::string GetModelPath(std::string& model_path, const std::string& device) {

void InferHumanPPHumansegv2Lite(const std::string& device) {
std::string model_file =
"./model/human_pp_humansegv2_lite_192x192_pretrained_3588/"
"human_pp_humansegv2_lite_192x192_pretrained_3588.";
"./model/Portrait_PP_HumanSegV2_Lite_256x144_infer/"
"Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.";
std::string params_file;
std::string config_file =
"./model/human_pp_humansegv2_lite_192x192_pretrained_3588/deploy.yaml";
"./model/Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml";

fastdeploy::RuntimeOption option = GetOption(device);
fastdeploy::ModelFormat format = GetFormat(device);
10 changes: 3 additions & 7 deletions examples/vision/segmentation/paddleseg/rknpu2/python/README.md
@@ -2,7 +2,7 @@

Before deployment, confirm the following two steps

- 1. The hardware and software environment meets the requirements; see [FastDeploy environment requirements](../../../../../../docs/cn/build_and_install/rknpu2.md)
- 1. The hardware and software environment meets the requirements; see [FastDeploy environment requirements](../../../../../../docs/cn/build_and_install/rknpu2.md)

[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, see [Matting model deployment](../../../matting)

@@ -13,17 +13,13 @@
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python

# Download the model
wget https://bj.bcebos.com/fastdeploy/models/rknn2/human_pp_humansegv2_lite_192x192_pretrained_3588.tgz
tar xvf human_pp_humansegv2_lite_192x192_pretrained_3588.tgz

# Download the test images
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip images.zip

# Run inference
python3 infer.py --model_file ./human_pp_humansegv2_lite_192x192_pretrained_3588/human_pp_humansegv2_lite_192x192_pretrained_3588.rknn \
--config_file ./human_pp_humansegv2_lite_192x192_pretrained_3588/deploy.yaml \
python3 infer.py --model_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn \
--config_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml \
--image images/portrait_heng.jpg
```

19 changes: 18 additions & 1 deletion examples/vision/segmentation/paddleseg/rknpu2/python/infer.py
@@ -1,3 +1,16 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import fastdeploy as fd
import cv2
import os
@@ -30,7 +43,11 @@ def build_option(args):
params_file = ""
config_file = args.config_file
model = fd.vision.segmentation.PaddleSegModel(
model_file, params_file, config_file, runtime_option=runtime_option,model_format=fd.ModelFormat.RKNN)
model_file,
params_file,
config_file,
runtime_option=runtime_option,
model_format=fd.ModelFormat.RKNN)

model.disable_normalize_and_permute()
