[Backend] Add RKNPU2 backend support (PaddlePaddle#456)
* 10-29/14:05
* Add cmake
* Add the rknpu2 backend

* 10-29/14:43
* Add RKNPU code to the Runtime fd_type

* 10-29/15:02
* Add ppseg RKNPU2 inference code

* 10-29/15:46
* Add a ppseg RKNPU2 C++ example

* 10-29/15:51
* Add a README document

* 10-29/15:51
* Revise some comments and variable names as requested

* 10-29/15:51
* Fix a bug where, after renaming, some code in the .cc files still used the old function names

* 10-29/22:32
* str(Device::NPU) now prints NPU instead of UNKNOWN
* Fix the comment formatting in the runtime files
* Add ENABLE_RKNPU2_BACKEND to the Building Summary output
* Add rknpu2 support to pybind
* Add a Python build option
* Add PPSeg Python code
* Add and update various documents

* 10-30/14:11
* Try to fix errors when building with CUDA

* 10-30/19:27
* Adjust the level of CpuName and CoreMask
* Adjust the level of the ppseg rknn inference code
* Images are now downloaded over the network

* 10-30/19:39
* Update documentation

* 10-30/19:39
* Update documentation
* Update the function naming in the ppseg rknpu2 example
* Merge the ppseg rknpu2 example into a single .cc file
* Fix a logic error in the disable_normalize_and_permute path
* Remove unused parameters from rknpu2 initialization

* 10-30/19:39
* Try to reset the Python code

* 10-30/10:16
* rknpu2_config.h no longer includes the rknn_api header, to avoid import errors

* 10-31/14:31
* Update pybind to support the latest rknpu2 backend
* Support ppseg Python inference again
* Move the level of cpuname and coremask

* 10-31/15:35
* Try to fix an rknpu2 import error

* 10-31/19:00
* Add RKNPU2 model export code and its documentation
* Fix a large number of documentation errors

* 10-31/19:00
* RKNN2_TARGET_SOC no longer needs to be set again after building the fastdeploy repository

* 10-31/19:26
* Fix some incorrect documentation

* 10-31/19:26
* Restore mistakenly deleted parts
* Fix various documentation errors
* Fix FastDeploy.cmake printing the wrong message when RKNN2_TARGET_SOC is set incorrectly
* Fix Chinese comments remaining in rknpu2_backend.cc

* 10-31/20:45
* Remove unused comments

* 10-31/20:45
* Rename Device::NPU to Device::RKNPU as requested; hardware now shares valid_hardware_backends
* Remove unused comments and debug code

* 11-01/09:45
* Update variable naming

* 11-01/10:16
* Update some documents and function naming

Co-authored-by: Jason <[email protected]>
Zheng-Bicheng and jiangjiajun authored Nov 1, 2022
1 parent bb00e07 commit 4ffcfbe
Showing 37 changed files with 1,567 additions and 74 deletions.
11 changes: 10 additions & 1 deletion CMakeLists.txt
@@ -58,6 +58,7 @@ option(ENABLE_TRT_BACKEND "Whether to enable tensorrt backend." OFF)
option(ENABLE_PADDLE_BACKEND "Whether to enable paddle backend." OFF)
option(ENABLE_POROS_BACKEND "Whether to enable poros backend." OFF)
option(ENABLE_OPENVINO_BACKEND "Whether to enable openvino backend." OFF)
option(ENABLE_RKNPU2_BACKEND "Whether to enable RKNPU2 backend." OFF)
option(ENABLE_LITE_BACKEND "Whether to enable paddle lite backend." OFF)
option(ENABLE_VISION "Whether to enable vision models usage." OFF)
option(ENABLE_TEXT "Whether to enable text models usage." OFF)
@@ -164,13 +165,14 @@ file(GLOB_RECURSE DEPLOY_PADDLE_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fas
file(GLOB_RECURSE DEPLOY_POROS_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/backends/poros/*.cc)
file(GLOB_RECURSE DEPLOY_TRT_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/backends/tensorrt/*.cc ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/backends/tensorrt/*.cpp)
file(GLOB_RECURSE DEPLOY_OPENVINO_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/backends/openvino/*.cc)
file(GLOB_RECURSE DEPLOY_RKNPU2_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/backends/rknpu/rknpu2/*.cc)
file(GLOB_RECURSE DEPLOY_LITE_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/backends/lite/*.cc)
file(GLOB_RECURSE DEPLOY_VISION_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/vision/*.cc)
file(GLOB_RECURSE DEPLOY_PIPELINE_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/pipeline/*.cc)
file(GLOB_RECURSE DEPLOY_VISION_CUDA_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/vision/*.cu)
file(GLOB_RECURSE DEPLOY_TEXT_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/text/*.cc)
file(GLOB_RECURSE DEPLOY_PYBIND_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/pybind/*.cc ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/*_pybind.cc)
list(REMOVE_ITEM ALL_DEPLOY_SRCS ${DEPLOY_ORT_SRCS} ${DEPLOY_PADDLE_SRCS} ${DEPLOY_POROS_SRCS} ${DEPLOY_TRT_SRCS} ${DEPLOY_OPENVINO_SRCS} ${DEPLOY_LITE_SRCS} ${DEPLOY_VISION_SRCS} ${DEPLOY_TEXT_SRCS} ${DEPLOY_PIPELINE_SRCS})
list(REMOVE_ITEM ALL_DEPLOY_SRCS ${DEPLOY_ORT_SRCS} ${DEPLOY_PADDLE_SRCS} ${DEPLOY_POROS_SRCS} ${DEPLOY_TRT_SRCS} ${DEPLOY_OPENVINO_SRCS} ${DEPLOY_LITE_SRCS} ${DEPLOY_VISION_SRCS} ${DEPLOY_TEXT_SRCS} ${DEPLOY_PIPELINE_SRCS} ${DEPLOY_RKNPU2_SRCS})

set(DEPEND_LIBS "")

@@ -227,6 +229,13 @@ if(ENABLE_OPENVINO_BACKEND)
include(${PROJECT_SOURCE_DIR}/cmake/openvino.cmake)
endif()

if(ENABLE_RKNPU2_BACKEND)
add_definitions(-DENABLE_RKNPU2_BACKEND)
list(APPEND ALL_DEPLOY_SRCS ${DEPLOY_RKNPU2_SRCS})
include(${PROJECT_SOURCE_DIR}/cmake/rknpu2.cmake)
list(APPEND DEPEND_LIBS ${RKNN_RT_LIB})
endif()

if(ENABLE_POROS_BACKEND)
set(CMAKE_CXX_STANDARD 14)
add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0)
15 changes: 15 additions & 0 deletions FastDeploy.cmake.in
@@ -2,6 +2,7 @@ CMAKE_MINIMUM_REQUIRED(VERSION 3.8)

set(WITH_GPU @WITH_GPU@)
set(ENABLE_ORT_BACKEND @ENABLE_ORT_BACKEND@)
set(ENABLE_RKNPU2_BACKEND @ENABLE_RKNPU2_BACKEND@)
set(ENABLE_LITE_BACKEND @ENABLE_LITE_BACKEND@)
set(ENABLE_PADDLE_BACKEND @ENABLE_PADDLE_BACKEND@)
set(ENABLE_OPENVINO_BACKEND @ENABLE_OPENVINO_BACKEND@)
@@ -27,6 +28,7 @@ set(LIBRARY_NAME @LIBRARY_NAME@)
set(OPENCV_DIRECTORY "@OPENCV_DIRECTORY@")
set(ORT_DIRECTORY "@ORT_DIRECTORY@")
set(OPENVINO_DIRECTORY "@OPENVINO_DIRECTORY@")
set(RKNN2_TARGET_SOC "@RKNN2_TARGET_SOC@")

set(FASTDEPLOY_LIBS "")
set(FASTDEPLOY_INCS "")
@@ -88,6 +90,18 @@ if(ENABLE_OPENVINO_BACKEND)
list(APPEND FASTDEPLOY_LIBS ${OPENVINO_LIBS})
endif()

if(ENABLE_RKNPU2_BACKEND)
if(RKNN2_TARGET_SOC STREQUAL "RK356X")
set(RKNPU2_LIB ${CMAKE_CURRENT_LIST_DIR}/third_libs/install/rknpu2_runtime/RK356X/lib/librknn_api.so)
elseif (RKNN2_TARGET_SOC STREQUAL "RK3588")
set(RKNPU2_LIB ${CMAKE_CURRENT_LIST_DIR}/third_libs/install/rknpu2_runtime/RK3588/lib/librknn_api.so)
else ()
message(FATAL_ERROR "RKNN2_TARGET_SOC is not set correctly; valid values are RK356X or RK3588")
endif()
message(STATUS "The path of RKNPU2 is ${RKNPU2_LIB}.")
list(APPEND FASTDEPLOY_LIBS ${RKNPU2_LIB})
endif()

if(ENABLE_LITE_BACKEND)
set(LITE_DIR ${CMAKE_CURRENT_LIST_DIR}/third_libs/install/${PADDLELITE_FILENAME})
if(ANDROID)
@@ -234,6 +248,7 @@ message(STATUS " C++ compiler version : ${CMAKE_CXX_COMPILER_VERSION}")
message(STATUS " CXX flags : ${CMAKE_CXX_FLAGS}")
message(STATUS " WITH_GPU : ${WITH_GPU}")
message(STATUS " ENABLE_ORT_BACKEND : ${ENABLE_ORT_BACKEND}")
message(STATUS " ENABLE_RKNPU2_BACKEND : ${ENABLE_RKNPU2_BACKEND}")
message(STATUS " ENABLE_PADDLE_BACKEND : ${ENABLE_PADDLE_BACKEND}")
message(STATUS " ENABLE_POROS_BACKEND : ${ENABLE_POROS_BACKEND}")
message(STATUS " ENABLE_OPENVINO_BACKEND : ${ENABLE_OPENVINO_BACKEND}")
26 changes: 26 additions & 0 deletions cmake/rknpu2.cmake
@@ -0,0 +1,26 @@
# get RKNPU2_URL
set(RKNPU2_URL_BASE "https://bj.bcebos.com/fastdeploy/third_libs/")
set(RKNPU2_VERSION "1.4.0")
set(RKNPU2_FILE "rknpu2_runtime-linux-x64-${RKNPU2_VERSION}.tgz")
set(RKNPU2_URL "${RKNPU2_URL_BASE}${RKNPU2_FILE}")

# download_and_decompress
download_and_decompress(${RKNPU2_URL} ${CMAKE_CURRENT_BINARY_DIR}/${RKNPU2_FILE} ${THIRD_PARTY_PATH}/install/)

# set path
set(RKNPU_RUNTIME_PATH ${THIRD_PARTY_PATH}/install/rknpu2_runtime)

if (NOT ${CMAKE_SYSTEM_NAME} STREQUAL "Linux")
message(FATAL_ERROR "[rknpu2.cmake] Building rknpu2 is only supported on Linux")
endif ()


if (EXISTS ${RKNPU_RUNTIME_PATH})
set(RKNN_RT_LIB ${RKNPU_RUNTIME_PATH}/${RKNN2_TARGET_SOC}/lib/librknnrt.so)
include_directories(${RKNPU_RUNTIME_PATH}/${RKNN2_TARGET_SOC}/include)
else ()
message(FATAL_ERROR "[rknpu2.cmake] Failed to download and decompress rknpu2_runtime")
endif ()


1 change: 1 addition & 0 deletions cmake/summary.cmake
@@ -31,6 +31,7 @@ function(fastdeploy_summary)
message(STATUS " FastDeploy version : ${FASTDEPLOY_VERSION}")
message(STATUS " Paddle2ONNX version : ${PADDLE2ONNX_VERSION}")
message(STATUS " ENABLE_ORT_BACKEND : ${ENABLE_ORT_BACKEND}")
message(STATUS " ENABLE_RKNPU2_BACKEND : ${ENABLE_RKNPU2_BACKEND}")
message(STATUS " ENABLE_PADDLE_BACKEND : ${ENABLE_PADDLE_BACKEND}")
message(STATUS " ENABLE_POROS_BACKEND : ${ENABLE_POROS_BACKEND}")
message(STATUS " ENABLE_TRT_BACKEND : ${ENABLE_TRT_BACKEND}")
32 changes: 17 additions & 15 deletions docs/cn/build_and_install/README.md
@@ -9,18 +9,20 @@

## FastDeploy Build Options

| Option | Description |
|:------------------------|:--------------------------------------------------------------------------|
| ENABLE_ORT_BACKEND | Default OFF; whether to build in the ONNX Runtime backend (recommended ON for CPU/GPU) |
| ENABLE_PADDLE_BACKEND | Default OFF; whether to build in the Paddle Inference backend (recommended ON for CPU/GPU) |
| ENABLE_LITE_BACKEND | Default OFF; whether to build in the Paddle Lite backend (must be ON when building the Android library) |
| ENABLE_RKNPU2_BACKEND | Default OFF; whether to build in the RKNPU2 backend (recommended ON for RK3588/RK3568/RK3566) |
| ENABLE_TRT_BACKEND | Default OFF; whether to build in the TensorRT backend (recommended ON for GPU) |
| ENABLE_OPENVINO_BACKEND | Default OFF; whether to build in the OpenVINO backend (recommended ON for CPU) |
| ENABLE_VISION | Default OFF; whether to build the vision model deployment module |
| ENABLE_TEXT | Default OFF; whether to build the text (NLP) model deployment module |
| WITH_GPU | Default OFF; must be ON when deploying on GPU |
| RKNN2_TARGET_SOC | Only used when ENABLE_RKNPU2_BACKEND=ON. No default; must be set to RK3588 or RK356X, otherwise the build fails |
| CUDA_DIRECTORY | Default /usr/local/cuda; the CUDA (>=11.2) path when deploying on GPU |
| TRT_DIRECTORY | Required when the TensorRT backend is enabled; the TensorRT (>=8.4) path |
| ORT_DIRECTORY | When the ONNX Runtime backend is enabled, the path to a local ONNX Runtime library; if unset, it is downloaded automatically during the build |
| OPENCV_DIRECTORY | When ENABLE_VISION=ON, the path to a local OpenCV library; if unset, it is downloaded automatically during the build |
| OPENVINO_DIRECTORY | When the OpenVINO backend is enabled, the path to a local OpenVINO library; if unset, it is downloaded automatically during the build |
102 changes: 102 additions & 0 deletions docs/cn/build_and_install/rknpu2.md
@@ -0,0 +1,102 @@
# Building the RKNPU2 Deployment Library

## Preface
FastDeploy has initial support for RKNPU2 deployment. If you run into a bug while using it, please file an issue.

## Overview
FastDeploy currently supports the following backends on RK platforms:

| Backend | Platform | Supported model formats | Notes |
|:------------------|:---------------------|:-------|:-------------------------------------------|
| ONNX&nbsp;Runtime | RK356X <br> RK3588 | ONNX | Controlled by the build switch `ENABLE_ORT_BACKEND` (ON/OFF), default OFF |
| RKNPU2 | RK356X <br> RK3588 | RKNN | Controlled by the build switch `ENABLE_RKNPU2_BACKEND` (ON/OFF), default OFF |


## Building and Installing the C++ SDK

RKNPU2 can only be built on Linux; the steps below all assume a Linux environment.

### Update the driver and install build-time dependencies

Before running any code, install the latest RKNPU driver (currently version 1.4.0). To simplify installation, a quick-install script is provided that does everything in one step.

**Method 1: Install via script**
```bash
# Download and extract rknpu2_device_install_1.4.0
wget https://bj.bcebos.com/fastdeploy/third_libs/rknpu2_device_install_1.4.0.zip
unzip rknpu2_device_install_1.4.0.zip

cd rknpu2_device_install_1.4.0
# On RK3588, run:
sudo bash rknn_install_rk3588.sh
# On RK356X, run:
sudo bash rknn_install_rk356X.sh
```

**Method 2: Install via gitee**
```bash
# Install required packages
sudo apt update -y
sudo apt install -y python3
sudo apt install -y python3-dev
sudo apt install -y python3-pip
sudo apt install -y gcc
sudo apt install -y python3-opencv
sudo apt install -y python3-numpy
sudo apt install -y cmake

# Download rknpu2
# On RK3588, run:
git clone https://gitee.com/mirrors_rockchip-linux/rknpu2.git
sudo cp ./rknpu2/runtime/RK3588/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./rknpu2/runtime/RK3588/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/

# On RK356X, run:
git clone https://gitee.com/mirrors_rockchip-linux/rknpu2.git
sudo cp ./rknpu2/runtime/RK356X/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./rknpu2/runtime/RK356X/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/
```

### Build the C++ SDK

```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build

# See the README for the full set of build options; only the key ones are described here
# -DENABLE_ORT_BACKEND: whether to enable the ONNX Runtime backend, OFF by default
# -DENABLE_RKNPU2_BACKEND: whether to enable the RKNPU2 backend, OFF by default
# -DRKNN2_TARGET_SOC: the target SoC for the SDK; must be RK356X or RK3588 (case-sensitive)
cmake .. -DENABLE_ORT_BACKEND=ON \
-DENABLE_RKNPU2_BACKEND=ON \
-DENABLE_VISION=ON \
-DRKNN2_TARGET_SOC=RK3588 \
-DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-0.0.3
make -j8
make install
```

### Build the Python SDK

```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
cd python

export ENABLE_ORT_BACKEND=ON
export ENABLE_RKNPU2_BACKEND=ON
export ENABLE_VISION=ON
export RKNN2_TARGET_SOC=RK3588
python3 setup.py build
python3 setup.py bdist_wheel

cd dist

pip3 install fastdeploy_python-0.0.0-cp39-cp39-linux_aarch64.whl
```

## Deploying Models

See the [RKNPU2 model deployment tutorial](../faq/rknpu2/rknpu2.md).
48 changes: 48 additions & 0 deletions docs/cn/faq/rknpu2/export.md
@@ -0,0 +1,48 @@
# Model Export Guide

## Overview

FastDeploy includes a simple ONNX-to-RKNN conversion flow. This tutorial uses tools/export.py to export models; a YAML config file must be written before exporting.
Before converting, check that your environment is set up correctly per the [rknn_toolkit2 installation guide](./install_rknn_toolkit2.md).


## export.py parameters

| Parameter | May be omitted | Purpose |
|-----------------|------------|--------------------|
| verbose | Yes, defaults to True | Whether to print detailed information during model conversion |
| config_path | No | Path to the config file |
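A config can be sanity-checked before handing it to export.py. The sketch below is our own illustration, not part of the repository; `validate_config`, its rule set, and its error strings are all assumptions, with key names taken from the config template in this guide.

```python
# Hypothetical pre-flight check for an export config (not part of FastDeploy).
# Key names follow the config YAML template in this guide.
REQUIRED_KEYS = {"model_path", "output_folder", "target_platform", "normalize", "outputs"}
VALID_PLATFORMS = {"RK3588", "RK3568"}

def validate_config(cfg: dict) -> list:
    """Return a list of human-readable problems; an empty list means the config looks usable."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - cfg.keys())]
    if cfg.get("target_platform") not in VALID_PLATFORMS:
        problems.append("target_platform must be RK3588 or RK3568")
    norm = cfg.get("normalize", {})
    if not isinstance(norm, dict) or {"mean", "std"} - norm.keys():
        problems.append("normalize must provide both mean and std")
    return problems

# The dict mirrors the YAML template from this guide after parsing.
cfg = {
    "model_path": "./portrait_pp_humansegv2_lite_256x144_pretrained.onnx",
    "output_folder": "./",
    "target_platform": "RK3588",
    "normalize": {"mean": [0.5, 0.5, 0.5], "std": [0.5, 0.5, 0.5]},
    "outputs": None,
}
print(validate_config(cfg))  # []
```

A check like this catches a misspelled key or an unsupported target_platform before the conversion run starts.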

## The config file

### config YAML template

```yaml
model_path: ./portrait_pp_humansegv2_lite_256x144_pretrained.onnx
output_folder: ./
target_platform: RK3588
normalize:
mean: [0.5,0.5,0.5]
std: [0.5,0.5,0.5]
outputs: None
```
### config parameters
* model_path: path to the model file
* output_folder: name of the folder the exported model is written to
* target_platform: the device the model will run on; must be RK3588 or RK3568
* normalize: the normalize operation performed on the NPU, with two parameters, mean and std
  * std: if normalize is done externally, set this to [1/255, 1/255, 1/255]
  * mean: if normalize is done externally, set this to [0, 0, 0]
* outputs: list of output nodes; set to None to use the default output nodes
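The effect of the two normalize placements can be sketched numerically. This is an assumption-laden illustration: it models the common rknn_toolkit2 convention that mean/std given in 0–1 scale are multiplied by 255 and applied to the raw 0–255 input as (x - mean*255) / (std*255); verify the exact semantics against your toolkit version.

```python
def npu_normalize(x, mean, std):
    """Model the ASSUMED on-NPU transform: (x - mean*255) / (std*255) on raw 0..255 input."""
    return (x - mean * 255.0) / (std * 255.0)

# Case 1: normalize runs on the NPU (template values mean=0.5, std=0.5):
# raw pixels 0..255 are mapped into the range -1..1.
low = npu_normalize(0, 0.5, 0.5)     # -1.0
high = npu_normalize(255, 0.5, 0.5)  # 1.0

# Case 2: normalize is done externally, so the config uses mean=0, std=1/255:
# the on-NPU step then leaves the raw pixel value (numerically) unchanged.
passthrough = npu_normalize(128, 0.0, 1 / 255)  # ~128.0
```

This is why the external-normalize configuration in the list above uses mean=[0,0,0] and std=[1/255,1/255,1/255]: under the assumed convention it turns the on-NPU step into a no-op.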
## Converting a model
Run the following from the repository root:
```bash
python tools/export.py --config_path=./config.yaml
```

## Notes on model export

* Do not export models that contain softmax or argmax; these operators are buggy, so compute them outside the model.
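Since softmax and argmax stay outside the exported model, they have to be applied to the raw model outputs afterwards. A minimal plain-Python sketch (not FastDeploy code; the logits values are made up for illustration):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a 1-D list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def argmax(values):
    """Index of the largest value."""
    return max(range(len(values)), key=values.__getitem__)

# Per-pixel class logits as they come out of the exported model
# (softmax/argmax were deliberately left out of the model).
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)   # probabilities summing to 1
label = argmax(logits)    # 0
```

For a segmentation model, the same two functions would be applied per pixel over the class dimension of the output tensor.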
49 changes: 49 additions & 0 deletions docs/cn/faq/rknpu2/install_rknn_toolkit2.md
@@ -0,0 +1,49 @@
# Installing rknn_toolkit2

## Downloading rknn_toolkit2

There are generally two ways to download rknn_toolkit2:

* From the GitHub repository

The GitHub repository provides stable releases of rknn_toolkit2:
```bash
git clone https://github.com/rockchip-linux/rknn-toolkit2.git
```

* From Baidu Netdisk

If a stable release of rknn_toolkit2 has a bug that blocks your model deployment, you can download a beta build from Baidu Netdisk instead; it installs the same way as the stable release.
```text
Link: https://eyun.baidu.com/s/3eTDMk6Y  Password: rknn
```

## Installing rknn_toolkit2

Installing rknn_toolkit2 has some dependency pitfalls, so the steps are described here. Because rknn_toolkit2 depends on specific package versions, we recommend creating a fresh conda virtual environment for the installation. Installing conda itself is widely documented elsewhere, so we skip it and go straight to installing rknn_toolkit2.


### Install the required system packages
```bash
sudo apt-get install libxslt1-dev zlib1g zlib1g-dev libglib2.0-0 \
libsm6 libgl1-mesa-glx libprotobuf-dev gcc g++
```

### Set up the rknn_toolkit2 environment
```bash
# Create a virtual environment (the cp38 wheel below requires Python 3.8)
conda create -n rknn2 python=3.8
conda activate rknn2

# rknn_toolkit2 has specific numpy requirements, so install numpy==1.16.6 first
pip install numpy==1.16.6

# Install rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
cd ~/Downloads/rknn-toolkit2-master/packages
pip install rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
```

## Other Docs
- [ONNX to RKNN conversion guide](./export.md)
