# How to Build the Huawei Ascend Deployment Environment

Based on the Paddle Lite backend, FastDeploy supports model inference on Huawei's Ascend NPU. For more detailed information, please refer to the Paddle Lite deployment example.

This document describes how to compile the C++ and Python FastDeploy source code under an ARM Linux OS environment to generate prediction libraries targeting the Huawei Ascend NPU.

For more compilation options, please refer to the FastDeploy compilation options description.

## Huawei Ascend Environment Preparation

First install the Ascend NPU driver and firmware packages (the file names below correspond to an Atlas 300I Pro card with the 5.1.rc2 release; substitute the packages that match your hardware and CANN version):

```shell
$ chmod +x *.run

$ ./Atlas-300i-pro-npu-driver_5.1.rc2_linux-aarch64.run --full
$ ./Atlas-300i-pro-npu-firmware_5.1.rc2.run --full

# Reboot for the new driver to take effect
$ reboot

# Check the driver information to confirm successful installation
$ npu-smi info
```

## Compilation Environment Setup

### Host environment requirements

- OS: ARM-Linux
- gcc, g++, git, make, wget, python, pip, python-dev, patchelf
- cmake (version 3.10 or above recommended)

### Using the Docker development environment

To ensure consistency with the verified FastDeploy build environment, it is recommended to configure the compilation environment inside Docker.

```shell
# Download the Dockerfile
$ wget https://bj.bcebos.com/fastdeploy/test/Ascend_ubuntu18.04_aarch64_5.1.rc2.Dockerfile

# Build the Docker image
$ docker build --network=host -f Ascend_ubuntu18.04_aarch64_5.1.rc2.Dockerfile -t paddlelite/ascend_aarch64:cann_5.1.rc2 .

# Create the container
$ docker run -itd --privileged --name=ascend-aarch64 --net=host -v $PWD:/Work -w /Work \
      --device=/dev/davinci0 --device=/dev/davinci_manager --device=/dev/hisi_hdc --device=/dev/devmm_svm \
      -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
      -v /usr/local/Ascend/driver/:/usr/local/Ascend/driver/ \
      paddlelite/ascend_aarch64:cann_5.1.rc2 /bin/bash

# Enter the container
$ docker exec -it ascend-aarch64 /bin/bash

# Verify that the Ascend environment inside the container works
$ npu-smi info
```

Once the above steps succeed, you can start compiling FastDeploy directly inside the Docker container.

Note:

- If you want to use another CANN version in Docker, please update the CANN download path in the Dockerfile and update the corresponding driver and firmware. The current default in the Dockerfile is CANN 5.1.RC2.
- If you do not want to use Docker, you can refer to the Compile Environment Preparation for ARM Linux Environments guide provided by Paddle Lite to configure your own compilation environment, and then download and install the appropriate CANN packages to complete the configuration.

## C++ FastDeploy Library Compilation Based on Paddle Lite

After setting up the compilation environment, compile as follows:

```shell
# Download the latest source code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build

# Configure CMake with Ascend support enabled
cmake -DWITH_ASCEND=ON \
      -DCMAKE_INSTALL_PREFIX=fastdeploy-ascend \
      -DENABLE_VISION=ON \
      ..

# Build and install the FastDeploy Ascend C++ SDK
make -j8
make install
```

When the compilation completes, a fastdeploy-ascend directory is created in the current build directory, containing the compiled FastDeploy C++ SDK.
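As a sketch of how an application might consume the installed SDK: FastDeploy SDKs conventionally ship a `FastDeploy.cmake` that exports `FASTDEPLOY_INCS` and `FASTDEPLOY_LIBS`, but treat those names, as well as the `infer_demo` target and `infer.cc` source file, as illustrative assumptions to verify against your install:

```cmake
cmake_minimum_required(VERSION 3.10)
project(infer_demo)

# Path to the fastdeploy-ascend directory produced by `make install` (illustrative)
set(FASTDEPLOY_INSTALL_DIR "" CACHE PATH "Path to the compiled FastDeploy SDK")

# FastDeploy.cmake is assumed to define FASTDEPLOY_INCS and FASTDEPLOY_LIBS
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo infer.cc)
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
```

Configure with `cmake -DFASTDEPLOY_INSTALL_DIR=/path/to/fastdeploy-ascend ..` so the build can locate the SDK.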

## Python FastDeploy Library Compilation Based on Paddle Lite

```shell
# Download the latest source code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/python
export WITH_ASCEND=ON
export ENABLE_VISION=ON

python setup.py build
python setup.py bdist_wheel
```

After the compilation is complete, install the whl package found in the dist folder of the current directory.
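Once the wheel is installed, the Ascend backend is selected through FastDeploy's `RuntimeOption`. The snippet below is a minimal sketch that mirrors the PaddleClas deployment examples this document references; the model directory, file names, and image path are illustrative, and the exact API should be checked against your FastDeploy version. It requires Ascend hardware and the compiled SDK to actually run:

```python
import cv2
import fastdeploy as fd

# Select the Huawei Ascend NPU backend (Paddle Lite underneath)
option = fd.RuntimeOption()
option.use_ascend()

# Paths below are illustrative; point them at your exported PaddleClas model
model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",
    "ResNet50_vd_infer/inference.pdiparams",
    "ResNet50_vd_infer/inference_cls.yaml",
    runtime_option=option)

im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)
```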

To deploy a PaddleClas classification model on the Huawei Ascend NPU using C++, please refer to: PaddleClas Huawei Ascend NPU C++ Deployment Example

To deploy a PaddleClas classification model on the Huawei Ascend NPU using Python, please refer to: PaddleClas Huawei Ascend NPU Python Deployment Example