How to create a FastDeploy image
The GPU images published by FastDeploy are based on version 21.10 of Triton Inference Server. Developers who need a different CUDA version can refer to the NVIDIA official website and modify the Dockerfile and build scripts accordingly.
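The CUDA version is determined by the Triton base image, so changing it mainly means changing the base-image tag. A sketch of the line one would update in the Dockerfile (the exact image name below, `nvcr.io/nvidia/tritonserver:21.10-py3`, is an assumption based on NVIDIA's 21.10 release naming, not copied from the repository):

```dockerfile
# Hypothetical excerpt: the Triton 21.10 base image ships with CUDA 11.4.
# To target a different CUDA version, pick the matching Triton release tag
# from the NVIDIA NGC catalog and update this line (and the build scripts).
FROM nvcr.io/nvidia/tritonserver:21.10-py3
```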
GPU image:

```shell
# Enter the serving directory and build FastDeploy and the serving backend
cd serving
bash scripts/build.sh

# Return to the FastDeploy root directory and build the image
# (x.y.z is the FastDeploy version, e.g. 1.0.0)
cd ../
docker build -t paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 -f serving/Dockerfile .
```
CPU-only image:

```shell
# Enter the serving directory and build FastDeploy and the serving backend
# (the OFF argument builds without GPU support)
cd serving
bash scripts/build.sh OFF

# Return to the FastDeploy root directory and build the image
# (x.y.z is the FastDeploy version, e.g. 1.0.0)
cd ../
docker build -t paddlepaddle/fastdeploy:x.y.z-cpu-only-21.10 -f serving/Dockerfile_cpu .
```
IPU image:

```shell
# Enter the serving directory and build FastDeploy and the serving backend
cd serving
bash scripts/build_fd_ipu.sh

# Return to the FastDeploy root directory and build the image
# (x.y.z is the FastDeploy version, e.g. 1.0.0)
cd ../
docker build -t paddlepaddle/fastdeploy:x.y.z-ipu-only-21.10 -f serving/Dockerfile_ipu .
```
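The three variants differ only in build script, Dockerfile, and image tag, so a small wrapper can keep the naming consistent. A minimal sketch, with the caveat that `build_image` and `FD_VERSION` are illustrative names, not part of the repository; it prints the docker command instead of running it so the target-to-tag mapping can be checked without Docker:

```shell
#!/bin/sh
# Sketch: map a build target to its Dockerfile and image tag.
# build_image is a hypothetical helper; it echoes the build command
# rather than invoking docker.
FD_VERSION="x.y.z"   # replace with the FastDeploy version, e.g. 1.0.0

build_image() {
    target="$1"
    case "$target" in
        gpu) tag="${FD_VERSION}-gpu-cuda11.4-trt8.4-21.10"; dockerfile="serving/Dockerfile" ;;
        cpu) tag="${FD_VERSION}-cpu-only-21.10";            dockerfile="serving/Dockerfile_cpu" ;;
        ipu) tag="${FD_VERSION}-ipu-only-21.10";            dockerfile="serving/Dockerfile_ipu" ;;
        *)   echo "unknown target: $target" >&2; return 1 ;;
    esac
    echo "docker build -t paddlepaddle/fastdeploy:${tag} -f ${dockerfile} ."
}

build_image gpu
build_image cpu
build_image ipu
```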