FastDeploy Runtime examples
The FastDeploy Runtime inference examples are listed below.
| Example Code | Programming Language | Description |
| :----------- | :------------------- | :---------- |
| python/infer_paddle_paddle_inference.py | Python | Deploy Paddle model with Paddle Inference (CPU/GPU) |
| python/infer_paddle_tensorrt.py | Python | Deploy Paddle model with TensorRT (GPU) |
| python/infer_paddle_openvino.py | Python | Deploy Paddle model with OpenVINO (CPU) |
| python/infer_paddle_onnxruntime.py | Python | Deploy Paddle model with ONNX Runtime (CPU/GPU) |
| python/infer_onnx_openvino.py | Python | Deploy ONNX model with OpenVINO (CPU) |
| python/infer_onnx_tensorrt.py | Python | Deploy ONNX model with TensorRT (GPU) |
| python/infer_onnx_onnxruntime.py | Python | Deploy ONNX model with ONNX Runtime (CPU/GPU) |
| python/infer_torchscript_poros.py | Python | Deploy TorchScript model with Poros Runtime (CPU/GPU) |
| Example Code | Programming Language | Description |
| :----------- | :------------------- | :---------- |
| cpp/infer_paddle_paddle_inference.cc | C++ | Deploy Paddle model with Paddle Inference (CPU/GPU) |
| cpp/infer_paddle_tensorrt.cc | C++ | Deploy Paddle model with TensorRT (GPU) |
| cpp/infer_paddle_openvino.cc | C++ | Deploy Paddle model with OpenVINO (CPU) |
| cpp/infer_paddle_onnxruntime.cc | C++ | Deploy Paddle model with ONNX Runtime (CPU/GPU) |
| cpp/infer_onnx_openvino.cc | C++ | Deploy ONNX model with OpenVINO (CPU) |
| cpp/infer_onnx_tensorrt.cc | C++ | Deploy ONNX model with TensorRT (GPU) |
| cpp/infer_onnx_onnxruntime.cc | C++ | Deploy ONNX model with ONNX Runtime (CPU/GPU) |
| cpp/infer_torchscript_poros.cc | C++ | Deploy TorchScript model with Poros Runtime (CPU/GPU) |