huynhthaihoa/onnx_runtime_cpp


small c++ library to quickly use onnxruntime to deploy deep learning models

Thanks to cardboardcode, documentation is available for this small library. We hope both the library and the documentation are helpful for your work.

TODO

Installation

  • Build onnxruntime from source with the following script:
    sudo bash ./scripts/install_onnx_runtime.sh

How to build


make default

# build examples
make apps

How to test apps


Image Classification With Squeezenet


# after make apps
./build/examples/TestImageClassification ./data/squeezenet1.1.onnx ./data/images/dog.jpg

The following result should be obtained:

264 : Cardigan, Cardigan Welsh corgi : 0.391365
263 : Pembroke, Pembroke Welsh corgi : 0.376214
227 : kelpie : 0.0314975
158 : toy terrier : 0.0223435
230 : Shetland sheepdog, Shetland sheep dog, Shetland : 0.020529

Object Detection With Tiny-Yolov2 trained on VOC dataset (with 20 classes)


  • Download model from onnx model zoo: HERE

  • The shape of the output would be

    OUTPUT_FEATUREMAP_SIZE x OUTPUT_FEATUREMAP_SIZE x NUM_ANCHORS x (NUM_CLASSES + 4 + 1)
    where OUTPUT_FEATUREMAP_SIZE = 13, NUM_ANCHORS = 5, and NUM_CLASSES = 20 for the tiny-yolov2 model from the onnx model zoo
  • Test tiny-yolov2 inference apps
# after make apps
./build/examples/tiny_yolo_v2 [path/to/tiny_yolov2/onnx/model] ./data/images/dog.jpg
  • Test result

tinyyolov2 test result

Object Instance Segmentation With MaskRCNN trained on MS COCO dataset (80 + 1 (background) classes)


  • Download model from onnx model zoo: HERE

  • As also stated in the URL above, there are four outputs: boxes (nboxes x 4), labels (nboxes), scores (nboxes), and masks (nboxes x 1 x 28 x 28)

  • Test mask-rcnn inference apps

# after make apps
./build/examples/mask_rcnn [path/to/mask_rcnn/onnx/model] ./data/images/dogs.jpg
  • Test results:

dogs maskrcnn result

indoor maskrcnn result

YOLOv3 trained on the MS COCO dataset


  • Download model from onnx model zoo: HERE

  • Test yolo-v3 inference apps

# after make apps
./build/examples/yolov3 [path/to/yolov3/onnx/model] ./data/images/no_way_home.jpg
  • Test result

Face Detection With Ultra-Light-Fast-Generic-Face-Detector-1MB


# after make apps
./build/examples/ultra_light_face_detector ./data/version-RFB-640.onnx ./data/images/endgame.jpg
  • Test results: ultra light weight face result

Object Detection With YOLOX trained on COCO dataset

  • Download onnx model trained on COCO dataset from HERE
# this app tests yolox_l model but you can try with other yolox models also.
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.onnx -O ./data/yolox_l.onnx
  • Test inference apps
# after make apps
./build/examples/yolox ./data/yolox_l.onnx ./data/images/matrix.jpg
  • Test results: yolox result

Semantic Segmentation With PaddleSeg's BiSeNetV2 trained on cityscapes dataset

  • Download PaddleSeg's bisenetv2 model trained on the cityscapes dataset that has been converted to onnx HERE and copy it to the ./data directory.
    You can also convert your own PaddleSeg model with the following procedures.
  • Test inference apps
./build/examples/semantic_segmentation_paddleseg_bisenetv2 ./data/bisenetv2_cityscapes.onnx ./data/images/sample_city_scapes.png
./build/examples/semantic_segmentation_paddleseg_bisenetv2 ./data/bisenetv2_cityscapes.onnx ./data/images/odaiba.jpg
  • Test results:

    • cityscapes dataset's color legend

city scapes color legend

    • test result on a sample image of the cityscapes dataset (this model is trained on the cityscapes dataset)

paddleseg city scapes

    • test result on a new scene at Odaiba, Tokyo, Japan

paddleseg odaiba
