Stars
6 stars written in C++
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
A JIT assembler for x86/x64 architectures supporting MMX, SSE (1-4), AVX (1-2, 512), FPU, APX, and AVX10.2
Examples for using ONNX Runtime for machine learning inferencing.