SDK for TEE AI Stick (includes model training script, inference library, examples)
Updated Feb 15, 2019 · Python
Describes how to enable the OpenVINO Execution Provider for ONNX Runtime
Latte is a convolutional neural network (CNN) inference engine written in C++ that uses AVX to vectorize operations. The engine runs on Windows 10, Linux, and macOS Sierra.
Unified JavaScript API for scoring via various DL frameworks
Rust library for managing long conversations with any LLM
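A library like this typically keeps a rolling message history and trims the oldest messages so the conversation fits a context budget. A minimal Python sketch of that idea (class and method names are hypothetical, not this crate's API; a word count stands in for real tokenization):

```python
from collections import deque


class Conversation:
    """Hypothetical sketch: keep recent messages within a token budget."""

    def __init__(self, max_tokens=100):
        self.max_tokens = max_tokens
        self.messages = deque()
        self.total = 0

    def add(self, role, text):
        tokens = len(text.split())  # crude stand-in for a real tokenizer
        self.messages.append((role, text, tokens))
        self.total += tokens
        # Evict the oldest messages until the history fits the budget,
        # always keeping at least the most recent message.
        while self.total > self.max_tokens and len(self.messages) > 1:
            _, _, t = self.messages.popleft()
            self.total -= t

    def history(self):
        return [(role, text) for role, text, _ in self.messages]
```

Real implementations usually refine this with model-specific tokenizers and summarization of evicted turns rather than plain truncation, but the sliding-window eviction shown here is the core mechanism.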
Experimental Python implementation of the OpenVINO Inference Engine (very slow, limited functionality). All code is written in Python and is easy to read and modify.
Node.js binding for the Menoh DNN inference library