ONE (On-device Neural Engine)


A high-performance, on-device neural network inference framework.

Goal

ONE aims to provide a high-performance, on-device neural network (NN) inference framework that runs a given NN model on processors such as CPU, GPU, DSP, or NPU.

We develop a runtime that runs on Linux kernel-based OS platforms such as Ubuntu, Tizen, and Android, along with a compiler toolchain that converts NN models created with various training frameworks, such as TensorFlow or PyTorch, into a unified form for the runtime.

Overview

Getting started

  • To contribute, please refer to our contribution guide.
  • You can also find various how-to documents here.

Feature Request

You can suggest features for ONE that are not yet available.

Features requested so far can be found in the popular feature request list.

  • If the feature you want is on the list, add a 👍 to the body of the issue. The feature with the most 👍 reactions is placed at the top of the list, and we use this ranking to prioritize new features. It also helps to add a comment describing your request in detail.

  • For features not on the list, create a new issue. A maintainer will eventually add the FEATURE_REQUEST label, and the issue will then appear on the list.

We expect operator kernel implementations to be among the most frequent feature requests. Making a request is welcome, but contributing the implementation yourself is even better. See the guide How to add a new operation for help.

We are looking forward to your participation. Thank you in advance!

How to Contact

  • Please post questions, issues, or suggestions in Issues. This is the best way to communicate with the developers.
  • You can also have an open discussion with community members through the gitter.im channel.
