[doc] Minor improvements of the overview docs (Samsung#12619)
This commit corrects punctuation, typos and style.

ONE-DCO-1.0-Signed-off-by: Piotr Fusik <[email protected]>
pfusik authored Feb 14, 2024
1 parent 68379e5 commit 5caf24e
Showing 2 changed files with 20 additions and 20 deletions.
22 changes: 11 additions & 11 deletions docs/overview/background.md
@@ -4,25 +4,25 @@ Artificial intelligence (AI) techniques are getting popular and utilized in vari
 services. While the cloud-based AI techniques have been used to perform compute/memory intensive
 inferences because of the powerful servers on cloud, on-device AI technologies are recently drawing
 attention from the mobile industry for response time reduction, privacy protection, and
-connection-less AI service. Big mobile players are investing their research effort on the on-device
-AI technologies and already announced hardware and software on-device AI solutions. We are not
-leading this trend currently, but since on-device AI area is just started and still in the initial
-state, there are still opportunities and possibilities to reduce the gap between pioneers and us. We
-believe on-device AI will become a key differentiator for mobile phone, TV, and other home
-appliances, and thus developing on-device AI software stack is of paramount importance in order to
+connection-less AI service. Big mobile players are investing their research effort in the on-device
+AI technologies and have already announced hardware and software on-device AI solutions. We are not
+leading this trend currently, but since on-device AI area has just started and remains in the initial
+stage, there are still opportunities and possibilities to reduce the gap between pioneers and us. We
+believe that on-device AI will become a key differentiator for mobile phones, TV, and other home
+appliances. Therefore, developing on-device AI software stack is of paramount importance in order to
 take leadership in the on-device AI technology.

 Although the vision of on-device AI is promising, enabling on-device AI involves unique technical
 challenges compared to traditional cloud-based approach. This is because on-device AI tries to
 conduct inference tasks solely on device without connecting to cloud resources. Specifically,
-hardware resources on device, such as processor performance, memory capacity, and power budget, are
-very scarce and limit the compute capability, which is typically required to execute complicated
+hardware resources on device, such as processor performance, memory capacity and power budget are
+very scarce and limit the compute capability, which is typically required to execute complex
 neural network (NN) models. For example, in one product requirement, a mobile device should consume
-less than 1.2W and could use at most 2W only for 10 minutes due to thermal issue. Next, on-device AI
+less than 1.2W and could use at most 2W only for 10 minutes due to thermal constraints. On-device AI
 software stack needs to support diverse device environments, since embedded platforms may consist of
 heterogeneous compute devices, such as CPU, GPU, DSP, or neural processing unit (NPU), and use
-different OS platforms, such as Tizen, Android, or various Linux.
+different OS platforms, such as Tizen, Android, or various Linux systems.
 
-To tackle the challenges above and to have the leadership on on-device AI technology, this project,
+To tackle the challenges above and to have the leadership in on-device AI technology, this project,
 as the first step, aims at developing a neural network inference framework specialized and optimized
 for on-device AI.
18 changes: 9 additions & 9 deletions docs/overview/roadmap.md
@@ -1,17 +1,17 @@
 # Roadmap
 
-This document describes roadmap of **ONE** project.
+This document describes the roadmap of the **ONE** project.
 
-This project **ONE** aims at providing a high-performance, on-device neural network (NN) inference
+Project **ONE** aims at providing a high-performance, on-device neural network (NN) inference
 framework that performs inference of a given NN model on processors, such as CPU, GPU, DSP, or NPU,
-in the target platform, such as Tizen, Android, and Ubuntu.
+on the target platform, such as Tizen, Android and Ubuntu.
 
 ## Progress
 
 Until last year, we already saw significant gains in accelerating with a single CPU or GPU backend.
 We have seen better performance improvements, not only when using a single backend, but even when
-mixing CPUs or GPUs considering the characteristics of individual operations. It could give us an
-opportunity to have a high degree of freedom in terms of operator coverage, and possibly provide
+mixing CPUs or GPUs, considering the characteristics of individual operations. It could give us an
+opportunity to have a high degree of freedom in terms of operator coverage and possibly provide
 better performance compared to single backend acceleration.
 
 On the other hand, we introduced the compiler as a front-end. This will support a variety of deep
@@ -27,7 +27,7 @@ model. From this year, now we start working on the voice model. The runtime requ
 voice model will be different from those of the vision model. There will be new requirements that
 we do not recognize yet, along with some already recognized elements such as control flow and
 dynamic tensor. In addition, recent studies on voice models require efficient support for specific
-architectures such as attention, transformer, and BERT. Also, depending on the characteristics of
+architectures such as attention, transformer and BERT. Also, depending on the characteristics of
 most voice models with large memory bandwidth, we will have to put more effort into optimizing the
 memory bandwidth at runtime.

@@ -44,15 +44,15 @@ memory bandwidth at runtime.
 + Completion and application of _circle2circle_ pass
   - _circle-quantizer_ for UINT8 and INT16
   - _circle-optimizer_
-+ Grphical _circle_ model viewer
++ Graphical _circle_ model viewer
 
 ## Milestones
 
 - [2020 Project Milestones](https://github.com/Samsung/ONE/projects/1)
 
 ## Workgroups (WGs)
 
-- We organize WGs for major topics, and each WG will be working on its own major topic by breaking
-  it into small tasks/issues, performing them inside WG, and collaborating between WGs.
+- We organize WGs for major topics and each WG will be working on its own major topic by breaking
+  it into small tasks/issues, performing them inside WG and collaborating between WGs.
 - The WG information can be found [here](workgroup.md).
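
For context on the roadmap item "_circle-quantizer_ for UINT8 and INT16": those two modes conventionally correspond to asymmetric 8-bit and symmetric 16-bit affine quantization. The sketch below shows only that standard math, as an illustration; it is not ONE's actual _circle-quantizer_ code, and the function names are made up here.

```python
import numpy as np

def quantize_uint8(x: np.ndarray):
    """Asymmetric per-tensor quantization to UINT8 (scale plus zero point)."""
    # The represented range must cover 0 so that zero is exactly representable.
    lo, hi = min(float(x.min()), 0.0), max(float(x.max()), 0.0)
    scale = (hi - lo) / 255.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor; any nonzero scale works
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def quantize_int16(x: np.ndarray):
    """Symmetric per-tensor quantization to INT16 (zero point fixed at 0)."""
    scale = max(abs(float(x.min())), abs(float(x.max()))) / 32767.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(x / scale), -32767, 32767).astype(np.int16)
    return q, scale

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
q, scale, zp = quantize_uint8(x)
dequantized = (q.astype(np.float32) - zp) * scale
# Round-trip error is bounded by half a quantization step.
assert float(np.max(np.abs(dequantized - x))) <= 0.5 * scale + 1e-6
```

In practice a quantizer also records the scale and zero point in the model file so the runtime can dequantize; the exact range-selection and rounding rules vary between implementations.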
