The Compute Library is a collection of low-level machine learning functions optimized for Arm® Cortex®-A CPU and Arm® Mali™ GPU architectures.
The library provides superior performance to other open source alternatives and immediate support for new Arm® technologies such as SVE2.
Key Features:
- Open source software available under a permissive MIT license
- Over 100 machine learning functions for CPU and GPU
- Multiple convolution algorithms (GEMM, Winograd, FFT, direct and indirect GEMM)
- Support for multiple data types: FP32, FP16, INT8, UINT8, BFLOAT16
- Micro-architecture optimization for key ML primitives
- Highly configurable build options enabling lightweight binaries
- Advanced optimization techniques such as kernel fusion, fast-math enablement and texture utilization
- Device and workload specific tuning using the OpenCL tuner and GEMM-optimized heuristics
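As a rough illustration of the runtime API, the sketch below configures and runs a single FP32 GEMM on the CPU. It assumes a Neon™-enabled build; the tensor shapes are arbitrary, and the headers and the NEGEMM signature should be checked against the API reference for the release in use.

```cpp
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Operands for dst = alpha * A * B (FP32). TensorShape is (width, height), i.e. (cols, rows).
    Tensor a, b, dst;
    a.allocator()->init(TensorInfo(TensorShape(64U, 32U), 1, DataType::F32));   // A: 32x64
    b.allocator()->init(TensorInfo(TensorShape(16U, 64U), 1, DataType::F32));   // B: 64x16
    dst.allocator()->init(TensorInfo(TensorShape(16U, 32U), 1, DataType::F32)); // dst: 32x16

    // Configure once; the library picks a kernel tuned for the target micro-architecture.
    NEGEMM gemm;
    gemm.configure(&a, &b, nullptr, &dst, 1.0f, 0.0f); // no bias tensor, alpha = 1, beta = 0

    // Allocate backing memory; a real program would fill a and b with data before running.
    a.allocator()->allocate();
    b.allocator()->allocate();
    dst.allocator()->allocate();

    gemm.run();
    return 0;
}
```

The OpenCL functions (e.g. CLGEMM) follow the same configure/run pattern, with CLScheduler::get().default_init() called once up front and, optionally, a CLTuner passed in to enable device-specific tuning.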
| Repository | Link |
|---|---|
| Release | https://github.com/arm-software/ComputeLibrary |
| Development | https://review.mlplatform.org/#/admin/projects/ml/ComputeLibrary |
Note: The documentation includes the reference API, changelogs, build guide, contribution guide, errata, etc.
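The build guide covers the SCons-based build in full. As a hedged illustration only (option names taken from recent releases and worth verifying against the build guide), a native, CPU-only release build for Armv8-A Linux might look like:

```sh
# Neon™-only release build for Armv8-A Linux on a native build host;
# set opencl=1 to also build the OpenCL backend for Mali™ GPUs.
scons arch=armv8a os=linux build=native neon=1 opencl=0 debug=0 asserts=0 examples=1 -j4
```

Disabling unused backends and features in this way is what keeps the resulting binaries lightweight.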
All the binaries can be downloaded from here or from the tables below.
| Platform | Operating System | Release archive (Download) |
|---|---|---|
| Raspberry Pi 4 | Linux 32bit | |
| Raspberry Pi 4 | Linux 64bit | |
| Odroid N2 | Linux 64bit | |
| HiKey960 | Linux 64bit | |
| Architecture | Operating System | Release archive (Download) |
|---|---|---|
| armv7 | Android | |
| armv7 | Linux | |
| arm64-v8a | Android | |
| arm64-v8a | Linux | |
| arm64-v8.2-a | Android | |
| arm64-v8.2-a | Linux | |
- Arm® CPUs:
  - Arm® Cortex®-A processor family using Arm® Neon™ technology
  - Arm® Cortex®-R processor family with Armv8-R AArch64 architecture using Arm® Neon™ technology
  - Arm® Cortex®-X1 processor using Arm® Neon™ technology
- Arm® Mali™ GPUs:
  - Arm® Mali™-G processor family
  - Arm® Mali™-T processor family
- x86
- Android™
- Bare Metal
- Linux®
- macOS®
- Tizen™
- Tutorial: Running AlexNet on Raspberry Pi with Compute Library
- Gian Marco's talk on Performance Analysis for Optimizing Embedded Deep Learning Inference Software
- Gian Marco's talk on optimizing CNNs with Winograd algorithms at the EVS
- Gian Marco's talk on using SGEMM and FFTs to Accelerate Deep Learning
Contributions to the Compute Library are more than welcome. If you are interested in contributing, please have a look at our how to contribute guidelines.
Before the Compute Library accepts your contribution, you need to certify its origin and give us your permission. To manage this process, we use the Developer Certificate of Origin (DCO) V1.1 (https://developercertificate.org/).
To indicate that you agree to the terms of the DCO, you "sign off" your contribution by adding a line with your name and e-mail address to every git commit message:
Signed-off-by: John Doe <[email protected]>
You must use your real name; no pseudonyms or anonymous contributions are accepted.
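In practice, git can append this line for you from your configured user.name and user.email:

```sh
# -s / --signoff adds "Signed-off-by: <user.name> <user.email>" to the commit message
git commit -s
```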
For technical discussion, the ComputeLibrary project has a public mailing list: [email protected]. The list is open to anyone, inside or outside of Arm, to self-subscribe. To subscribe, please visit https://lists.linaro.org/mailman/listinfo/acl-dev
The software is provided under MIT license. Contributions to this project are accepted under the same license.
Android is a trademark of Google LLC.
Arm, Cortex, Mali and Neon are registered trademarks or trademarks of Arm Limited (or its subsidiaries) in the US and/or elsewhere.
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
Mac and macOS are trademarks of Apple Inc., registered in the U.S. and other countries.
Tizen is a registered trademark of The Linux Foundation.