Here we maintain the list of supported platforms for running inference.
Linux / AMD64 without GPU

- x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
- Ubuntu 14.04+ (glibc >= 2.19, libstdc++6 >= 4.8)
- Full TensorFlow runtime (`deepspeech` packages)
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
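On Linux, one quick way to confirm that the CPU exposes AVX and FMA is to inspect the `flags` line of `/proc/cpuinfo`. A minimal sketch, assuming Linux; the helper name `has_avx_fma` is illustrative and not part of any DeepSpeech API:

```python
import pathlib


def has_avx_fma(cpuinfo_text: str) -> bool:
    """Return True if the 'flags' line lists both the avx and fma features."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return "avx" in flags and "fma" in flags
    return False


if __name__ == "__main__":
    cpuinfo = pathlib.Path("/proc/cpuinfo")
    if cpuinfo.exists():  # /proc is Linux-specific
        print("AVX+FMA available:", has_avx_fma(cpuinfo.read_text()))
```

If the check comes back false, the prebuilt packages will not run and a rebuild without AVX/FMA is needed, as noted above.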
Linux / AMD64 with GPU

- x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
- Ubuntu 14.04+ (glibc >= 2.19, libstdc++6 >= 4.8)
- CUDA 10.0 (and a CUDA-capable GPU)
- Full TensorFlow runtime (`deepspeech` packages)
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
Linux / ARMv7

- Cortex-A53 compatible ARMv7 SoC with Neon support
- Raspbian Buster-compatible distribution
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
Linux / Aarch64

- Cortex-A72 compatible Aarch64 SoC
- ARMbian Buster-compatible distribution
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
Android / ARMv7

- ARMv7 SoC with Neon support
- Android 7.0-10.0
- NDK API level >= 21
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
Android / Aarch64

- Aarch64 SoC
- Android 7.0-10.0
- NDK API level >= 21
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
macOS / AMD64

- x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
- macOS >= 10.10
- Full TensorFlow runtime (`deepspeech` packages)
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
Windows / AMD64 without GPU

- x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
- Windows Server >= 2012 R2; Windows >= 8.1
- Full TensorFlow runtime (`deepspeech` packages)
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
Windows / AMD64 with GPU

- x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
- Windows Server >= 2012 R2; Windows >= 8.1
- CUDA 10.0 (and a CUDA-capable GPU)
- Full TensorFlow runtime (`deepspeech` packages)
- TensorFlow Lite runtime (`deepspeech-tflite` packages)
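The matrix above follows a simple pattern: desktop x86-64 targets get both runtimes, while ARM targets are TensorFlow Lite only. As a summary, that mapping can be sketched as a small helper; the function name and its string arguments are illustrative, only the package names `deepspeech` and `deepspeech-tflite` come from the list above:

```python
def available_runtimes(os_name: str, arch: str) -> list:
    """Return the inference packages available for a platform,
    per the support matrix above."""
    os_name, arch = os_name.lower(), arch.lower()
    # Desktop x86-64 builds ship both the full TensorFlow runtime
    # ("deepspeech") and the TensorFlow Lite runtime ("deepspeech-tflite").
    if arch in ("amd64", "x86-64", "x86_64") and os_name in ("linux", "macos", "windows"):
        return ["deepspeech", "deepspeech-tflite"]
    # ARM targets (Linux ARMv7/Aarch64, Android) are TensorFlow Lite only.
    if arch in ("armv7", "aarch64") and os_name in ("linux", "android"):
        return ["deepspeech-tflite"]
    return []
```

For example, `available_runtimes("linux", "aarch64")` returns `["deepspeech-tflite"]`, matching the Linux / Aarch64 entry.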