ESPnet is an end-to-end speech processing toolkit, mainly focusing on end-to-end speech recognition. ESPnet uses chainer and pytorch as its main deep learning engines, and also follows Kaldi-style data processing, feature extraction/format, and recipes to provide a complete setup for speech recognition and other speech processing experiments.
- Hybrid CTC/attention based end-to-end ASR
- Fast/accurate training with CTC/attention multitask training
- CTC/attention joint decoding to boost monotonic alignment decoding (a sketch of the combined objective follows this list)
- Encoder: VGG-like CNN + BLSTM or pyramid BLSTM
- Attention: Dot product, location-aware attention, variants of multihead (pytorch only)
- Incorporate RNNLM/LSTMLM trained only with text data
- Flexible network architecture thanks to chainer and pytorch
- Kaldi style complete recipe
- Support for a number of ASR benchmarks (WSJ, Switchboard, CHiME-4, Librispeech, TED, CSJ, AMI, HKUST, Voxforge, etc.)
- State-of-the-art performance in Japanese/Chinese benchmarks (comparable/superior to hybrid DNN/HMM and CTC)
- Moderate performance in standard English benchmarks
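The multitask training and joint decoding items above follow the hybrid CTC/attention formulation of [1, 2]: the CTC and attention objectives are interpolated with a weight, written here as lambda (a symbol for this sketch, not a toolkit parameter name). A minimal sketch of the two combinations:

\mathcal{L}_{\mathrm{MTL}} = \lambda \, \mathcal{L}_{\mathrm{CTC}} + (1 - \lambda) \, \mathcal{L}_{\mathrm{Attention}}, \qquad 0 \le \lambda \le 1

\hat{y} = \arg\max_{y} \left\{ \lambda \log p_{\mathrm{CTC}}(y \mid x) + (1 - \lambda) \log p_{\mathrm{Attention}}(y \mid x) \right\}

The first line is the training loss; the second is the joint decoding score for a hypothesis y given speech x (an RNNLM score can additionally be added when language model integration is enabled).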
Install Kaldi, Python libraries, and other required tools, either using the system python and virtualenv:
$ cd tools
$ make -j
or using a local miniconda:
$ cd tools
$ make -f conda.mk -j
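After the build finishes, it can be useful to check that the Python environment created under tools can import both backends. The venv path below is an assumption based on the default Makefile layout and may differ in your checkout:

$ cd tools
$ . venv/bin/activate
$ python -c "import chainer, torch; print(chainer.__version__, torch.__version__)"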
To use cuda (and cudnn), make sure to set the paths in your .bashrc or .bash_profile appropriately:
CUDAROOT=/path/to/cuda
export PATH=$CUDAROOT/bin:$PATH
export LD_LIBRARY_PATH=$CUDAROOT/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=$CUDAROOT
export CUDA_PATH=$CUDAROOT
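To confirm that these paths are actually picked up, you can check that the CUDA compiler is on PATH and that pytorch can see a GPU (torch.cuda.is_available() is a standard pytorch call; nothing here is ESPnet-specific):

$ source ~/.bashrc
$ nvcc --version
$ python -c "import torch; print(torch.cuda.is_available())"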
Move to an example directory under the egs directory.
We provide recipes for several major ASR benchmarks, including WSJ, CHiME-4, and TED.
The following directory is an example of performing an ASR experiment with the VoxForge Italian Corpus.
$ cd egs/voxforge/asr1
Once you have moved to the directory, execute the following main script with the chainer backend:
$ ./run.sh
or execute the following main script with a pytorch backend (currently the pytorch backend does not support VGG-like layers):
$ ./run.sh --backend pytorch --etype blstmp
With this main script, you can perform the full procedure of an ASR experiment, including:
- Data download
- Data preparation (Kaldi style, see http://kaldi-asr.org/doc/data_prep.html; an example directory layout follows this list)
- Feature extraction (Kaldi style, see http://kaldi-asr.org/doc/feat.html)
- Dictionary and JSON format data preparation
- Training based on chainer or pytorch
- Recognition and scoring
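For orientation, the Kaldi-style data preparation step above produces per-set data directories whose core files look roughly like the following. The directory name and IDs are illustrative placeholders, not actual output of the VoxForge recipe:

$ ls data/train
spk2utt  text  utt2spk  wav.scp
$ head -1 data/train/text
spk001-utt001 SOME TRANSCRIPTION OF THE UTTERANCE
$ head -1 data/train/wav.scp
spk001-utt001 /path/to/spk001-utt001.wav

Each file maps an utterance ID to a transcription, a speaker, or a wav path/command, as described in the Kaldi data preparation documentation linked above.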
If you use a GPU in your experiment, set the --gpu option of run.sh appropriately, e.g.,
$ ./run.sh --gpu 0
The default setup uses the CPU (--gpu -1).
Change cmd.sh according to your cluster setup. If you run experiments on your local machine, please use the default cmd.sh. For more information about cmd.sh, see http://kaldi-asr.org/doc/queue.html. It supports Grid Engine (queue.pl), SLURM (slurm.pl), etc.
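As a rough sketch of what such a change looks like, a Kaldi-style cmd.sh typically exports command wrappers like the ones below. The variable names and options follow the usual Kaldi conventions and are not necessarily the exact contents of ESPnet's shipped cmd.sh:

# default: run everything on the local machine
export train_cmd="run.pl"
export decode_cmd="run.pl"
# Grid Engine cluster (example):
# export train_cmd="queue.pl --mem 2G"
# SLURM cluster (example):
# export train_cmd="slurm.pl --mem 2G"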
If you see the following error (or other numpy-related errors),
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
Exception in main training loop: numpy.core.multiarray failed to import
Traceback (most recent call last):
  ...
  from . import _path, rcParams
ImportError: numpy.core.multiarray failed to import
Then, please reinstall matplotlib with the following commands:
$ cd egs/voxforge/asr1
$ . ./path.sh
$ pip install pip --upgrade; pip uninstall matplotlib; pip --no-cache-dir install matplotlib
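A quick way to check whether the reinstall fixed the ABI mismatch is to import both packages inside the recipe environment (this is just a sanity check, not part of the recipe):

$ python -c "import numpy, matplotlib; print(numpy.__version__, matplotlib.__version__)"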
We list the character error rate (CER) and word error rate (WER) of major ASR tasks.
| Task | CER (%) | WER (%) |
|---|---|---|
| WSJ dev93 | 5.5 | 13.1 |
| WSJ eval92 | 3.8 | 9.3 |
| CSJ eval1 | 8.7 | N/A |
| CSJ eval2 | 6.2 | N/A |
| CSJ eval3 | 6.9 | N/A |
| HKUST train_dev | 29.7 | N/A |
| HKUST dev | 28.3 | N/A |
| Librispeech dev_clean | 2.9 | 7.7 |
| Librispeech test_clean | 2.7 | 7.7 |
| | Chainer | Pytorch |
|---|---|---|
| Performance | ◎ | ○ |
| Speed | ○ | ◎ |
| VGG-like encoder | Supported | Not supported |
| RNNLM integration | Supported | Supported |
| #Attention types | 3 (no attention, dot, location) | 12 including variants of multihead |
[1] Suyoun Kim, Takaaki Hori, and Shinji Watanabe, "Joint CTC-attention based end-to-end speech recognition using multi-task learning," Proc. ICASSP'17, pp. 4835-4839 (2017)
[2] Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, and Tomoki Hayashi, "Hybrid CTC/Attention Architecture for End-to-End Speech Recognition," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1240-1253, Dec. 2017