ESPnet is an end-to-end speech processing toolkit that mainly focuses on end-to-end speech recognition and end-to-end text-to-speech. ESPnet uses Chainer and PyTorch as its main deep learning engines, and also follows Kaldi-style data processing, feature extraction/format, and recipes to provide a complete setup for speech recognition and other speech processing experiments.
- Hybrid CTC/attention based end-to-end ASR
- Fast/accurate training with CTC/attention multitask training
- CTC/attention joint decoding to boost monotonic alignment decoding
- Encoder: VGG-like CNN + BiRNN (LSTM/GRU) or sub-sampling BiRNN (LSTM/GRU)
- Attention: dot product, location-aware attention, and variants of multi-head attention
- Incorporate RNNLM/LSTMLM trained only with text data
- Batch GPU decoding
- Tacotron2 based end-to-end TTS
- Flexible network architecture thanks to chainer and pytorch
- Kaldi style complete recipe
- Supports a number of ASR recipes (WSJ, Switchboard, CHiME-4/5, Librispeech, TED, CSJ, AMI, HKUST, Voxforge, REVERB, etc.)
- Supports a number of TTS recipes in a manner similar to the ASR recipes (LJSpeech, Librispeech, M-AILABS, etc.)
- Supports speech translation recipes (Fisher Callhome Spanish-to-English, IWSLT'18)
- Supports a speech separation and recognition recipe (WSJ-2mix)
- State-of-the-art performance in several benchmarks (comparable/superior to hybrid DNN/HMM and CTC)
- Flexible front-end processing thanks to kaldiio and HDF5 support
- Tensorboard based monitoring
- Python 2.7+, 3.7+ (Python 3.7+ is mainly supported)
- protocol buffer (required by sentencepiece; install it via your package manager, e.g., sudo apt-get install libprotobuf9v5 protobuf-compiler libprotobuf-dev. See https://github.com/google/sentencepiece/blob/master/README.md for details)
- PyTorch 0.4.1, 1.0.0
- gcc>=4.9 for PyTorch 1.0.0
- Chainer 5.0.0
Optionally, a GPU environment requires the following libraries:
- CUDA 8.0, 9.0, 9.1, 10.0 depending on each DNN library
- cuDNN 6+
- NCCL 2.0+ (for the use of multiple GPUs)
To use CUDA (and cuDNN), make sure to set the paths in your .bashrc or .bash_profile appropriately.
CUDAROOT=/path/to/cuda
export PATH=$CUDAROOT/bin:$PATH
export LD_LIBRARY_PATH=$CUDAROOT/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=$CUDAROOT
export CUDA_PATH=$CUDAROOT
If you want to use multiple GPUs, you should install NCCL and set the paths in your .bashrc or .bash_profile appropriately, for example:
CUDAROOT=/path/to/cuda
NCCL_ROOT=/path/to/nccl
export CPATH=$NCCL_ROOT/include:$CPATH
export LD_LIBRARY_PATH=$NCCL_ROOT/lib/:$CUDAROOT/lib64:$LD_LIBRARY_PATH
export LIBRARY_PATH=$NCCL_ROOT/lib/:$LIBRARY_PATH
export CUDA_HOME=$CUDAROOT
export CUDA_PATH=$CUDAROOT
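Once the paths are set and the Python environment has been installed (see the next section), a quick sanity check that the PyTorch backend can see your GPUs is, for example:
# prints whether CUDA is available and how many GPUs are visible to PyTorch
$ python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"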
Install Python libraries and other required tools with miniconda
$ cd tools
$ make KALDI=/path/to/kaldi
Or use a specified Python with virtualenv:
$ cd tools
$ make KALDI=/path/to/kaldi PYTHON=/usr/bin/python2.7
Or install a specific Python version with miniconda:
$ cd tools
$ make KALDI=/path/to/kaldi PYTHON_VERSION=3.6
As of v0.3.0, miniconda is used for the default installation.
Install Kaldi, Python libraries, and other required tools with miniconda:
$ cd tools
$ make -j 10
Or use a specified Python with virtualenv:
$ cd tools
$ make -j 10 PYTHON=/usr/bin/python2.7
Or install a specific Python version with miniconda:
$ cd tools
$ make PYTHON_VERSION=3.6
To install on a machine that does not have a GPU, just clear the CUPY version as follows:
$ cd tools
$ make CUPY_VERSION='' -j 10
This option works with any of the installation configurations above.
You can check whether the installation succeeded with the following commands:
$ cd tools
$ make check_install
or make check_install --no-cupy
if you do not have a GPU on your machine.
If there are no warnings, you are ready to run the recipes!
If there are problems with the Python libraries, you can re-set up only the Python environment with the following commands:
$ cd tools
$ make clean_python
$ make python
Move to an example directory under the egs directory.
We provide several major ASR benchmarks, including WSJ, CHiME-4, and TED.
The following is an example of performing an ASR experiment with the CMU Census Database (AN4) recipe.
$ cd egs/an4/asr1
Once you have moved to the directory, execute the following main script with the Chainer backend:
$ ./run.sh --backend chainer
or execute the following main script with the PyTorch backend:
$ ./run.sh --backend pytorch
With this main script, you can perform the full procedure of an ASR experiment, including:
- Data download
- Data preparation (Kaldi style, see http://kaldi-asr.org/doc/data_prep.html)
- Feature extraction (Kaldi style, see http://kaldi-asr.org/doc/feat.html)
- Dictionary and JSON format data preparation
- Training based on Chainer or PyTorch
- Recognition and scoring
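If a run stops partway through, the recipe scripts typically define a --stage option (parsed via utils/parse_options.sh), so you can resume from a later step instead of starting over. The stage number below is only an illustration; check the comments at the top of run.sh for the numbering used in each recipe.
# resume from the training stage, assuming the earlier stages have already finished
# (the exact stage numbers differ per recipe; see the comments at the top of run.sh)
$ ./run.sh --backend pytorch --stage 4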
The training progress (loss and accuracy for training and validation data) can be monitored with the following command
$ tail -f exp/${expdir}/train.log
With the default verbose option (=0), it gives you the following information:
epoch iteration main/loss main/loss_ctc main/loss_att validation/main/loss validation/main/loss_ctc validation/main/loss_att main/acc validation/main/acc elapsed_time eps
:
:
6 89700 63.7861 83.8041 43.768 0.731425 136184 1e-08
6 89800 71.5186 93.9897 49.0475 0.72843 136320 1e-08
6 89900 72.1616 94.3773 49.9459 0.730052 136473 1e-08
7 90000 64.2985 84.4583 44.1386 72.506 94.9823 50.0296 0.740617 0.72476 137936 1e-08
7 90100 81.6931 106.74 56.6462 0.733486 138049 1e-08
7 90200 74.6084 97.5268 51.6901 0.731593 138175 1e-08
total [#################.................................] 35.54%
this epoch [#####.............................................] 10.84%
91300 iter, 7 epoch / 20 epochs
0.71428 iters/sec. Estimated time to finish: 2 days, 16:23:34.613215.
In addition, Tensorboard events are automatically logged in the tensorboard/${expname}
folder. Therefore, if you install Tensorboard, you can easily compare several experiments by running
$ tensorboard --logdir tensorboard
and connecting to the given address (default: localhost:6006).
Note that we do not include Tensorboard in the installation in order to keep the installation process simple. Please install it manually (pip install tensorflow; pip install tensorboard) if you want to use Tensorboard.
If you use a GPU in your experiment, set the --ngpu option in run.sh appropriately, e.g.,
# use single gpu
$ ./run.sh --ngpu 1
# use multi-gpu
$ ./run.sh --ngpu 3
# if you want to specify gpus, set CUDA_VISIBLE_DEVICES as follows
# (Note that if you use slurm, this specification is not needed)
$ CUDA_VISIBLE_DEVICES=0,1,2 ./run.sh --ngpu 3
# use cpu
$ ./run.sh --ngpu 0
The default setup uses the CPU (--ngpu 0).
Note that if you want to use multiple GPUs, NCCL must be installed before the setup.
When using multiple GPUs, if training freezes or performance is lower than expected, verify that PCI Express Access Control Services (ACS) are disabled. Longer discussions can be found at: link1 link2 link3. To disable PCI Express ACS, follow the instructions written here. You need root access, or you can ask your administrator to do it for you.
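As a rough check before changing anything, you can inspect the ACS capability flags of your PCIe bridges (a generic Linux sketch, not part of ESPnet; lspci typically needs root to display these fields):
# list the ACS control fields of all PCIe devices; '+' flags such as "SrcValid+"
# indicate that ACS is enabled on that device
$ sudo lspci -vvv | grep -i "ACSCtl"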
Go to docker/ and follow the README.md instructions there.
Change cmd.sh according to your cluster setup.
If you run experiments on your local machine, please use the default cmd.sh.
For more information about cmd.sh, see http://kaldi-asr.org/doc/queue.html.
It supports Grid Engine (queue.pl), SLURM (slurm.pl), etc.
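For illustration, a cmd.sh for a SLURM cluster might contain something like the following (the variable names follow the Kaldi convention; the exact variables and options in your cmd.sh and site setup may differ):
# hypothetical cmd.sh fragment for a SLURM cluster; adjust memory/GPU options to your site
export train_cmd="slurm.pl --mem 2G"
export cuda_cmd="slurm.pl --mem 2G --gpu 1"
export decode_cmd="slurm.pl --mem 4G"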
If you have the following error (or other numpy related errors),
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
Exception in main training loop: numpy.core.multiarray failed to import
Traceback (most recent call last):
:
:
from . import _path, rcParams
ImportError: numpy.core.multiarray failed to import
Then, please reinstall matplotlib with the following command:
$ cd egs/an4/asr1
$ . ./path.sh
$ pip install pip --upgrade; pip uninstall matplotlib; pip --no-cache-dir install matplotlib
ESPnet can completely switch the mode among CTC, attention, and hybrid CTC/attention:
# hybrid CTC/attention (default)
# --mtlalpha 0.5 and --ctc_weight 0.3 in most cases
$ ./run.sh
# CTC mode
$ ./run.sh --mtlalpha 1.0 --ctc_weight 1.0 --recog_model model.loss.best
# attention mode
$ ./run.sh --mtlalpha 0.0 --ctc_weight 0.0
The CTC training mode does not output the validation accuracy, so the optimum model is selected based on its loss value (i.e., --recog_model model.loss.best).
For the effectiveness of hybrid CTC/attention during training and recognition, see [2] and [3].
We list the character error rate (CER) and word error rate (WER) of major ASR tasks.
| Task | CER (%) | WER (%) |
| --- | --- | --- |
| WSJ dev93 | 3.2 | 7.0 |
| WSJ eval92 | 2.1 | 4.7 |
| CSJ eval1 | 6.6 | N/A |
| CSJ eval2 | 4.8 | N/A |
| CSJ eval3 | 5.0 | N/A |
| Aishell dev | 6.8 | N/A |
| Aishell test | 8.0 | N/A |
| HKUST train_dev | 28.8 | N/A |
| HKUST dev | 27.4 | N/A |
| Librispeech dev_clean | N/A | 4.0 |
| Librispeech test_clean | N/A | 4.0 |
Note that the performance of the CSJ, HKUST, and Librispeech tasks was significantly improved by using a wide network (#units = 1024) and, where necessary, large subword units, following the setup reported by RWTH.
| | Chainer | PyTorch |
| --- | --- | --- |
| Performance | ◎ | ◎ |
| Speed | ○ | ◎ |
| Multi-GPU | supported | supported |
| VGG-like encoder | supported | supported |
| RNNLM integration | supported | supported |
| #Attention types | 3 (no attention, dot, location) | 12 including variants of multi-head |
| TTS recipe support | no support | supported |
[1] Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai, "ESPnet: End-to-End Speech Processing Toolkit," Proc. Interspeech'18, pp. 2207-2211 (2018)
[2] Suyoun Kim, Takaaki Hori, and Shinji Watanabe, "Joint CTC-attention based end-to-end speech recognition using multi-task learning," Proc. ICASSP'17, pp. 4835-4839 (2017)
[3] Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey and Tomoki Hayashi, "Hybrid CTC/Attention Architecture for End-to-End Speech Recognition," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1240-1253, Dec. 2017
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={ESPnet: End-to-End Speech Processing Toolkit},
year=2018,
booktitle={Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}