Implementation of "MOSNet: Deep Learning based Objective Assessment for Voice Conversion" https://arxiv.org/abs/1904.08352
- tensorflow-gpu==2.0.0-beta1 (cudnn=7.6.0)
- scipy
- pandas
- matplotlib
- librosa
For example:

```bash
conda create -n mosnet python=3.5
conda activate mosnet
pip install -r requirements.txt
```
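For reference, a `requirements.txt` consistent with the dependency list above might look like the following. The only version pin is the one stated above; the `h5py` entry is an assumption based on the .h5 feature files used in the steps below.

```
tensorflow-gpu==2.0.0-beta1
scipy
pandas
matplotlib
librosa
h5py  # assumed: used for the .h5 feature files in the steps below
```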
- Run `cd ./data` and then `bash download.sh` to download the VCC2018 evaluation results and submitted speech. (Downsampling the submitted speech may take some time.)
- Run `python mos_results_preprocess.py` to prepare the evaluation results. (Run `python bootsrap_estimation.py` to do the bootstrap experiment for the intrinsic MOS calculation; see the bootstrap sketch after this list.)
- Run `python utils.py` to extract the .wav files into .h5 feature files (see the feature-extraction sketch after this list).
- Run `python train.py --model CNN-BLSTM` to train a CNN-BLSTM version of MOSNet. ('CNN', 'BLSTM', and 'CNN-BLSTM' are supported in model.py, as described in the paper; a simplified model sketch follows this list.)
- Run `python test.py` to run the test with the specified model and pre-trained weights (see the inference sketch after this list).
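For intuition about the bootstrap step, here is a minimal sketch of resampling listener scores to estimate how stable the ground-truth MOS is. The CSV name and column are hypothetical placeholders, not `bootsrap_estimation.py`'s actual interface.

```python
import numpy as np
import pandas as pd

# Hypothetical layout: one row per listener rating, with a "MOS" column.
ratings = pd.read_csv("vcc2018_ratings.csv")["MOS"].to_numpy()

rng = np.random.default_rng(0)
boot_means = [
    rng.choice(ratings, size=ratings.size, replace=True).mean()
    for _ in range(1000)  # resample the ratings with replacement
]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"MOS = {ratings.mean():.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```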
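The following is a minimal sketch of the .wav-to-.h5 extraction, assuming magnitude-spectrogram features (the paper uses 257-bin spectrograms). The STFT parameters, file names, and the `mag_sgram` dataset key are assumptions, not necessarily what utils.py does.

```python
import h5py
import librosa
import numpy as np

# Assumed parameters: 16 kHz audio, 512-point FFT, 256-sample hop,
# which yields 257 frequency bins per frame.
wav, sr = librosa.load("example.wav", sr=16000)
sgram = np.abs(librosa.stft(wav, n_fft=512, hop_length=256)).T  # (frames, 257)

with h5py.File("example.h5", "w") as f:
    f.create_dataset("mag_sgram", data=sgram.astype(np.float32))
```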
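As a rough illustration of the CNN-BLSTM variant, here is a simplified Keras sketch: convolutions summarize the spectrogram frame by frame, a BLSTM models the frame sequence, and per-frame scores are averaged into one utterance-level MOS. Layer sizes are illustrative only; the actual model.py differs (more conv blocks, dropout, and a frame-level loss in addition to the utterance-level one).

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Input: a variable-length magnitude spectrogram, (frames, 257) per utterance.
inp = keras.Input(shape=(None, 257))
x = layers.Reshape((-1, 257, 1))(inp)  # add a channel axis for Conv2D
x = layers.Conv2D(16, (3, 3), strides=(1, 3), padding="same", activation="relu")(x)
x = layers.Conv2D(32, (3, 3), strides=(1, 3), padding="same", activation="relu")(x)
x = layers.Reshape((-1, 29 * 32))(x)  # 257 -> 86 -> 29 bins; fold bins x channels
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
frame_score = layers.TimeDistributed(layers.Dense(1))(x)  # per-frame score
utt_score = layers.GlobalAveragePooling1D()(frame_score)  # utterance-level MOS
model = keras.Model(inp, utt_score)
model.compile(optimizer="adam", loss="mse")
model.summary()
```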
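And a minimal inference sketch in the same spirit, reusing `model` from the sketch above; the weights path, file name, and dataset key are assumptions, not test.py's actual interface.

```python
import h5py
import numpy as np

# Load trained weights (the path is a placeholder) and score one utterance
# extracted in the utils.py step.
model.load_weights("pre_trained/cnn_blstm.h5")

with h5py.File("example.h5", "r") as f:
    sgram = f["mag_sgram"][:]                  # (frames, 257)

score = model.predict(sgram[np.newaxis, ...])  # add a batch axis
print(f"Predicted MOS: {float(score):.2f}")
```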
The model is trained on the large listening evaluation results released by the Voice Conversion Challenge 2018.
The listening test results can be downloaded from here.
The databases and results (submitted speech) can be downloaded from here.