WaveNet: A Generative Model for Raw Audio

This is a Chainer implementation of WaveNet.

This is the code implemented in this article.

It is not finished yet, but it can already generate audio.

Todo:

  • Generating audio
  • Local conditioning
  • Global conditioning
  • Training on CSTR VCTK Corpus

Training the network

Requirements

  • Chainer 2
  • scipy.io.wavfile

Preprocessing

Downsample your .wav files to 16 kHz or 8 kHz to speed up convergence.
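For example, downsampling can be done with scipy (a rough sketch, not part of this repository; the file names and the target rate are placeholders):

```python
# Rough sketch: downsample a .wav file to 16 kHz with scipy.
# "input.wav" and "input_16k.wav" are placeholders.
from math import gcd

import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

target_rate = 16000
source_rate, data = wavfile.read("input.wav")

# Resample by the rational factor target_rate / source_rate.
g = gcd(target_rate, source_rate)
resampled = resample_poly(data.astype(np.float64),
                          target_rate // g, source_rate // g)

wavfile.write("input_16k.wav", target_rate, resampled.astype(np.int16))
```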

Create the data directory

Add all .wav files to /train_audio/wav, for example as sketched below.
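A rough sketch of collecting the files (source_dir is a placeholder for wherever your audio lives):

```python
# Rough sketch: copy all .wav files into the expected train_audio/wav directory.
import glob
import os
import shutil

source_dir = "/path/to/your/wav/files"  # placeholder
dest_dir = "train_audio/wav"            # directory expected by train.py

os.makedirs(dest_dir, exist_ok=True)
for path in glob.glob(os.path.join(source_dir, "*.wav")):
    shutil.copy(path, dest_dir)
```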

Hyperparameters

You can edit the hyperparameters of the network in model.py before running train.py, or edit /params/params.json after training starts.
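If you edit the JSON after training has started, something like the following works (a rough sketch; the field name shown is only a placeholder, so check the actual keys in params/params.json):

```python
# Rough sketch: tweak params/params.json in place.
# The key "batchsize" is hypothetical, shown for illustration only.
import json

path = "params/params.json"

with open(path) as f:
    params = json.load(f)

params["batchsize"] = 8  # hypothetical field

with open(path, "w") as f:
    json.dump(params, f, indent=2)
```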

Training

Run train.py.

Generating audio

Run generate.py.

Passing --use_faster_wavenet generates audio faster than the original WaveNet.

Listen to a sample generated by WaveNet

🎶 music

Implementation

(Figures illustrating the implementation.)
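As a rough orientation only, and not the repository's actual code, the causal dilated convolution that WaveNet stacks with growing dilation factors could be written in Chainer along these lines:

```python
# Minimal sketch of a causal, dilated convolution over audio shaped
# (batch, channels, 1, time) -- the building block of a WaveNet stack.
import chainer
import chainer.functions as F
import chainer.links as L


class CausalDilatedConv1D(chainer.Chain):
    def __init__(self, in_channels, out_channels, ksize=2, dilation=1):
        super().__init__()
        # Left padding so the output at time t depends only on inputs at times <= t.
        self.padding = (ksize - 1) * dilation
        with self.init_scope():
            self.conv = L.DilatedConvolution2D(
                in_channels, out_channels,
                ksize=(1, ksize), dilate=(1, dilation))

    def __call__(self, x):
        # Pad only the left side of the time axis to keep the convolution causal.
        x = F.pad(x, ((0, 0), (0, 0), (0, 0), (self.padding, 0)), mode='constant')
        return self.conv(x)
```

Stacking such layers with dilations 1, 2, 4, ... gives the exponentially growing receptive field described in the WaveNet paper.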
