This is a Speaker Recognition system with GUI. It started as an SRT project for the course Signal Processing (2013 Fall) at Tsinghua University, but we found it pretty useful!
For more details of this project, please see:
- Our presentation slides
- Our complete report
- SciPy
- scikit-learn
- scikits.talkbox
- pyssp
- PyQt
- PyAudio
- (Optional) bob
See here for instructions on bob core library installation.
See here for bob python bindings. If you install python bindings manually, you may need to install the following in order:
- bob.extension
- bob.blitz
- bob.core
- bob.sp
- bob.ap
Note: We also have our own MFCC implementation, which is used as a fallback when bob is unavailable. However, it is not as efficient as the C implementation in bob.
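To illustrate what such a pure-Python fallback looks like, here is a minimal, NumPy-only MFCC sketch. The frame sizes, filter count, and mel formula below are standard textbook choices, not necessarily this project's exact settings.

```python
# Minimal MFCC sketch (hypothetical parameters, not the project's exact code).
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(hz):
    return 2595.0 * np.log10(1.0 + hz / 700.0)

def mel_to_hz(mel):
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def mfcc(signal, fs=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_ceps=13):
    # 1. Slice the signal into overlapping, Hamming-windowed frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # 3. Triangular mel filterbank spanning 0 .. fs/2.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # 4. Log filterbank energies, then DCT to decorrelate.
    energies = np.log(power @ fbank.T + 1e-10)
    return dct(energies, type=2, axis=1, norm='ortho')[:, :n_ceps]
```

The vectorized framing and matrix multiply keep it reasonably fast, but a C implementation like bob's will still be much quicker on long recordings.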
Run make -C src/gmm to compile our fast GMM implementation (requires gcc >= 4.7). If compiled successfully, it will be used by default.
Voice Activity Detection (VAD):
Feature:
Model:
- Gaussian Mixture Model (GMM)
- Universal Background Model (UBM)
- Continuous Restricted Boltzmann Machine (CRBM)
- Joint Factor Analysis (JFA)
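As a rough sketch of the GMM-based approach, one can fit one Gaussian mixture per enrolled speaker and predict by picking the model with the highest average log-likelihood on the test features. The snippet below uses scikit-learn (already a dependency) and hypothetical function names; it is an illustration of the idea, not this project's actual implementation.

```python
# Hypothetical GMM enroll/predict sketch, not the project's actual code.
import numpy as np
from sklearn.mixture import GaussianMixture

def enroll(features_by_speaker, n_components=2):
    """Fit one GMM per speaker on that speaker's feature vectors."""
    models = {}
    for name, feats in features_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(feats)
        models[name] = gmm
    return models

def predict(models, feats):
    """Return the speaker whose GMM gives the highest mean log-likelihood."""
    return max(models, key=lambda name: models[name].score(feats))
```

UBM adaptation and JFA refine this basic scheme to handle limited enrollment data and channel variability.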
Our GUI not only provides basic functionality for recording, enrollment, training and testing, but also visualizes real-time speaker recognition:
See our demo video (in Chinese) for more details.
usage: speaker-recognition.py [-h] -t TASK -i INPUT -m MODEL

Speaker Recognition Command Line Tool

optional arguments:
  -h, --help            show this help message and exit
  -t TASK, --task TASK  Task to do. Either "enroll" or "predict"
  -i INPUT, --input INPUT
                        Input Files (to predict) or Directories (to enroll)
  -m MODEL, --model MODEL
                        Model file to save (in enroll) or use (in predict)
Wav files in each input directory will be labeled with the basename of that directory.
Note that wildcard inputs should be *quoted*; they are expanded by Python's glob module.
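For illustration, quoting keeps the shell from expanding the wildcards, so the script itself can split the argument and expand each pattern with glob. The helper name below is hypothetical, not taken from the script:

```python
# Hypothetical sketch of expanding a quoted, space-separated wildcard argument.
import glob

def expand_inputs(arg):
    """Split a space-separated input argument and expand each glob pattern."""
    paths = []
    for pattern in arg.split():
        paths.extend(sorted(glob.glob(pattern)))
    return paths
```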
Examples:
Train:
./speaker-recognition.py -t enroll -i "/tmp/person* ./mary" -m model.out
Predict:
./speaker-recognition.py -t predict -i "./*.wav" -m model.out