Fixed-MAML: Speech Emotion Recognition. This repository is inspired by the MAML Pytorch implementation.
- Download the EmoFilm Dataset.
- Split the dataset based on language and emotion (a sketch of one way to script this follows the list).
- Generate data for the silence class (see the sketch after this list) and download neutral-class data from the EmoDB dataset, or generate it yourself.
- Create a corresponding CSV file to train and run the model (a CSV-building sketch also follows the list).
- Keep all the wav files in data/waveforms// and the CSV files in data/.
- After that, run train.py with the paths updated to point at your CSV and data locations.
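
The language/emotion split can be scripted. The sketch below is only an illustration under assumptions not stated here: it assumes each EmoFilm wav filename encodes a language and an emotion, and the `parse_language` / `parse_emotion` helpers and the `EmoFilm` source folder are hypothetical placeholders to be replaced with the actual naming convention of your copy of the dataset.

```python
import shutil
from pathlib import Path

# Hypothetical helpers: adapt both to the real EmoFilm filename convention.
def parse_language(name: str) -> str:
    # Placeholder: return the language code embedded in the filename.
    return name[-6:-4]

def parse_emotion(name: str) -> str:
    # Placeholder: return the emotion code embedded in the filename.
    return name.split("_")[-1][:3]

def split_by_language_and_emotion(src_dir: str, dst_dir: str) -> None:
    """Copy each wav into dst_dir/<language>/<emotion>/ based on its filename."""
    for wav in Path(src_dir).glob("*.wav"):
        target = Path(dst_dir) / parse_language(wav.name) / parse_emotion(wav.name)
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(wav, target / wav.name)

if __name__ == "__main__":
    # "EmoFilm" is an assumed location for the downloaded corpus.
    split_by_language_and_emotion("EmoFilm", "data/waveforms")
```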
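For the silence class, one simple option is to write short all-zero wav files with the standard library. In this minimal sketch the sample rate, clip length, clip count, and output directory are assumptions, not values taken from this repository.

```python
import wave
from pathlib import Path

def write_silence(path: str, seconds: float = 2.0, sample_rate: int = 16000) -> None:
    """Write a mono 16-bit wav file containing only zeros (silence)."""
    n_frames = int(seconds * sample_rate)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)           # mono
        wf.setsampwidth(2)           # 16-bit samples
        wf.setframerate(sample_rate) # assumed 16 kHz
        wf.writeframes(b"\x00\x00" * n_frames)

if __name__ == "__main__":
    out_dir = Path("data/waveforms/silence")   # assumed layout
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(50):                        # number of clips is arbitrary
        write_silence(str(out_dir / f"silence_{i:03d}.wav"))
```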
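The CSV can then be generated from the directory layout. The column names (`filename`, `label`), the output name `data/train.csv`, and deriving the label from the parent folder are assumptions for illustration; match them to whatever train.py actually expects to read.

```python
import csv
from pathlib import Path

def build_csv(wav_root: str = "data/waveforms", out_csv: str = "data/train.csv") -> None:
    """Write one row per wav file, labelling each file with its parent folder name."""
    rows = [
        {"filename": str(wav), "label": wav.parent.name}
        for wav in sorted(Path(wav_root).rglob("*.wav"))
    ]
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["filename", "label"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    build_csv()
```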