At the moment, we only use the text modality to classify the emotion of each utterance. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
- An x86-64 Unix or Unix-like machine
- Python 3.7 or higher
- multimodal-datasets repo (submodule)
First configure the hyperparameters and the dataset in `train-erc-text.yaml`.
Then run the commands below in this directory. We recommend running them in a virtualenv.
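A minimal virtualenv setup (standard Python tooling, not specific to this repo) might look like:

```sh
python3 -m venv venv          # create an isolated environment
source venv/bin/activate      # activate it (Unix-like shells)
```

With the environment active, install the dependencies and start training: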
```sh
pip install -r requirements.txt
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
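As a rough sketch (not the repo's actual code), the driver script plausibly loads the YAML config and then runs the hyperparameter search and the full training in sequence; the config handling below is an assumption for illustration:

```python
# Hedged sketch of what train-erc-text.py might do: load the YAML config,
# run the hyperparameter search, then the full training run.
# This is NOT the repo's actual code; it only illustrates the call order.
import subprocess

import yaml

with open("train-erc-text.yaml") as f:
    config = yaml.safe_load(f)  # e.g. dataset choice, seeds, context sizes
print(f"Loaded config: {config}")

# Search for good hyperparameters first, then train on the full data.
subprocess.run(["python", "train-erc-text-hp.py"], check=True)
subprocess.run(["python", "train-erc-text-full.py"], check=True)
```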
| Model | Context | MELD | IEMOCAP |
|---|---|---|---|
| EmoBERTa | No past and future utterances | 63.46 | 56.09 |
| EmoBERTa | Only past utterances | 64.55 | 68.57 |
| EmoBERTa | Only future utterances | 64.23 | 66.56 |
| EmoBERTa | Both past and future utterances | 65.61 | 67.42 |
| EmoBERTa | Both past and future utterances, without speaker names | 65.07 | 64.02 |
The numbers above are weighted F1 scores (%), each averaged over five runs with different random seeds.
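To make the context settings in the table concrete, here is a hedged sketch of how a speaker-aware input can be assembled by prepending speaker names and concatenating a window of past and future utterances. The function, parameter names, and separators below are illustrative assumptions, not the repo's exact formatting:

```python
# Illustrative sketch (not the repo's code) of speaker-aware context building:
# speaker names are prepended to utterances, and a window of past/future
# utterances is concatenated around the current one.
from typing import List


def build_input(utterances: List[str], speakers: List[str], idx: int,
                num_past: int = 2, num_future: int = 2,
                use_speaker_names: bool = True) -> str:
    def fmt(i: int) -> str:
        if use_speaker_names:
            return f"{speakers[i]}: {utterances[i]}"
        return utterances[i]

    past = [fmt(i) for i in range(max(0, idx - num_past), idx)]
    future = [fmt(i)
              for i in range(idx + 1,
                             min(len(utterances), idx + 1 + num_future))]
    # A real implementation would likely use the tokenizer's separator
    # tokens; a plain space join is used here purely for illustration.
    return " ".join(past + [fmt(idx)] + future)


# Classify Ross's utterance with one past and one future utterance as context.
print(build_input(["How are you?", "I'm fine.", "Great!"],
                  ["Joey", "Ross", "Joey"],
                  idx=1, num_past=1, num_future=1))
# -> "Joey: How are you? Ross: I'm fine. Joey: Great!"
```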
If you want to see more training and evaluation details, check out `./results/`.
If you want to download the trained checkpoints, you can get them here. Note that it's a pretty big zip file.
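Assuming the archive contains a standard Hugging Face checkpoint directory (an assumption; inspect the unzipped files to confirm), loading it for inference might look like:

```python
# Hedged sketch: load a fine-tuned RoBERTa-based classifier with Hugging Face
# transformers. The directory path is a placeholder for the unzipped files.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "path/to/unzipped/checkpoint"  # hypothetical location
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
model.eval()  # inference mode
```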
The best way to find and solve your problems is to check the GitHub issues tab. If you can't find what you're looking for, feel free to open an issue. We are quite responsive.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Check out the paper:
```bibtex
@misc{kim2021emoberta,
    title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa},
    author={Taewoon Kim and Piek Vossen},
    year={2021},
    eprint={2108.12009},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```