EEG-based emotion recognition with a Vision Transformer (ViT) on the DEAP dataset.
This project implements EEG-based emotion recognition with both TensorFlow and PyTorch. Raw EEG signals are preprocessed with the Wavelet Transform methods DWT, CWT, and DTCWT before being fed to the ViT model. With these methods, the emotion recognition test accuracy ranges from 80% to 90%.
View Demo · Report Bug · Request Feature
Table of Contents
This code replicates the methodology described in the paper "Introducing attention mechanism for eeg signals: Emotion recognition with vision transformers" and provides empirical support for the proposed approach. It is based on TensorFlow and fixes numerous issues in the paper's original code. The authors claim 99.4% (Valence) and 99.1% (Arousal) accuracy for their original runs, but those figures do not hold up in practice: with their code, the actual test accuracy is at most 81%, and the CWT variant never reaches the reported 97% (Valence) and 95.75% (Arousal), topping out at just over 60%. I emailed the author, Arjun, three months ago and only received a promise that the program would be updated, so I remain skeptical of the paper's reported results. After reading the program carefully, I found that their approach contains many questionable choices. I therefore tested various Wavelet Transform methods (DTCWT, DWT, and CWT) so that my heavily modified model can process EEG data for emotion recognition. All of these tests achieved a test accuracy of 80% or higher, with the best reaching 85%.
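To illustrate the kind of wavelet preprocessing this project performs, here is a minimal sketch (not the repository's actual code) that turns a single EEG channel into a CWT scalogram with PyWavelets; the random signal, Morlet wavelet, and scale range are placeholder assumptions.

```python
import numpy as np
import pywt

# Placeholder: one EEG channel, 60 s at 128 Hz (DEAP's preprocessed sampling rate).
# In practice this would come from the DEAP .mat files.
fs = 128
signal = np.random.randn(60 * fs)

# Continuous Wavelet Transform with a Morlet wavelet; the scale range is an
# illustrative assumption, not the values used by this repository's scripts.
scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)

# The coefficient magnitudes form a 2-D time-frequency map (scales x time)
# that can be resized and fed to a Vision Transformer like any other image.
scalogram = np.abs(coeffs)
print(scalogram.shape)  # (64, 7680)
```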
The PyTorch version is based on "lucidrains/vit-pytorch" and re-implements the same model as the TensorFlow version, resulting in clearer and more concise code.
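For orientation, this is roughly how a ViT classifier can be instantiated with lucidrains/vit-pytorch; every hyperparameter below (input size, patch size, depth, and so on) is an illustrative assumption, not the configuration tuned for this repository.

```python
import torch
from vit_pytorch import ViT

# Hypothetical configuration: a 64x64 time-frequency map with a single channel
# is treated as the input "image". All numbers are placeholders.
model = ViT(
    image_size=64,
    patch_size=8,
    num_classes=2,      # e.g. high/low valence
    dim=128,
    depth=6,
    heads=8,
    mlp_dim=256,
    channels=1,
    dropout=0.1,
    emb_dropout=0.1,
)

x = torch.randn(4, 1, 64, 64)   # batch of 4 dummy feature maps
logits = model(x)               # shape: (4, 2)
print(logits.shape)
```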
This directory contains the CWT, DTCWT, and DWT preprocessing programs for the DEAP Matlab data, which extract features including PSD, DE, MAE, DFA, etc., in the δ, γ, β, α, and θ bands; all functions are included in Processing_mat_xwt.py and Processing_xwt.py. Also included are three preprocessing scripts: processing_CWT.m (the only Matlab file, modified from "Introducing attention mechanism for eeg signals: Emotion recognition with vision transformers"), processing_DTCWT.py, and processing_DWT.py (the latter two written by myself).
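As a rough sketch of band-wise feature extraction (not this repository's exact implementation), a multi-level DWT can split a 128 Hz EEG channel into the standard frequency bands, after which differential entropy (DE) is computed per band; the db4 wavelet, the decomposition level, and the band mapping below are assumptions.

```python
import numpy as np
import pywt

def band_de_features(signal, wavelet="db4", level=4):
    """Differential entropy per DWT sub-band for a single EEG channel.

    With a 128 Hz signal and a 4-level DWT, the coefficients [A4, D4, D3, D2, D1]
    roughly correspond to the delta, theta, alpha, beta, and gamma bands
    (an approximate mapping, assumed here for illustration).
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # DE of a Gaussian-distributed band: 0.5 * log(2 * pi * e * variance)
    return np.array([0.5 * np.log(2 * np.pi * np.e * np.var(c)) for c in coeffs])

fs = 128
x = np.random.randn(60 * fs)   # placeholder channel
print(band_de_features(x))     # 5 values: delta, theta, alpha, beta, gamma
```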
The next stage is to research translating EEG signals into human-understandable text, images, or video.
Major Frameworks/Libraries
- python3.8 or above
- For the TensorFlow version: install tensorflow-gpu
- For the PyTorch version: install PyTorch (with cuDNN)
- Install all the relevant libraries
- Run Processing_mat_xwt.py (a minimal sketch of inspecting the DEAP .mat input follows this list)
- Run Processing_xwt.py
- Run Runner.py
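Before running the preprocessing scripts, the DEAP preprocessed Matlab files need to be available locally. The snippet below is only a hedged sketch of inspecting one subject's file with SciPy; the path is a placeholder, and the array shapes follow the public DEAP documentation rather than this repository's code.

```python
import scipy.io as sio

# Placeholder path: point this at your local copy of DEAP's
# "data_preprocessed_matlab" files (s01.mat ... s32.mat).
mat = sio.loadmat("data_preprocessed_matlab/s01.mat")

data = mat["data"]      # (40 trials, 40 channels, 8064 samples) per the DEAP docs
labels = mat["labels"]  # (40 trials, 4 ratings: valence, arousal, dominance, liking)

# Binarize valence at the midpoint of the 1-9 rating scale (a common convention,
# assumed here for illustration rather than taken from this repository).
valence = (labels[:, 0] > 5).astype(int)
print(data.shape, labels.shape, valence[:5])
```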
- The next stage is to research translating EEG signals into human-understandable text, images, or video.
Renhong Zhang
GitHub: @renhong-zhang
There are lots of ways to support me! I would be so happy if you give this repository a ⭐️ and tell your friends about this little corner of the Internet.
- AniketRajpoot/Emotion-Recognition-Transformers (SOTA methods for performing emotion classification using Transformers): Arjun, Aniket Singh Rajpoot, and Mahesh Raveendranatha Panicker, "Introducing attention mechanism for eeg signals: Emotion recognition with vision transformers," 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), IEEE, 2021.
- lucidrains/vit-pytorch (implementation of the Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch): the PyTorch version of this project is based on this ViT implementation.
Copyright © 2022-present, Renhong Zhang