Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding [paper]
This is the PyTorch implementation of the attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery (MI) EEG decoding.
The proposed network is designed to extract multi-modal temporal information and to learn more comprehensive global dependencies. It is composed of the following four parts (see the illustrative sketch after the list):
- Feature extraction module: The multi-modal temporal information is extracted from two distinct perspectives: average and variance.
- Self-attention module: The shared self-attention module is designed to capture global dependencies along these two feature dimensions.
- Convolutional encoder: The convolutional encoder is then designed to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features.
- Classification module: A fully connected (FC) layer finally classifies the features from the convolutional encoder into the given classes.
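
Below is a minimal PyTorch sketch of this four-part pipeline. The module names, layer sizes, kernel lengths, and pooling configuration are illustrative assumptions rather than the authors' exact implementation; the sketch only shows how average-pooled and variance-pooled features can share one self-attention block before being fused by a convolutional encoder and classified by an FC layer.

```python
import torch
import torch.nn as nn


class MultiModalTemporalNet(nn.Module):
    """Illustrative four-part pipeline: feature extraction with average/variance
    pooling, a shared self-attention module, a convolutional encoder, and an FC
    classifier. All hyper-parameters below are assumptions, not the paper's values."""

    def __init__(self, n_channels=22, n_classes=4, d_model=40,
                 pool_len=75, pool_stride=15):
        super().__init__()
        # Feature extraction: temporal then spatial convolution, followed by
        # two pooling branches (average and variance) applied in forward().
        self.temporal_spatial = nn.Sequential(
            nn.Conv2d(1, d_model, (1, 25), padding=(0, 12)),
            nn.Conv2d(d_model, d_model, (n_channels, 1)),
            nn.BatchNorm2d(d_model),
            nn.ELU(),
        )
        self.avg_pool = nn.AvgPool2d((1, pool_len), stride=(1, pool_stride))

        # Shared self-attention: the same block processes both pooled sequences.
        self.attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4)

        # Convolutional encoder: fuses the two modalities (stacked as channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, (1, 5), padding=(0, 2)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((d_model, 8)),
        )
        self.classifier = nn.Linear(16 * d_model * 8, n_classes)

    def _var_pool(self, x):
        # Variance over each temporal window: E[x^2] - (E[x])^2.
        mean = self.avg_pool(x)
        mean_sq = self.avg_pool(x * x)
        return (mean_sq - mean * mean).clamp(min=1e-6)

    def forward(self, x):                            # x: (batch, 1, channels, samples)
        feat = self.temporal_spatial(x)              # (batch, d_model, 1, T)
        avg = self.avg_pool(feat).squeeze(2)         # (batch, d_model, T')
        var = self._var_pool(feat).squeeze(2)        # (batch, d_model, T')

        # nn.MultiheadAttention expects (seq_len, batch, embed_dim) by default.
        avg = avg.permute(2, 0, 1)
        var = var.permute(2, 0, 1)
        avg, _ = self.attn(avg, avg, avg)            # shared weights for both branches
        var, _ = self.attn(var, var, var)

        # Stack the two modalities as input channels of the convolutional encoder.
        fused = torch.stack([avg.permute(1, 2, 0), var.permute(1, 2, 0)], dim=1)
        out = self.encoder(fused)                    # (batch, 16, d_model, 8)
        return self.classifier(out.flatten(1))


if __name__ == "__main__":
    model = MultiModalTemporalNet()
    logits = model(torch.randn(8, 1, 22, 1000))      # e.g. 22-channel, 4 s @ 250 Hz trials
    print(logits.shape)                              # torch.Size([8, 4])
```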
Dependencies:
- PyTorch 1.7
- Python 3.7
- mne 0.23
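
A quick way to confirm that the installed versions match the list above (a minimal check; nothing repository-specific is assumed):

```python
# Print the installed versions of the required packages.
import sys
import torch
import mne

print("Python ", sys.version.split()[0])
print("PyTorch", torch.__version__)
print("mne    ", mne.__version__)
```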
The classification results of the proposed network and other competing architectures are reported in the paper.
If you find this code useful, please cite our paper:
@article{ma2024attention,
  title={Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding},
  author={Ma, Xinzhi and Chen, Weihai and Pei, Zhongcai and Zhang, Yue and Chen, Jianer},
  journal={Computers in Biology and Medicine},
  pages={108504},
  year={2024},
  publisher={Elsevier}
}