xiangzhang1015 / deep-learning-for-bci

Resources for Book: Deep Learning for EEG-based Brain-Computer Interface: Representations, Algorithms and Applications

License: MIT License

eeg bci deep-learning machine-learning datamining artificial-intelligence


Deep Learning for Brain-Computer Interface (BCI)

Update

The fully processed, DL-ready data for all 109 subjects from EEGMMIDB have been uploaded!

Overview

This tutorial contains runnable Python and Jupyter notebook code, together with benchmark datasets, for learning how to recognize brain signals with deep learning models. It accompanies our survey on DL-based noninvasive brain signals and our book on DL-based BCI: Representations, Algorithms and Applications.

  • We present a new taxonomy of BCI signal paradigms according to the acquisition methods. ECOG: Electrocorticography, EEG: Electroencephalography, fNIRS: functional near-infrared spectroscopy, fMRI: functional magnetic resonance imaging, EOG: Electrooculography, MEG: Magnetoencephalography.

  • We systemically introduce the fundamental knowledge of deep learning models. MLP: Multi-Layer Perceptron, RNN: Recurrent Neural Networks, CNN: Convolutional Neural Networks, LSTM: Long Short-Term Memory, GRU: Gated Recurrent Units, AE: Autoencoder, RBM: Restricted Boltzmann Machine, DBN: Deep Belief Networks, VAE: Variational Autoencoder, GAN: Generative Adversarial Networks. D-AE denotes Stacked-Autoencoder which refers to the autoencoder with multiple hidden layers. Deep Belief Network can be composed of AE or RBM, therefore, we divided DBN into DBN-AE (stacked AE) and DBN-RBM (stacked RBM).

  • Moreover, guidelines for the investigation and design of BCI systems are provided from the aspects of signal category, deep learning model, and application. The figures below show the distribution of signal types and of DL models across frontier DL-based BCI studies.

[Figures: distribution of signal types; distribution of DL models]



  • Special attention has been given to state-of-the-art studies on deep learning for EEG-based BCI research in terms of algorithms. Specifically, we introduce a number of advanced deep learning algorithms and frameworks aimed at several major issues in BCI, including robust brain signal representation learning, cross-scenario classification, and semi-supervised classification.

  • Furthermore, several novel prototypes of deep learning-based BCI systems are proposed that shed light on real-world applications such as authentication, visual reconstruction, language interpretation, and neurological disorder diagnosis. Such applications can dramatically benefit both healthy individuals and those with disabilities.

BCI Dataset

Collecting brain signals is costly in both money and time. We extensively survey the benchmark datasets applicable to brain signal research and provide 31 public datasets with download links, covering most brain signal types.

| Brain Signals | Dataset | #-Subject | #-Classes | Sampling Rate (Hz) | #-Channels | Download Link |
| --- | --- | --- | --- | --- | --- | --- |
| FM ECoG | BCI-C IV, Data set IV | 3 | 5 | 1000 | 48-64 | Link |
| MI ECoG | BCI-C III, Data set I | 1 | 2 | 1000 | 64 | Link |
| Sleeping EEG | Sleep-EDF: Telemetry | 22 | 6 | 100 | 2 EEG, 1 EOG, 1 EMG | Link |
| Sleeping EEG | Sleep-EDF: Cassette | 78 | 6 | 100, 1 | 2 EEG (100 Hz), 1 EOG (100 Hz), 1 EMG (1 Hz) | Link |
| Sleeping EEG | MASS-1 | 53 | 5 | 256 | 17/19 EEG, 2 EOG, 5 EMG | Link |
| Sleeping EEG | MASS-2 | 19 | 6 | 256 | 19 EEG, 4 EOG, 1 EMG | Link |
| Sleeping EEG | MASS-3 | 62 | 5 | 256 | 20 EEG, 2 EOG, 3 EMG | Link |
| Sleeping EEG | MASS-4 | 40 | 6 | 256 | 4 EEG, 4 EOG, 1 EMG | Link |
| Sleeping EEG | MASS-5 | 26 | 6 | 256 | 20 EEG, 2 EOG, 3 EMG | Link |
| Sleeping EEG | SHHS | 5804 | N/A | 125, 50 | 2 EEG (125 Hz), 1 EOG (50 Hz), 1 EMG (125 Hz) | Link |
| Seizure EEG | CHB-MIT | 22 | 2 | 256 | 18 | Link |
| Seizure EEG | TUH | 315 | 2 | 200 | 19 | Link |
| MI EEG | EEGMMI | 109 | 4 | 160 | 64 | Link |
| MI EEG | BCI-C II, Data set III | 1 | 2 | 128 | 3 | Link |
| MI EEG | BCI-C III, Data set III a | 3 | 4 | 250 | 60 | Link |
| MI EEG | BCI-C III, Data set III b | 3 | 2 | 125 | 2 | Link |
| MI EEG | BCI-C III, Data set IV a | 5 | 2 | 1000 | 118 | Link |
| MI EEG | BCI-C III, Data set IV b | 1 | 2 | 1000 | 118 | Link |
| MI EEG | BCI-C III, Data set IV c | 1 | 2 | 1000 | 118 | Link |
| MI EEG | BCI-C IV, Data set I | 7 | 2 | 1000 | 64 | Link |
| MI EEG | BCI-C IV, Data set II a | 9 | 4 | 250 | 22 EEG, 3 EOG | Link |
| MI EEG | BCI-C IV, Data set II b | 9 | 2 | 250 | 3 EEG, 3 EOG | Link |
| Emotional EEG | AMIGOS | 40 | 4 | 128 | 14 | Link |
| Emotional EEG | SEED | 15 | 3 | 200 | 62 | Link |
| Emotional EEG | DEAP | 32 | 4 | 512 | 32 | Link |
| Others EEG | Open MIIR | 10 | 12 | 512 | 64 | Link |
| VEP | BCI-C II, Data set II b | 1 | 36 | 240 | 64 | Link |
| VEP | BCI-C III, Data set II | 2 | 26 | 240 | 64 | Link |
| fMRI | ADNI | 202 | 3 | N/A | N/A | Link |
| fMRI | BRATS | 65 | 4 | N/A | N/A | Link |
| MEG | BCI-C IV, Data set III | 2 | 4 | 400 | 10 | Link |

To let readers quickly access the dataset and start experimenting, we provide a well-processed, ready-to-use version of the EEG Motor Movement/Imagery Database (EEGMMIDB). This dataset contains 109 subjects whose EEG signals were recorded from 64 channels at a 160 Hz sampling rate. After our cleaning and sorting, each npy file represents one subject: its shape is [N, 65], where the first 64 columns hold the 64 channel features and the last column holds the class label. N varies across subjects but is always either 259520 or 255680; this difference is inherent in the original dataset.
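As a quick sanity check, a per-subject file can be split into features and labels as follows. This is a minimal sketch that builds a synthetic array with the documented [N, 65] shape instead of loading a real file; the actual filenames in the repo may differ.

```python
import numpy as np

# In practice you would load one subject's file from the dataset folder, e.g.:
#   data = np.load('some_subject.npy')  # hypothetical path; see the repo for actual names
# Here we build a synthetic array with the documented shape [N, 65] instead.
rng = np.random.default_rng(0)
N = 1000
features = rng.standard_normal((N, 64)).astype(np.float32)   # 64 channel features
labels = rng.integers(0, 4, size=(N, 1)).astype(np.float32)  # class label column
data = np.hstack([features, labels])

X = data[:, :64]  # first 64 columns: channel features
y = data[:, 64]   # last column: class label
print(X.shape, y.shape)  # (1000, 64) (1000,)
```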

Running the code

In our tutorial files, you will learn the pipeline and workflow of a BCI system, including data acquisition, pre-processing, feature extraction (optional), classification, and evaluation. We present the necessary references and runnable code for the most typical deep learning models (GRU, LSTM, CNN, GNN), taking advantage of temporal, spatial, and topographical dependencies. We also provide handy Python scripts. For example, to check the EEG classification performance of the CNN, run:

```
python 4-2_CNN.py
```
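For orientation, here is a minimal sketch of the kind of CNN such a script trains. This is an illustrative toy model under assumed hyperparameters, not the actual architecture in 4-2_CNN.py: it treats each EEG sample as a length-64 sequence of channel features and classifies it into one of four motor-imagery classes.

```python
import torch
import torch.nn as nn

class TinyEEGCNN(nn.Module):
    """Toy CNN for single-timestep EEG samples of shape [batch, 1, 64].
    Hypothetical architecture for illustration only."""
    def __init__(self, n_channels=64, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),  # learn local patterns over channels
            nn.ReLU(),
            nn.MaxPool1d(2),                            # halve the feature length: 64 -> 32
            nn.Flatten(),
            nn.Linear(8 * (n_channels // 2), n_classes),
        )

    def forward(self, x):  # x: [batch, 1, n_channels]
        return self.net(x)

model = TinyEEGCNN()
logits = model(torch.randn(16, 1, 64))  # a dummy mini-batch of 16 samples
print(logits.shape)  # torch.Size([16, 4])
```

In a real run you would feed mini-batches drawn from the [N, 65] subject files, train with cross-entropy loss, and evaluate on a held-out split.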

For PyTorch beginners, we highly recommend Morvan Zhou's PyTorch Tutorials.

Chapter resources

For the algorithms and applications introduced in the book, we provide the necessary implementation code (TensorFlow version):

Citing

If you find our work useful for your research, please consider citing our survey or book:

@article{zhang2020survey,
  title={A survey on deep learning-based non-invasive brain signals: recent advances and new frontiers},
  author={Zhang, Xiang and Yao, Lina and Wang, Xianzhi and Monaghan, Jessica JM and Mcalpine, David and Zhang, Yu},
  journal={Journal of Neural Engineering},
  year={2020},
  publisher={IOP Publishing}
}

@book{zhang2021deep,
  title={Deep Learning for EEG-based Brain-Computer Interface: Representations, Algorithms and Applications},
  author={Zhang, Xiang and Yao, Lina},
  year={2021},
  publisher={World Scientific Publishing}
}

Requirements

The tutorial codes are tested to work under Python 3.7.

Recent versions of PyTorch, torch-geometric, NumPy, and SciPy are required. All required packages can be installed with:

```
pip install -r requirements.txt
```

Note: for torch-geometric and its related dependencies (e.g., cluster, scatter, sparse), newer versions may work but have not been tested yet.

Miscellaneous

Please send any questions you might have about the code and/or the algorithm to [email protected].

License

This tutorial is licensed under the MIT License.

deep-learning-for-bci's People

Contributors: xiangzhang1015, ziyu-zui

deep-learning-for-bci's Issues

Conflicting statements in Feature Extraction.ipynb

In tutorial/3-Feature Extraction.ipynb, after normalization in cell 1, BATCH_size is defined as train_fea_norm1.shape[0], but the comment says to use the test data as the batch size. I checked the other tutorials as well; there BATCH_size is correctly defined as test_fea_norm1.shape[0]. Kindly make the necessary change.

new_y in extract function assigns a wrong label, which misleads the model

In the extract function from BCI_functions.ipynb, if the averaged y label of a particular window is a non-integer float (some y labels in the window come from one class and the rest from another during a transition), the window is assigned label 0. Shouldn't it instead be the floor or ceiling of the averaged value?

E.g.: consider the y values of a window as [2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3] => ave_y = 2.56.
This window's y label should be either 2 or 3,
but the function assigns 0 (in the else branch).

Am I missing something here? Please clarify. Thanks!
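For reference, a common alternative to the zero-label fallback discussed in this issue is a per-window majority vote. The sketch below is an illustration of that idea, not the repo's actual extract implementation:

```python
import numpy as np

def window_label(y_window):
    """Assign the most frequent label in the window (majority vote),
    instead of falling back to 0 when the averaged label is not an integer."""
    values, counts = np.unique(y_window, return_counts=True)
    return int(values[np.argmax(counts)])

# Mixed window from the example above: 7 samples of class 2, 9 of class 3.
y = np.array([2] * 7 + [3] * 9)
print(window_label(y))  # 3
```

A majority vote sidesteps the floor-vs-ceiling question entirely: the window takes the label that dominates it, and ties can be broken by a fixed rule (here, the smaller label wins via argmax on the first maximum).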
