
keypy's Introduction

Note: This package has a new home and maintainer. Please follow the project at its new address.


The KEY Institute for Brain-Mind Research EEG Analysis Toolbox

This package contains functionality for EEG preprocessing, microstate analysis, spectral analysis, and statistics.

Installation

Install the package from the command line with:

sudo pip install keypy

On Windows, install the executable from PyPI:

https://pypi.python.org/pypi/keypy

Python3 Setup

To get the additional required libraries, please run the following command in your virtual environment:

pip install -r requirements.txt

Citation

If you use this package, please cite:

  • P. Milz, P.L. Faber, D. Lehmann, T. Koenig, K. Kochi, R.D. Pascual-Marqui, The functional significance of EEG microstates – Associations with modalities of thinking, NeuroImage, Available online 15 August 2015, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2015.08.023.

  • P. Milz (2015). keypy: The KEY EEG Analysis Toolbox. Zenodo. doi:10.5281/zenodo.29120.

Development

This repository is no longer maintained. Should you or your institute be interested in taking over the project, please contact the institute coordinator of the KEY Institute for Brain-Mind Research at [email protected].

If you want to contribute to the source code, you can find the repository on GitHub:

https://github.com/keyinst/keypy

Developer

  • Patricia Milz

Contributors

  • Christian Oberholzer
  • Stephan Gerhard
  • Pascal L Faber
  • Thomas Koenig
  • Dietrich Lehmann

I am most thankful to all contributors!


keypy's Issues

About the difference between ERP and EEG

Hi, in your example code it says

### Warning: Do not change ERP = False. Keypy has only been optimized to work with EEG and not ERP data.

I read your function "find_modmaps" and learned that if ERP is set to True, the signed value of the covariance matrix is used (line 361 in modelmaps.py). Could you tell me why the unsigned value should be used for EEG but the signed value for ERP?
Moreover, when I set ERP=True to analyse ERP data, the process of computing the model maps became extremely slow. Does this mean keypy can't be used to analyse ERP data?
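My understanding of the difference is roughly the following (just a sketch of the polarity handling, not keypy's actual find_modmaps code):

import numpy as np

def best_map(sample, maps, erp=False):
    # sample: (n_ch,) EEG topography; maps: (n_maps, n_ch) model maps.
    # For spontaneous EEG, map polarity is usually ignored, so the absolute
    # value of the dot product is used; for ERP data the sign is kept.
    cov = maps @ sample                  # dot product of the sample with each map
    score = cov if erp else np.abs(cov)  # ignore polarity for EEG
    best = int(np.argmax(score))
    return best, score[best]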

About the way that explained variance is calculated

Hello, I'm using your package for microstate analysis, but I'm confused by your code for explained variance. I'm wondering if you use the same method as in this paper:

Pascual-Marqui, R. D., Michel, C. M., & Lehmann, D. (1995). Segmentation of brain electrical activity into microstates: model estimation and validation. IEEE Transactions on Biomedical Engineering, 42(7), 658-665.

I wrote my own code according to the above paper, but obtained a different result from yours. Moreover, I created a dataset in which every sample point matches one of the model maps perfectly. Using your method, the explained variance is larger than 1 (1.84, to be exact). My simulation is as follows: I copied part of your code from the function find_modmaps in modelmaps.py and created a small dataset (T = 9, n_ch = 5, number of model maps = 3). When I used my own code, the result is exactly 1.

import numpy as np
from math import sqrt

nch = 5  # number of channels

# three model maps, shape (n_maps, n_ch)
model = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                  [0.0, 1.0, 0.0, 1.0, 0.0],
                  [5.0, 4.0, 3.0, 2.0, 1.0]])

# normalize each model map to unit vector norm
b = np.sum(np.abs(model)**2, axis=-1)**(1./2)
for col in range(nch):
    model[:, col] = model[:, col] / b

# dataset in which every sample matches one model map exactly
org_data = np.vstack((model, model))
eeg = org_data

# covariance of each sample with each map, best absolute loading per sample
covm_all = np.dot(org_data, model.T)
loading_all = abs(covm_all).max(axis=1)
b_loading_all = loading_all / sqrt(nch)

exp_var_tot = sum(b_loading_all) / sum(eeg.std(axis=1))  # gives about 1.84

According to the 1995 paper, the explained variance should not be larger than 1. Could you tell me whether some other definition is applied here? I would really appreciate it if you could kindly help with this issue.
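For comparison, the definition I had in mind is along the lines of the global explained variance (GEV) commonly used in the microstate literature, which is bounded by 1; I am not sure whether this is what keypy intends:

import numpy as np

def gev(data, maps, labels):
    # data: (n_t, n_ch) average-referenced EEG; maps: (n_maps, n_ch) unit-norm
    # model maps; labels: (n_t,) index of the map assigned to each sample.
    gfp = data.std(axis=1)               # global field power per sample
    assigned = maps[labels]              # assigned map per sample
    # spatial correlation between each sample and its assigned map
    corr = np.sum(data * assigned, axis=1) / (
        np.linalg.norm(data, axis=1) * np.linalg.norm(assigned, axis=1))
    return np.sum((gfp * corr) ** 2) / np.sum(gfp ** 2)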

There's another small issue: in example/run_example_analysis1.py there are some explanations about "Series_1"-"Series_5" (lines 111-139). I noticed that "Series_1" and "Series_4" are the same, and that "Series_4" doesn't correspond to the functions in "microstates/meanmods.py".

nanmean from scipy.stats

Dear keypy author(s),

in microstates/parameters.py and microstates/parameters_provider.py you are trying to import nanmean from scipy.stats, but in newer versions of scipy the nanmean function is deprecated and is provided by the numpy package instead. Hence, when I install your package, I need to manually rewrite the two mentioned files to:
from numpy import sqrt, nanmean

Could you change the code?
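As a suggestion, a backwards-compatible import along these lines might work (just a sketch, not keypy code):

try:
    from scipy.stats import nanmean   # available in older scipy releases
except ImportError:
    from numpy import nanmean         # newer scipy removed it; numpy provides it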

thanks a lot!
Best,

nikola

Question about the calculation of microstate duration

Hi Dr Milz,

I have noticed that the repository is no longer maintained, but I just want to ask a simple question about the calculation of microstate duration.

I have taken a look at parameters.py and noticed that the code seems to compute the microstate duration by assuming an unchanged state between any two GFP peaks. Did I understand it correctly? (I have sketched my reading of this below.)

I have read the following paper about the modified k-means algorithm and the proposed method for temporal smoothing; the paper describes an approach somewhat similar to penalized least squares.

Pascual-Marqui, R. D., Michel, C. M., & Lehmann, D. (1995). Segmentation of brain electrical activity into microstates: model estimation and validation. IEEE Transactions on Biomedical Engineering, 42(7), 658-665.

I understand there is no perfect way, but I wondered which smoothing method would be the better one. The assumption of an unchanged state between two GFP peaks seems to be a big one, but I am not sure.

Thank you very much.
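For reference, this is how I read the duration estimate, assuming labels are assigned only at GFP peaks and each label persists until the midpoint between neighbouring peaks (my own sketch, not keypy's parameters.py code):

import numpy as np

def durations_from_gfp_peaks(peak_samples, peak_labels, sfreq):
    # peak_samples: sorted sample indices of GFP peaks (1-D array)
    # peak_labels:  microstate label at each peak
    # sfreq:        sampling rate in Hz
    peak_samples = np.asarray(peak_samples, dtype=float)
    midpoints = (peak_samples[:-1] + peak_samples[1:]) / 2.0
    starts = np.concatenate(([peak_samples[0]], midpoints))
    ends = np.concatenate((midpoints, [peak_samples[-1]]))

    durations = {}          # label -> list of segment durations in ms
    seg_start = starts[0]
    for i in range(1, len(peak_labels) + 1):
        # close a segment whenever the label changes (or at the last peak)
        if i == len(peak_labels) or peak_labels[i] != peak_labels[i - 1]:
            seg_ms = (ends[i - 1] - seg_start) / sfreq * 1000.0
            durations.setdefault(peak_labels[i - 1], []).append(seg_ms)
            if i < len(peak_labels):
                seg_start = starts[i]
    return durations

# e.g. durations_from_gfp_peaks([10, 30, 55, 90], ['A', 'A', 'B', 'B'], 250.0)
# -> {'A': [130.0], 'B': [190.0]}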

a few questions on KeyPy

Dear Dr. Milz,

I am working in Hertfordshire, and have two quick questions on KeyPy.

  1. I am not so familiar with Python array operations, but what KeyPy calculates as the explained variance (in modelmaps.py and processing.py) seems to me to be the explained standard deviation, not the variance. Or am I wrong?

  2. In the PDF with instructions, it is suggested that the boxkeyfiltering step can be commented out. When I do this, however, the program ends with an execution error. Apparently this procedure generates objects that are needed further on?

Many thanks in advance,
Reinoud Maex
