
dynamicsandneuralsystems / pyspi

Stars: 197 | Watchers: 7 | Forks: 25 | Size: 59.52 MB

Comparative analysis of pairwise interactions in multivariate time series.

Home Page: https://time-series-features.gitbook.io/pyspi/

License: GNU General Public License v3.0

Languages: Python 84.34%, MATLAB 15.28%, Dockerfile 0.38%
Topics: time-series, pairwise-interactions, complex-networks, complex-systems, time-series-analysis, multivariate-analysis, multivariate-timeseries

pyspi's People

Contributors

anniegbryant, arianguyen, benfulcher, jmoo2880, olivercliff


pyspi's Issues

Compute only subset of SPIs in Calculator

First of all: great preprint, great package!! It will substantially facilitate the research at our lab!!!

I was wondering how to tell the Calculator to compute only the few SPIs I am interested in, in order to save processing time?

Best,
Lukas
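
For reference, a minimal sketch of computing a reduced SPI set. The subset and configfile arguments to Calculator are assumptions about the installed pyspi version's interface; check the documentation at https://time-series-features.gitbook.io/pyspi/ for what your version supports.

import numpy as np
from pyspi.calculator import Calculator

dataset = np.random.randn(5, 100)  # 5 processes, 100 samples each

# Option 1 (if supported): a named reduced subset of SPIs.
calc = Calculator(dataset=dataset, subset='fast')

# Option 2 (if supported): a custom YAML config listing only the SPIs
# you want, e.g., a trimmed copy of the package's default config.
# calc = Calculator(dataset=dataset, configfile='my_spis.yaml')

calc.compute()
print(calc.table)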

Can't install pyspi - missing pyEDM==1.9.3

Thank you for the great package.

I'm trying to install pyspi under Ubuntu using pip. After cloning the repository and running pip install . from the pyspi directory, the setup stops with the following error:

INFO: pip is looking at multiple versions of pyspi to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement pyEDM==1.9.3 (from pyspi) (from versions: 1.0.1, 1.0.3, 1.0.3.2, 1.10.1.1, 1.10.2.0, 1.10.3.0, 1.11.0.0, 1.12.0.0, 1.12.1.0, 1.12.2.0, 1.13.0.0, 1.13.1.0, 1.14.0.0, 1.14.0.1, 1.14.0.2, 1.14.2.0, 1.14.3.0, 1.15.0.0, 1.15.0.3, 1.15.0.4, 1.15.1.0)
ERROR: No matching distribution found for pyEDM==1.9.3

According to pypi.org, the newest pyEDM version is 1.15.1.0, released Oct 28, 2023.

Where can I find pyEDM==1.9.3?

Thanks

Mac M2 Installation Error

Processing /Users/dereksnow/Sovai/GitHub/SovAI/notebooks/studies/pyspi
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting scikit-learn==1.0.1 (from pyspi-lib==0.4.2)
Using cached scikit-learn-1.0.1.tar.gz (6.6 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error

× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [1527 lines of output]
Partial import of sklearn during the build process.
:128: DeprecationWarning:

    `numpy.distutils` is deprecated since NumPy 1.23.0, as a result
    of the deprecation of `distutils` itself. It will be removed for
    Python >= 3.12. For older Python versions it will remain present.
    It is recommended to use `setuptools < 60.0` for those Python versions.
    For more details, see:
      https://numpy.org/devdocs/reference/distutils_status_migration.html
  
  
  INFO: C compiler: clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.sdk
  
  INFO: compile options: '-c'
  INFO: clang: test_program.c
  INFO: clang objects/test_program.o -o test_program
  INFO: C compiler: clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.sdk
  
  INFO: compile options: '-c'
  extra options: '-fopenmp'
  INFO: clang: test_program.c
  clang: error: unsupported option '-fopenmp'
  /private/var/folders/tj/2qbc2n2x1234_l7b3z5y06740000gn/T/pip-install-se08yce2/scikit-learn_96e7f55684d048fa808c01b4a5ac7177/sklearn/_build_utils/openmp_helpers.py:126: UserWarning:
  
                  ***********
                  * WARNING *
                  ***********
  
  It seems that scikit-learn cannot be built with OpenMP.
  
  - Make sure you have followed the installation instructions:
  
      https://scikit-learn.org/dev/developers/advanced_installation.html
  
  - If your compiler supports OpenMP but you still see this
    message, please submit a bug report at:
  
      https://github.com/scikit-learn/scikit-learn/issues
  
  - The build will continue with OpenMP-based parallelism
    disabled. Note however that some estimators will run in
    sequential mode instead of leveraging thread-based
    parallelism.
  
                      ***
  
    warnings.warn(message)
  Compiling sklearn/__check_build/_check_build.pyx because it changed.
  Compiling sklearn/preprocessing/_csr_polynomial_expansion.pyx because it changed.
  Compiling sklearn/cluster/_dbscan_inner.pyx because it changed.
  Compiling sklearn/cluster/_hierarchical_fast.pyx because it changed.
  Compiling sklearn/cluster/_k_means_common.pyx because it changed.
  Compiling sklearn/cluster/_k_means_lloyd.pyx because it changed.
  Compiling sklearn/cluster/_k_means_elkan.pyx because it changed.
  Compiling sklearn/cluster/_k_means_minibatch.pyx because it changed.
  Compiling sklearn/datasets/_svmlight_format_fast.pyx because it changed.
  Compiling sklearn/decomposition/_online_lda_fast.pyx because it changed.
  Compiling sklearn/decomposition/_cdnmf_fast.pyx because it changed.
  Compiling sklearn/ensemble/_gradient_boosting.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/_gradient_boosting.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/histogram.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/splitting.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/_binning.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/_predictor.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/_loss.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/_bitset.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/common.pyx because it changed.
  Compiling sklearn/ensemble/_hist_gradient_boosting/utils.pyx because it changed.
  Compiling sklearn/feature_extraction/_hashing_fast.pyx because it changed.
  Compiling sklearn/manifold/_utils.pyx because it changed.
  Compiling sklearn/manifold/_barnes_hut_tsne.pyx because it changed.
  Compiling sklearn/metrics/cluster/_expected_mutual_info_fast.pyx because it changed.
  Compiling sklearn/metrics/_pairwise_fast.pyx because it changed.
  Compiling sklearn/neighbors/_ball_tree.pyx because it changed.
  Compiling sklearn/neighbors/_kd_tree.pyx because it changed.
  Compiling sklearn/neighbors/_partition_nodes.pyx because it changed.
  Compiling sklearn/neighbors/_dist_metrics.pyx because it changed.
  Compiling sklearn/neighbors/_typedefs.pyx because it changed.
  Compiling sklearn/neighbors/_quad_tree.pyx because it changed.
  Compiling sklearn/tree/_tree.pyx because it changed.
  Compiling sklearn/tree/_splitter.pyx because it changed.
  Compiling sklearn/tree/_criterion.pyx because it changed.
  Compiling sklearn/tree/_utils.pyx because it changed.
  Compiling sklearn/utils/sparsefuncs_fast.pyx because it changed.
  Compiling sklearn/utils/_cython_blas.pyx because it changed.
  Compiling sklearn/utils/arrayfuncs.pyx because it changed.
  Compiling sklearn/utils/murmurhash.pyx because it changed.
  Compiling sklearn/utils/_fast_dict.pyx because it changed.
  Compiling sklearn/utils/_openmp_helpers.pyx because it changed.
  Compiling sklearn/utils/_seq_dataset.pyx because it changed.
  Compiling sklearn/utils/_weight_vector.pyx because it changed.
  Compiling sklearn/utils/_random.pyx because it changed.
  Compiling sklearn/utils/_logistic_sigmoid.pyx because it changed.
  Compiling sklearn/utils/_readonly_array_wrapper.pyx because it changed.
  Compiling sklearn/svm/_newrand.pyx because it changed.
  Compiling sklearn/svm/_libsvm.pyx because it changed.
  Compiling sklearn/svm/_liblinear.pyx because it changed.
  Compiling sklearn/svm/_libsvm_sparse.pyx because it changed.
  Compiling sklearn/linear_model/_cd_fast.pyx because it changed.
  Compiling sklearn/linear_model/_sgd_fast.pyx because it changed.
  Compiling sklearn/linear_model/_sag_fast.pyx because it changed.
  Compiling sklearn/_isotonic.pyx because it changed.
  warning: sklearn/cluster/_dbscan_inner.pyx:17:5: Only extern functions can throw C++ exceptions.
  warning: sklearn/neighbors/_dist_metrics.pxd:19:64: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
  warning: sklearn/neighbors/_dist_metrics.pxd:29:65: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
  warning: sklearn/neighbors/_dist_metrics.pxd:38:79: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
  warning: sklearn/neighbors/_dist_metrics.pxd:42:79: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
  warning: sklearn/neighbors/_dist_metrics.pxd:61:51: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
  warning: sklearn/neighbors/_dist_metrics.pxd:64:52: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
  warning: sklearn/neighbors/_dist_metrics.pxd:71:68: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
  warning: sklearn/neighbors/_dist_metrics.pxd:73:67: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
  performance hint: sklearn/cluster/_k_means_common.pyx:31:5: Exception check on '_euclidean_dense_dense' will always require the GIL to be acquired. Declare '_euclidean_dense_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_common.pyx:63:5: Exception check on '_euclidean_sparse_dense' will always require the GIL to be acquired. Declare '_euclidean_sparse_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_common.pyx:120:40: Exception check after calling '__pyx_fuse_0_euclidean_dense_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_0_euclidean_dense_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_common.pyx:120:40: Exception check after calling '__pyx_fuse_1_euclidean_dense_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_1_euclidean_dense_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_common.pyx:154:41: Exception check after calling '__pyx_fuse_0_euclidean_sparse_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_0_euclidean_sparse_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_common.pyx:154:41: Exception check after calling '__pyx_fuse_1_euclidean_sparse_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_1_euclidean_sparse_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:336:5: Exception check on '_update_chunk_dense' will always require the GIL to be acquired.
  Possible solutions:
      1. Declare '_update_chunk_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
      2. Use an 'int' return type on '_update_chunk_dense' to allow an error code to be returned.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:571:5: Exception check on '_update_chunk_sparse' will always require the GIL to be acquired.
  Possible solutions:
      1. Declare '_update_chunk_sparse' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
      2. Use an 'int' return type on '_update_chunk_sparse' to allow an error code to be returned.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:88:41: Exception check after calling '__pyx_fuse_0_euclidean_dense_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_0_euclidean_dense_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:93:45: Exception check after calling '__pyx_fuse_0_euclidean_dense_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_0_euclidean_dense_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:88:41: Exception check after calling '__pyx_fuse_1_euclidean_dense_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_1_euclidean_dense_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:93:45: Exception check after calling '__pyx_fuse_1_euclidean_dense_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_1_euclidean_dense_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:164:42: Exception check after calling '__pyx_fuse_0_euclidean_sparse_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_0_euclidean_sparse_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:172:46: Exception check after calling '__pyx_fuse_0_euclidean_sparse_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_0_euclidean_sparse_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:164:42: Exception check after calling '__pyx_fuse_1_euclidean_sparse_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_1_euclidean_sparse_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:172:46: Exception check after calling '__pyx_fuse_1_euclidean_sparse_dense' will always require the GIL to be acquired. Declare '__pyx_fuse_1_euclidean_sparse_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:294:31: Exception check after calling '__pyx_fuse_0_update_chunk_dense' will always require the GIL to be acquired.
  Possible solutions:
      1. Declare '__pyx_fuse_0_update_chunk_dense' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
      2. Use an 'int' return type on '__pyx_fuse_0_update_chunk_dense' to allow an error code to be returned.
  performance hint: sklearn/cluster/_k_means_elkan.pyx:294:31: Exception check after calling '__pyx_fuse_1_update_chunk_dense' will always require the GIL to be acquired.
  Possible solutions:

Could you please provide more details about the config.yaml file?

Hi,
We are trying to analyse some neural time-series data, and pyspi is an excellent tool in my opinion. However, we have run into some problems with unclear parts of the documentation; could you provide more detail?

(1) For transfer entropy, mutual information, and the other jpype1-based SPIs, sub-settings like prop_k, k_history, and l_history differ from IDTxl's documentation (https://pwollstadt.github.io/IDTxl/html/index.html).
(2) For some of the simpler SPIs, what is the meaning of statistic: max/min?

If you could include more descriptions of these settings (and more detail) in the documentation, that would help us a lot. Many thanks for any reply!

Enable indexing into data object like a numpy array

Using the __getitem__ method, we should be able to do the following:

print(data[i])

Rather than

print(data.to_numpy()[i])

This makes a lot of the code neater. Potentially also enable __setitem__, but that might be dubious.
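
A minimal sketch of the proposed feature, assuming a Data class that stores its array internally and exposes to_numpy(); attribute and method names on the real pyspi Data class may differ.

import numpy as np

class Data:
    def __init__(self, array):
        self._array = np.asarray(array)

    def to_numpy(self):
        return self._array

    def __getitem__(self, key):
        # Delegate indexing straight to the underlying ndarray,
        # so data[i] behaves exactly like data.to_numpy()[i].
        return self._array[key]

data = Data(np.random.randn(3, 100))
print(data[0])  # instead of print(data.to_numpy()[0])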

Only allow one format for inputting data

Including the dim_order option that allows users to modify their data format (either 'sp' or 'ps') can be confusing, especially because the data are always transposed to the 'ps' order afterwards.

It would be better to stick to the sklearn convention, which would mean the data can only be input with shape (n_samples, n_processes); see the sketch below.
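
A hedged sketch of what sklearn-style input handling could look like under this proposal (the function name and the internal 'ps' transpose are illustrative assumptions):

import numpy as np

def load_dataset(X):
    X = np.asarray(X)
    if X.ndim != 2:
        raise ValueError(f"Expected a 2D array, got {X.ndim}D.")
    # Only (n_samples, n_processes) is accepted; transpose once here
    # to the (process, sample) order used internally, instead of
    # exposing a user-facing dim_order flag.
    return X.T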

Calculator.compute() hangs indefinitely when run in Multiprocessing pool

I'm trying to distribute thousands of dataset pairs to the Calculator, but it hangs indefinitely when run as a child process using Python's multiprocessing (often at dcorr, dcorr_biased, or dtw when those are included). top indicates no CPU usage.

import numpy as np
import random
from multiprocessing import Pool

from pyspi.calculator import Calculator

calc = Calculator()  # module-level instance shared with the pool workers

def runCompute(dataset):
    calc.load_dataset(dataset)
    calc.compute()
    return calc.table.copy()

if __name__ == "__main__":
    print("running pool")
    random.seed(42)
    M = 2   # processes (channels)
    T = 50  # samples
    dataset = np.random.randn(M, T)

    # this works
    # rows = [runCompute(dataset)]

    # this hangs
    with Pool(1) as p:
        rows = p.map(runCompute, [dataset])
    print("complete")
    print(rows)
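
For reference, a hedged workaround sketch (an assumption, not verified against this exact hang): construct the Calculator inside the worker instead of sharing one module-level instance across forked processes, and use the 'spawn' start method so each child gets a clean interpreter. Forking a process that holds JVM or other native state, as pyspi does via jpype, is a common cause of such deadlocks.

import multiprocessing as mp
import numpy as np
from pyspi.calculator import Calculator

def run_compute(dataset):
    # Fresh Calculator per task: no state shared across processes.
    calc = Calculator(dataset=dataset)
    calc.compute()
    return calc.table.copy()

if __name__ == "__main__":
    dataset = np.random.randn(2, 50)
    # 'spawn' starts each worker in a clean interpreter rather than
    # forking the parent (and its JVM/native handles).
    ctx = mp.get_context("spawn")
    with ctx.Pool(1) as p:
        rows = p.map(run_compute, [dataset])
    print(rows)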

sktime/sklearn integration?

@anniegbryant, @benfulcher, I would like to congratulate you on this nice package. I really like the concept, and it is quite nicely designed! There are also a lot of useful methods collected. Nice.

Now, imo, the next "big" question is integrability with the wider modelling ecosystem, e.g., can I use the pairwise time series metrics as components in sktime or sklearn? Where by "I", of course, I mean the wider user ecosystem.

Currently, I think there are a few blockers, but would you be interested in resolving them together?

Two main points imo from the codebase review:

  • sklearn-interoperable interfaces expect a few things, such as constraints on the __init__ signature and the availability of get_params and set_params (see the sketch after this list). You can get this for free by inheriting from scikit-base base classes; of course, that's not the only way to satisfy the interface requirements.
  • sktime has related classes which you could adopt or adapt, e.g., the BasePairwiseTransformerPanel. Options include writing an adapter in sktime, or using the class in pyspi; the latter would give you testing for free via check_estimator. Alternatively, you could write your own base class template based on scikit-base that marries the current interface definition with sklearn and sktime expectations.
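
For illustration, a minimal sketch of the get_params/set_params contract using sklearn's own BaseEstimator. The SPICalculatorWrapper class and its subset argument are hypothetical, not an existing pyspi or sktime class:

from sklearn.base import BaseEstimator

class SPICalculatorWrapper(BaseEstimator):
    # sklearn convention: __init__ only stores its arguments verbatim,
    # which is what makes get_params/set_params work for free.
    def __init__(self, subset="all"):
        self.subset = subset

    def transform(self, X):
        from pyspi.calculator import Calculator
        calc = Calculator(dataset=X, subset=self.subset)
        calc.compute()
        return calc.table

w = SPICalculatorWrapper(subset="fast")
print(w.get_params())       # {'subset': 'fast'}
w.set_params(subset="all")  # inherited from BaseEstimator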

Side points, but synergistic:

  • testing could, and should, be more systematic for reliable use, e.g., CI on operating system and Python version combinations. Happy to help set this up if we set aside some time. Of course, the "sktime interface" option would take care of this as part of sktime, although bug fixing could become clunkier, as we would have to push bug reports upstream (as with pycatch22).
  • a good object/estimator search utility might be nice for the user; there are a lot of implemented objects! We could lift some components from sktime or skbase here.

ModuleNotFoundError `sktime`

Hey!

Great preprint! I am really interested in trying out the package. However, after installing it, I cannot import Calculator or run the tests with pytest. I am getting:

ModuleNotFoundError: No module named 'sktime.utils.data_io'

I have sktime installed and can import it without any problems.
Thanks for looking into it!

N.

Causing Error with Zero Values

I've encountered NaN errors while using the library. It seems that when certain values in the input are exactly zero, the code fails to execute properly, possibly due to a division-by-zero error.

No JVM DLL when using jpype

When trying to initialise a calculator, the jpype methods fail with the error:
FileNotFoundError: [Errno 2] JVM DLL not found: /Library/Java/JavaVirtualMachines/jdk-21.jdk/Contents/Home/lib/libjli.dylib

I have set JAVA_HOME to the correct directory, and the .dylib file appears to be there, but jpype does not recognise it.

I am using a Mac with ARM architecture and a conda virtual environment.
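
A hedged diagnostic sketch: check which JVM path jpype resolves, and pass an explicit path if the default is wrong. On Apple Silicon, one plausible culprit (an assumption here, not confirmed for this report) is an x86_64 JDK mismatching an arm64 Python environment.

import jpype

# Where jpype thinks the JVM library lives:
print(jpype.getDefaultJVMPath())

# If that path is wrong, start the JVM explicitly before importing pyspi:
# jpype.startJVM("/path/to/an/arm64/jdk/lib/libjli.dylib")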

Record compute time for each SPI

This will be useful for knowing which methods are fast or slow to compute, and would let users select faster options.

This might be finicky since many of the methods inherit preprocessed information from other methods (e.g., all spectral methods inherit spectral decompositions).
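
A rough sketch of what per-SPI timing could look like with time.perf_counter. The spis mapping and the per-SPI multivariate() call are hypothetical stand-ins for pyspi internals, and, per the caveat above, the first SPI in a family would absorb any shared preprocessing cost.

import time

def timed_compute(spis, dataset):
    # spis: hypothetical {name: spi_object} mapping
    timings = {}
    for name, spi in spis.items():
        t0 = time.perf_counter()
        spi.multivariate(dataset)  # hypothetical per-SPI entry point
        timings[name] = time.perf_counter() - t0
    return timings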

parallel computation in pyspi

I am currently running code with pyspi and have encountered some questions.

First, I want to know whether pyspi runs any parallel calculations internally.

Second, does pyspi use the GPU for calculations?

Third, if I use parallel computation with 32 cores, for example, would that result in faster calculation for an EEG signal of 15000 samples and 18 channels? I used a small piece of my data and found that parallelization made the code run longer; this may be due to the small size of the data I used.

refactor suggestion: modular index, spi tags, softdep management by spi

Thanks for the great presentation today, @benfulcher!

Inspired, I looked into the repository in greater detail, to my shame perhaps for the first time at that level.

What I understood is that your SPIs are actually not all manually implemented; there is a wealth of them, some using external dependencies in turn. As such, pyspi is, morally, very similar to sktime: a mix of de-novo implementations, direct interfaces to external algorithms, and implementations that use components with soft dependencies.

I also noticed that you have tags for the different SPIs, which again is very similar to sktime.

Further, when trying to interface with SPIs individually, I noticed that this is currently not intended to be possible: only batch feature sets can be obtained? That seems a shame, since you have collected so many useful pairwise transformations! Unless, of course, you use the YAML, but the process of discovery there is tedious and currently cannot be automated, so composability with other frameworks is severely limited.

Based on this, I have a number of ideas, if you would like to hear me out:

What do you think? I'd be happy to devote some time to shifting the code base gradually towards this schema. As a side effect, it would also make it easy to interface all SPIs as time series distances in sktime, and would make it easier to add SPIs for multivariate or unequal-length time series.

FYI @jmoo2880

Support more recent Python versions

Currently, pyspi requires Python >=3.8,<3.10. It would be awesome to see support for more recent Python versions.

The fact that Python 3.12 is now the default installation in the latest Ubuntu LTS release underscores the need to expand the range of supported Python versions.

directionality of pli_multitaper_max_fs-1_fmin-0_fmax-0-5

Hi Oliver,

Just wondering about the directionality of the SPI pli_multitaper_max_fs-1_fmin-0_fmax-0-5. In the pyspi manuscript, it's listed as undirected. However, in my dataset, I found that observation_1 --> observation_2 has a different value from observation_2 --> observation_1. Can you please clarify whether this SPI is undirected or directed?

Thank you :)
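
A small check for whether a computed pairwise matrix is symmetric (i.e., behaves as undirected) in practice; extracting the matrix for one SPI from calc.table is left out here, as the exact indexing depends on the pyspi version.

import numpy as np

def is_undirected(mpi, atol=1e-10):
    # True if the matrix equals its transpose, ignoring the NaN diagonal.
    A = np.asarray(mpi, dtype=float)
    mask = ~np.isnan(A) & ~np.isnan(A.T)
    return np.allclose(A[mask], A.T[mask], atol=atol)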

Diagonal elements of covariance estimators are 'nan'

I just found the following on the latest version, pyspi==1.1.0 (Google Colab), and a previous version, pyspi==1.0.3 (Ubuntu):

  1. Diagonal entries of the calculated covariance matrices, like the one returned by cov_EmpiricalCovariance, are not 1, as they should be given that the data are normalised by default (so covariance matrices equal correlation matrices); in fact, they all have NaN on the main diagonal.

  2. Precision matrix estimators like prec_EmpiricalCovariance also return NaN on the main diagonal, so the resulting matrix is not the inverse of the matrix returned by cov_EmpiricalCovariance.

Am I doing something wrong?
Is this an issue? Thanks
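
A hedged reproduction sketch for inspecting the diagonal of one SPI's matrix; indexing calc.table by SPI name like this assumes the table's MultiIndex column layout and may need adjusting:

import numpy as np
from pyspi.calculator import Calculator

calc = Calculator(dataset=np.random.randn(4, 200))
calc.compute()

cov = calc.table['cov_EmpiricalCovariance'].to_numpy()
print(np.diag(cov))  # reported to be all NaN rather than 1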

Segmentation fault

Hello,
I am using pyspi on various datasets, but for one of them (156 variables / 300 time points) I get:

Number of SPIs: 215

Processing [None: cov_GraphicalLasso]: 0%| | 0/215 [00:00<?, ?it/s]
Segmentation fault (core dumped)

and I have no idea how to spot the issue. Any tips?

Thanks for your help

Cannot install PySPI on Win10

Hi Oliver,

I'm currently trying to install the PySPI package (following the instructions I found in the documentation) on a Windows 10 system, and an error pops up:

error: Multiple top-level packages discovered in a flat-layout: ['sktime', 'build_tools', 'extension_templates'].

  To avoid accidental inclusion of unwanted files or directories,
  setuptools will not proceed with this build.

  If you are trying to create a single distribution with multiple packages
  on purpose, you should not rely on automatic discovery.
  Instead, consider the following options:

  1. set up custom discovery (`find` directive with `include` or `exclude`)
  2. use a `src-layout`
  3. explicitly set `py_modules` or `packages` with a list of names

  To find more information, look for "package discovery" on setuptools docs.
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

AFAIK, this is related to setuptools, but I don't know how to fix it. Any ideas?

Similarity or Distance - Higher is closer or lower is closer?

First of all, thanks for creating this awesome package.
I want to use it to evaluate the measures for finding related time series within industrial/building time series datasets.

For that, I have the following questions:
Is there a place where I can quickly see whether the outcome of a measure is a similarity or a distance? In other words, whether two series are more related if the value is higher (similarity) or more related if it is lower (distance).

The simplest example would be the distinction between the correlation and the Euclidean distance of two time series: for related time series, one would expect the correlation to be high but the Euclidean distance to be low. When, for example, searching for the k most related time series across the different metrics implemented here, this information is of great interest.

I'm sorry if I missed this in the documentation or the paper. Kindly point me to it if this information exists anywhere.
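
For reference, a quick worked example of the distinction raised above: for two closely related series, the Pearson correlation is high (a similarity: higher means more related) while the Euclidean distance is low (a distance: lower means more related).

import numpy as np

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 10, 200))
y = x + 0.05 * rng.standard_normal(200)  # noisy copy of x

corr = np.corrcoef(x, y)[0, 1]
dist = np.linalg.norm(x - y)
print(f"correlation = {corr:.3f} (high, similarity)")
print(f"euclidean   = {dist:.3f} (low, distance)")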
