
dpys / pynets

A Reproducible Workflow for Structural and Functional Connectome Ensemble Learning

Home Page: https://pynets.readthedocs.io/en/latest/

Languages: Python 99.38%, Dockerfile 0.47%, Makefile 0.13%, Shell 0.02%
Topics: workflow, nipype, dipy, nilearn, tractography, fmri, dmri, ensemble-sampling, networks, brain-connectivity

pynets's Introduction

PyNets®

Badges: CircleCI | codecov | PyPI Version | PyPI Python Version | License: AGPL v3 | Singularity Hub | Docker | brainlife.io/app

About

PyNets is a tool for sampling and analyzing a variety of individual structural and functional connectomes. Using decision-tree learning, along with extensive bagging and boosting, PyNets is the first application of its kind to facilitate fully reproducible, parametric sampling of connectome ensembles from neuroimaging data. As a post-processing workflow, PyNets is intended for any preprocessed fMRI or dMRI data in native anatomical space, such that it supports normative-referenced connectotyping at the individual level. Towards these ends, it comprehensively integrates best-practice tractography and functional connectivity analysis methods based on open-source libraries such as Dipy and Nilearn, though it is powered primarily by NetworkX and the Nipype workflow engine. PyNets can now also be deployed as a BIDS application, where it takes BIDS derivatives and makes BIDS derivatives.

Install

Dockerhub (preferred):

docker pull dpys/pynets

Manual

Third-party dependencies:

* FSL version >= 5.0.9
* Python 3.8+ with GUI programming enabled (see tkinter)

[sudo] pip install pynets [--user]

or

# Install git-lfs
1. brew install git-lfs (macOS) or [sudo] apt-get install git-lfs (Linux), or skip to step 2 (Windows)
2. git lfs install --skip-repo

# Clone the repository and install
git clone https://github.com/dpys/pynets
cd pynets
[sudo] python setup.py install [--user]

Hardware Requirements

4+ vCPUs, 8+ GB RSS memory, and at least 10 GB of free disk space, though storage needs can vary considerably depending on the size of the input data and the type of analysis that you wish to run.

Operating Systems

UNIX/macOS 64-bit platforms

Windows 10 or later with WSL2

Documentation

Explore the official installation instructions, user guide, API, and examples: https://pynets.readthedocs.io/en/latest/

Citing

A manuscript is in preparation, but for now, please cite all uses with the following entry:

@misc{PyNets,
    title = {PyNets: A Reproducible Workflow for Structural and Functional Connectome Ensemble Learning},
    author = {Pisner, Derek A and Hammonds, Ryan B.},
    publisher = {Poster session presented at: Annual Meeting of the Organization for Human Brain Mapping},
    url = {https://github.com/dPys/PyNets},
    year = {2020},
    month = {June}
}

Data already preprocessed with BIDS apps like fmriprep, CPAC, or dmriprep? If your BIDS derivatives can be queried with pybids, then you should be able to run them with the user-friendly pynets_bids CLI!

   pynets_bids '/hnu/fMRIprep/fmriprep' '/Users/dPys/outputs/pynets' participant func --participant_label 0025427 0025428 --session_label 1 2 -config pynets/config/bids_config.json

*Note: If you preprocessed your BOLD data using fMRIprep, then you will need to have specified either T1w or anat in the list of fmriprep --output-spaces. Similarly, if you preprocessed your data using CPAC, then you will want to be sure that an ALFF image exists. PyNets does NOT currently accept template-normalized BOLD or DWI data. See the usage docs for more information on compatible file types.

where the -config flag specifies the path to a .json configuration spec that includes at least one of many possible connectome recipes to apply to your data. Pre-built configuration files are available (see: https://github.com/dPys/PyNets/tree/master/pynets/config), and an example is shown here (with commented descriptions):

{
    "func": { # fMRI options. If you only have functional (i.e. BOLD) data, set each of the `dwi` options to "None"
            "ct": "None", # Indicates the type(s) of clustering that will be used to generate a clustering-based parcellation. This should be left as "None" if no clustering will be performed, but can be included simultaneously with `-a`.
            "k": "None", # Indicates the number of clusters to generate in a clustering-based parcellation. This should be left as "None" if no clustering will be performed.
            "hp": "['0', '0.028', '0.080']", # Indicates the high-pass frequenc(ies) to apply to signal extraction from nodes.
            "mod": "['partcorr', 'cov']", # Indicates the functional connectivity estimator(s) to use. At least 1 is required for functional connectometry.
            "sm": "['0', '4']", # Indicates the smoothing FWHM value(s) to apply during the nodal time-series signal extraction.
            "es": "['mean', 'median']" # Indicates the method(s) of nodal time-series signal extraction.
        },
    "dwi": { # dMRI options. If you only have structural (i.e. DWI) data, set each of the `func` options to "None"
            "dg": "det", # The traversal method of tractography (e.g. deterministic, probabilistic)
            "ml": "40", # The minimum length criterion for streamlines in tractography
            "mod": "csd", # The diffusion model type
            "em": "8" # The tolerance distance (in the units of the streamlines, usually mm). If any node in the streamline is within this distance from the center of any voxel in the ROI, then the connection is counted as an edge"
        },
    "gen": { # These are general options that apply to all modalities
            "a":  "['BrainnetomeAtlasFan2016', 'atlas_harvard_oxford', 'destrieux2009_rois']", # Anatomical atlases to define nodes.
            "bin":  "False", # Binarize the resulting connectome graph before analyzing it. Note that undirected weighted graphs are analyzed by default.
            "embed":  "False", # Activate any of several available graph embedding methods
            "mplx":  0, # If both functional and structural data are provided, this parameter [0-3] indicates the type of multiplex connectome modeling to perform. See `pynets -h` for more details on multiplex modes.
            "n":  "['Cont', 'Default']", # Which, if any, Yeo-7/17 resting-state sub-networks to select from the given parcellation. If multiple are specified, all other options will iterate across each.
            "norm": "['6']", # Level of normalization to apply to graph (e.g. standardize betwee 0-1, Pass-to-Ranks (PTR), log10).
            "spheres":  "False", # Use spheres as nodes (vs. parcel labels, the default).
            "ns":  "None", # If `spheres` is True, this indicates integer radius size(s) of spherical centroid nodes.
            "p":  "['1']", # Apply anti-fragmentation, largest connected-component subgraph selection, or any of a variety of hub-detection methods to graph(s).
            "plt":  "False", # Activate plotting (adjacency matrix and glass-brain included by default).
            "thr":  1.0, # A threshold (0.0-1.0). This can be left as "None" if multi-thresholding is used.
            "max_thr":  0.80, # If performing multi-thresholding, a minimum threshold.
            "min_thr":  0.20, # If performing multi-thresholding, a maximum threshold.
            "step_thr":  0.10, # If performing multi-thresholding, a threshold interval size.
            "dt":  "False", # Global thresholding to achieve a target density. (Only one of `mst`, `dt`, and `df` can be used).
            "mst":  "True", # Local thresholding using the Minimum-Spanning Tree approach. (Only one of `mst`, `dt`, and `df` can be used).
            "df":  "False", # Local thresholding using a disparity filter. (Only one of `mst`, `dt`, and `df` can be used).
            "vox":  "'2mm'" # Voxel size (1mm or 2mm). 2mm is the default.
        }
}
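
Note that the commented block above is illustrative only; an actual -config file must be plain JSON (no # comments). A minimal Python sketch of loading and inspecting such a spec (the local file path is hypothetical; pre-built specs live under pynets/config in the repository):

    import json

    # Hypothetical path to a comment-free configuration spec.
    with open('pynets/config/bids_config.json') as f:
        config = json.load(f)  # strict JSON: '#' comments are not allowed

    # Walk the modality-specific recipe options.
    for modality in ('func', 'dwi', 'gen'):
        for option, value in config.get(modality, {}).items():
            print(f'{modality}.{option} = {value}')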

Data not in BIDS format and/or preprocessed using in-house tools? No problem; you can still run pynets manually:

    pynets -id '002_1' '/Users/dPys/outputs/pynets' \ # where `-id` is an arbitrary subject identifier and the first path is an arbitrary output directory for storing derivatives of the workflow.
    -func '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/func/BOLD_PREPROCESSED_IN_ANAT_NATIVE.nii.gz' \ # The fMRI BOLD image data.
    -anat '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/anat/ANAT_PREPROCESSED_NATIVE.nii.gz' \ # The T1w anatomical image. This is mandatory: PyNets requires a T1/T2-weighted anatomical image unless you are analyzing raw graphs that have already been produced.
    -a 'BrainnetomeAtlasFan2016' \ # An anatomical atlas name. Note that if you omit an atlas name, a custom parcellation file must be supplied to the `-a` flag instead, or a valid clustering mask (`-cm`) is needed to generate an individual parcellation. For a complete catalogue of the anatomical atlases available in PyNets, see the `Usage` section of the documentation.
    -mod 'partcorr' \ # The connectivity model. In the case of structural connectometry, this becomes the diffusion model type.
    -thr 0.20 # Optionally apply a single proportional threshold to the generated graph.

or

    pynets -id '002_1' '/Users/dPys/outputs/pynets' \ # where `-id` is an arbitrary subject identifier and the first path is an arbitrary output directory for storing derivatives of the workflow.
    -dwi '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/dwi/DWI_PREPROCESSED_NATIVE.nii.gz' \ # The dMRI diffusion-weighted image data.
    -bval '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/dwi/BVAL.bval' \ # The b-values.
    -bvec '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/dwi/BVEC.bvec' \ # The b-vectors.
    -anat '/Users/dPys/PyNets/tests/examples/sub-002/ses-1/anat/ANAT_PREPROCESSED_NATIVE.nii.gz' \ # The T1w anatomical image.
    -a '/Users/dPys/.atlases/MyCustomParcellation-scale1.nii.gz' '/Users/dPys/.atlases/MyCustomParcellation-scale2.nii.gz' \ # The parcellations.
    -mod 'csd' 'csa' 'sfm' \ # The (diffusion) connectivity model(s).
    -dg 'prob' 'det' \ # The tractography traversal method(s).
    -mst -min_thr 0.20 -max_thr 0.80 -step_thr 0.10 \ # Multi-thresholding from the Minimum-Spanning Tree, with AUC graph analysis.
    -n 'Default' # The resting-state network definition used to restrict node definition.
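
For intuition, the -mst -min_thr 0.20 -max_thr 0.80 -step_thr 0.10 combination above sweeps a sequence of proportional thresholds; a minimal sketch of that sequence (the exact internal iteration logic is an assumption):

    import numpy as np

    min_thr, max_thr, step_thr = 0.20, 0.80, 0.10
    # Proportional thresholds swept from min_thr to max_thr (inclusive),
    # each yielding one graph; AUC analysis then integrates across them.
    thresholds = np.round(np.arange(min_thr, max_thr + step_thr, step_thr), 2)
    print(thresholds)  # [0.2 0.3 0.4 0.5 0.6 0.7 0.8]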

Figures: Multiplex Layers | Multiplex Glass | Ensemble Connectome | Yeo7 ICC | Workflow DAG

pynets's People

Contributors

akinikolaidis, alexayala08, dpys, kaelens, kunert, mbellamcc, nabaruns, ryanhammonds, shreyasfadnavis

pynets's Issues

Run issue cont'd

(From the other thread since it was closed.)

Sorry I was out of town last week. Getting what appears to be the same error, I should be able to just delete the PyNets directory and clone the git again, right? I'll post the error again just in case.

`lrdc-268:~ taylorhilton$ python PyNets/pynets.py -i "/Users/taylorhilton/10032_faces_run1_dtvd.nii.gz" -ID ‘999’ -a coords_power_2011

INPUT FILE: /Users/taylorhilton/10032_faces_run1_dtvd.nii.gz

SUBJECT ID: ‘999’

ATLAS: coords_power_2011

USING WHOLE-BRAIN CONNECTOME...

170710-16:04:44,649 workflow INFO:
['check', 'execution', 'logging']
170710-16:04:44,654 workflow INFO:
Running serially.
170710-16:04:44,655 workflow INFO:
Executing node imp_est in dir: /var/folders/jh/5dlqs2t9101dgwfdhmhhf1_m0000gp/T/tmpeRtaGU/PyNets_WORKFLOW/imp_est

Power 2011 atlas comes with ['rois', 'description']

Stacked atlas coordinates in array of shape (264, 3).

[Memory] Calling nilearn.input_data.base_masker.filter_and_extract...
filter_and_extract('/Users/taylorhilton/10032_faces_run1_dtvd.nii.gz', <nilearn.input_data.nifti_spheres_masker._ExtractionFunctor object at 0x10d72bfd0>,
{ 'allow_overlap': False,
'detrend': False,
'high_pass': None,
'low_pass': None,
'mask_img': None,
'radius': 3.0,
'seeds': array([[-25, ..., -12],
...,
[ 29, ..., 54]]),
'smoothing_fwhm': None,
'standardize': True,
't_r': None}, confounds=None, memory_level=5, verbose=2, memory=Memory(cachedir='nilearn_cache/joblib'))
[NiftiSpheresMasker.transform_single_imgs] Loading data from /Users/taylorhilton/10032_faces_run1_dtvd.nii.gz
[NiftiSpheresMasker.transform_single_imgs] Extracting region signals

[Memory] Calling nilearn.input_data.nifti_spheres_masker.nifti_spheres_masker_extractor...
nifti_spheres_masker_extractor(<nibabel.nifti1.Nifti1Image object at 0x10d91ce50>)
170710-16:04:50,514 workflow ERROR:
['Node imp_est failed to run on host lrdc-268.lrdc.pitt.edu.']
170710-16:04:50,515 workflow INFO:
Saving crash info to /Users/taylorhilton/crash-20170710-160450-taylorhilton-imp_est-58995258-f29f-4e50-812a-211a0eb9d9c8.pklz
170710-16:04:50,515 workflow INFO:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/plugins/linear.py", line 39, in run
node.run(updatehash=updatehash)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 394, in run
self._run_interface()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 504, in _run_interface
self._result = self._run_command(execute)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 630, in _run_command
result = self._interface.run()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/interfaces/base.py", line 1043, in run
runtime = self._run_wrapper(runtime)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/interfaces/base.py", line 1000, in _run_wrapper
runtime = self._run_interface(runtime)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/interfaces/utility.py", line 499, in _run_interface
out = function_handle(**args)
File "", line 242, in mat_funcs
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 276, in fit_transform
return self.fit().transform(imgs, confounds=confounds)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/base_masker.py", line 176, in transform
return self.transform_single_imgs(imgs, confounds)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 322, in transform_single_imgs
verbose=self.verbose)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 483, in call
return self._cached_call(args, kwargs)[0]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 430, in _cached_call
out, metadata = self.call(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 675, in call
output = self.func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/base_masker.py", line 98, in filter_and_extract
memory_level=memory_level)(imgs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 483, in call
return self._cached_call(args, kwargs)[0]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 430, in _cached_call
out, metadata = self.call(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 675, in call
output = self.func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 136, in call
mask_img=self.mask_img)):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 115, in _iter_signals_from_spheres
raise ValueError('Sphere around seed #%i is empty' % i)
ValueError: Sphere around seed #0 is empty
Interface Function failed to run.

170710-16:04:50,522 workflow INFO:


170710-16:04:50,522 workflow ERROR:
could not run node: PyNets_WORKFLOW.imp_est
170710-16:04:50,522 workflow INFO:
crashfile: /Users/taylorhilton/crash-20170710-160450-taylorhilton-imp_est-58995258-f29f-4e50-812a-211a0eb9d9c8.pklz
170710-16:04:50,522 workflow INFO:


Traceback (most recent call last):
File "PyNets/pynets.py", line 859, in
wf.run()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 597, in run
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/plugins/linear.py", line 57, in run
report_nodes_not_run(notrun)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 95, in report_nodes_not_run
raise RuntimeError(('Workflow did not execute cleanly. '
RuntimeError: Workflow did not execute cleanly. Check log for details`

ModuleNotFoundError: No module named 'dipy.tracking.local'

  • PyNets version: latest
  • Python version: 3.6
  • Operating System: Ubuntu 18.04

Description

When I try to run PyNets, I get a dipy.tracking.local ModuleNotFoundError, and for the past 5 hours the program has been stuck.

What I Did

The following is the command-line call I used to run pynets_run.py:

pynets_run.py -dwi '/sk-home/vasudev/Music/Subjects/KON11/KON11/dwi-ec.nii.gz' -bval '/sk-home/vasudev/Music/Subjects/KON11/KON11/bvals' -bvec '/sk-home/vasudev/Music/Subjects/KON11/KON11/bvecs' -id KON11 -a atlas_destrieux_2009 -parc -tt 'particle' -dg 'boot' -mod 'csd' 'tensor' -anat '/sk-home/vasudev/Music/Subjects/KON11/KON11/KON11/KON11_t1_brain.nii.gz' -s 1000000 -dt -min_thr 0.05 -max_thr 0.10 -step_thr 0.01 -p 1

If there was a crash, please include the traceback here.

Traceback (most recent call last):
File "/sk-home/vasudev/.local/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/sk-home/vasudev/.local/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 472, in run
result = self._run_interface(execute=True)
File "/sk-home/vasudev/.local/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 563, in _run_interface
return self._run_command(execute)
File "/sk-home/vasudev/.local/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 643, in _run_command
result = self._interface.run(cwd=outdir)
File "/sk-home/vasudev/.local/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 375, in run
runtime = self._run_interface(runtime)
File "/sk-home/vasudev/.local/lib/python3.6/site-packages/nipype/interfaces/utility/wrappers.py", line 144, in _run_interface
out = function_handle(**args)
File "", line 80, in run_track
File "/sk-home/vasudev/.local/lib/python3.6/site-packages/pynets/dmri/track.py", line 57, in prep_tissues
from dipy.tracking.local import ActTissueClassifier, CmcTissueClassifier, BinaryTissueClassifier
ModuleNotFoundError: No module named 'dipy.tracking.local'
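
For context, DIPY 1.0 removed the dipy.tracking.local module and renamed the tissue classifiers to stopping criteria. A minimal compatibility shim, assuming DIPY >= 1.0 naming:

    try:
        # DIPY < 1.0
        from dipy.tracking.local import (ActTissueClassifier,
                                         CmcTissueClassifier,
                                         BinaryTissueClassifier)
    except ImportError:
        # DIPY >= 1.0: the classes were renamed and moved.
        from dipy.tracking.stopping_criterion import (
            ActStoppingCriterion as ActTissueClassifier,
            CmcStoppingCriterion as CmcTissueClassifier,
            BinaryStoppingCriterion as BinaryTissueClassifier,
        )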

Run issue

Command and error posted below. The relevant portion seems to be "Sphere around seed #0 is empty"; maybe there's some preprocessing that it's expecting? I know the volume isn't empty. Also, it's worth noting that it failed to run unless the single quotes were removed from 'coords_power_2011', unlike the examples given in the README.

Any help is much appreciated!

lrdc-268:~ taylorhilton$ python PyNets/pynets.py -i "/Users/taylorhilton/10075_faces_run1_dtvd.nii.gz" -ID ‘999’ -a coords_power_2011

INPUT FILE: /Users/taylorhilton/10075_faces_run1_dtvd.nii.gz

SUBJECT ID: ‘999’

ATLAS: coords_power_2011

USING WHOLE-BRAIN CONNECTOME...

170628-13:08:24,49 workflow INFO:
['check', 'execution', 'logging']
170628-13:08:24,69 workflow INFO:
Running serially.
170628-13:08:24,70 workflow INFO:
Executing node imp_est in dir: /var/folders/jh/5dlqs2t9101dgwfdhmhhf1_m0000gp/T/tmp6iMYk2/PyNets_WORKFLOW/imp_est
Power 2011 atlas comes with ['rois', 'description'].

Stacked atlas coordinates in array of shape (264, 3).


[Memory] Calling nilearn.input_data.base_masker.filter_and_extract...
filter_and_extract('/Users/taylorhilton/10075_faces_run1_dtvd.nii.gz', <nilearn.input_data.nifti_spheres_masker._ExtractionFunctor object at 0x10da32290>,
{ 'allow_overlap': False,
'detrend': False,
'high_pass': None,
'low_pass': None,
'mask_img': None,
'radius': 3.0,
'seeds': array([[-25, ..., -12],
...,
[ 29, ..., 54]]),
'smoothing_fwhm': None,
'standardize': False,
't_r': None}, confounds=None, memory_level=5, verbose=2, memory=Memory(cachedir='nilearn_cache/joblib'))
[NiftiSpheresMasker.transform_single_imgs] Loading data from /Users/taylorhilton/10075_faces_run1_dtvd.nii.gz
[NiftiSpheresMasker.transform_single_imgs] Extracting region signals


[Memory] Calling nilearn.input_data.nifti_spheres_masker.nifti_spheres_masker_extractor...
nifti_spheres_masker_extractor(<nibabel.nifti1.Nifti1Image object at 0x10da32e50>)
170628-13:08:29,764 workflow ERROR:
['Node imp_est failed to run on host lrdc-268.lrdc.pitt.edu.']
170628-13:08:29,765 workflow INFO:
Saving crash info to /Users/taylorhilton/crash-20170628-130829-taylorhilton-imp_est-d5ae546b-a2ec-4aa7-87d7-073792129cd8.pklz
170628-13:08:29,765 workflow INFO:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/plugins/linear.py", line 39, in run
node.run(updatehash=updatehash)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 394, in run
self._run_interface()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 504, in _run_interface
self._result = self._run_command(execute)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 630, in _run_command
result = self._interface.run()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/interfaces/base.py", line 1043, in run
runtime = self._run_wrapper(runtime)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/interfaces/base.py", line 1000, in _run_wrapper
runtime = self._run_interface(runtime)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/interfaces/utility.py", line 499, in _run_interface
out = function_handle(**args)
File "", line 42, in import_mat_func
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 276, in fit_transform
return self.fit().transform(imgs, confounds=confounds)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/base_masker.py", line 176, in transform
return self.transform_single_imgs(imgs, confounds)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 322, in transform_single_imgs
verbose=self.verbose)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 483, in call
return self._cached_call(args, kwargs)[0]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 430, in _cached_call
out, metadata = self.call(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 675, in call
output = self.func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/base_masker.py", line 98, in filter_and_extract
memory_level=memory_level)(imgs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 483, in call
return self._cached_call(args, kwargs)[0]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 430, in _cached_call
out, metadata = self.call(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 675, in call
output = self.func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 136, in call
mask_img=self.mask_img)):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 115, in _iter_signals_from_spheres
raise ValueError('Sphere around seed #%i is empty' % i)
ValueError: Sphere around seed #0 is empty
Interface Function failed to run.

170628-13:08:29,772 workflow INFO:
***********************************
170628-13:08:29,772 workflow ERROR:
could not run node: PyNets_WORKFLOW.imp_est
170628-13:08:29,772 workflow INFO:
crashfile: /Users/taylorhilton/crash-20170628-130829-taylorhilton-imp_est-d5ae546b-a2ec-4aa7-87d7-073792129cd8.pklz
170628-13:08:29,772 workflow INFO:
***********************************
Traceback (most recent call last):
File "PyNets/pynets.py", line 736, in
wf.run()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 597, in run
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/plugins/linear.py", line 57, in run
report_nodes_not_run(notrun)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 95, in report_nodes_not_run
raise RuntimeError(('Workflow did not execute cleanly. '
RuntimeError: Workflow did not execute cleanly. Check log for details
lrdc-268:~ taylorhilton$ python PyNets/pynets.py -i "/Users/taylorhilton/10075_faces_run1_dtvd.nii.gz" -ID ‘999’ -a coords_power_2011

Potential dependency issues (imp_est and ExtractNetStats issues)?

  • PyNets version: 0.1.0 (2017-08-23)
  • Python version: python3.5 (though 2.7 also installed on server)
  • Operating System: Ubuntu 16.04.3 LTS

Description

PyNets was crashing with both the example data and my own (actual) data. PyNets appears to recognize the data, but as network property calculation starts, everything crashes. I presume it might be an issue with Python dependencies (though PyNets passes basic checks and installs fine, etc.). Two command-line entries are below and two pklz files are attached (here: Archive.zip)

What I Did

One example command:

python3.5 /home/jamielh/Volumes/Hanson/Training_Resources/PyNets/pynets/pynets_run.py -i '/home/jamielh/Volumes/Hanson/Training_Resources/PyNets/tests/examples/997/filtered_func_data_clean_standard.nii.gz' -ID '997' -a 'coords_dosenbach_2010' -model 'corr' -thr '0.95'

Another example command:

python3.5 /home/jamielh/Volumes/Hanson/Training_Resources/PyNets/pynets/pynets_run.py -i '/home/jamielh/Volumes/Hanson/Duke_PAC/proc/140313_18263/fmri/rest/filt_20140313_18263_REST_LAS_st_mcfr_brain_norm_wmcsf.nii.gz' -ID '14031318263' -a 'coords_dosenbach_2010' -model 'corr' -thr '0.95'

Thoughts? Any specific packages I need to update?

UnboundLocalError: local variable 'multimodal' referenced before assignment

  • PyNets version: release
  • Python version: python 3.6.8
  • Operating System: linux ubuntu 18.04

Description

Dear pynets developers,
I installed PyNets with the Docker container and tried to run example A) of the Quickstart from the latest documentation (https://buildmedia.readthedocs.org/media/pdf/pynets/latest/pynets.pdf, page 13). The following UnboundLocalError was returned.

UnboundLocalError: local variable 'multimodal' referenced before assignment

What I Did

1. Installed PyNets with Docker:

docker build -t pynets .

2. Ran the container:

sudo docker run -ti --rm --privileged -v /tmp:/tmp -v /var/tmp:/var/tmp pynets -func '/home/bispl/test/sub-A00000300_ses-20110101_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz' -id 'test' -mod 'partcorr' -thr 0.20 -m '/home/bispl/test/sub-A00000300_ses-20110101_task-rest_bold_space-MNI152NLin2009cAsym_brainmask.nii.gz'

and the error was thrown as below


Running workflow across single subject:
test

Using whole-brain pipeline...

Using node size of: 4mm...

Using connectivity model:
partcorr

Running functional connectometry only...
Functional file: /home/bispl/test/sub-A00000300_ses-20110101_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz


Process Process-2:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/lib/python3.6/site-packages/pynets/pynets_run.py", line 1585, in build_workflow
directget, tiss_class, runtime_dict, embed, multi_directget, multimodal)
UnboundLocalError: local variable 'multimodal' referenced before assignment

WARNING: Upgrade to python3 for forkserver functionality...


Running workflow across single subject:
test

Using whole-brain pipeline...

Using node size of: 4mm...

Using connectivity model:
partcorr

Running functional connectometry only...
Functional file: /home/bispl/test/sub-A00000300_ses-20110101_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz


Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/pynets/pynets_run.py", line 1690, in main
sys.exit(p.exitcode)
SystemExit: 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/pynets/pynets_run.py", line 1701, in
main()
File "/opt/conda/lib/python3.6/site-packages/pynets/pynets_run.py", line 1694, in main
build_workflow(args, retval)
File "/opt/conda/lib/python3.6/site-packages/pynets/pynets_run.py", line 1585, in build_workflow
directget, tiss_class, runtime_dict, embed, multi_directget, multimodal)
UnboundLocalError: local variable 'multimodal' referenced before assignment


I went through the documentation thoroughly and concluded that this was not related to build failures or to passing the wrong options.
Thank you for your help, and for your effort in developing the package.

Byung-Hoon

Node Outside BrainMask

  • PyNets version:
  • Python version:
  • Operating System:

Description

I'm running a graph analysis using resting-state data that's been preprocessed via fMRIprep.
It looks like the brain mask I used excluded one of the nodes in the Power 2011 atlas, which then made the entire run crash.
Should I not include a brain mask? If I don't, I run the risk of including nodes that don't provide a reliable signal in my analysis.
It might be helpful if PyNets could identify which nodes are outside of the brain mask and give a warning, but still move forward with extracting the signal from the remaining nodes (see the sketch below).
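
For what it's worth, such a check could look something like the following (a hypothetical helper, not part of PyNets), which maps MNI-space node coordinates into mask voxel space via the inverse affine:

    import nibabel as nib
    import numpy as np

    def nodes_outside_mask(coords_mni, mask_path):
        """Return indices of MNI-space node coordinates that fall outside a
        binary brain mask (hypothetical helper, not part of PyNets)."""
        mask_img = nib.load(mask_path)
        mask = mask_img.get_fdata() > 0
        # Map world (MNI) coordinates to voxel indices via the inverse affine.
        vox = nib.affines.apply_affine(np.linalg.inv(mask_img.affine),
                                       np.asarray(coords_mni))
        vox = np.round(vox).astype(int)
        outside = []
        for i, (x, y, z) in enumerate(vox):
            in_bounds = all(0 <= v < s for v, s in zip((x, y, z), mask.shape))
            if not in_bounds or not mask[x, y, z]:
                outside.append(i)
        return outside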

What I Did

I used this command:

singularity exec /work/04171/dpisner/pynets_singularity_latest-2018-11-19-60e143007b19.img pynets_run.py -i /scratch/05231/klray/R33/fmriprep/sub-10035/ses-01/func/sub-10035_ses-01_task-REST_run-01_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz -id 10035 -a coords_dosenbach_2010,coords_power_2011 -ua /scratch/05231/klray/Schaefer2018_400Parcels_7Networks_order_FSLMNI152_2mm.nii -min_thr 0.05 -max_thr 0.25 -step_thr 0.05 -dt -mod partcorr,corr,sps -ns 3,6 -sm 0,2 -b 5 -pm 40,48 -plug LegacyMultiProc -m /scratch/05231/klray/R33/fmriprep/sub-10035/ses-01/func/sub-10035_ses-01_task-REST_run-01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz -conf /scratch/05231/klray/R33/fmriprep/confounds_KLR/sub-10035_ses-01_task-REST_desc-confounds_regressorsKLR.txt
I attached the crash files.
Thanks!
crash-20181127-210728-klray-extract_ts_node.cI.c09.d1-5d4dc546-b4af-4411-826f-ebed91090adf.txt
crash-20181127-211002-klray-extract_ts_node.cI.c06.d1-06905307-db94-4619-9e3a-cd1ae64c31af.txt
crash-20181127-211002-klray-extract_ts_node.cI.c08.d1-f06930f1-da2f-4de6-9ed8-25218aacb46c.txt
crash-20181127-211004-klray-extract_ts_node.cI.c04.d1-4531a55c-c99f-4d19-a485-fa6dc774dbce.txt
crash-20181127-211010-klray-extract_ts_node.cI.c02.d1-92879a84-a328-4133-8194-e84aeaaad004.txt
crash-20181127-211010-klray-extract_ts_node.cI.c03.d1-0f21b772-0da9-4212-8713-45643d0736cc.txt
crash-20181127-211010-klray-extract_ts_node.cI.c05.d1-287663e2-9256-4806-99b4-9dd25726012b.txt
crash-20181127-211014-klray-extract_ts_node.cI.c01.d1-8f2e0eb5-b4e4-4d56-93c3-99650a0d993d.txt
crash-20181127-211025-klray-extract_ts_node.cI.c00.d1-eabe4d56-5372-4d3c-8123-a700e1f2924c.txt

unit tests for diffconnectometry and netstats

  • PyNets version:
  • Python version:
  • Operating System:

Description

Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.

What I Did

Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.

Sample dataset included in the repo?

It would be great to have a sample dataset included in the repo that isn't too large, so that users can test out PyNets immediately after download.

Even better, it would provide data for testing our functions!

pynets thresholding

If I import the pynets library (version 1.27), I am not able to get pynets.thresholding.threshold_proportional(matrix, thresh_value) working; it says "AttributeError: module 'pynets' has no attribute 'thresholding'".
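
Importing a package does not automatically import its submodules, so an explicit submodule import may resolve this. A minimal sketch (the module path is taken from the error message; some releases may expose it elsewhere, e.g. pynets.core.thresholding):

    import numpy as np
    from pynets import thresholding  # explicit submodule import

    W = np.random.rand(10, 10)  # toy connectivity matrix
    # Keep the strongest 20% of weights (function name as cited above).
    W_thr = thresholding.threshold_proportional(W, 0.2)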

Looking for WholeBrain connectivity matrix that includes all nodes

I ran PyNets on each of my subjects separately using the Schaefer2018_400Parcels_7Networks_order_FSLMNI152_2mm atlas.
While there are 400 nodes in this atlas, it looks like 5 or so nodes were outside of my brain mask and were not included in the analysis. This is great; however, when I look at the estimate files, they are 398x398 for one subject, 397x397 for another, etc.
Is there a whole-brain output somewhere that includes all 400 nodes (I would suppose that the pruned nodes outside of the mask would simply have zeros in their corresponding rows/columns)? This would be beneficial so that I can generate a group graph (see the sketch below).

Thanks
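
A minimal sketch of re-inserting pruned nodes as zero rows/columns, assuming the indices of the retained nodes are known (whether PyNets records them is not confirmed here):

    import numpy as np

    def pad_pruned(conn_small, n_total, kept_idx):
        """Embed a pruned connectivity matrix into an n_total x n_total
        matrix, zero-filling the rows/columns of dropped nodes
        (hypothetical helper)."""
        conn_full = np.zeros((n_total, n_total))
        ix = np.asarray(kept_idx)
        conn_full[np.ix_(ix, ix)] = conn_small
        return conn_full

    # e.g. a 398x398 estimate where nodes 12 and 250 were pruned from 400:
    kept = [i for i in range(400) if i not in (12, 250)]
    full = pad_pruned(np.random.rand(398, 398), 400, kept)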

UnboundLocalError: local variable 'multimodal' referenced before assignment

  • Python version: 3.5
  • Operating System: Centos 7

Description

There's a missing assignment of the boolean multimodal.
Suggested fix: at line 1167, add
multimodal = False
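
A minimal sketch of the suggested defensive initialization (the argument names below are illustrative, not the actual pynets_run.py context):

    def build_workflow(func_file=None, dwi_file=None):
        # Initialize the flag before any branch that may set it, so the
        # later reference is always defined.
        multimodal = False
        if func_file is not None and dwi_file is not None:
            multimodal = True
        return multimodal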

What I Did

pynets_run.py -func 'PyNets-master/tests/examples/002/fmri/002.nii.gz' -id '002' -a 'coords_dosenbach_2010' -mod 'partcorr' -thr 0.20

Traceback (most recent call last):
File "/usr/bin/pynets_run.py", line 1690, in main
sys.exit(p.exitcode)
SystemExit: 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/bin/pynets_run.py", line 1701, in
main()
File "/usr/bin/pynets_run.py", line 1694, in main
build_workflow(args, retval)
File "/usr/bin/pynets_run.py", line 1585, in build_workflow
directget, tiss_class, runtime_dict, embed, multi_directget, multimodal)
UnboundLocalError: local variable 'multimodal' referenced before assignment

Crossing coordinate atlas parcellations, resting-state networks, and specific brain regions?

  • PyNets version:0.7.25
  • Python version: 3.6.6 (Anaconda)
  • Operating System: Ubuntu 16.04.4

I had some (hopefully) quick, related questions:

  1. First, using the '-n' flag, one can pull Yeo-Schaefer networks, but some experimenting suggests that certain atlases (called via the '-a' flag) have different coverage of these networks. Is there a good way to think about how these two options intersect? Some '-a' specifications seem to yield "better coverage" of some networks (just experimenting with the default mode, etc.)?

  2. Related: if one were to use another atlas that doesn't appear to be preconfigured for the '-a' flag, such as AICHAJoliot2015 or BrainnetomeAtlasFan2016 included in the pynets/atlas folder, how can you pull out specific networks (or will PyNets automatically work through the appropriate coordinates)?

  3. And finally, if one wanted to focus on a small set of regions (say, for example, pull out the centrality of an ACC ROI from an atlas), could that be accomplished in PyNets? Or would it make more sense to generate graph files and then port them to igraph or other programs?

Pardon the slightly basic questions. Based on the existing documentation, I wasn't sure about these things. I know there's a new user guide coming; I was just very excited about getting PyNets implemented (and getting more data analyzed!).

Thanks much in advance.
Jamie.

Node check_orient_and_dims_clust_mask_node failed to run

@dPys I am trying to test PyNets so that I can run it on brainlife.io eventually.

Right now, I am having a problem running it locally on my dev machine and I am wondering if you could guide me in the right direction.

I've prepared the following testdata

$ tree testdata/
testdata/
├── bold.nii.gz
├── dwi.bvals
├── dwi.bvecs
├── dwi.nii.gz
├── mask.nii.gz
├── regressors.tsv
└── t1.nii.gz

I then launched PyNets via singularity like this

singularity run -e docker://dpys/pynets:latest pynets \
output -p 1 -mod partcorr corr -min_thr 0.05 -max_thr 0.1 -step_thr 0.01 -sm 0 2 4 -hp 0 0.028 0.080 -ct ward -k 100 200 \
-cm /outputs/triple_net_ICA_overlap_3_sig_bin.nii.gz \
-norm 6 \
-anat '"testdata/t1.nii.gz"' \
-func '"testdata/bold.nii.gz"' \
-conf '"testdata/regressors.tsv"' \
-id brainlife \
-work tmp \
-pm 8,30

Here are the output/error messages.


PyNets Version:
0.9.99c


2020-07-24 01:27:22





------------------------------------------------------------------------

Running workflow for single subject:
brainlife

Using whole-brain pipeline...

Using parcels as nodes...

Applying smoothing to node signal at multiple FWHM mm values: 0, 2, 4...

Applying high-pass filter to node signal at multiple Hz values: 0, 0.028, 0.080...

Extracting node signal using a mean strategy...

Iterating graph estimation across multiple connectivity models: partcorr, corr...

Clustering within mask at multiple resolutions...
Cluster atlas: triple_net_ICA_overlap_3_sig_bin_ward_k100
Cluster atlas: triple_net_ICA_overlap_3_sig_bin_ward_k200

Running fmri connectometry only...
BOLD Image: testdata/bold.nii.gz
BOLD Confound Regressors: testdata/regressors.tsv
T1-Weighted Image:
testdata/t1.nii.gz

-------------------------------------------------------------------------


Parsing functional models...
Running Unimodal Workflow...
200724-01:27:26,84 nipype.workflow INFO:
	 Generated workflow graph: tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/wf_single_sub_brainlife_fmri_20200724_012723/graph.png (graph2use=colored, simple_form=True).

Running with {'n_procs': 8, 'memory_gb': 30, 'scheduler': 'mem_thread'}

200724-01:27:31,515 nipype.workflow INFO:
	 Workflow wf_single_sub_brainlife_fmri_20200724_012723 settings: ['check', 'execution', 'logging', 'monitoring']
200724-01:27:32,365 nipype.workflow INFO:
	 Running in parallel.
200724-01:27:32,381 nipype.workflow INFO:
	 [MultiProc] Running 0 tasks, and 3 jobs ready. Free memory (GB): 30.00/30.00, Free processors: 8/8.
200724-01:27:33,381 nipype.workflow INFO:
	 [MultiProc] Running 3 tasks, and 0 jobs ready. Free memory (GB): 27.30/30.00, Free processors: 4/8.
                     Currently running:
                       * wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_func_node
                       * wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_anat_node
                       * wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_clust_mask_node
200724-01:27:34,491 nipype.workflow INFO:
	 [Node] Setting-up "wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_func_node" in "/mnt/scratch/hayashis/syncthing/git/PyNets/tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/wf_single_sub_brainlife_fmri_20200724_012723/meta_wf_brainlife/fmri_connectometry_brainlife/check_orient_and_dims_func_node".
200724-01:27:34,494 nipype.workflow INFO:
	 [Node] Running "check_orient_and_dims_func_node" ("nipype.interfaces.utility.wrappers.Function")
200724-01:27:34,511 nipype.workflow INFO:
	 [Node] Setting-up "wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_clust_mask_node" in "/mnt/scratch/hayashis/syncthing/git/PyNets/tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/wf_single_sub_brainlife_fmri_20200724_012723/meta_wf_brainlife/fmri_connectometry_brainlife/check_orient_and_dims_clust_mask_node".
200724-01:27:34,518 nipype.workflow INFO:
	 [Node] Running "check_orient_and_dims_clust_mask_node" ("nipype.interfaces.utility.wrappers.Function")
200724-01:27:34,543 nipype.workflow WARNING:
	 Storing result file without outputs
200724-01:27:34,544 nipype.workflow WARNING:
	 [Node] Error on "wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_func_node" (/mnt/scratch/hayashis/syncthing/git/PyNets/tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/wf_single_sub_brainlife_fmri_20200724_012723/meta_wf_brainlife/fmri_connectometry_brainlife/check_orient_and_dims_func_node)
200724-01:27:34,585 nipype.workflow INFO:
	 [Node] Setting-up "wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_anat_node" in "/mnt/scratch/hayashis/syncthing/git/PyNets/tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/wf_single_sub_brainlife_fmri_20200724_012723/meta_wf_brainlife/fmri_connectometry_brainlife/check_orient_and_dims_anat_node".
200724-01:27:34,588 nipype.workflow INFO:
	 [Node] Running "check_orient_and_dims_anat_node" ("nipype.interfaces.utility.wrappers.Function")
200724-01:27:34,593 nipype.workflow WARNING:
	 Storing result file without outputs
200724-01:27:34,594 nipype.workflow WARNING:
	 [Node] Error on "wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_clust_mask_node" (/mnt/scratch/hayashis/syncthing/git/PyNets/tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/wf_single_sub_brainlife_fmri_20200724_012723/meta_wf_brainlife/fmri_connectometry_brainlife/check_orient_and_dims_clust_mask_node)
200724-01:27:34,632 nipype.workflow WARNING:
	 Storing result file without outputs
200724-01:27:34,633 nipype.workflow WARNING:
	 [Node] Error on "wf_single_sub_brainlife_fmri_20200724_012723.meta_wf_brainlife.fmri_connectometry_brainlife.check_orient_and_dims_anat_node" (/mnt/scratch/hayashis/syncthing/git/PyNets/tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/wf_single_sub_brainlife_fmri_20200724_012723/meta_wf_brainlife/fmri_connectometry_brainlife/check_orient_and_dims_anat_node)
200724-01:27:35,385 nipype.workflow ERROR:
	 Node check_orient_and_dims_clust_mask_node failed to run on host dev1.soichi.us.
200724-01:27:35,387 nipype.workflow ERROR:
	 Saving crash info to tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/crash-20200724-012735-hayashis-check_orient_and_dims_clust_mask_node-60ce31d5-251b-43c8-a2f1-3fd3a3f894f0.txt
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 397, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/lib/python3.6/site-packages/nipype/interfaces/utility/wrappers.py", line 142, in _run_interface
    out = function_handle(**args)
TypeError: check_orient_and_dims() missing 2 required positional arguments: 'outdir' and 'vox_size'

200724-01:27:35,393 nipype.workflow ERROR:
	 Node check_orient_and_dims_anat_node failed to run on host dev1.soichi.us.
200724-01:27:35,393 nipype.workflow ERROR:
	 Saving crash info to tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/crash-20200724-012735-hayashis-check_orient_and_dims_anat_node-6fcac807-3e79-422d-a461-99f773bc72cb.txt
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/nibabel/loadsave.py", line 42, in load
    stat_result = os.stat(filename)
FileNotFoundError: [Errno 2] No such file or directory: 'testdata/t1.nii.gz'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 397, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/lib/python3.6/site-packages/nipype/interfaces/utility/wrappers.py", line 142, in _run_interface
    out = function_handle(**args)
  File "<string>", line 28, in check_orient_and_dims
  File "/opt/conda/lib/python3.6/site-packages/nibabel/loadsave.py", line 44, in load
    raise FileNotFoundError("No such file or no access: '%s'" % filename)
FileNotFoundError: No such file or no access: 'testdata/t1.nii.gz'

200724-01:27:35,399 nipype.workflow ERROR:
	 Node check_orient_and_dims_func_node failed to run on host dev1.soichi.us.
200724-01:27:35,399 nipype.workflow ERROR:
	 Saving crash info to tmp/brainlife_20200724_012725_d2a2be11-bbe1-4bed-80ed-973c2acf26ba_wf_single_subject_fmri_brainlife/crash-20200724-012735-hayashis-check_orient_and_dims_func_node-5db57751-0581-44ea-aee9-d1976aa94ba2.txt
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/nibabel/loadsave.py", line 42, in load
    stat_result = os.stat(filename)
FileNotFoundError: [Errno 2] No such file or directory: 'testdata/bold.nii.gz'

Thank you for your assistance!

Dynamic connectivity

  • PyNets version:
  • Python version:
  • Operating System:

Description

Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.

What I Did

Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.

AttributeError: 'numpy.ndarray' object has no attribute 'index'

  • PyNets version:
  • Python version:
    Docker
  • Operating System:
    Docker

Description

	 Saving crash info to /mnt/crash-20180614-092313-neuro-plot_all_node-df185f16-dd75-4361-9f4b-aeded70b1353.pklz
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 68, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 480, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 644, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/opt/conda/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 521, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/lib/python3.6/site-packages/nipype/interfaces/utility/wrappers.py", line 144, in _run_interface
    out = function_handle(**args)
  File "<string>", line 28, in plot_all
AttributeError: 'numpy.ndarray' object has no attribute 'index'
neuro@adf55fe5fc82:/mnt$ cat /tmp/tmpjot2l9k_/meta/wb_functional_connectometry/plot_all_node/_report/report.rst
Node: wb_functional_connectometry (plot_all_node (utility)
==========================================================


 Hierarchy : meta.wb_functional_connectometry.plot_all_node
 Exec ID : plot_all_node


Original Inputs
---------------


* ID : 997
* atlas_select : b'Dosenbach 2010 atlas'
* conn_matrix : [[ 0.          0.11278499 -0.03407978 ... -0.01133396  0.04338642
  -0.02806834]
 [ 0.11278499  0.          0.05601454 ... -0.06766719 -0.13525777
   0.09306326]
 [-0.03407978  0.05601454  0.         ...  0.03695102  0.06585721
   0.01376445]
 ...
 [-0.01133396 -0.06766719  0.03695102 ...  0.          0.01551453
   0.03925329]
 [ 0.04338642 -0.13525777  0.06585721 ...  0.01551453  0.
  -0.0080033 ]
 [-0.02806834  0.09306326  0.01376445 ...  0.03925329 -0.0080033
   0.        ]]
* conn_model : partcorr
* coords : [[ 18 -81 -33]
 [-21 -79 -33]
 [ -6 -79 -33]
 [ 33 -73 -30]
 [-34 -67 -29]
 [ 32 -61 -31]
 [-25 -60 -34]
 [-37 -54 -37]
 [ 21 -64 -22]
 [-34 -57 -24]
 [-24 -54 -21]
 [-28 -44 -25]
 [  5 -75 -11]
 [ 14 -75 -21]
 [-11 -72 -14]
 [  1 -66 -24]
 [-16 -64 -21]
 [ -6 -60 -15]
 [ -2  30  27]
 [-52 -63  15]
 [ 27  49  26]
 [-41 -47  29]
 [-36  18   2]
 [ 38  21  -1]
 [ 11 -24   2]
 [-20   6   7]
 [ 14   6   7]
 [ -6  17  34]
 [  9  20  34]
 [ 54 -31 -18]
 [  0  15  45]
 [-30 -14   1]
 [ 32 -12   2]
 [ 37  -2  -3]
 [-55 -44  30]
 [ 58 -41  20]
 [ -4 -31  -4]
 [-30 -28   9]
 [  8 -40  50]
 [ 42 -46  21]
 [-59 -47  11]
 [ 43 -43   8]
 [ 51 -30   5]
 [-12 -12   6]
 [ 11 -12   6]
 [-12  -3  13]
 [-48   6   1]
 [-46  10  14]
 [ 51  23   8]
 [ 34  32   7]
 [  9  39  20]
 [-36 -69  40]
 [-25  51  27]
 [-48 -63  35]
 [ 51 -59  34]
 [ 28 -37 -15]
 [-61 -41  -2]
 [-59 -25 -15]
 [ 52 -15 -13]
 [  0  51  32]
 [-42 -76  26]
 [ -2 -75  32]
 [ -9 -72  41]
 [ 45 -72  29]
 [-28 -42 -11]
 [-11 -58  17]
 [ 10 -55  17]
 [ -5 -52  17]
 [ -5 -43  25]
 [ -8 -41   3]
 [  1 -26  31]
 [ 11 -68  42]
 [ -6 -56  29]
 [  5 -50  33]
 [  9 -43  25]
 [ -3 -38  45]
 [-16  29  54]
 [ 23  33  47]
 [ 46  39 -15]
 [  8  42  -5]
 [-11  45  17]
 [ -6  50  -1]
 [  9  51  16]
 [  6  64   3]
 [ -1  28  40]
 [ 44 -52  47]
 [-53 -50  39]
 [-48 -47  49]
 [ 54 -44  43]
 [-41 -40  42]
 [ 32 -59  41]
 [-32 -58  46]
 [ 29  57  18]
 [-29  57  10]
 [-42   7  36]
 [ 44   8  34]
 [ 40  17  40]
 [-44  27  33]
 [ 46  28  31]
 [ 40  36  29]
 [-35 -46  48]
 [-52  28  17]
 [-43  47   2]
 [ 42  48  -3]
 [ 39  42  16]
 [ 20 -78  -2]
 [ 15 -77  32]
 [-16 -76  33]
 [  9 -76  14]
 [-29 -75  28]
 [ 29 -73  29]
 [ 39 -71  13]
 [ 17 -68  20]
 [ 19 -66  -1]
 [-44 -63  -7]
 [-34 -60  -5]
 [ 36 -60  -8]
 [-18 -50   1]
 [ -4 -94  12]
 [ 13 -91   2]
 [ 27 -91   2]
 [-29 -88   8]
 [-37 -83  -2]
 [ 29 -81  14]
 [ 33 -81  -2]
 [ -5 -80   9]
 [ 46 -62   5]
 [  0  -1  52]
 [ 60   8  34]
 [ 53  -3  32]
 [ 58  11  14]
 [ 33 -12  16]
 [-36 -12  15]
 [-42  -3  11]
 [-24 -30  64]
 [ 18 -27  62]
 [-38 -27  60]
 [ 41 -23  55]
 [-55 -22  38]
 [ 46 -20  45]
 [-47 -18  50]
 [-38 -15  59]
 [-47 -12  36]
 [-26  -8  54]
 [ 42 -24  17]
 [-41 -31  48]
 [ 10   5  51]
 [-54 -22  22]
 [ 44 -11  38]
 [-54  -9  23]
 [ 46  -8  24]
 [-44  -6  49]
 [ 58  -3  17]
 [ 34 -39  65]
 [-41 -37  16]
 [-53 -37  13]
 [-54 -22   9]
 [ 59 -13   8]
 [ 43   1  12]
 [-55   7  23]]
* dir_path : /mnt/coords_dosenbach_2010
* edge_threshold : 99.99%
* function_str : def plot_all(conn_matrix, conn_model, atlas_select, dir_path, ID, network, label_names, mask, coords, thr, node_size,
             edge_threshold):
    import matplotlib
    matplotlib.use('Agg')
    from matplotlib import pyplot as plt
    from nilearn import plotting as niplot
    import pkg_resources
    import networkx as nx
    from pynets import plotting
    from pynets.netstats import most_important
    try:
        import cPickle as pickle
    except ImportError:
        import _pickle as pickle

    pruning = True
    dpi_resolution = 500
    G_pre=nx.from_numpy_matrix(conn_matrix)
    if pruning == True:
        [G, pruned_nodes, pruned_edges] = most_important(G_pre)
    else:
        G = G_pre
    conn_matrix = nx.to_numpy_array(G)

    pruned_nodes.sort(reverse=True)
    for j in pruned_nodes:
        del label_names[label_names.index(label_names[j])]
        del coords[coords.index(coords[j])]

    pruned_edges.sort(reverse=True)
    for j in pruned_edges:
        del label_names[label_names.index(label_names[j])]
        del coords[coords.index(coords[j])]

    # Plot connectogram
    if len(conn_matrix) > 20:
        try:
            plotting.plot_connectogram(conn_matrix, conn_model, atlas_select, dir_path, ID, network, label_names)
        except RuntimeError:
            print('\n\n\nError: Connectogram plotting failed!')
    else:
        print('Error: Cannot plot connectogram for graphs smaller than 20 x 20!')

    # Plot adj. matrix based on determined inputs
    plotting.plot_conn_mat_func(conn_matrix, conn_model, atlas_select, dir_path, ID, network, label_names, mask, thr,
                                node_size)

    # Plot connectome
    if mask:
        if network:
            out_path_fig = "%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s" % (dir_path, '/', ID, '_', str(atlas_select), '_', str(conn_model), '_', str(os.path.basename(mask).split('.')[0]), '_', str(network), '_', str(thr), '_', str(node_size), '_func_glass_viz.png')
        else:
            out_path_fig = "%s%s%s%s%s%s%s%s%s%s%s%s%s%s" % (dir_path, '/', ID, '_', str(atlas_select), '_', str(conn_model), '_', str(os.path.basename(mask).split('.')[0]), '_', str(thr), '_', str(node_size), '_func_glass_viz.png')
        # Save coords to pickle
        coord_path = "%s%s%s%s" % (dir_path, '/coords_', os.path.basename(mask).split('.')[0], '_plotting.pkl')
        with open(coord_path, 'wb') as f:
            pickle.dump(coords, f, protocol=2)
        net_parcels_map_nifti = None
        # Save labels to pickle
        labels_path = "%s%s%s%s" % (dir_path, '/labelnames_', os.path.basename(mask).split('.')[0], '_plotting.pkl')
        with open(labels_path, 'wb') as f:
            pickle.dump(label_names, f, protocol=2)
    else:
        if network:
            out_path_fig = "%s%s%s%s%s%s%s%s%s%s%s%s%s%s" % (dir_path, '/', ID, '_', str(atlas_select), '_', str(conn_model), '_', str(network), '_', str(thr), '_', str(node_size), '_func_glass_viz.png')
        else:
            out_path_fig = "%s%s%s%s%s%s%s%s%s%s%s%s" % (dir_path, '/', ID, '_', str(atlas_select), '_', str(conn_model), '_', str(thr), '_', str(node_size), '_func_glass_viz.png')
        # Save coords to pickle
        coord_path = "%s%s" % (dir_path, '/coords_plotting.pkl')
        with open(coord_path, 'wb') as f:
            pickle.dump(coords, f, protocol=2)
        # Save labels to pickle
        labels_path = "%s%s" % (dir_path, '/labelnames_plotting.pkl')
        with open(labels_path, 'wb') as f:
            pickle.dump(label_names, f, protocol=2)
    #niplot.plot_connectome(conn_matrix, coords, edge_threshold=edge_threshold, node_size=20, colorbar=True, output_file=out_path_fig)
    ch2better_loc = pkg_resources.resource_filename("pynets", "templates/ch2better.nii.gz")
    connectome = niplot.plot_connectome(np.zeros(shape=(1, 1)), [(0, 0, 0)], node_size=0.0001)
    connectome.add_overlay(ch2better_loc, alpha=0.4, cmap=plt.cm.gray)
    [z_min, z_max] = -np.abs(conn_matrix).max(), np.abs(conn_matrix).max()
    connectome.add_graph(conn_matrix, coords, edge_threshold=edge_threshold, edge_cmap='Greens',
                         edge_vmax=z_max, edge_vmin=z_min, node_size=4)
    connectome.savefig(out_path_fig, dpi=dpi_resolution)
    return

* ignore_exception : False
* label_names : ["inf cerebellum 155", "inf cerebellum 150", "inf cerebellum 151", "inf cerebellum 140", "inf cerebellum 131", "inf cerebellum 122", "inf cerebellum 121", "inf cerebellum 110", "lat cerebellum 128", "lat cerebellum 113", "lat cerebellum 109", "lat cerebellum 98", "med cerebellum 143", "med cerebellum 144", "med cerebellum 138", "med cerebellum 130", "med cerebellum 127", "med cerebellum 120", "ACC 19", "TPJ 125", "aPFC 8", "angular gyrus 102", "ant insula 28", "ant insula 26", "basal ganglia 71", "basal ganglia 38", "basal ganglia 39", "basal ganglia 30", "dACC 27", "fusiform 81", "mFC 31", "mid insula 61", "mid insula 59", "mid insula 44", "parietal 97", "parietal 89", "post cingulate 80", "post insula 76", "precuneus 87", "sup temporal 100", "temporal 103", "temporal 95", "temporal 78", "thalamus 57", "thalamus 58", "thalamus 47", "vFC 40", "vFC 33", "vFC 25", "vPFC 18", "ACC 14", "IPS 134", "aPFC 5", "angular gyrus 124", "angular gyrus 117", "fusiform 84", "inf temporal 91", "inf temporal 72", "inf temporal 63", "mPFC 4", "occipital 146", "occipital 141", "occipital 136", "occipital 137", "occipital 92", "post cingulate 115", "post cingulate 111", "post cingulate 108", "post cingulate 93", "post cingulate 90", "post cingulate 73", "precuneus 132", "precuneus 112", "precuneus 105", "precuneus 94", "precuneus 85", "sup frontal 20", "sup frontal 17", "vlPFC 15", "vmPFC 13", "vmPFC 11", "vmPFC 7", "vmPFC 6", "vmPFC 1", "ACC 21", "IPL 107", "IPL 104", "IPL 101", "IPL 96", "IPL 88", "IPS 116", "IPS 114", "aPFC 2", "aPFC 3", "dFC 36", "dFC 34", "dFC 29", "dlPFC 24", "dlPFC 22", "dlPFC 16", "post parietal 99", "vPFC 23", "vent aPFC 10", "vent aPFC 9", "vlPFC 12", "occipital 149", "occipital 148", "occipital 145", "occipital 147", "occipital 142", "occipital 139", "occipital 135", "occipital 133", "occipital 129", "occipital 126", "occipital 118", "occipital 119", "occipital 106", "post occipital 160", "post occipital 158", "post occipital 159", "post occipital 157", "post occipital 156", "post occipital 153", "post occipital 154", "post occipital 152", "temporal 123", "SMA 43", "dFC 35", "frontal 45", "frontal 32", "mid insula 55", "mid insula 56", "mid insula 48", "parietal 77", "parietal 74", "parietal 75", "parietal 69", "parietal 66", "parietal 65", "parietal 64", "parietal 62", "parietal 54", "parietal 50", "post insula 70", "post parietal 79", "pre-SMA 41", "precentral gyrus 67", "precentral gyrus 53", "precentral gyrus 52", "precentral gyrus 51", "precentral gyrus 49", "precentral gyrus 46", "sup parietal 86", "temporal 82", "temporal 83", "temporal 68", "temporal 60", "vFC 42", "vFC 37"]
* mask : None
* network : None
* node_size : 2
* thr : 0.9999

What I Did

pynets_run.py -i /mnt/002.nii.gz -id 997 -mod 'partcorr' -ns 2 -a coords_dosenbach_2010 -plt

Official detailed User Guide


Adaptive Thresholding issue

Looking at the adaptive_thresholding function: right now, the user inputs a starting threshold, and the function adjusts it until fewer than 25% of the edges are missing.

It seems that making the 25% criterion itself the input value would be more useful to users. Instead of the user setting a starting point for the adaptive threshold, we could choose a fixed high threshold, like 99, and lower it until we meet the edge-density criterion passed to the function, as in the sketch below.
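
A minimal sketch of that inversion (a proposal only, not the current PyNets function; the names and defaults are illustrative):

import numpy as np

def adaptive_threshold(conn_matrix, max_missing=0.25, start_pctl=99.0, step=0.5):
    # Lower the threshold from a high starting percentile until fewer than
    # `max_missing` of the possible edges are missing.
    upper = np.abs(conn_matrix[np.triu_indices(conn_matrix.shape[0], k=1)])
    pctl = start_pctl
    while pctl > 0:
        thr = np.percentile(upper, pctl)
        missing = (upper < thr).sum() / upper.size
        if missing < max_missing:
            return thr
        pctl -= step
    return 0.0

With max_missing=0.25 this reproduces the current behavior, but the criterion becomes an argument instead of a hard-coded constant.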

Issue related to get_node_membership_node

  • PyNets version: 0.5.81
  • Python version: 3.6.3 (though 2.7 also installed)
  • Operating System: Ubuntu 16.04.4

Description

Trying to run Example B with preprocessed data. I can execute Example A without issue, but I keep running into errors (which may be nipype-related; there also appear to be issues with the file type of SMALLREF2mm.nii.gz).

What I Did

Initial command--

pynets_run.py -i /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/fmriprep/sub-NDARAA536PTU/func/test/sub-NDARAA536PTU_task-resting_bold_space-MNI152NLin2009cAsym_variant-smoothAROMAnonaggr_preproc.nii.gz -id NDARAA536PTU_temp -a coords_dosenbach_2010,coords_power_2011 -n Default -dt -thr 0.3 -ns 2,4 -mod partcorr,sps -plt

Output--
`------------------------------------------------------------------------

SUBJECT ID: NDARAA536PTU_temp
Iterating across multiple nilearn atlases...
Using RSN pipeline for: Default
Growing spherical nodes across multiple radius sizes: 2, 4...
Running functional connectometry only...
Functional file: /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/fmriprep/sub-NDARAA536PTU/func/test/sub-NDARAA536PTU_task-resting_bold_space-MNI152NLin2009cAsym_variant-smoothAROMAnonaggr_preproc.nii.gz


Running with {'n_procs': 2, 'memory_gb': 4}

181002-14:48:58,227 nipype.workflow INFO:
Workflow PyNets_NDARAA536PTU_temp settings: ['check', 'execution', 'logging', 'monitoring']
181002-14:48:58,337 nipype.workflow INFO:
Running in parallel.
181002-14:48:58,341 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 1 jobs ready. Free memory (GB): 4.00/4.00, Free processors: 2/2.
181002-14:48:58,512 nipype.workflow INFO:
[Node] Setting-up "PyNets_NDARAA536PTU_temp.imp_est" in "/tmp/tmpksfjm8bv/PyNets_NDARAA536PTU_temp/imp_est".
181002-14:48:58,522 nipype.workflow INFO:
[Node] Running "imp_est" ("nipype.interfaces.utility.wrappers.Function")
181002-14:48:58,679 nipype.workflow INFO:
Workflow meta settings: ['check', 'execution', 'logging', 'monitoring']
181002-14:48:58,747 nipype.workflow INFO:
Running in parallel.
181002-14:48:58,752 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 2 jobs ready. Free memory (GB): 4.00/4.00, Free processors: 2/2.
181002-14:48:58,852 nipype.workflow INFO:
[Node] Setting-up "meta.rsn_functional_connectometry.RSN_fetch_nodes_and_labels_node" in "/tmp/tmpq71683rv/meta/rsn_functional_connectometry/_atlas_select_coords_power_2011/RSN_fetch_nodes_and_labels_node".
181002-14:48:58,854 nipype.workflow INFO:
[Node] Setting-up "meta.rsn_functional_connectometry.RSN_fetch_nodes_and_labels_node" in "/tmp/tmpf42uq49a/meta/rsn_functional_connectometry/_atlas_select_coords_dosenbach_2010/RSN_fetch_nodes_and_labels_node".
181002-14:48:58,858 nipype.workflow INFO:
[Node] Running "RSN_fetch_nodes_and_labels_node" ("nipype.interfaces.utility.wrappers.Function")
181002-14:48:58,860 nipype.workflow INFO:
[Node] Running "RSN_fetch_nodes_and_labels_node" ("nipype.interfaces.utility.wrappers.Function")
Fetching coordinates and labels from nilearn coordinate-based atlases
Fetching coordinates and labels from nilearn coordinate-based atlases

Power 2011 atlas comes with dict_keys(['rois', 'description'])

Stacked atlas coordinates in array of shape (264, 3).

Dosenbach 2010 atlas comes with dict_keys(['rois', 'labels', 'networks', 'description'])

Stacked atlas coordinates in array of shape (160, 3).

181002-14:48:59,590 nipype.workflow INFO:
[Node] Finished "meta.rsn_functional_connectometry.RSN_fetch_nodes_and_labels_node".
181002-14:48:59,593 nipype.workflow INFO:
[Node] Finished "meta.rsn_functional_connectometry.RSN_fetch_nodes_and_labels_node".
181002-14:49:00,345 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 0 jobs ready. Free memory (GB): 3.80/4.00, Free processors: 1/2.
Currently running:
* PyNets_NDARAA536PTU_temp.imp_est
181002-14:49:00,755 nipype.workflow INFO:
[Job 0] Completed (meta.rsn_functional_connectometry.RSN_fetch_nodes_and_labels_node).
181002-14:49:00,758 nipype.workflow INFO:
[Job 18] Completed (meta.rsn_functional_connectometry.RSN_fetch_nodes_and_labels_node).
181002-14:49:00,761 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 2 jobs ready. Free memory (GB): 4.00/4.00, Free processors: 2/2.
181002-14:49:00,966 nipype.workflow INFO:
[Node] Setting-up "meta.rsn_functional_connectometry.get_node_membership_node" in "/tmp/tmpkk_w66py/meta/rsn_functional_connectometry/_atlas_select_coords_power_2011/get_node_membership_node".
181002-14:49:00,994 nipype.workflow INFO:
[Node] Setting-up "meta.rsn_functional_connectometry.get_node_membership_node" in "/tmp/tmpq2h6b2ph/meta/rsn_functional_connectometry/_atlas_select_coords_dosenbach_2010/get_node_membership_node".
181002-14:49:01,6 nipype.workflow INFO:
[Node] Running "get_node_membership_node" ("nipype.interfaces.utility.wrappers.Function")
181002-14:49:01,22 nipype.workflow INFO:
[Node] Running "get_node_membership_node" ("nipype.interfaces.utility.wrappers.Function")
181002-14:49:01,46 nipype.workflow WARNING:
[Node] Error on "meta.rsn_functional_connectometry.get_node_membership_node" (/tmp/tmpkk_w66py/meta/rsn_functional_connectometry/_atlas_select_coords_power_2011/get_node_membership_node)
181002-14:49:01,60 nipype.workflow WARNING:
[Node] Error on "meta.rsn_functional_connectometry.get_node_membership_node" (/tmp/tmpq2h6b2ph/meta/rsn_functional_connectometry/_atlas_select_coords_dosenbach_2010/get_node_membership_node)
181002-14:49:02,760 nipype.workflow ERROR:
Node get_node_membership_node.c1 failed to run on host pfc.
181002-14:49:02,760 nipype.workflow ERROR:
Saving crash info to /home/jamielh/Volumes/Hanson/Neuroimaging_Resources/crash-20181002-144902-jamielh-get_node_membership_node.c1-ebe143cf-6b06-4bbb-a576-b1a20fceeb1f.pklz
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 471, in run
result = self._run_interface(execute=True)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 555, in _run_interface
return self._run_command(execute)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_command
result = self._interface.run(cwd=outdir)
File "/usr/local/lib/python3.6/dist-packages/nipype/interfaces/base/core.py", line 522, in run
runtime = self._run_interface(runtime)
File "/usr/local/lib/python3.6/dist-packages/nipype/interfaces/utility/wrappers.py", line 144, in _run_interface
out = function_handle(**args)
File "", line 48, in get_node_membership
File "/usr/local/lib/python3.6/dist-packages/nibabel/loadsave.py", line 53, in load
filename)
nibabel.filebasedimages.ImageFileError: Cannot work out file type of "/home/jamielh/anaconda3/lib/python3.6/site-packages/pynets-0.5.81-py3.6.egg/pynets/rsnrefs/SMALLREF2mm.nii.gz"

181002-14:49:02,785 nipype.workflow ERROR:
Node get_node_membership_node.c0 failed to run on host pfc.
181002-14:49:02,786 nipype.workflow ERROR:
Saving crash info to /home/jamielh/Volumes/Hanson/Neuroimaging_Resources/crash-20181002-144902-jamielh-get_node_membership_node.c0-2eda193e-33f2-4944-aae0-5abd1fdab84e.pklz
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 471, in run
result = self._run_interface(execute=True)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 555, in _run_interface
return self._run_command(execute)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_command
result = self._interface.run(cwd=outdir)
File "/usr/local/lib/python3.6/dist-packages/nipype/interfaces/base/core.py", line 522, in run
runtime = self._run_interface(runtime)
File "/usr/local/lib/python3.6/dist-packages/nipype/interfaces/utility/wrappers.py", line 144, in _run_interface
out = function_handle(**args)
File "", line 48, in get_node_membership
File "/usr/local/lib/python3.6/dist-packages/nibabel/loadsave.py", line 53, in load
filename)
nibabel.filebasedimages.ImageFileError: Cannot work out file type of "/home/jamielh/anaconda3/lib/python3.6/site-packages/pynets-0.5.81-py3.6.egg/pynets/rsnrefs/SMALLREF2mm.nii.gz"

181002-14:49:02,806 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 0 jobs ready. Free memory (GB): 4.00/4.00, Free processors: 2/2.
181002-14:49:04,757 nipype.workflow INFO:
***********************************
181002-14:49:04,757 nipype.workflow ERROR:
could not run node: meta.rsn_functional_connectometry.get_node_membership_node.c1
181002-14:49:04,757 nipype.workflow INFO:
crashfile: /home/jamielh/Volumes/Hanson/Neuroimaging_Resources/crash-20181002-144902-jamielh-get_node_membership_node.c1-ebe143cf-6b06-4bbb-a576-b1a20fceeb1f.pklz
181002-14:49:04,757 nipype.workflow ERROR:
could not run node: meta.rsn_functional_connectometry.get_node_membership_node.c0
181002-14:49:04,757 nipype.workflow INFO:
crashfile: /home/jamielh/Volumes/Hanson/Neuroimaging_Resources/crash-20181002-144902-jamielh-get_node_membership_node.c0-2eda193e-33f2-4944-aae0-5abd1fdab84e.pklz
181002-14:49:04,757 nipype.workflow INFO:
***********************************
181002-14:49:04,763 nipype.workflow WARNING:
[Node] Error on "PyNets_NDARAA536PTU_temp.imp_est" (/tmp/tmpksfjm8bv/PyNets_NDARAA536PTU_temp/imp_est)
181002-14:49:06,352 nipype.workflow ERROR:
Node imp_est failed to run on host pfc.
181002-14:49:06,359 nipype.workflow ERROR:
Saving crash info to /home/jamielh/Volumes/Hanson/Neuroimaging_Resources/crash-20181002-144906-jamielh-imp_est-9b01a0ed-8c7c-4445-983f-c75bb53f2b78.pklz
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 471, in run
result = self._run_interface(execute=True)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 555, in _run_interface
return self._run_command(execute)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_command
result = self._interface.run(cwd=outdir)
File "/usr/local/lib/python3.6/dist-packages/nipype/interfaces/base/core.py", line 522, in run
runtime = self._run_interface(runtime)
File "/usr/local/lib/python3.6/dist-packages/nipype/interfaces/utility/wrappers.py", line 144, in _run_interface
out = function_handle(**args)
File "", line 106, in workflow_selector
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/workflows.py", line 595, in run
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/plugins/base.py", line 192, in run
report_nodes_not_run(notrun)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/plugins/tools.py", line 82, in report_nodes_not_run
raise RuntimeError(('Workflow did not execute cleanly. '
RuntimeError: Workflow did not execute cleanly. Check log for details

181002-14:49:06,382 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 0 jobs ready. Free memory (GB): 4.00/4.00, Free processors: 2/2.
181002-14:49:08,350 nipype.workflow INFO:
***********************************
181002-14:49:08,350 nipype.workflow ERROR:
could not run node: PyNets_NDARAA536PTU_temp.imp_est
181002-14:49:08,350 nipype.workflow INFO:
crashfile: /home/jamielh/Volumes/Hanson/Neuroimaging_Resources/crash-20181002-144906-jamielh-imp_est-9b01a0ed-8c7c-4445-983f-c75bb53f2b78.pklz
181002-14:49:08,350 nipype.workflow INFO:
***********************************
Traceback (most recent call last):
File "/home/jamielh/anaconda3/bin/pynets_run.py", line 4, in
import('pkg_resources').run_script('pynets==0.5.81', 'pynets_run.py')
File "/home/jamielh/anaconda3/lib/python3.6/site-packages/pkg_resources/init.py", line 750, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/home/jamielh/anaconda3/lib/python3.6/site-packages/pkg_resources/init.py", line 1527, in run_script
exec(code, namespace, namespace)
File "/home/jamielh/anaconda3/lib/python3.6/site-packages/pynets-0.5.81-py3.6.egg/EGG-INFO/scripts/pynets_run.py", line 1034, in
wf.run(plugin='MultiProc', plugin_args=plugin_args)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/engine/workflows.py", line 595, in run
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/plugins/base.py", line 192, in run
report_nodes_not_run(notrun)
File "/usr/local/lib/python3.6/dist-packages/nipype/pipeline/plugins/tools.py", line 82, in report_nodes_not_run
raise RuntimeError(('Workflow did not execute cleanly. '
RuntimeError: Workflow did not execute cleanly. Check log for details`

Crash files uploaded Here and Here and Here.

I've tried updating nibabel, nilearn, and nipype, but I still run into those errors. I also ran pytest and it seemed fine (though I may have used incorrect syntax); output below--

`jamielh@pfc:~/Volumes/Hanson/Neuroimaging_Resources$ pytest
=============================== test session starts ===============================
platform linux -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /home/jamielh/Volumes/Hanson/Neuroimaging_Resources, inifile:
plugins: xdist-1.23.2, forked-0.2, cov-2.6.0
collected 0 items

============================ no tests ran in 4.79 seconds ============================
jamielh@pfc:~/Volumes/Hanson/Neuroimaging_Resources$`

Thoughts on ways to troubleshoot? (I was at least excited that Example A executed quickly and easily, which was an improvement over my previous test-drives!)

Thanks much,
Jamie.
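
For anyone hitting the same error: nibabel's "Cannot work out file type" on a .nii.gz usually means the file is not actually gzip-compressed. If the reference volumes are tracked with git-lfs and git-lfs was absent at install time, what sits on disk may be a small plain-text LFS pointer rather than a NIfTI. A quick check (a hedged suggestion; the path is taken from the traceback above):

path = '/home/jamielh/anaconda3/lib/python3.6/site-packages/pynets-0.5.81-py3.6.egg/pynets/rsnrefs/SMALLREF2mm.nii.gz'
with open(path, 'rb') as f:
    head = f.read(2)
# A genuine .nii.gz begins with the gzip magic bytes 0x1f 0x8b
print('gzip-compressed NIfTI' if head == b'\x1f\x8b'
      else 'not gzipped -- possibly a git-lfs pointer or truncated file')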

Registering PyNets on brainlife.io

Hi @dPys

Thanks for the poster presentation during OHBM2020! I've been meaning to ask whether you might be interested in enabling this app on the https://brainlife.io platform. I know there are already many ways to run PyNets, but brainlife could be yet another (easy?) way for someone to run it - especially for folks who don't have access to HPC resources. I think it would be great if people could run PyNets for fMRI analysis there.

Sorry for reaching out via GitHub (I don't have your contact info!). Anyway, please let me know if you are interested, and I can help you register the app. You can also email me at [email protected]

Cheers!

Multilayer graph analysis with multilayer plotting features


Node metrics?

Great package!

When looking at the network metrics (master/pynets/netstats.py), I could only see global metrics (i.e., one scalar for the whole network). How would one use the PyNets code to get local metrics, i.e., at the node level, with one value per node, such as the ones below (see the sketch after this list)?

  • degree

  • global efficiency (efficiency of a node i with all other nodes)

  • local efficiency (efficiency of a node i with its neighbors)

  • betweenness centrality (influence of a node i over the information flow between all other nodes)

  • clustering coefficient (fraction of the neighbors of a node i that are also neighbors of each other)
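
All of these can be computed per node with NetworkX, which PyNets already uses under the hood. A minimal sketch, run on a toy graph so it executes as-is (substitute a graph built from your own thresholded matrix, e.g. nx.from_numpy_array(adj); the nodal global-efficiency expression is the standard mean inverse shortest-path length):

import networkx as nx

G = nx.karate_club_graph()  # placeholder for a binarized connectivity graph
n = len(G)

degree = dict(G.degree())

# Nodal "global" efficiency: mean inverse shortest-path length from node i
global_eff = {i: sum(1.0 / d for j, d in
                     nx.single_source_shortest_path_length(G, i).items()
                     if j != i) / (n - 1) for i in G}

# Nodal local efficiency: global efficiency of the subgraph of i's neighbors
local_eff = {i: nx.global_efficiency(G.subgraph(list(G[i]))) for i in G}

betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)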

Ward clustering based on the AAL atlas

Hi,
Recently I have been trying to do Ward clustering based on the AAL atlas with resting-state fMRI data, as in the article at https://www.ncbi.nlm.nih.gov/pubmed/24647227. I used nilearn, but I found it does not provide a "cutoff" option to set a total cluster number across all 116 ROIs, so I am hoping someone who has done something similar can share a script.
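
In case it is useful, here is a rough sketch of one way to do it (not an existing nilearn or PyNets feature: the total cluster count is split across ROIs in proportion to voxel count, which is an illustrative rule, and func.nii.gz is assumed to be a preprocessed 4D image already resampled to the AAL grid):

import nibabel as nib
import numpy as np
from nilearn.datasets import fetch_atlas_aal
from nilearn.image import get_data, load_img
from sklearn.cluster import AgglomerativeClustering

total_clusters = 500  # the desired "cutoff" across all 116 ROIs
func_img = load_img('func.nii.gz')
func_data = get_data(func_img)  # X x Y x Z x T
atlas_img = load_img(fetch_atlas_aal().maps)
atlas_data = get_data(atlas_img)

roi_ids = [i for i in np.unique(atlas_data) if i != 0]
roi_sizes = np.array([(atlas_data == i).sum() for i in roi_ids])
# At least one cluster per ROI; the rest allocated proportionally to ROI size
alloc = np.maximum(1, np.round(total_clusters * roi_sizes / roi_sizes.sum())).astype(int)

parcellation = np.zeros(atlas_data.shape, dtype=np.int32)
next_label = 1
for roi, k in zip(roi_ids, alloc):
    mask = atlas_data == roi
    voxel_ts = func_data[mask]  # n_voxels x T
    labels = AgglomerativeClustering(n_clusters=int(k), linkage='ward').fit_predict(voxel_ts)
    parcellation[mask] = labels + next_label
    next_label += int(k)

nib.save(nib.Nifti1Image(parcellation, atlas_img.affine), 'aal_ward_parcellation.nii.gz')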

Guidance re: timing & atlas complexity?

Hello,

I was just starting to test-drive the package, and it looks promising. I wondered if you had information about typical processing time, especially as it scales with atlas complexity and resting-state scan length. I have a few sizable datasets (n's ranging from 500 to 1000) that I would like to apply PyNets to, and I wondered about a (rough) range of how long processing takes to complete per subject.

Right now, I have a pilot subject for which the package completed the global and local efficiency calculations using the Power atlas. It has been running for ~2 hours, but I can't tell whether it's hung. In your experience, how does processing time correlate with the number of atlas ROIs? Power has 264, but what if I scaled down to the AAL or other atlas sets? Any information is greatly appreciated; this all looks quite promising (thanks for the development and support).

Cheers,
Jamie.
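
One rough, hedged way to gauge how the graph-metric stage scales with atlas size is to time the heaviest nodal metrics on synthetic graphs of matching node counts (this ignores I/O, node extraction, and model fitting, so treat it only as a lower bound, not a PyNets benchmark):

import time
import networkx as nx

for name, n in [('AAL', 116), ('Dosenbach', 160), ('Power', 264)]:
    # Random graph at ~10% density as a stand-in for a thresholded connectome
    G = nx.gnm_random_graph(n, int(0.1 * n * (n - 1) / 2), seed=1)
    t0 = time.time()
    nx.betweenness_centrality(G)
    nx.local_efficiency(G)
    print('%s (%d nodes): %.2fs' % (name, n, time.time() - t0))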
