
DIPY is the paragon 3D/4D+ imaging library in Python. Contains generic methods for spatial normalization, signal processing, machine learning, statistical analysis and visualization of medical images. Additionally, it contains specialized methods for computational anatomy including diffusion, perfusion and structural imaging.

Home Page: https://dipy.org

License: Other

Topics: diffusion-mri, dti, tractography, tractometry, registration, denoising, spherical-harmonics, dki, tracking, contextual-enhancement

dipy's Introduction

DIPY - Diffusion Imaging in Python


DIPY [DIPYREF] is a Python library for the analysis of MR diffusion imaging data.

DIPY is for research only; please contact [email protected] if you plan to deploy in clinical settings.

Website

Current information can always be found on the DIPY website: http://dipy.org

Mailing Lists

Please see the DIPY community list at https://mail.python.org/mailman3/lists/dipy.python.org/

Please see the users' forum at https://github.com/dipy/dipy/discussions

Please join the Gitter chat room.

Code

You can find our sources and single-click downloads on GitHub: https://github.com/dipy/dipy

Installing DIPY

DIPY can be installed using `pip`:

pip install dipy

or using `conda`:

conda install -c conda-forge dipy

For detailed installation instructions, including instructions for installing from source, please read our installation documentation.

Python versions and dependencies

DIPY follows the Scientific Python SPEC 0 — Minimum Supported Versions recommendation as closely as possible, including the supported Python and dependencies versions.

Further information can be found in Toolchain Roadmap.

License

DIPY is licensed under the terms of the BSD license. Please see the LICENSE file.

Contributing

We welcome contributions from the community. Please read our Contributing guidelines.

Reference

DIPYREF
  E. Garyfallidis, M. Brett, B. Amirbekian, A. Rokem, S. Van Der Walt, M. Descoteaux, I. Nimmo-Smith and DIPY contributors, "DIPY, a library for the analysis of diffusion MRI data", Frontiers in Neuroinformatics, vol. 8, p. 8, 2014.

dipy's People

Contributors

arokem, bramshqamar, gabknight, garyfallidis, guaje, iannimmosmith, jhlegarreta, karanphil, kenjimarshall, kesshijordan, maharshi-gor, marccote, matthew-brett, matthieudumont, maurozucchelli, mdesco, mrbago, omarocegueda, pjsjongsung, quantshah, rafaelnh, ranveeraggarwal, riddhishb, rutgerfick, samuelstjean, shilpiprd, shreyasfadnavis, skoudoro, stefanv, tomdelahaije


dipy's Issues

viz.projections.sph_projection is broken

For some cases it works beautifully, but in some cases it is quite broken (producing quite ugly results...). The only way to make this work, I think, is to make basemap a dependency. We could do the same thing that we are doing with mpl here and use optional_package and tripwire if users don't have basemap installed.

Wrong normalization in peaks_from_model

Hi,

in peaks_from_model, from dipy.reconst.odf, when setting the variable normalize_peaks to False, the values of the peaks change (peaks_values), but not their directions (peak_directions); they stay the same as sphere.vertices[peaks_indices]. The directions should perhaps be multiplied by the norm.
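The fix being suggested could be sketched as follows (a minimal numpy sketch with illustrative names, not the actual dipy API):

```python
import numpy as np

# Hypothetical sketch of the suggested fix: when normalize_peaks is False,
# scale the unit directions taken from the sphere by the peak values.
def unnormalized_peak_dirs(vertices, peak_indices, peak_values):
    dirs = vertices[peak_indices]           # unit vectors from the sphere
    return dirs * peak_values[..., None]    # multiply each direction by its norm

vertices = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dirs = unnormalized_peak_dirs(vertices, np.array([0, 1]), np.array([2.0, 0.5]))
```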

Thanks,

Emmanuelle

SlowAdcOpdfModel imported from reconst.shm but not defined

I'm getting an error importing the interfaces module:

---> 23 from ..reconst.shm import (SlowAdcOpdfModel, MonoExpOpdfModel, QballOdfModel,
     24                           normalize_data, ClosestPeakSelector,
     25                           ResidualBootstrapWrapper, hat, lcr_matrix,

ImportError: cannot import name SlowAdcOpdfModel

I think that's correct - that SlowAdcOpdfModel is not defined in reconst.shm or anywhere else in dipy at the moment. It is also imported from doc/examples/tracking_example.py.

buildbot error in mask.py

See http://nipy.bic.berkeley.edu/builders/dipy-py2.6-osx-10.5-ppc/builds/195/steps/shell_5/logs/stdio

======================================================================
ERROR: dipy.segment.tests.test_mask.test_mask
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/buildslave/osx-10.5-ppc/dipy-py2_6-osx-10_5-ppc/build/venv/lib/python2.6/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
File "/Users/buildslave/osx-10.5-ppc/dipy-py2_6-osx-10_5-ppc/build/venv/lib/python2.6/site-packages/dipy/segment/tests/test_mask.py", line 40, in test_mask
    median_test = multi_median(median_test, medianradius, 3)
File "/Users/buildslave/osx-10.5-ppc/dipy-py2_6-osx-10_5-ppc/build/venv/lib/python2.6/site-packages/dipy/segment/mask.py", line 28, in multi_median
    outvol = np.zeros_like(input, dtype=input.dtype)
TypeError: zeros_like() got an unexpected keyword argument 'dtype'

----------------------------------------------------------------------
Ran 254 tests in 163.926s

This is with numpy 1.4.1. I think that zeros_like already uses the input data type, so the extra dtype=input.dtype is not necessary. Maybe you could remove that in mask.py? The tests still pass if you do.
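A quick check confirms the point: `zeros_like` already preserves the input's dtype, so the keyword can simply be dropped.

```python
import numpy as np

# zeros_like() already copies the input's dtype, so the dtype= keyword
# (not accepted by numpy 1.4.x) is redundant and can be removed in mask.py.
input_vol = np.arange(6, dtype=np.float32).reshape(2, 3)
outvol = np.zeros_like(input_vol)   # same shape and dtype, no dtype= needed
```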

Clarify dsi example

The very last line of the example is just confusing :

If you really want to save the PDFs of a full dataset on the disc we recommend using memory maps (numpy.memmap) but still have in mind that if you do that for example for a dataset of volume size (96, 96, 60) you will need about 20 which can take less space when reasonable spheres (with < 1000 vertices) are.

What is it supposed to mean?

Coordinate maps stuff

Hi all,
I've been writing some code for fiber tracking that I was hopping we could merge into the main branch soon, but before we do I wanted to document this coordinate map issue. I find coordinate map stuff very hard to explain clearly but I've tried hard to make it clear. To try and make things even more clear I've made the following script, https://github.com/MrBago/dipy/blob/master/scratch/coordmap_example.py Running this script and looking the the streamlines that result using trackvis will help clarify the discussion that follows. You probably need to check out my master branch of the dipy repository for the script to work.

My fiber tracking code uses what I will call the "trackvis" coordinate mapping system. In this coordinate map one corner of the first voxel of the image is at [0, 0, 0]. The diagonal corner of the same voxel is at [vs, vs, vs] (assuming isotropic voxels) and the diagonal corner of the image is at [vs*x, vs*y, vs*z] where [x, y, z] is the shape of the image. Some existing code in dipy uses what I will call the "nifti" coordinate system; this is the coordinate system of a nifti image with an affine that has [vs, vs, vs, 1] along the diagonal and 0 elsewhere. I've drawn 2D examples of these coordinate systems below.

Trackvis:
A------------
| C |   |   |
----B--------
|   |   |   |
-------------
|   |   |   |
------------D

A = [0, 0]
B = [1, 1]
C = [.5, .5]
D = [3, 3]



Nifti:
A------------
| C |   |   |
----B--------
|   |   |   |
-------------
|   |   |   |
------------D

A = [-.5, -.5]
B = [.5, .5]
C = [0, 0]
D = [2.5, 2.5]

I'm bringing this up because I think it can be very confusing to have two coordinate systems in dipy. This is partially my fault because I wrongly assumed that dipy was already using the trackvis coordinate system when I started writing my code, but before I go changing a bunch of code I'd like to discuss the merits of these coordinate systems. The trackvis coordinate system seems to have at least two clear advantages over the nifti coordinate system.

First, if we use the trackvis coordinate system in dipy, we can save our tracks directly to .trk files, without having to change coordinate systems first, and our tracks will be displayed correctly in trackvis.

Second, the trackvis coordinate system seems to be a 'natural' coordinate system for holding streamline information. That is to say this coordinate system is independent of the voxel size of the image volume. This is not true for the nifti coordinate system, and can be very useful for limiting the number of places where one needs to think about and deal with coordinate system changes.
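Under the diagonal-affine assumption described above, converting between the two conventions is just a half-voxel shift, since "nifti" coordinates put the first voxel center at the origin while "trackvis" coordinates put the first voxel corner there. A minimal sketch (function names illustrative, no rotation assumed):

```python
import numpy as np

# Minimal sketch assuming a diagonal affine with voxel size `vs` and no
# rotation: the two conventions differ by exactly half a voxel.
def nifti_to_trackvis(points, vs):
    return np.asarray(points) + 0.5 * np.asarray(vs)

def trackvis_to_nifti(points, vs):
    return np.asarray(points) - 0.5 * np.asarray(vs)

# point C from the diagrams: [0, 0] in nifti maps to [.5, .5] in trackvis
c_trackvis = nifti_to_trackvis([0.0, 0.0], vs=1.0)
```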

Return peaks directions in peaks_from_model

peaks_from_model calculates the directions, the norm of the peaks and the indices on the sphere, but throws away the directions. We could return them since they already get computed anyway and provide a boolean to disable the storage in the returned object, just like with return_odf and return_sh.

Currently, one has to reproject the data on the sphere using the indices (and as such be certain of the sphere that was used to produce the peaks), which is error prone in the case where the used sphere is not stored anywhere.

So we could simply return the directions also, since they are useful for metrics computation that do not simply rely on the norm of the peaks, but also where they are located.

Parallel peaksFromModel timing out on buildbot

We're getting an error on the OSX 10.7 buildbot for the (I think) new parallel code:

http://nipy.bic.berkeley.edu/builders/dipy-py2.7-osx-10.7/builds/19/steps/shell_5/logs/stdio

dipy.reconst.tests.test_odf.test_peaksFromModelParallel ... 
command timed out: 1200 seconds without output, attempting to kill
process killed by signal 9
program finished with exit code -1
elapsedTime=1281.151005

Has anyone got any ideas what's going wrong here? Anyone want ssh access to the machine to check it out?

CsaOdfModel online DIPY example : GFA and more

Hi again,

Another issue I have from the GFA.

  1. I believe that GFA from csa odf model is erroneous. Have a look at csa_example_gfa_sh.py in dipy/reconst/tests of my branch mdesco/csa_example_test. This example outputs the gfa computed from the qball, the min-max norm qball, the csa, the csa without negative values, the csa min-max normalized. The gfa are saved as nifti. If you load them, you will see the problem.
    i) The gfa odf looks good but is scaled between 0 and 0.3 or so (since unnormalized)
    ii) The gfa min-max looks ok but is much noisier, but now better scaled in a range of 0 to 0.8 or so.
    iii) The gfa from csa is almost equal to 1 everywhere (I think I know why...)

The GFA from a csa odf model should be nicely scaled between 0 and 1 with high values in the corpus callosum around 0.8 or so.

Something is wrong and I think it comes from the my next issue.

  2. GFA can easily be computed from a SH representation. It is much more efficient in memory and does not rely on a tessellation of the sphere.
    GFA = sqrt( 1 - c_0^2 / sum_i c_i^2 )
    where c_i are the SH coefficients of the odf.

I suggest we add such a gfa_sh in the SphHarmModels. This is a much cleaner way to do it.
See the gfa_sh.nii.gz file and the way I compute it. It is quite simple.

  3. This brings me to the next issue. For some reason, the zeroth-order SH coefficient of the CsaOdfModel fit is always = 0. This is a real problem and impossible. According to Fourier theory, the zeroth-order coefficient contains the mean of the ODF. Having a null c_0 cannot be.

As a result, the GFA is wrong if one computes it from the SH coefficients: it is 1 everywhere, just as with the numerical ODF computation...

Any idea why the zeroth-order coefficient is null? I can dig in the code but I'm curious to know if you know why before I do so...
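The SH-based formula above can be sketched in a few lines (the name gfa_sh is the proposal, not an existing API); note how a null c_0 forces the value to 1, matching the symptom described:

```python
import numpy as np

# Proposed gfa_sh (hypothetical name), computed directly from the SH
# coefficients along the last axis: GFA = sqrt(1 - c_0^2 / sum_i c_i^2)
def gfa_sh(coeffs):
    coeffs = np.asarray(coeffs, dtype=float)
    total = np.sum(coeffs ** 2, axis=-1)
    with np.errstate(divide='ignore', invalid='ignore'):
        gfa = np.sqrt(1.0 - coeffs[..., 0] ** 2 / total)
    return np.nan_to_num(gfa)

iso = gfa_sh([1.0, 0.0, 0.0])   # isotropic ODF: GFA is 0
bad = gfa_sh([0.0, 1.0, 0.0])   # null c_0: GFA is 1, the reported symptom
```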

Best

Max

Median_otsu b0slices too implicit?

I find the API for median_otsu a bit scary - because it assumes that the (only) b0 volume in a 4D image is the first.

I think this is too implicit as in zen-of-python:

"explicit is better than implicit"

I think it would be too easy for someone to use this blindly without realizing which volume(s) they were selecting. Also, it implies that elsewhere in dipy we expect the b0 to be at index 0, which presumably is not the case?

Ideas I can think of:

  • move b0slice argument to be before autocrop, and allow None only if input_volume is a 3D array (raise Error otherwise).
  • Remove b0slice argument and make the user generate the 3D image for median_otsu to work on.
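The first idea could be sketched like this (a hypothetical wrapper, not the real median_otsu signature; the real function would go on to median-filter and threshold):

```python
import numpy as np

# Hypothetical sketch of the more explicit API suggested above: the caller
# must name the b0 volume(s); None is only legal for 3D input.
def median_otsu_explicit(input_volume, b0_slices=None):
    if input_volume.ndim == 4:
        if b0_slices is None:
            raise ValueError("4D input: b0_slices must be given explicitly")
        vol = input_volume[..., b0_slices].mean(axis=-1)
    else:
        vol = input_volume
    return vol  # stand-in for the median-filter + Otsu steps

data4d = np.random.rand(4, 4, 4, 5)
vol = median_otsu_explicit(data4d, b0_slices=[0])
```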

nisext/sexts.py: do not just give arguments to the exception but interpolate the strings

or poor users like me get

RuntimeError: ('You have version %s of package "%s" but we need version >= %s', '0.12.1', 'cython', '0.13')

I meant in:

if checker(have_version) < checker(version):
    if optional:
        log.warn(msgs['version too old'] + msgs['opt suffix'],
                 have_version,
                 pkg_name,
                 version)
    else:
        raise RuntimeError(msgs['version too old'],
                           have_version,
                           pkg_name,
                           version)
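The interpolation fix being requested amounts to formatting the message before raising, e.g.:

```python
# Sketch of the requested fix: interpolate the values into the message
# instead of passing them as extra exception arguments.
msg = 'You have version %s of package "%s" but we need version >= %s'
try:
    raise RuntimeError(msg % ('0.12.1', 'cython', '0.13'))
except RuntimeError as exc:
    err = str(exc)   # now a readable sentence, not a tuple repr
```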

writing data to /tmp peaks_from_model

peaks_from_model writes temporary files in /tmp on linux.

On Mac OS X, it is written according to:
print(tempfile.mkdtemp())
which points here on my machine:
/var/folders/XN/XNu2M8uWGV01N+eAyY5bKE+++TI/-Tmp-/

If you run many spherical deconvolution estimations, this folder quickly fills up with data and can fill up your disk.

Is there a way to do a cleanup of these files when done?
Can we give an option to keep stuff in RAM?
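One possible cleanup pattern, sketched with illustrative names (the real code would hand the directory to the memmap machinery and remove it once the results have been consumed):

```python
import os
import shutil
import tempfile

# Create the scratch directory explicitly, then remove the whole tree when
# done, so /tmp (or the OS X -Tmp- folder) does not fill up.
tmpdir = tempfile.mkdtemp()
scratch = os.path.join(tmpdir, 'peaks.dat')
open(scratch, 'wb').close()   # stand-in for the memmapped peak arrays
shutil.rmtree(tmpdir)         # cleanup once results are consumed
```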


int / long errors on 64 bit machine with quick_squash

ERROR: dipy.reconst.tests.test_dsi.test_multivox_dsi

Traceback (most recent call last):
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/nose-1.1.3.dev-py2.6.egg/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/dipy/reconst/tests/test_dsi.py", line 69, in test_multivox_dsi
PDF=DSfit.pdf()
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/dipy/reconst/multi_voxel.py", line 62, in getattr
return _squash(result, self.mask)
File "quick_squash.pyx", line 71, in dipy.reconst.quick_squash.quick_squash (dipy/reconst/quick_squash.c:1985)
ValueError: Buffer dtype mismatch, expected 'int' but got 'long'

ERROR: dipy.reconst.tests.test_gqi.test_mvoxel_gqi

Traceback (most recent call last):
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/nose-1.1.3.dev-py2.6.egg/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/dipy/reconst/tests/test_gqi.py", line 69, in test_mvoxel_gqi
directions = gqfit.directions
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/dipy/reconst/multi_voxel.py", line 62, in getattr
return _squash(result, self.mask)
File "quick_squash.pyx", line 71, in dipy.reconst.quick_squash.quick_squash (dipy/reconst/quick_squash.c:1985)
ValueError: Buffer dtype mismatch, expected 'int' but got 'long'

ERROR: dipy.reconst.tests.test_multi_voxel.test_squash

Traceback (most recent call last):
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/nose-1.1.3.dev-py2.6.egg/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/dipy/reconst/tests/test_multi_voxel.py", line 58, in test_squash
npt.assert_array_equal(_squash(obj_arr, mask=msk, fill=99), arr)
File "quick_squash.pyx", line 71, in dipy.reconst.quick_squash.quick_squash (dipy/reconst/quick_squash.c:1985)
ValueError: Buffer dtype mismatch, expected 'int' but got 'long'

ERROR: dipy.reconst.tests.test_multi_voxel.test_multi_voxel_model

Traceback (most recent call last):
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/nose-1.1.3.dev-py2.6.egg/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/dipy/reconst/tests/test_multi_voxel.py", line 140, in test_multi_voxel_model
npt.assert_array_equal(fit.model_attr, expected)
File "/home/mb312/.virtualenvs/np-1.3.0/lib/python2.6/site-packages/dipy/reconst/multi_voxel.py", line 62, in getattr
return _squash(result, self.mask)
File "quick_squash.pyx", line 71, in dipy.reconst.quick_squash.quick_squash (dipy/reconst/quick_squash.c:1985)
ValueError: Buffer dtype mismatch, expected 'int' but got 'long'


Using numpy 1.3.0

GradientTable mask does not account for nan's in b-values

The b0s_mask is not correctly generated for nan b-values. Worse, when nan's are present, both the b0s_mask and the opposite mask must be false in those positions. It seems the only way to do that would be to add back a "signal_mask" or equivalent.
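The behavior being asked for can be sketched with plain numpy (illustrative names; this is not the GradientTable implementation): where bvals is nan, both masks come out False, which is what a separate signal_mask would otherwise encode.

```python
import numpy as np

# Nan-aware split of b-values into b0 and DWI masks.
bvals = np.array([0.0, 1000.0, np.nan, 0.0])
valid = ~np.isnan(bvals)
b0s_mask = valid & (bvals == 0)    # nan entries are False in both masks
dwi_mask = valid & ~(bvals == 0)
```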

shm.py assume_normed option is unclear

Under dipy/reconst/shm.py, around line 453, called from the class SphHarmModel at line 264, the assume_normed option seems cryptic. It would be good to specify that the data will be normalized according to the mean of the B0.

If one has a B0 not necessarily normalized, but with significant values <= 1, these points will be clipped to 0. It should probably be mentioned that this is the default behavior.

OpdfModel: negative values in the zero-th order coefficient

The GFA computed from the OpdfModel is erroneous because I think that the underlying coefficients have some problems. The range of the c_0 SH coefficient on the Stanford data is between -277705 and 19649.2.

First, negative values in the zeroth-order coefficient are physically impossible in diffusion MRI, just as a null zeroth-order coefficient is. Is this related to my previous issue on the CsaModel?

Luckily, negative values seem to occur in CSF, gray matter voxels, and outside the brain (you can zoom in to a range of -2 to 2 and see that beautifully). The GFA dynamic range and map are bad, and if one were to use it in a study, it wouldn't be right.

Nonetheless, glyphs in SH or on the sphere look good.

Otherwise, the Qball and Csa models seem bullet proof.

Best

Max

RF/OPT: Use scipy.optimize instead of dipy.core.sphere.disperse_charges

Most methods for optimal single shell and multishell design are based on electrostatic-like energy minimization [1-5]. To that extent, we already have in Dipy two very useful methods: dipy.core.sphere._get_forces and dipy.core.sphere.disperse_charges. I think to begin with, it would be cool to make these functions a bit more versatile so that they can be reused for the implementations of methods in [1-5]. Two major features would be:

  1. to generalize repulsion energy from 1/r to 1/r^k, as suggested in [3]
  2. to add some weighting matrix, which is the basis of methods in [4, 5]

So the general problem is a constrained optimization problem: the cost function is the potential, the constraints are imposed by the fact that points lie on the surface of the unit sphere. Both the cost function and the constraint have analytical gradient expressions, so I'd suggest using a constrained optimization routine in scipy.optimize. In particular, scipy.optimize.fmin_slsqp has proved useful in my tests.

Backward compatibility with _get_forces is not a problem. Question: Do we need backward compatibility with disperse_charges?

Suggestions welcome!
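The generalized 1/r^k repulsion from point 1 can be sketched with a simple projected gradient descent (numpy only, names and step size illustrative; the actual proposal would use a constrained routine such as scipy.optimize.fmin_slsqp, and antipodal symmetry is ignored here for brevity):

```python
import numpy as np

# Minimize sum_{i<j} 1/r_ij^k by gradient descent, renormalizing points
# back onto the unit sphere after every step.
def disperse_charges_k(points, k=2, n_iter=200, step=1e-2):
    pts = points / np.linalg.norm(points, axis=1, keepdims=True)
    for _ in range(n_iter):
        diff = pts[:, None, :] - pts[None, :, :]        # pairwise p_i - p_j
        r = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(r, np.inf)                      # no self-repulsion
        # gradient of sum 1/r^k with respect to each point
        grad = -k * np.sum(diff / (r ** (k + 2))[..., None], axis=1)
        pts = pts - step * grad
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project to sphere
    return pts

rng = np.random.default_rng(0)
pts = disperse_charges_k(rng.normal(size=(4, 3)))
```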

  1. Jones, D. K., M. A. Horsfield, and A. Simmons. "Optimal strategies for measuring diffusion in anisotropic systems by magnetic resonance imaging." Magn Reson Med 42 (1999).
  2. Jansons, Kalvis M., and Daniel C. Alexander. "Persistent angular structure: new insights from diffusion magnetic resonance imaging data." Inverse problems 19.5 (2003): 1031.
  3. Papadakis, Nikolaos G., et al. "Minimal gradient encoding for robust estimation of diffusion anisotropy." Magnetic Resonance Imaging 18.6 (2000): 671-679.
  4. Cook, Philip A., et al. "Optimal acquisition orders of diffusion‐weighted MRI measurements." Journal of magnetic resonance imaging 25.5 (2007): 1051-1058.
  5. Caruyer, Emmanuel, et al. "Design of multishell sampling schemes with uniform coverage in diffusion MRI." Magnetic Resonance in Medicine (2013).

qball not properly import-able

iris:reconst (trunk) $nosetests
../Users/arokem/source/dipy/dipy/reconst/dti.py:216: RuntimeWarning: invalid value encountered in double_scalars
/ (ev1*ev1 + ev2*ev2 + ev3*ev3 + all_zero))

.......E.............E....................

ERROR: Failure: ImportError (No module named qgrid)

Traceback (most recent call last):
File "/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/nose/loader.py", line 390, in loadTestsFromName
addr.filename, addr.module)
File "/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/nose/importer.py", line 39, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/nose/importer.py", line 86, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/Users/arokem/source/dipy/dipy/reconst/tests/test_eit.py", line 5, in
from dipy.reconst.eit import DiffusionNablaModel, EquatorialInversionModel
File "/Users/arokem/source/dipy/dipy/reconst/eit.py", line 10, in
from dipy.reconst.qgrid import NonParametricCartesian
ImportError: No module named qgrid

ERROR: Failure: ImportError (No module named qball)

Traceback (most recent call last):
File "/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/nose/loader.py", line 390, in loadTestsFromName
addr.filename, addr.module)
File "/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/nose/importer.py", line 39, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/nose/importer.py", line 86, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/Users/arokem/source/dipy/dipy/reconst/tests/test_qball.py", line 7, in
import dipy.reconst.qball as qball
ImportError: No module named qball


Ran 44 tests in 21.934s

FAILED (errors=2)

Is ravel_multi_index a new thing?

The Travis bot seems to choke on np.ravel_multi_index. Is that a new thing?

ERROR: dipy.tracking.tests.test_utils.test_connectivity_matrix
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/dipy/tracking/tests/test_utils.py", line 56, in test_connectivity_matrix
    matrix = connectivity_matrix(streamlines, label_volume, (1, 1, 1))
  File "/usr/local/lib/python2.7/dist-packages/dipy/tracking/utils.py", line 123, in connectivity_matrix
    matrix = ndbincount(endlabels, shape=(mx, mx))
  File "/usr/local/lib/python2.7/dist-packages/dipy/tracking/utils.py", line 151, in ndbincount
    x = np.ravel_multi_index(x, shape)
AttributeError: 'module' object has no attribute 'ravel_multi_index'

======================================================================
ERROR: dipy.tracking.tests.test_utils.test_ndbincount
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/dipy/tracking/tests/test_utils.py", line 91, in test_ndbincount
    bc = ndbincount(x)
  File "/usr/local/lib/python2.7/dist-packages/dipy/tracking/utils.py", line 151, in ndbincount
    x = np.ravel_multi_index(x, shape)
AttributeError: 'module' object has no attribute 'ravel_multi_index'

Callable response broken in csd module

I want to help improve test coverage of the csd module. I was looking over the doc-strings and I ran into this.

    response : tuple or callable
        If tuple, then it should have two elements. The first is the eigen-values as an (3,) ndarray
        and the second is the signal value for the response function without diffusion weighting.
        This is to be able to generate a single fiber synthetic signal. If callable then the function
        should return an ndarray with the all the signal values for the response function. The response

In the docs it says the response can be a callable, but this is never tested (and I believe it is currently broken). The code looks like the intent may have been to have the response be data from a single voxel. I just wanted to clarify: do we want to support a callable with no arguments, signal from a single voxel, both, or other?
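For reference, the two call conventions the doc-string describes could be dispatched like this (a sketch with a hypothetical helper name; the tuple branch stands in for the real synthetic single-fiber signal generation):

```python
import numpy as np

# Dispatch on the two documented response types: a (evals, S0) tuple
# or a zero-argument callable returning the signal values directly.
def resolve_response(response, n_samples):
    if callable(response):
        return np.asarray(response())
    evals, s0 = response
    # stand-in for the real single-fiber synthetic signal
    return s0 * np.exp(-np.mean(evals) * np.ones(n_samples))

sig = resolve_response(lambda: np.ones(5), n_samples=5)
```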

Thanks

Non-enforced consistency in the order of examples results in mutating index.rst

e.g. on my system, after building dipy I get the VCS-controlled index.rst modified, since the order seems to be defined by glob and thus varies. Please wrap it in sorted() if alphabetical order is OK.

$> git diff doc/examples/index.rst
diff --git a/doc/examples/index.rst b/doc/examples/index.rst
index 897fddd..eab80e8 100644
--- a/doc/examples/index.rst
+++ b/doc/examples/index.rst
@@ -11,8 +11,8 @@ Examples

    note_about_examples
    nii_2_tracks
+   tractography_clustering
    find_correspondence
-   aniso_vox_2_isotropic
    visualize_crossings
-   tractography_clustering
+   aniso_vox_2_isotropic
    file_formats
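The sorted() wrapper being requested can be sketched as follows (paths illustrative):

```python
import glob
import os
import tempfile

# Wrapping the glob in sorted() makes the generated index.rst order stable
# across filesystems.
tmpdir = tempfile.mkdtemp()
for name in ('b_example.py', 'a_example.py'):
    open(os.path.join(tmpdir, name), 'w').close()
examples = sorted(glob.glob(os.path.join(tmpdir, '*.py')))
```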

dti.fa division by 0 warning in tests

test_fa_of_zero tests that Tensor.fa(nonans=True) == 0 when all(evals == 0) and Tensor.fa(nonans=False) == numpy.nan. The tensor class and the nonans keyword are both deprecated, the nonans keyword does not appear in the new TensorFit class. The user will not see this warning unless they use the deprecated Tensor class.

Proposed solutions:

  1. do nothing, OR
  2. to avoid the warning during testing, remove the offending test, because it is testing a deprecated feature anyway.

RuntimeWarning in dti.py

dipy/dipy/reconst/dti.py:216: RuntimeWarning: invalid value encountered in double_scalars
/ (ev1*ev1 + ev2*ev2 + ev3*ev3 + all_zero))

Fix auto_attrs

The implementation of the auto_attrs can be refined. In particular, instead of using the ResetMixin, would it be better to implement a 'private' variable and a reset method at the level of the OneTimeProperty class instead?

CsaOdfModel online DIPY example : Visualization

Hi Bago and all,

I'm new to this so please let me know if my comments or issues are not done in the proper DIPY way.

I played all day with the Csa Odf Model on the DIPY examples to get more familiar with everything. I have several questions and issues for which I would like to start a discussion.

Here is my first issue about visualization of the CsaOdf example.
Have a look at the branch mdesco/csa_odf_example_test
In dipy/reconst/tests/csa_odf_example.py, your online example is replicated.

I think we need to be careful with the CSA odf model.
i) by default smooth should be equal to 0.006.
ii) then, in the example, we should show how negative values appear on the sphere. I suggest adding an odf_remove_negative_values function in odf.py. The first show() in my example shows the output csa odf model and the second, the one with the zeros removed
iii) then, in the example, we should also show the impact of min-max normalization, which in the case of the csa odf model does not have an impact as the csa odf should be normalized.
iv) I think the ROI I chose in my example should be used for all examples. We see a nice part of the splenium of the corpus callosum and some nice fiber crossings.

Best

Max

Extension test in dipy_fit_tensor seems brittle

Around line 46, we break the filename into the parts before and after the first '.'. I think this is for images, in order to get "filename" from "filename.nii.gz". But the filename could have another dot in it, and that would cause a mess. Maybe this would be better done by a little function that splits off known compression extensions like 'gz' and 'bz2' and then splits off the remaining extension?
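The little splitting helper suggested above could look like this (a sketch; name and signature are illustrative, not existing dipy API):

```python
import os

# Peel off a known compression suffix first, then split the remaining
# extension once from the right, so 'filename.nii.gz' -> ('filename', '.nii.gz').
def split_img_ext(fname, compressions=('.gz', '.bz2')):
    ext = ''
    root, comp = os.path.splitext(fname)
    if comp in compressions:
        ext = comp
    else:
        root = fname
    root, main = os.path.splitext(root)
    return root, main + ext

root, ext = split_img_ext('my.file.nii.gz')
```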

It would be really good to have some tests for this script - they could go in dipy/tests/test_scripts.py

Add ndindex to peaks_from_model

Currently, it kind of ravels the 3D volume, uses enumerate on the raveled array, and reshapes everything at the end. It could probably be made faster by using ndindex on the 3D array and keeping the 4th dimension as-is instead.
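The suggested loop shape would look roughly like this (max() stands in for the per-voxel peak extraction):

```python
import numpy as np

# Iterate voxel indices of the 3D grid with np.ndindex and treat the 4th
# (sample) dimension as-is, instead of raveling and reshaping.
data = np.arange(2 * 2 * 2 * 3, dtype=float).reshape(2, 2, 2, 3)
out = np.empty(data.shape[:3])
for idx in np.ndindex(*data.shape[:3]):
    out[idx] = data[idx].max()   # stand-in for the per-voxel computation
```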

Problems with edges and faces in create_half_unit_sphere

'create_half_unit_sphere' in dipy/dipy/core/triangle_subdivide.py does not return correct edges and faces for 1 and 2 iterations (and very likely higher iterations).

With v,e,f = create_half_unit_sphere(1) we get

v = array([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
which looks ok.

But e = array([[0, 2],
[2, 1],
[1, 0],
[1, 0],
[0, 2],
[2, 1]], dtype=uint16)

This appears to say there are 6 edges in all, 3 unique ones each repeated twice. Perhaps there should be just 3?

Similarly f = array([[0, 1, 2],
[1, 3, 4],
[4, 5, 2],
[0, 5, 3]], dtype=uint16)

In terms of the indices of the vertices this is the triangle [0,1,2] repeated 4 times. There should be just one triangle in this list.

With v,e,f = create_half_unit_sphere(2)

v = array([[ 1. , 0. , 0. ],
[ 0. , 1. , 0. ],
[ 0. , 0. , 1. ],
[ 0.70710678, 0. , 0.70710678],
[ 0. , 0.70710678, 0.70710678],
[ 0.70710678, 0.70710678, 0. ],
[-0.70710678, 0.70710678, 0. ],
[-0.70710678, 0. , 0.70710678],
[ 0. , -0.70710678, 0.70710678]])
This looks ok.

However there are 24 edges in e of which I think 7 are spurious; e.g. [0,7], [6,0] ... By my reckoning there should only be 17 edges.

Likewise 16 triangle faces are listed when there should only be 9 by my reckoning. E.g. the ones with vertices [0,5,7] and [0,3,6] are spurious.
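The de-duplication the report implies can be sketched in numpy: treat each edge as an unordered pair by sorting its two vertex indices, then keep unique rows (the same trick works for faces by sorting triples).

```python
import numpy as np

# The 6 reported edges from create_half_unit_sphere(1), reduced to the
# 3 unique undirected edges.
edges = np.array([[0, 2], [2, 1], [1, 0],
                  [1, 0], [0, 2], [2, 1]], dtype=np.uint16)
unique_edges = np.unique(np.sort(edges, axis=1), axis=0)
```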

PPC test byte order failures

Sorry - the markdown stuff doesn't let me do pre html on this - so formatting is a mess.

ERROR: dipy.reconst.tests.test_dandelion.test_dandelion

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
self.test(*self.arg)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/tests/test_dandelion.py", line 19, in test_dandelion
sd=SphericalDandelion(data,bvals,gradients)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/dandelion.py", line 78, in __init__
peaks,inds=peak_finding(odf,odf_faces)
File "recspeed.pyx", line 130, in dipy.reconst.recspeed.peak_finding (dipy/reconst/recspeed.c:1667)
File "numpy.pxd", line 248, in numpy.ndarray.__getbuffer__ (dipy/reconst/recspeed.c:4115)
ValueError: Non-native byte order not supported
-------------------- >> begin captured stdout << ---------------------
((102,), (102, 3), (6, 10, 10, 102))

--------------------- >> end captured stdout << ----------------------

ERROR: dipy.reconst.tests.test_gqsampling.test_gqiodfmask

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
self.test(*self.arg)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/tests/test_gqsampling.py", line 24, in test_gqiodfmask
gqs = gq.GeneralizedQSampling(data,bvals,gradients,mask=mask)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/gqi.py", line 157, in __init__
peaks,inds=rp.peak_finding(odf,odf_faces)
File "recspeed.pyx", line 130, in dipy.reconst.recspeed.peak_finding (dipy/reconst/recspeed.c:1667)
File "numpy.pxd", line 248, in numpy.ndarray.__getbuffer__ (dipy/reconst/recspeed.c:4115)
ValueError: Non-native byte order not supported

ERROR: dipy.reconst.tests.test_gqsampling.test_gqiodf

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
self.test(*self.arg)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/tests/test_gqsampling.py", line 46, in test_gqiodf
gqs = gq.GeneralizedQSampling(data,bvals,gradients)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/gqi.py", line 169, in __init__
peaks,inds=rp.peak_finding(odf,odf_faces)
File "recspeed.pyx", line 130, in dipy.reconst.recspeed.peak_finding (dipy/reconst/recspeed.c:1667)
File "numpy.pxd", line 248, in numpy.ndarray.__getbuffer__ (dipy/reconst/recspeed.c:4115)
ValueError: Non-native byte order not supported

ERROR: dipy.reconst.tests.test_gqsampling.test_gqi_small

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
self.test(*self.arg)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/tests/test_gqsampling.py", line 232, in test_gqi_small
gqs = gq.GeneralizedQSampling(data,bvals,gradients)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/gqi.py", line 169, in __init__
peaks,inds=rp.peak_finding(odf,odf_faces)
File "recspeed.pyx", line 130, in dipy.reconst.recspeed.peak_finding (dipy/reconst/recspeed.c:1667)
File "numpy.pxd", line 248, in numpy.ndarray.__getbuffer__ (dipy/reconst/recspeed.c:4115)
ValueError: Non-native byte order not supported
-------------------- >> begin captured stdout << ---------------------
(65,)
(65, 3)
(10, 10, 10, 65)

--------------------- >> end captured stdout << ----------------------

ERROR: dipy.reconst.tests.test_sphere_max.test_performance

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
self.test(*self.arg)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/tests/test_sphere_max.py", line 174, in test_performance
maxes, pfmaxinds = dcr.peak_finding(vert_vals, faces)
File "recspeed.pyx", line 130, in dipy.reconst.recspeed.peak_finding (dipy/reconst/recspeed.c:1667)
File "numpy.pxd", line 248, in numpy.ndarray.__getbuffer__ (dipy/reconst/recspeed.c:4115)
ValueError: Non-native byte order not supported

ERROR: dipy.tracking.tests.test_propagation.test_eudx

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
self.test(*self.arg)
File "/Users/mb312/dev_trees/dipy/dipy/tracking/tests/test_propagation.py", line 28, in test_eudx
gqs = GeneralizedQSampling(data,bvals,gradients)
File "/Users/mb312/dev_trees/dipy/dipy/reconst/gqi.py", line 169, in __init__
peaks,inds=rp.peak_finding(odf,odf_faces)
File "recspeed.pyx", line 130, in dipy.reconst.recspeed.peak_finding (dipy/reconst/recspeed.c:1667)
File "numpy.pxd", line 248, in numpy.ndarray.__getbuffer__ (dipy/reconst/recspeed.c:4115)
ValueError: Non-native byte order not supported
-------------------- >> begin captured stdout << ---------------------
(10, 10, 10, 65)

--------------------- >> end captured stdout << ----------------------

ERROR: Cause we love testin.. ;-)

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
self.test(*self.arg)
File "/Users/mb312/dev_trees/dipy/dipy/tracking/tests/test_propagation.py", line 74, in test_eudx_further
T=[e for e in eu]
File "/Users/mb312/dev_trees/dipy/dipy/tracking/propagation.py", line 142, in __iter__
track =eudx_both_directions(seed.copy(),ref,self.a,self.ind,self.odf_vertices,self.a_low,self.ang_thr,self.step_sz,self.total_weight)
File "propspeed.pyx", line 304, in dipy.tracking.propspeed.eudx_both_directions (dipy/tracking/propspeed.c:2250)
File "numpy.pxd", line 248, in numpy.ndarray.__getbuffer__ (dipy/tracking/propspeed.c:3311)
ValueError: Non-native byte order not supported

FAIL: Doctest: dipy.tracking.propagation.EuDX.__init__

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/doctest.py", line 2152, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for dipy.tracking.propagation.EuDX.__init__
File "/Users/mb312/dev_trees/dipy/dipy/tracking/propagation.py", line 38, in __init__


File "/Users/mb312/dev_trees/dipy/dipy/tracking/propagation.py", line 78, in dipy.tracking.propagation.EuDX.__init__
Failed example:
tracks=[e for e in eu]
Exception raised:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/doctest.py", line 1248, in run
compileflags, 1) in test.globs
File "<doctest dipy.tracking.propagation.EuDX.__init__[11]>", line 1, in
tracks=[e for e in eu]
File "/Users/mb312/dev_trees/dipy/dipy/tracking/propagation.py", line 142, in __iter__
track =eudx_both_directions(seed.copy(),ref,self.a,self.ind,self.odf_vertices,self.a_low,self.ang_thr,self.step_sz,self.total_weight)
File "propspeed.pyx", line 304, in dipy.tracking.propspeed.eudx_both_directions (dipy/tracking/propspeed.c:2250)
File "numpy.pxd", line 248, in numpy.ndarray.__getbuffer__ (dipy/tracking/propspeed.c:3311)
ValueError: Non-native byte order not supported
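All of the failures above come from the compiled routines rejecting arrays whose byte order is not the host's (e.g. data written on a machine of the other endianness). A sketch of a workaround, assuming the fix is simply to convert inputs to native order before calling into the Cython code (`as_native` is a hypothetical helper, not a dipy function):

```python
import numpy as np

def as_native(arr):
    """Return `arr` in native byte order, copying only if needed."""
    if arr.dtype.byteorder in ('=', '|'):  # already native / not applicable
        return arr
    return arr.astype(arr.dtype.newbyteorder('='))

# Build a non-native array regardless of the host's endianness.
swapped = np.arange(10, dtype='>f8' if np.little_endian else '<f8')
fixed = as_native(swapped)  # same values, native byte order
```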

peaks_from_model_parallel bug with ravel_peaks=True

If you set the option to True, it fails on line 256 because of a shape error. The possible culprit is line 219, because it sets the dimensions of the memmap to n x npeaks x 3, while the ravel_peaks option turns that into n x npeaks*3 instead, making it fail.

File "/home/samuel/git/dipy/dipy/reconst/odf.py", line 256, in __peaks_from_model_parallel
pam.peak_dirs[start_pos: end_pos] = pam_res[i].peak_dirs[:]
ValueError: could not broadcast input array from shape (19200,15) into shape (19200,5,3)

It could also fail elsewhere for the same reason.
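A sketch of the mismatch, using the shapes from the traceback above: the worker hands back peak directions flattened to (n, npeaks * 3) while the memmap was allocated as (n, npeaks, 3), and a reshape at the assignment site would make the two compatible again (this is an illustration of the shape fix, not the actual patch):

```python
import numpy as np

n, npeaks = 19200, 5
flat = np.zeros((n, npeaks * 3))      # what ravel_peaks=True produces
dest = np.empty((n, npeaks, 3))       # how the memmap was allocated

# dest[:] = flat would raise the broadcast ValueError shown above;
# restoring the (n, npeaks, 3) layout makes the assignment valid.
dest[:] = flat.reshape(n, npeaks, 3)
```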

`unique_sets` misses some elements

from skimage.util import unique_rows
from dipy.core.sphere import unique_sets
import numpy as np

x = (255 * np.random.random((1024, 2))).astype(int)

ur = unique_rows(x)
us = unique_sets(x)

missed = [r for r in ur if not any(np.array_equal(r, u) for u in us)]

print("The following rows exist in 'x', but are not included in")
print("unique_sets(x):")
print(np.asarray(missed))

Typical output:

The following rows exist in 'x', but are not included in
unique_sets(x):
[[219  12]
 [219  40]
 [220  40]
 [227   2]
 [227   4]
 [233  26]
 [239  30]
 [241  19]
 [243  14]
 [245   8]
 [245  72]
 [246   9]
 [247  12]
 [248  72]
 [252   2]
 [253  49]
 [254   7]]

quickie: 'from raw data to tractographies' documentation implies dipy can't do anything with nonisotropic voxel sizes

...which of course it can, at least in terms of non-tractography voxelwise stuff like tensor fitting etc.

In particular this bit:

"Isotropic voxel sizes required

dipy requires its datasets to have isotropic voxel size. If you have datasets with anisotropic voxel size then you need to resample with isotropic voxel size. We have provided an algorithm for this. You can have a look at the example resample_aniso_2_iso.py"

You might want to reword it and move it lower down the page, into the tractography section. At the moment it appears to say that you can't do anything with nonisotropic voxel sizes.


data.get_data() should be consistent across datasets

dipy.data.get_data('small_64') returns .npy files for the gradients and b-values, but dipy.data.get_data('small_101D') returns text files, so they need to be loaded using different commands (np.load versus np.loadtxt). For consistency they should be the same format, or (preferably) get_data should return the data itself rather than filenames.
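Until the datasets are made consistent, a hypothetical helper that dispatches on the file extension would at least hide the difference from callers (`load_array` is illustrative, not a dipy API; the demo uses throwaway temp files standing in for the dataset files):

```python
import os
import tempfile
import numpy as np

def load_array(fname):
    """Load an array from either a .npy file or a plain-text file."""
    if fname.endswith('.npy'):
        return np.load(fname)
    return np.loadtxt(fname)

# Demo: the same values saved in both formats load identically.
tmpdir = tempfile.mkdtemp()
bvals = np.array([0.0, 1000.0, 1000.0])
np.save(os.path.join(tmpdir, 'bvals.npy'), bvals)
np.savetxt(os.path.join(tmpdir, 'bvals.txt'), bvals)

a = load_array(os.path.join(tmpdir, 'bvals.npy'))
b = load_array(os.path.join(tmpdir, 'bvals.txt'))
```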

X error BadRequest with fvtk.show

When fvtk.show is repeated multiple times we get this error

>>> import numpy as np
>>> from dipy.viz import fvtk
>>> r=fvtk.ren()
>>> lines=[np.random.rand(10,3),np.random.rand(20,3)]
>>> colors=np.random.rand(2,3)
>>> c=fvtk.line(lines,colors)
>>> fvtk.add(r,c)
>>> fvtk.show(r)
>>> fvtk.show(r)
>>> fvtk.show(r)
>>> fvtk.show(r)
>>> fvtk.show(r)
>>> fvtk.show(r)

X Error of failed request: BadRequest (invalid request code or no such operation)
Major opcode of failed request: 0 ()
Serial number of failed request: 30
Current serial number in output stream: 32

Plotting in test suite

Having plotting calls in the test suite limits its utility a bit. Can we move these plotting things into the examples instead?

relative_peak_threshold in dipy_peak_extraction

In the peak extraction script, the default value for relative_peak_threshold is 0.5.

After speaking with @mdesco, it seems like this value is too high as default.

The script calls peak_directions, which has a default value of 0.25. The peak extraction script should probably use 0.25 as the default value too.

Bug in dipy.tracking.metrics.downsampling when we downsample a track to more than 20 points

File "/home/eg309/Devel/tractarian/devel/scripts/simulated_tracks.py", line 88, in
bun=[downsample(b,30) for b in bun]
File "/home/eg309/Devel/dipy/dipy/tracking/metrics.py", line 779, in downsample
for distance in np.arange(0,cumlen[-1],step)]
File "/home/eg309/Devel/dipy/dipy/tracking/metrics.py", line 714, in _extrap
ind=np.where((cumlen-distance)>0)[0][0]
IndexError: index out of bounds
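A sketch of the likely cause: np.arange with a floating-point step can produce a final sample at (or just past) the total track length, at which point `(cumlen - distance) > 0` has no hits and the `[0][0]` indexing fails. Sampling with np.linspace instead gives exactly the requested number of points with an explicit endpoint (this is a hedged illustration of the failure mode, not the actual patch):

```python
import numpy as np

cumlen = np.array([0.0, 1.0, 2.0, 3.0])  # toy cumulative arc length
n_pts = 30

# arange-style sampling (fragile at the endpoint): depending on
# floating-point rounding, the last distance can reach cumlen[-1].
step = cumlen[-1] / (n_pts - 1)
distances = np.arange(0, cumlen[-1], step)

# linspace-style sampling: always n_pts samples, endpoint controlled.
safe = np.linspace(0, cumlen[-1], n_pts)
```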

Octahedron in dipy.core.triangle_subdivide has wrong faces

These are the correct vertices and faces of the octahedron:

octahedron_vertices = np.array( [
[ 1.0, 0.0, 0.0], # 0
[-1.0, 0.0, 0.0], # 1
[ 0.0, 1.0, 0.0], # 2
[ 0.0,-1.0, 0.0], # 3
[ 0.0, 0.0, 1.0], # 4
[ 0.0, 0.0,-1.0] # 5
],dtype=np.float32)

octahedron_triangles = np.array( [
[ 0, 2, 4],
[ 2, 5, 0],
[ 1, 4, 2],
[ 1, 2, 5],
[ 3, 4, 0],
[ 3, 0, 5],
[ 3, 1, 4],
[ 1, 3, 5]
], dtype=np.uint32)

Best wishes,
Eleftherios
