alleninstitute / allensdk

code for reading and processing Allen Institute for Brain Science data

Home Page: https://allensdk.readthedocs.io/en/latest/

License: Other

Makefile 0.01% Python 13.42% Shell 0.03% JavaScript 0.02% CSS 0.01% HTML 0.01% Gnuplot 0.01% AMPL 0.18% Jupyter Notebook 86.29% Dockerfile 0.03%
Topics: bioinformatics, scientific

allensdk's People

Contributors

aamster, alexpiet, anirban6908, corbennett, danielsf, djkapner, dougollerenshaw, dyf, gouwens, isaak-willett, jfperkins, jsiegle, kaeldai, keithg-alleninstitute, kschelonka, matchings, matyasz, mikejhuang, mochic, morriscb, nicain, nickponvert, nilegraddis, njmei, saskiad, seanmcculloch, sgratiy, tfliss, tmchartrand, wesley-jones

allensdk's Issues

version Cache class manifests

When we change the manifests, we need a way to notify users that their manifest is out of date and to regenerate a new one.

brain observatory: mapping between cell_specimen_ids and roi_names?

It seems that there is no mapping between cell_specimen_ids and roi_names in /processing/brain_observatory_pipeline/DfOverF/imaging_plane_1/roi_names of an NWB file. This means that even if I have downloaded experiment data for a cell, I still don't know which subset of the data belongs to that cell. Am I correct, or have I overlooked some resource?
Thanks
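
A possible workaround, assuming the BrainObservatoryNwbDataSet accessors get_roi_ids() and get_cell_specimen_ids() describe the same segmented cells in matching order (an assumption worth verifying against your file), is to zip the two lists:

from allensdk.core.brain_observatory_nwb_data_set import BrainObservatoryNwbDataSet

data_set = BrainObservatoryNwbDataSet('example.nwb')  # file name is illustrative
roi_ids = data_set.get_roi_ids()             # names of the segmented ROIs
cell_ids = data_set.get_cell_specimen_ids()  # database IDs of the same cells
roi_to_cell = dict(zip(roi_ids, cell_ids))   # ROI name -> cell specimen ID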

Setup does not install required files

api/queries/rma_template.py imports Template from jinja2, but setup.py does not list jinja2 as a dependency to be installed when installing from pip.
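
A minimal sketch of the fix, assuming dependencies are declared directly in setup.py rather than read from requirements.txt:

from setuptools import setup, find_packages

setup(
    name='allensdk',
    packages=find_packages(),
    install_requires=['jinja2'],  # declare runtime deps so pip installs them
)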

CCF versioning

Feature request: some kind of versioning scheme for the data, so that end users know whether to re-pull from the Allen servers.

units of GLIF parameters hard to find

Hey,

First, I have a question: is it true that the parameters you get from get_neuron_config() are the fitted values as described by the fitting procedure in the GLIF white paper?

Further, I want to suggest putting the units of the parameters into a second dictionary with the same keys as the neuron_config dict, because if you overlooked them on the website, as I did, it's not obvious where to find them. Another alternative that I find more intuitive would be the API docs.

Also, I couldn't yet figure out what to do with neuron.coeffs (also returned by get_neuron_config()). Do I have to multiply them with the parameters of the same name? Did I just miss something in the white paper?

Cheers,
awakenting

Would be cool if biophysical models could pull their own experimental nwb file

At the moment, when downloading a perisomatic biophysical model package from the ABI website, and running with:
python -m allensdk.model.biophysical_perisomatic.runner manifest.json
one gets (e.g):
IOError: [Errno 2] No such file or directory: './469801567.nwb'
Since the allensdk also contains functionality to download NWB files, it might be less confusing for users if biophysical_perisomatic.runner downloaded the NWB file when it is not available (perhaps after a confirmation by the user).
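
A minimal sketch of what the runner could do first, assuming the stimulus file is named after the specimen ID and that CellTypesApi.save_ephys_data can fetch it:

import os
from allensdk.api.queries.cell_types_api import CellTypesApi

def ensure_nwb(specimen_id, file_name):
    # download the experiment NWB file only if it is not already on disk
    if not os.path.exists(file_name):
        CellTypesApi().save_ephys_data(specimen_id, file_name)

ensure_nwb(469801567, './469801567.nwb')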

Incorrect presentation and use of "process_" functions leads to incorrect latency values.

The example at http://alleninstitute.github.io/AllenSDK/_static/examples/nb/cell_types.html#Computing-Electrophysiology-Features shows the user setting stim_start = 1.0 and passing it to fx.process_instance("", v, i, t, stim_start, stim_duration, "") as a parameter. In the NWB files, however, the SS and LS stimuli start at index 204000, which is actually 0.02 seconds after 1.0. The SDK calculates latency not from the start of the stimulus but from the start of the analysis window, which makes the calculations on the webpage, and in fact the parameter list, incorrect. This error can cause the feature extraction software to miss spikes associated with Short Squares if the analysis window is short, since 0.02 seconds is longer than the duration of the Short Square stimulus (0.003 seconds).

It may make sense to rename the parameters to "analysis_start" and "analysis_duration" (which reflects their actual use), to add a "stimulus_start" parameter, and to correct the calculations that measure timing relative to stimulus start.

Error with pulling down on connectivity experiment

Error report from one of the students at the Dynamic Brain Workshop.

The student reports that the script works for all other experiments except 278181912.

FYI. i'm getting an error when i try to pull down this experiment.

pdf, pd_info = mcc.get_projection_density(278181912)


ValueError Traceback (most recent call last)
in ()
----> 1 pdf, pd_info = mcc.get_projection_density(cre_ids[i])

/Users/evadyer/anaconda/envs/py27/lib/python2.7/site-packages/allensdk/core/mouse_connectivity_cache.pyc in get_projection_density(self, experiment_id, file_name)
176 file_name, experiment_id, self.resolution)
177
--> 178 return nrrd.read(file_name)
179
180 def get_injection_density(self, experiment_id, file_name=None):

/Users/evadyer/anaconda/envs/py27/lib/python2.7/site-packages/nrrd.pyc in read(filename)
390 with open(filename,'rb') as filehandle:
391 header = read_header(filehandle)
--> 392 data = read_data(header, filehandle, filename)
393 return (data, header)
394

/Users/evadyer/anaconda/envs/py27/lib/python2.7/site-packages/nrrd.pyc in read_data(fields, filehandle, filename, seek_past_header)
280 decompressed_data += decompobj.decompress(chunk)
281 # byteskip applies to the decompressed byte stream
--> 282 data = np.fromstring(decompressed_data[byteskip:], dtype)
283

Documentation error in allensdk.brain_observatory.natural_scenes

The documentation for the get_response() method of the NaturalScenes class has the line:

The final dimension contains the mean response to the condition (index 0), standard error of the mean of the response to the condition (index 1), and p value of the response to that condition (index 3).

The index listed for the last value is off by one (it should be index 2). Also, it looks like the function actually returns the number of responses below a p-value threshold, rather than the p-value of the response as stated in the docstring.
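
For reference, indexing the returned array with the corrected final index would look like this (a sketch; ns stands for a NaturalScenes analysis object):

response = ns.get_response()      # last axis holds the three per-condition statistics
mean_response = response[..., 0]  # mean response to each condition
sem_response = response[..., 1]   # standard error of the mean
third_stat = response[..., 2]     # documented as a p value; appears to be a count instead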

NwbDataSet.get_sweeps broken

The NwbDataSet.get_sweeps function looks for sweeps in "analysis/spike_times", but they actually live in "analysis/aibs_spike_time".
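
One quick way to confirm where the spike times actually live, hedged on the exact file layout, is to list the keys of the "analysis" group with h5py:

import h5py

# file name is illustrative; inspect any downloaded ephys NWB file
with h5py.File('example_ephys.nwb', 'r') as f:
    print(list(f['analysis'].keys()))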

resolution of ontology masks

I've encountered issues with the resolution of masks. Code below. I was expecting the two masks to be arrays of identical dimensions. Not so.

from allensdk.core.mouse_connectivity_cache import MouseConnectivityCache

mcc = MouseConnectivityCache(manifest_file='connectivity/mouse_connectivity_manifest.json', resolution=10)
ontology = mcc.get_ontology()

# both masks come from the same 10 um annotation, so the shapes should match
mask, _ = mcc.get_structure_mask(315)  # Isocortex
print(mask.shape)

mask, _ = mcc.get_structure_mask(385)  # VISp
print(mask.shape)

allow filtering by transgenic line case insensitive

v0.14.2

# get a table of VISp experiments; this is a list of dictionaries
import os
from allensdk.core.mouse_connectivity_cache import MouseConnectivityCache

# specify your path and isometric resolution
manifest_file = os.path.join(drive_path, 'mouse_connectivity_manifest.json')  # drive_path: your data directory
resolution = 25  # one of 10, 25, 50, 100

# instantiate the cache object;
# after this step, the manifest file you specified should exist on your filesystem
mcc = MouseConnectivityCache(manifest_file=manifest_file, resolution=resolution)

cre = 'Scnn1a-Tg3-Cre'
no_experiments = mcc.get_experiments(cre=[cre.lower()])
correct_experiments = mcc.get_experiments(cre=[cre])

print cre.lower(), len(no_experiments)
print cre, len(correct_experiments)

# prints:
# scnn1a-tg3-cre 0

The former returns no experiments even though the intent is unambiguous: there should be no transgenic lines that differ only by case, so case-insensitive matching would cause no collisions.

This confused me a lot as a naive first-time user and was challenging to debug; it just seems like it doesn't have to be that way.

broken nwb file in brain observatory exp??

Except for "502376461.nwb", no NWB file worked in the boc example.

Also, HDFView can open only "502376461.nwb".

Maybe the other NWB files are broken, or were changed to another format?

stimuli mapping for locally_sparse_noise_4deg in three_session_C

Hi, I have an inquiry regarding the data structure in NWB files of Brain Observatory.

It's clear that there are 5812 trials for "locally sparse noise in 4 degrees" in "three session C2", since the dimension of /stimulus/presentation/locally_sparse_noise_4deg_stimulus/indexed_timeseries/data is 5812 x 16 x 28 while the size of /stimulus/presentation/locally_sparse_noise_4deg_stimulus/timestamps is 5812. However, in "three session C", the dimension of /stimulus/presentation/locally_sparse_noise_4deg_stimulus/indexed_timeseries/data is 9000 x 16 x 28 while the size of /stimulus/presentation/locally_sparse_noise_4deg_stimulus/timestamps is 8880. It seems that out of 9000 frames, only 8880 were shown.

Is there any mapping between stimuli and stimulus onset times that I can refer to, so that I know which stimulus
was shown at a particular time? Apparently a one-to-one mapping doesn't work in this case.

Thanks
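
A possible way to recover the mapping, assuming the data set's stimulus table pairs each presented frame with its start and end times (worth verifying against your file), is:

from allensdk.core.brain_observatory_nwb_data_set import BrainObservatoryNwbDataSet

data_set = BrainObservatoryNwbDataSet('example_session_C.nwb')  # file name is illustrative
# one row per presentation: which stimulus frame was shown, and when
table = data_set.get_stimulus_table('locally_sparse_noise_4deg')
print(table.head())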

bugs in new version

Old code that worked yesterday:

# grab the Ontology instance
ontology = mcc.get_ontology()

# get some info on the isocortex
isocortex = ontology['Isocortex']
isocortex_mask, _ = mcc.get_structure_mask(isocortex['id'])

now raises a TypeError:

/home/kameron/local/anaconda2/lib/python2.7/site-packages/ipykernel/__main__.py:2: VisibleDeprecationWarning: Function get_ontology is deprecated. Use get_structure_tree instead.
  from ipykernel import kernelapp as app
/home/kameron/local/anaconda2/lib/python2.7/site-packages/ipykernel/__main__.py:6: VisibleDeprecationWarning: Function get_structure_mask is deprecated. In the future, this functionality will be moved over to a ReferenceSpaceCache.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-85f3c3cfa88a> in <module>()
      4 # get some info on the isocortex
      5 isocortex = ontology['Isocortex']
----> 6 isocortex_mask, _ = mcc.get_structure_mask(isocortex['id'])

/home/kameron/local/anaconda2/lib/python2.7/site-packages/allensdk/deprecated.pyc in wrapper(*args, **kwargs)
     35                           category=VisibleDeprecationWarning, stacklevel=2)
     36 
---> 37             return fn(*args, **kwargs)
     38 
     39         return wrapper

/home/kameron/local/anaconda2/lib/python2.7/site-packages/allensdk/core/mouse_connectivity_cache.pyc in get_structure_mask(self, structure_id, file_name, annotation_file_name)
    688         file_name = self.get_cache_path(
    689             file_name, self.STRUCTURE_MASK_KEY, self.ccf_version,
--> 690             self.resolution, structure_id)
    691 
    692         if os.path.exists(file_name):

/home/kameron/local/anaconda2/lib/python2.7/site-packages/allensdk/api/cache.pyc in get_cache_path(self, file_name, manifest_key, *args)
     48                 return file_name
     49             elif self.manifest:
---> 50                 return self.manifest.get_path(manifest_key, *args)
     51 
     52         return None

/home/kameron/local/anaconda2/lib/python2.7/site-packages/allensdk/config/manifest.pyc in get_path(self, path_key, *args)
    213 
    214         if args is not None and len(args) != 0:
--> 215             path = path_spec % args
    216         else:
    217             path = path_spec

TypeError: %d format: a number is required, not str

definition of evoked response to natural movies in Brain Observatory data?

Hi @nicain, I have another question.
The white paper VisualCoding_VisualStimuli doesn't seem to say how the cell response to natural movies was defined; I cannot find any statement for natural movies similar to the one for natural scenes: "The evoked response to each image presentation was defined as the mean change in fluorescence during the 0.5 second period following the start of the image presentation compared to the 1 second preceding the image presentation (Fig. 7)".
This definition also applies to the other stimuli except drifting gratings, so I assume it's the same for natural movies, but I should not have to assume.
Thanks

Scipy Required

Scipy is required, but is not part of the requirements in the setup file.

Issues getting the test suite to pass

Hi All!

I recently installed the SDK on my local linux machine inside a virtualenv running Python 2.7.13. I ran into a few issues getting the test suite to run that I thought you might like to know about.

Here are the steps I followed to set up the environment:

$> git clone https://github.com/AllenInstitute/AllenSDK.git
$> cd AllenSDK/
$> mkvirtualenv -a . allensdk
$ (allensdk)> pip install -e .
$ (allensdk)> pip install -r requirements.txt
$ (allensdk)> pip install -r test_requirements.txt

When I ran the suite I encountered a few errors that appear to be due to packages missing from test_requirements.txt; these are Pillow, scikit-image, and statsmodels:

py.test -s --boxed

==================================================== test session starts ====================================================
platform linux2 -- Python 2.7.13, pytest-2.9.2, py-1.4.34, pluggy-0.3.1
rootdir: /home/kotfic/kitware/projects/allen/src/AllenSDK, inifile: setup.cfg
plugins: xdist-1.14, pep8-1.0.6, ordering-0.5, mock-1.6.0, cov-2.2.1
collected 398 items / 16 errors


....

[Other errors omitted]

____________________________________ ERROR at setup of test_calculate_injection_centroid ____________________________________
@pytest.fixture
    def nrrd_read():
        with patch('nrrd.read',
                   Mock(name='nrrd_read_file_mcm',
                        return_value=('mock_annotation_data',
                                      'mock_annotation_image'))) as nrrd_read:
>           import allensdk.core.mouse_connectivity_cache as MCC

allensdk/test/api/test_mouse_connectivity_api.py:36: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
allensdk/core/mouse_connectivity_cache.py:25: in <module>
    from .reference_space import ReferenceSpace
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    from __future__ import division, print_function, absolute_import
    from collections import defaultdict
    import operator as op
    import functools
    import os
    
>   from scipy.misc import imresize
E   ImportError: cannot import name imresize

allensdk/core/reference_space.py:23: ImportError

These errors were resolved by installing Pillow, which this scipy issue suggests is the fix for imresize not being available in scipy.misc.

py.test -s --boxed

==================================================== test session starts ====================================================
platform linux2 -- Python 2.7.13, pytest-2.9.2, py-1.4.34, pluggy-0.3.1
rootdir: /home/kotfic/kitware/projects/allen/src/AllenSDK, inifile: setup.cfg
plugins: xdist-1.14, pep8-1.0.6, ordering-0.5, mock-1.6.0, cov-2.2.1
collected 436 items / 14 errors 


....

[Other Errors Omitted]

_________________________________ ERROR collecting allensdk/test/core/test_cell_filters.py __________________________________
allensdk/test/core/test_cell_filters.py:17: in <module>
    from test_brain_observatory_cache import CACHE_MANIFEST
/home/kotfic/.venvs/allensdk-clean/lib/python2.7/site-packages/_pytest/assertion/rewrite.py:171: in load_module
    py.builtin.exec_(co, mod.__dict__)
allensdk/test/core/test_brain_observatory_cache.py:18: in <module>
    from allensdk.core.brain_observatory_cache import (BrainObservatoryCache,
allensdk/core/brain_observatory_cache.py:21: in <module>
    from .brain_observatory_nwb_data_set import BrainObservatoryNwbDataSet
allensdk/core/brain_observatory_nwb_data_set.py:22: in <module>
    from allensdk.brain_observatory.locally_sparse_noise import LocallySparseNoise
allensdk/brain_observatory/locally_sparse_noise.py:22: in <module>
    from .receptive_field_analysis.receptive_field import compute_receptive_field_with_postprocessing
allensdk/brain_observatory/receptive_field_analysis/receptive_field.py:16: in <module>
    from .eventdetection import detect_events
allensdk/brain_observatory/receptive_field_analysis/eventdetection.py:16: in <module>
    from .utilities import smooth
allensdk/brain_observatory/receptive_field_analysis/utilities.py:23: in <module>
    from skimage.measure import block_reduce
E   ImportError: No module named skimage.measure

These errors were resolved by running pip install scikit-image

py.test -s --boxed

==================================================== test session starts ====================================================
platform linux2 -- Python 2.7.13, pytest-2.9.2, py-1.4.34, pluggy-0.3.1
rootdir: /home/kotfic/kitware/projects/allen/src/AllenSDK, inifile: setup.cfg
plugins: xdist-1.14, pep8-1.0.6, ordering-0.5, mock-1.6.0, cov-2.2.1
collected 474 items / 8 errors

....

[Other Errors Omitted]

_________________________________ ERROR collecting allensdk/test/core/test_cell_filters.py __________________________________
allensdk/test/core/test_cell_filters.py:17: in <module>
    from test_brain_observatory_cache import CACHE_MANIFEST
/home/kotfic/.venvs/allensdk-clean/lib/python2.7/site-packages/_pytest/assertion/rewrite.py:171: in load_module
    py.builtin.exec_(co, mod.__dict__)
allensdk/test/core/test_brain_observatory_cache.py:18: in <module>
    from allensdk.core.brain_observatory_cache import (BrainObservatoryCache,
allensdk/core/brain_observatory_cache.py:21: in <module>
    from .brain_observatory_nwb_data_set import BrainObservatoryNwbDataSet
allensdk/core/brain_observatory_nwb_data_set.py:22: in <module>
    from allensdk.brain_observatory.locally_sparse_noise import LocallySparseNoise
allensdk/brain_observatory/locally_sparse_noise.py:22: in <module>
    from .receptive_field_analysis.receptive_field import compute_receptive_field_with_postprocessing
allensdk/brain_observatory/receptive_field_analysis/receptive_field.py:17: in <module>
    from statsmodels.sandbox.stats.multicomp import multipletests
E   ImportError: No module named statsmodels.sandbox.stats.multicomp

These errors were eliminated by running pip install statsmodels

py.test -s --boxed

==================================================== test session starts ====================================================
platform linux2 -- Python 2.7.13, pytest-2.9.2, py-1.4.34, pluggy-0.3.1
rootdir: /home/kotfic/kitware/projects/allen/src/AllenSDK, inifile: setup.cfg
plugins: xdist-1.14, pep8-1.0.6, ordering-0.5, mock-1.6.0, cov-2.2.1
collected 592 items 

....

========================= 29 failed, 489 passed, 72 skipped, 1 xfailed, 1 xpassed in 163.92 seconds =========================

The remaining test failures are in two genres:

  1. Test NWB files that are absent from the git distribution, defined in nwb_ephys_files.txt and nwb_files.txt. These are used to parameterize the data_set fixture defined in test_nwb_data_set.py and the data_set fixture in test_brain_observatory_nwb_data_set.py.

  2. Tests that fail because cell_specimens.json is empty, throwing JSONDecodeError: Expecting value: line 1 column 1 (char 0) when json_utilities.read tries to load the empty string. This happens for test_dataframe_query_between_unmocked and test_dataframe_query_is_unmocked.

I would really like to run the NWB tests if possible. Guidance on how I can get the test datasets would be great! It seems that if I had that data, cell_specimens.json would contain valid JSON and the decoding issues would resolve themselves?

Thanks!

Workdir in perisomatic biophysical models doesn't exist

When downloading the perisomatic biophysical model from, e.g.:

http://celltypes.brain-map.org/mouse/experiment/electrophysiology/469801569

After copying the experimental nwb file in the model directory, and running:

python -m allensdk.model.biophysical_perisomatic.runner manifest.json

One gets:

Traceback (most recent call last):
  File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/local/lib/python2.7/site-packages/allensdk/model/biophysical_perisomatic/runner.py", line 136, in <module>
    run(description)
  File "/usr/local/lib/python2.7/site-packages/allensdk/model/biophysical_perisomatic/runner.py", line 50, in run
    manifest.get_path('output'))
  File "/usr/local/lib/python2.7/site-packages/allensdk/model/biophysical_perisomatic/runner.py", line 79, in prepare_nwb_output
    copy(nwb_stimulus_path, nwb_result_path)
  File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 119, in copy
    copyfile(src, dst)
  File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 83, in copyfile
    with open(dst, 'wb') as fdst:
IOError: [Errno 2] No such file or directory: '/Users/werner/Desktop/neuronal_model_473465774(3)/work/469801567.nwb'

This is caused by the runner.py code copying the NWB file under the assumption that a 'work' directory already exists. Presumably the code should create that directory before copying the file?
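
A minimal sketch of the fix, assuming the destination path comes out of the manifest as in the traceback above:

import os
from shutil import copy

def copy_with_workdir(nwb_stimulus_path, nwb_result_path):
    # create the output directory (e.g. 'work/') before copying into it
    out_dir = os.path.dirname(nwb_result_path)
    if out_dir and not os.path.exists(out_dir):
        os.makedirs(out_dir)
    copy(nwb_stimulus_path, nwb_result_path)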

Sweep list should be returned in sweep number order instead of random order

The list returned by api.queries.cell_types_api.get_ephys_sweeps is in random order, probably the order in which the sweeps are pulled out of the database. I can find the desired sweep_number by iterating through the list and picking out the one that matches, but the list should be in sweep number order, i.e. index 0 is the first sweep collected in that experiment.
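
In the meantime, sorting client-side is a one-liner; a sketch, assuming each entry is a dict with a sweep_number key as described above:

from allensdk.api.queries.cell_types_api import CellTypesApi

sweeps = CellTypesApi().get_ephys_sweeps(354190013)  # specimen ID is illustrative
# order sweeps by acquisition number rather than database order
sweeps = sorted(sweeps, key=lambda s: s['sweep_number'])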

MouseConnectivityCache.get_structure_mask error

MouseConnectivityCache.get_structure_mask uses the Ontology instance incorrectly: all structure IDs must be passed in as a list. Error:

In [6]: sm = mcc.get_structure_mask(315)

TypeError                                 Traceback (most recent call last)
in ()
----> 1 sm = mcc.get_structure_mask(315)

C:\Users\davidf.ALLENINST\AppData\Roaming\Python\Python27\site-packages\allensdk\core\mouse_connectivity_cache.pyc in get_structure_mask(self, structure_id, file_name, annotation_file_name)
    603         else:
    604             ont = self.get_ontology()
--> 605             structure_ids = ont.get_descendant_ids(structure_id)
    606             annotation, _ = self.get_annotation_volume(annotation_file_name)
    607             mask = self.make_structure_mask(structure_ids, annotation)

C:\Users\davidf.ALLENINST\AppData\Roaming\Python\Python27\site-packages\allensdk\core\ontology.pyc in get_descendant_ids(self, structure_ids)
    115         """
    116
--> 117         if len(structure_ids) == 0:
    118             return self.descendant_ids
    119         else:

TypeError: object of type 'int' has no len()
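
The fix, on either the caller's side or inside get_structure_mask itself, is to wrap the ID in a list; a sketch against the deprecated Ontology API shown in the traceback:

# Ontology.get_descendant_ids expects an iterable of IDs, not a bare int
ont = mcc.get_ontology()
structure_ids = ont.get_descendant_ids([315])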

SWC files with a single Soma point

Hi everybody,
I noticed that most of the SWC files provided by the Allen Institute contain a single soma point rather than the three soma points we see in morphology files from other sources (e.g. neuromorpho.org). Issues appear when I try to import the SWC files with the Import3D module of the NEURON simulator. Does anyone have tips, or anything that can help me transform a single soma point into three soma points?
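
One common workaround, not part of the allensdk, is to rewrite the soma sample as NEURON's three-point soma: keep the center point and add two samples offset by the radius along y. A sketch, assuming a well-formed SWC file whose single soma sample has ID 1 and type 1:

def three_point_soma(in_path, out_path):
    # SWC columns: id, type, x, y, z, radius, parent
    rows = []
    for line in open(in_path):
        if line.startswith('#') or not line.strip():
            continue
        f = line.split()
        rows.append((int(f[0]), int(f[1]), float(f[2]), float(f[3]),
                     float(f[4]), float(f[5]), int(f[6])))

    soma = next(row for row in rows if row[1] == 1)
    sid, _, x, y, z, radius, _ = soma
    shift = 2  # two extra soma samples push every later ID up by 2

    with open(out_path, 'w') as out:
        # three-point soma: center, bottom (y - r), top (y + r)
        out.write('%d 1 %g %g %g %g -1\n' % (sid, x, y, z, radius))
        out.write('%d 1 %g %g %g %g %d\n' % (sid + 1, x, y - radius, z, radius, sid))
        out.write('%d 1 %g %g %g %g %d\n' % (sid + 2, x, y + radius, z, radius, sid))
        for rid, rtype, rx, ry, rz, rr, parent in rows:
            if rtype == 1:
                continue  # the original soma sample has been replaced
            new_parent = parent if parent == sid else parent + shift
            out.write('%d %d %g %g %g %g %d\n' %
                      (rid + shift, rtype, rx, ry, rz, rr, new_parent))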

Connectivity: function to get a particular frame from the spinning thumbnail

The MIP spinning thumbnails provide an at-a-glance summary of projection throughout the whole brain.
Currently, there is no API function to access a particular frame of a particular image series.

This is a request for a function to get a single frame image from either the horizontal or vertical spinning thumbnails.

Access to parameters for stimulus waveform

Is there any way to get the parameters of a stimulus waveform other than trying to figure them out from the value of sweep_data['stimulus']? In other words, what I currently do is:

from allensdk.core.nwb_data_set import NwbDataSet
data_set = NwbDataSet(raw_ephys_file_name)
sweep_data = data_set.get_sweep(sweep_number)
stimulus = sweep_data['stimulus']

Because sweeps have multiple pulses of different sizes, I have to do some work to extract the pulse parameters (start times, durations, amplitudes), for example to reproduce them in a simulation. It would be helpful to have access to these parameters directly, e.g. a data_set.get_sweep_parameters() returning a dictionary of parameters. Are they stored anywhere that is accessible through the API?
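
Until something like the hypothetical data_set.get_sweep_parameters() exists, the pulse epochs can be recovered from the waveform itself; a sketch, assuming a piecewise-constant stimulus that starts and ends at a zero baseline:

import numpy as np

def pulse_params(stimulus, sampling_rate):
    # indices where the stimulus switches between baseline and a pulse
    active = (np.asarray(stimulus) != 0).astype(int)
    edges = np.flatnonzero(np.diff(active)) + 1
    starts, ends = edges[::2], edges[1::2]
    return [{'start': s / sampling_rate,           # seconds
             'duration': (e - s) / sampling_rate,  # seconds
             'amplitude': stimulus[s]}             # pA, per the hardcoded conversion
            for s, e in zip(starts, ends)]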

Example from docs not working

Hi,

First thank you for making this data available.

I installed allensdk version 0.12.0 using pip.
I tried to follow the instructions on:
http://alleninstitute.github.io/AllenSDK/biophysical_models.html
In order to download and load a neuron (Nr5a1-Cre VISp layer 2/3, 473862496) into NEURON.

I was able to download and compile the mod files with my installation of NEURON, but running:

python -m allensdk.model.biophysical_perisomatic.runner manifest.json

returns an error:

/Users/moharb/anaconda/bin/python: No module named biophysical_perisomatic

I also tried to run:

python -m allensdk.model.biophysical.runner manifest.json

And got a different error:

NEURON -- Release 7.4 (1370:16a7055d4a86) 2015-11-09
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2015
See http://www.neuron.yale.edu/neuron/credits

loading membrane mechanisms from x86_64/.libs/libnrnmech.so
Additional mechanisms from files
 ./modfiles//CaDynamics.mod ./modfiles//Ca_HVA.mod ./modfiles//Ca_LVA.mod ./modfiles//Ih.mod ./modfiles//Im.mod ./modfiles//Im_v2.mod ./modfiles//K_P.mod ./modfiles//K_T.mod ./modfiles//Kd.mod ./modfiles//Kv2like.mod ./modfiles//Kv3_1.mod ./modfiles//NaTa.mod ./modfiles//NaTs.mod ./modfiles//NaV.mod ./modfiles//Nap.mod ./modfiles//SK.mod
Traceback (most recent call last):
  File "/Users/moharb/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/Users/moharb/anaconda/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/Users/moharb/anaconda/lib/python2.7/site-packages/allensdk/model/biophysical/runner.py", line 158, in <module>
    run(description)
  File "/Users/moharb/anaconda/lib/python2.7/site-packages/allensdk/model/biophysical/runner.py", line 54, in run
    sweeps_by_type = run_params['sweeps_by_type']
KeyError: 'sweeps_by_type'

Any help would be appreciated.

Thanks,
Boaz

hidden state of glif_api instance

Hey,

I have another suggestion for better usability:
On the website you give this example for getting the neuronal models:

from allensdk.api.queries.glif_api import GlifApi
import allensdk.core.json_utilities as json_utilities

glif_api = GlifApi()
glif_api.get_neuronal_model(neuronal_model_id)
glif_api.cache_stimulus_file('stimulus.nwb')

neuron_config = glif_api.get_neuron_config()
json_utilities.write('neuron_config.json', neuron_config)

I found it very confusing that calling get_neuronal_model changes the state of the glif_api instance so that you can then call get_neuron_config(). This way, it's also not explicit which model the neuron_config comes from.
So I would suggest renaming get_neuronal_model() to get_neuronal_model_metadata() (because that's what it actually returns), and having the get_neuron_config() method take a "neuron_config_url" argument, which you get from the metadata. Alternatively, get_neuron_config() could take a "neuronal_model_id" argument so that you can be sure which model you actually get; see the sketch below.
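
A sketch of the suggested stateless interface (the names here are the proposals above, not the existing API):

glif_api = GlifApi()

# explicit: metadata lookup and config fetch are separate, stateless calls
metadata = glif_api.get_neuronal_model_metadata(neuronal_model_id)
neuron_config = glif_api.get_neuron_config(neuron_config_url=metadata['neuron_config_url'])

# or, keyed directly by model ID
neuron_config = glif_api.get_neuron_config(neuronal_model_id=neuronal_model_id)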

There are probably other and better solutions to this, but I hope you find this helpful.

Cheers,
awakenting

PS: This is the last issue that I have for today :)

Segmentation fault (core dumped)

I'm getting a segfault when trying to import BrainObservatoryCache:

In [1]: from allensdk.core.brain_observatory_cache import BrainObservatoryCache
Segmentation fault (core dumped)

On CentOS 6.5

conda environment is attached

environment.yml.txt

Throws error when trying to import classes from submodules

Using allensdk version 0.14.2

$ pip list | grep allensdk
allensdk (0.14.2)

Some of the requirements for loading submodules of the SDK are not included in the requirements.txt file, causing errors when importing classes.

example:

from allensdk.core.brain_observatory_cache import BrainObservatoryCache

missing skimage.measure
missing statsmodels

Perhaps add scikit-image and statsmodels to the requirements.txt?

Methods for creating drifting/static grating images

At the Dynamic Brain workshop, many students wanted to compare the drifting/static grating images with the natural scene images and movie frames.

The SDK should have a function that can generate these stimuli to enable a spatial comparison of the images.
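
A minimal sketch of such a generator, assuming a grating is specified by spatial frequency (cycles per pixel), orientation, and phase:

import numpy as np

def make_grating(height, width, spatial_freq, orientation_deg, phase=0.0):
    # grayscale sinusoidal grating with values in [-1, 1]
    theta = np.deg2rad(orientation_deg)
    yy, xx = np.mgrid[0:height, 0:width]
    # project pixel coordinates onto the grating's axis of modulation
    u = xx * np.cos(theta) + yy * np.sin(theta)
    return np.sin(2 * np.pi * spatial_freq * u + phase)

# e.g. a static grating sized to a 1920x1200 stimulus monitor
grating = make_grating(1200, 1920, spatial_freq=0.02, orientation_deg=45)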

Arbitrary stimulus for biophysical models

Hi everybody,
I would like to know if the allensdk provides code for injecting an arbitrary stimulus into the biophysical models and then recording the voltage response to that stimulus.
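
Outside of the packaged runners, NEURON itself supports this via Vector.play into an IClamp; a minimal sketch, assuming the biophysical model has already been instantiated and soma is its root section:

import numpy as np
from neuron import h

h.load_file('stdrun.hoc')

# arbitrary current waveform (nA) on a 0.1 ms time grid
t = np.arange(0, 1000.0, 0.1)
waveform = 0.1 * np.sin(2 * np.pi * t / 200.0)

iclamp = h.IClamp(soma(0.5))  # soma: the instantiated model's soma section
iclamp.dur = 1e9              # amplitude is driven by the played vector

tvec, ivec = h.Vector(t), h.Vector(waveform)
ivec.play(iclamp._ref_amp, tvec, 1)  # 1 -> interpolate between samples

v = h.Vector()
v.record(soma(0.5)._ref_v)  # record the somatic membrane potential
h.tstop = 1000.0
h.run()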

Unit conversion should not be hardcoded

All stimuli are converted to pA and responses to mV here. However, this assumes that 1) the cells were recorded in current clamp, and 2) the gains are the same for every experiment. The latter may be true, but in several experiments the first few sweeps are collected in voltage clamp, such that the stimulus is in mV and the response is in pA. The units should therefore be checked for each sweep before the conversion is made.

get_structure_unionizes without descendent structures

Add an option to retrieve unionized data without all descendant structures. Regional connectome analysis/fitting usually doesn't care about descendants.

This can easily be modified in filter_structure_unionizes; see the sketch below.
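
Until then, a client-side sketch, assuming the unionizes come back as a pandas DataFrame with a structure_id column:

# keep only rows for the exact structures of interest, no descendants
wanted = [385, 315]  # illustrative structure IDs
unionizes = mcc.get_structure_unionizes([experiment_id], is_injection=False)
unionizes = unionizes[unionizes['structure_id'].isin(wanted)]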

Meaning of the sweeps numbers contained in the file "fit_parameters.json".

Hi everybody,
Some models in the Allen Institute database, including the perisomatic biophysical models, contain a file named "fit_parameters.json". Inside this file is a list of sweep numbers. I would like to know if this list corresponds to the numbers of the current steps that were used to optimize the parameters of the model.

Thank you

Reproducing published injection site set

Using only the AllenSDK, we are trying to reproduce the set of injection/target sites used in Oh et al. 2014 (and Calabrese et al. 2015). It is stated that the selection was made such that site sizes roughly match infection size, but it appears that the only way to reproduce this selection from the ontology is to analyze all the images ourselves.

Have we missed something? Are the nodes in the ontology annotated as to whether they are among the 295 regions used in the Oh et al. paper? If not, could this be added to the SDK?

Discrepancy in stimulus start time

I noticed that there is a conflict between two ways of obtaining the square pulse start time:

from allensdk.api.queries.cell_types_api import CellTypesApi

ct = CellTypesApi()
dataset_id = 354190013
file_name = '%d_raw_data.nwb' % dataset_id
experiment_params = ct.get_ephys_sweeps(dataset_id)
for sp in experiment_params:
    if sp['stimulus_name'] == 'Long Square' and sp['num_spikes'] > 0:
        print(sp)
        sweep_num = sp['sweep_number']
        break

prints 0.92 for the start time of the square pulse. However,

v, i, t = get_sweep_from_nwb('%d_raw_data.nwb' % dataset_id, sweep_num)
sweep_info = {}
sweep_info['stim_start'], sweep_info['stim_dur'], sweep_info['stim_amp'], start_idx, end_idx \
= get_square_stim_characteristics(i, t)
print(sweep_info)

prints 1.02 for the start time. On visual inspection, it looks like the pulse starts at about 1 s, so the latter appears to be correct. Where is the discrepancy coming from? I don't think it's the duration of the test pulse, because:

from allensdk.core.nwb_data_set import NwbDataSet

data_set = NwbDataSet(file_name)
sweep_data = data_set.get_sweep(sweep_num)  # fetch the sweep before using it
index_range = sweep_data['index_range']
print(index_range[0] / sweep_data['sampling_rate'])

prints 0.75, which doesn't correspond to the 0.1 second discrepancy. So why does the dictionary returned by get_ephys_sweeps say that the stimulus starts at 0.92 s?

GlifNeuron simulation error

The GlifNeuron simulation runs into an index out-of-bounds error when the input stimulus ends on an action potential. When we run a ramped input stimulus for model 482529681:

from allensdk.api.queries.glif_api import GlifApi
from allensdk.model.glif.glif_neuron import GlifNeuron

neuron_config = GlifApi().get_neuron_configs([482529681])[482529681]
neuron = GlifNeuron.from_dict(neuron_config)
ramp = [t*1.25e-13 for t in xrange(20000)]
neuron.run(ramp)

We get the following error:

/home/kael/workspace/Allen/AllenSDK/allensdk/model/glif/glif_neuron.pyc in run(self, stim)
379 threshold_out[time_step:time_step+n] = np.nan
380 AScurrents_out[time_step:time_step+n,:] = np.nan
--> 381 voltage_out[time_step+n] = voltage_t0
382 threshold_out[time_step+n] = threshold_t0
383 AScurrents_out[time_step+n,:] = AScurrents_t0

IndexError: index 20068 is out of bounds for axis 0 with size 20000

Thanks,
Kael

Problem running perisomatic biophysical model if HDF5 version < 1.8.7

This is an issue that is actually caused by what seems to be a problem in HDF5 library versions < 1.8.7. It is related to:
h5py/h5py#281
It seems that RH 6.7 ships this version of HDF5. The issue doesn't appear on, e.g., a Mac with a newer version of HDF5.

When running the perisomatic model from:

http://celltypes.brain-map.org/mouse/experiment/electrophysiology/469801569

One gets:
Traceback (most recent call last):
  File "/gpfs/bbp.cscs.ch/home/vangeit/local/lviz/python27/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/gpfs/bbp.cscs.ch/home/vangeit/local/lviz/python27/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/gpfs/bbp.cscs.ch/home/vangeit/local/lviz/python27env/lib/python2.7/site-packages/allensdk/model/biophysical_perisomatic/runner.py", line 136, in <module>
    run(description)
  File "/gpfs/bbp.cscs.ch/home/vangeit/local/lviz/python27env/lib/python2.7/site-packages/allensdk/model/biophysical_perisomatic/runner.py", line 50, in run
    manifest.get_path('output'))
  File "/gpfs/bbp.cscs.ch/home/vangeit/local/lviz/python27env/lib/python2.7/site-packages/allensdk/model/biophysical_perisomatic/runner.py", line 83, in prepare_nwb_output
    data_set.set_spike_times(sweep, [])
  File "/gpfs/bbp.cscs.ch/home/vangeit/local/lviz/python27env/lib/python2.7/site-packages/allensdk/core/nwb_data_set.py", line 208, in set_spike_times
    spike_dir.create_dataset(sweep_name, data=spike_times, dtype='f8')
  File "/gpfs/bbp.cscs.ch/home/vangeit/local/lviz/python27env/lib/python2.7/site-packages/h5py/_hl/group.py", line 103, in create_dataset
    dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)
  File "/gpfs/bbp.cscs.ch/home/vangeit/local/lviz/python27env/lib/python2.7/site-packages/h5py/_hl/dataset.py", line 120, in make_new_dset
    sid = h5s.create_simple(shape, maxshape)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-build-peLR0k/h5py/h5py/_objects.c:2453)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-build-peLR0k/h5py/h5py/_objects.c:2410)
  File "h5py/h5s.pyx", line 99, in h5py.h5s.create_simple (/tmp/pip-build-peLR0k/h5py/h5py/h5s.c:1402)
ValueError: Zero sized dimension for non-unlimited dimension (Zero sized dimension for non-unlimited dimension)

It happens when the allensdk tries to create a dataset from an empty array (spike_times) using 'create_dataset' in allensdk/core/nwb_data_set.py:208 (allensdk version 0.10.1).
As proposed in the h5py issue mentioned above, this can be solved by passing

maxshape=(None,)

as an option to create_dataset(...).
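
That is, the call in nwb_data_set.py would become:

# an empty dataset is legal when the dimension is marked unlimited
spike_dir.create_dataset(sweep_name, data=spike_times, dtype='f8', maxshape=(None,))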
