sciunit's Introduction

scidash

A collection of candidates, tests, and records for use with SciDash.

sciunit's People

Contributors

appukuttan-shailesh, chihweilhbird, cyrus-, frenzymadness, justasb, lakesare, musicinmybrain, rgerkin, rgutzen, russelljjarvis, sanjayankur31, sbastiaens, zsinnema

sciunit's Issues

Kitematic Configuration

Make a configuration to allow Kitematic to use a sciunit jupyter-stacks-based container so that people can just launch a sciunit-enabled Jupyter notebook server with no additional configuration.

Tutorial

  • Write up a simple tutorial using an IPython notebook.
  • Link to it from the README.

Figure out licensing

Assume we want some sort of liberal license (BSD, MIT, AFL, Apache, ...?). We need to determine which is best, add appropriate language to the README, add the license itself to the base of the repo, and add license headers to the code files.

setup.py installation problems

Installing on Python 2.7, two issues arise:

  1. Warning
    /sw/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'install_requires'

  2. Error:
    error: package directory 'sciunit/test' does not exist

I think you meant to say sciunit.tests (plural) in setup.py.
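
A minimal sketch of the two fixes, assuming a conventional setuptools layout; the version string and dependency list below are placeholders, not the repo's actual values.

    # Hypothetical excerpt of setup.py: using setuptools (which understands
    # install_requires, unlike old distutils) silences the warning, and naming
    # the package 'sciunit.tests' (plural) fixes the missing-directory error.
    from setuptools import setup

    setup(
        name='sciunit',
        version='0.1',                            # placeholder
        packages=['sciunit', 'sciunit.tests'],    # 'tests', not 'test'
        install_requires=['quantities'],          # placeholder dependency list
    )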

Travis post-test integration with S3

Add hooks to sciunit command line tools so that when the tests are run on Travis-CI (or possibly more generally), the outputs (e.g. Jupyter notebooks, and possibly the entire environment) get dumped into Amazon S3 buckets.
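
A rough sketch of what such a hook might do, assuming boto3 credentials are configured in the CI environment; the bucket name, glob pattern, and key scheme are placeholders, not part of any existing sciunit tooling.

    import glob
    import os
    import boto3

    def upload_artifacts(bucket='sciunit-ci-artifacts'):   # hypothetical bucket name
        s3 = boto3.client('s3')
        # Key uploads by the Travis build number so runs don't overwrite each other.
        build = os.environ.get('TRAVIS_BUILD_NUMBER', 'local')
        for path in glob.glob('**/*.ipynb', recursive=True):
            s3.upload_file(path, bucket, '%s/%s' % (build, path))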

Setup & Distribution

Need to create a distutils (or setuptools) setup script and submit the package to PyPI for easy installation.

Caching strategy for test results

Need to figure out how to avoid rerunning every test-model combination when only one of them has been updated (perhaps using a cache file keyed by a checksum of the test's source code plus the values of its parameters, or something like that; maybe using SQLite?).
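
A sketch of the checksum idea under those assumptions; the params attribute lookup, the model name as part of the key, and the shelve-backed cache file are illustrative choices, not an existing implementation.

    import hashlib
    import inspect
    import shelve

    def cache_key(test):
        # Hash the test's source code together with its parameter values.
        source = inspect.getsource(type(test))
        params = repr(sorted(test.params.items()))   # assumes the test exposes a params dict
        return hashlib.sha256((source + params).encode()).hexdigest()

    def judge_with_cache(test, model, cache_path='score_cache'):
        key = '%s:%s' % (cache_key(test), model.name)
        with shelve.open(cache_path) as db:
            if key not in db:
                db[key] = test.judge(model)          # only rerun when the checksum changes
            return db[key]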

Implement "Required Tests" for tests

I think we probably want to explicitly list which tests must be passed in order for a given test to be applicable, similar to required_capabilities. This is going to be a potentially gnarly dependency-resolution problem though, so we need to figure out how that should work.
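
A rough sketch of how such a declaration might look, by analogy with required_capabilities; the attribute name, the example test classes, and the helper function are all hypothetical.

    import sciunit
    from neuronunit.capabilities import ProducesMembranePotential  # capability mentioned elsewhere in these issues

    class RestingPotentialTest(sciunit.Test):        # hypothetical example test
        required_capabilities = (ProducesMembranePotential,)
        required_tests = ()                          # proposed attribute: no prerequisites

    class RheobaseDependentTest(sciunit.Test):       # hypothetical example test
        required_capabilities = (ProducesMembranePotential,)
        required_tests = (RestingPotentialTest,)     # must pass before this test is applicable

    def prerequisites_met(test, passed_tests):
        # passed_tests: the set of test classes a given model has already passed.
        return all(t in passed_tests for t in getattr(test, 'required_tests', ()))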

MRO exception if overlapping capabilities are provided

If redundant capabilities are provided (e.g. neuronunit.capabilities.ProducesMembranePotential and neuronunit.capabilities.ProducesActionPotentials, which overlap because the latter is a subclass of the former), an MRO exception will be thrown. To avoid confusing the user, the __new__ method of the sciunit.Model class should have a check for this, along with instructions for removing one of the overlapping classes.
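
A sketch of the proposed check, not current sciunit behavior: scan the declared base classes for pairs where one is a subclass of the other and tell the user which one to drop. The helper could be called from Model.__new__ as suggested, or from a metaclass before the MRO is resolved.

    def find_redundant_capabilities(model_class):
        bases = model_class.__bases__
        redundant = []
        for a in bases:
            for b in bases:
                if a is not b and issubclass(b, a):
                    # b already provides everything a does, so a is redundant.
                    redundant.append((a, b))
        return redundant

    # Hypothetical use of the helper:
    # for weaker, stronger in find_redundant_capabilities(cls):
    #     raise TypeError("Remove %s from the base classes; %s already provides it."
    #                     % (weaker.__name__, stronger.__name__))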

Refactor score color map to use a config file

Refactor the score color mapping into a utility that is shared between the SciDash web app and the SciUnit library (possibly codified into a shared JSON file that gets regenerated when there are changes). See line 617 in __init__.py for the current implementation of the color mapping.
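
A sketch of the shared-JSON idea; the file name, score categories, and colors below are placeholders, not the mapping currently hard-coded in __init__.py.

    import json

    SCORE_COLORS = {                      # hypothetical mapping from score categories to RGB triples
        'pass': [50, 205, 50],
        'borderline': [255, 165, 0],
        'fail': [220, 20, 60],
        'error': [128, 128, 128],
    }

    def dump_color_config(path='score_colors.json'):
        # Regenerate the shared file whenever the mapping changes, so the SciDash
        # web app and the SciUnit library stay in sync.
        with open(path, 'w') as f:
            json.dump(SCORE_COLORS, f, indent=2)

    def load_color_config(path='score_colors.json'):
        with open(path) as f:
            return json.load(f)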

scaling linear sum of errors and sciunit error for score 0.0

Related to this error too. I now believe the real problem was that, if my rheobase search method didn't succeed in finding a rheobase after 4 iterations, it returned 0.0 instead of None. From outside the algorithm it looked like a value of 0.0 Amps had been found, but in fact nothing was found. Also, the mistaken 0 Amp return from the rheobase search was not able to yield an appropriately large error in sciunit, meaning that the GA would converge on values where no rheobase could be found. In other words, this error propagated on through the GA, and it meant the GA could not learn rheobase very well.

My latest push to dev circumvents these problems.

Test Suite

Need to write a test suite for sciunit core.

Formula for NEURON + Python3 support on OSX

Following up on my instructions here in the NEURON forums, I am making a repository here with the sole purpose of testing NEURON with Python 3 on OSX. It will probably be a while before that shows a working formula.

Sumatra integration

Use Sumatra for managing and tracking simulations associated with test runs. This would probably involve using the @capture decorator on the implementations of whatever functions run the model.
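
A minimal sketch, assuming Sumatra's capture decorator is importable as shown below (check the Sumatra documentation for the exact import path and calling convention); build_model is a hypothetical stand-in for whatever function actually constructs and runs the model.

    from sumatra.decorators import capture   # assumed import path

    def build_model(parameters):
        # Placeholder for the code that constructs the sciunit/neuronunit model.
        raise NotImplementedError

    @capture
    def run_model(parameters):
        # Wrapping the model-running function like this would record each
        # invocation as a Sumatra simulation record.
        model = build_model(parameters)
        return model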

Plotting best candidate solutions against idealised waveforms.

Are you aware of whether NeuroElectro or the AIBS has a particular sweep_id for a rheobase current injection?

For the sake of the presentation I want to plot the best and worst candidate solutions against the idealised waveform.

When I expand the observation object, it's full of scalar values for things like the spike width, capacitance, and rheobase tests, but I need a v_{m} trace as an observation for comparison too.

I have found the cell through the web portal.

Here is the XML for the whole data set.

An excerpt is below. I think it probably means that the rheobase current ri is 128.90625 pA.

And the sweep ID for the appropriate v_{m} is 396427442.

    <peak-v-short-square type="float">23.9500007629395</peak-v-short-square>
    <rheobase-sweep-id type="integer">396427442</rheobase-sweep-id>
    <rheobase-sweep-number type="integer">46</rheobase-sweep-number>
    <ri type="float">128.90625</ri>

Optional weight array for TestSuite

Every TestSuite should have its score determined by weighting the scores of the component tests. Currently, the weights are all equal, so the total TestSuite score is simply the mean of the sort_key attributes of the component tests.

The attribute should be called weights, and it should be possible to pass it as an argument to the constructor or to change it at any time. It should also be possible to change it after a call to judge, then call judge again and immediately get a re-weighted score without rerunning the tests, using some sort of caching.

The weights (w) should be a 1-D or a 2-D array. If it is a 1-D array, the calculation of the total score is simply s · w, where s is the array of sort_key attributes and w is the array of weights; w must be normalized to have a sum of 1 before it is set. If it is a matrix, the calculation is the sum of the entries of the vector w · s. For example, if w is the identity matrix (divided by the number of tests), then this is the same as equal weighting. However, if any off-diagonal entries are non-zero, then some tests influence multiple entries in the resulting vector w · s. This may matter in some implementations (see here) that use this vector to do a non-dominated sort of the resulting test suite scores, retaining the whole vector instead of collapsing it into a single value.
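
A small numerical sketch of the two cases described above, using numpy; the specific sort_key values and weights are made up for illustration.

    import numpy as np

    s = np.array([0.9, 0.4, 0.7])      # sort_key values of the component test scores (made up)

    # 1-D case: normalize the weights to sum to 1, then take the dot product.
    w = np.array([2.0, 1.0, 1.0])
    w = w / w.sum()
    total = s.dot(w)                   # scalar suite score

    # 2-D case: keep the weighted vector W.dot(s) (e.g. for a non-dominated sort)
    # or collapse it with a sum.
    n = len(s)
    W = np.eye(n) / n                  # identity matrix / number of tests = equal weighting
    vector = W.dot(s)
    total_2d = vector.sum()            # equals s.mean() for this choice of W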

Trying to catch obscure math domain error.

I am trying to catch a certain math error that occurs not regularly, but periodically, as the GA samples different parameters on subsequent runs due to the PRNG.

I feel that the sanity_check method should be able to stop these math errors if it is done well. I am unsure if I am being too picky by throwing out cases where the membrane potential doesn't change at all (its derivative is full of zeros).

Something else I have noticed is that if the rheobase current is found to be exactly 0 Amps, compute_score fails for the Rheobase test, also I think due to a math error. I have put in a short-term hack where a rheobase that is found to be exactly 0 Amps is substituted with 0.0000001 Amps instead.

    def sanity_check(self, rheobase, model):
        '''
        Check whether the membrane potential and its derivative constitute
        continuous, differentiable functions.
        If they don't, return False so that the corresponding model can be discarded.
        Inputs: a rheobase value and a model.
        Output: a boolean flag.
        '''
        import math
        import numpy as np
        import neuronunit.capabilities as cap

        self.params['injected_square_current']['delay'] = DELAY
        self.params['injected_square_current']['duration'] = DURATION
        self.params['injected_square_current']['amplitude'] = rheobase
        model.inject_square_current(self.params['injected_square_current'])
        mp = model.results['vm']

        def nan_test(mp):
            # Is the membrane potential filled with zeros?
            if not np.any(np.array(mp)):
                return False

            # Does the membrane potential contain NaNs?
            for i in mp:
                if math.isnan(i):
                    return False

            # Does the derivative of any spike waveform contain NaNs?
            sws = cap.spike_functions.get_spike_waveforms(model.get_membrane_potential())
            for s in sws:
                dvdt = np.diff(np.array(s))
                for j in dvdt:
                    if math.isnan(j):
                        return False

            # Is the derivative of every spike waveform filled with zeros?
            diff_arrays = [np.diff(np.array(s)) for s in sws]
            if not any(np.any(d) for d in diff_arrays):
                return False

            return True

        return nan_test(mp)

Parallel reading of neuroelectro.dir, bak, cache still problematic on big runs.

On bigger runs, the probability of a read not succeeding is greater. In theory parallel reads should be fine, but there is something intrinsically not parallel-friendly about the neuroelectro caching methods.

Parallel reading of neuroelectro.dir, bak, cache still problematic on big runs.

Ideally the tests from get_neab should be picklable; then building them could be performed just once, on only one CPU, and the result pushed to all the others. In practice, though, get_neab.tests is still not picklable.

This is my most recent version of the bug. At least it demonstrates graceful failure.

NeuroElectroError("NeuroElectro.org appears to be down.")

I think I disabled writing the pickle files, so it tries to download new files every time, making the program more susceptible to web server errors.

Parallel writes are always a problem.

---------------------------------------------------------------------------
gaierror                                  Traceback (most recent call last)
/opt/conda/lib/python3.5/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
   1253             try:
-> 1254                 h.request(req.get_method(), req.selector, req.data, headers)
   1255             except OSError as err: # timeout error
/opt/conda/lib/python3.5/http/client.py in request(self, method, url, body, headers)
   1105         """Send a complete request to the server."""
-> 1106         self._send_request(method, url, body, headers)
   1107 
/opt/conda/lib/python3.5/http/client.py in _send_request(self, method, url, body, headers)
   1150             body = _encode(body, 'body')
-> 1151         self.endheaders(body)
   1152 
/opt/conda/lib/python3.5/http/client.py in endheaders(self, message_body)
   1101             raise CannotSendHeader()
-> 1102         self._send_output(message_body)
   1103 
/opt/conda/lib/python3.5/http/client.py in _send_output(self, message_body)
    933 
--> 934         self.send(msg)
    935         if message_body is not None:
/opt/conda/lib/python3.5/http/client.py in send(self, data)
    876             if self.auto_open:
--> 877                 self.connect()
    878             else:
/opt/conda/lib/python3.5/http/client.py in connect(self)
    848         self.sock = self._create_connection(
--> 849             (self.host,self.port), self.timeout, self.source_address)
    850         self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
/opt/conda/lib/python3.5/socket.py in create_connection(address, timeout, source_address)
    692     err = None
--> 693     for res in getaddrinfo(host, port, 0, SOCK_STREAM):
    694         af, socktype, proto, canonname, sa = res
/opt/conda/lib/python3.5/socket.py in getaddrinfo(host, port, family, type, proto, flags)
    731     addrlist = []
--> 732     for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    733         af, socktype, proto, canonname, sa = res
gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
URLError                                  Traceback (most recent call last)
~/neuronunit/neuronunit/neuroelectro.py in get_json(self, params, quiet)
    149         try:
--> 150             url_result = urlopen(url,None,3) # Get the page.
    151             html = url_result.read() # Read out the HTML (actually JSON)
/opt/conda/lib/python3.5/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
    162         opener = _opener
--> 163     return opener.open(url, data, timeout)
    164 
/opt/conda/lib/python3.5/urllib/request.py in open(self, fullurl, data, timeout)
    465 
--> 466         response = self._open(req, data)
    467 
/opt/conda/lib/python3.5/urllib/request.py in _open(self, req, data)
    483         result = self._call_chain(self.handle_open, protocol, protocol +
--> 484                                   '_open', req)
    485         if result:
/opt/conda/lib/python3.5/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
    443             func = getattr(handler, meth_name)
--> 444             result = func(*args)
    445             if result is not None:
/opt/conda/lib/python3.5/urllib/request.py in http_open(self, req)
   1281     def http_open(self, req):
-> 1282         return self.do_open(http.client.HTTPConnection, req)
   1283 
/opt/conda/lib/python3.5/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
   1255             except OSError as err: # timeout error
-> 1256                 raise URLError(err)
   1257             r = h.getresponse()
URLError: <urlopen error [Errno -2] Name or service not known>
During handling of the above exception, another exception occurred:
AttributeError                            Traceback (most recent call last)
~/neuronunit/neuronunit/neuroelectro.py in get_json(self, params, quiet)
    153             try:
--> 154                 html = e.read().decode('utf-8')
    155                 self.json_object = json.loads(html)
AttributeError: 'URLError' object has no attribute 'read'
During handling of the above exception, another exception occurred:
NeuroElectroError                         Traceback (most recent call last)
<string> in <module>()
/opt/conda/lib/python3.5/site-packages/ipyparallel/client/remotefunction.py in <lambda>(f, *sequences)
    248             if _mapping:
    249                 if sys.version_info[0] >= 3:
--> 250                     f = lambda f, *sequences: list(map(f, *sequences))
    251                 else:
    252                     f = map
~/neuronunit/neuronunit/plottools.py in dtc_to_plotting(dtc)
    540     dtc.vm1 = None
    541     from neuronunit.models.reduced import ReducedModel
--> 542     from neuronunit.optimization.get_neab import tests as T
    543     from neuronunit.optimization import get_neab
    544     from neuronunit.optimization import evaluate_as_module
~/neuronunit/neuronunit/optimization/get_neab.py in <module>()
     58         #use of the variable 'neuron' in this conext conflicts with the module name 'neuron'
     59         #at the moment it doesn't seem to matter as neuron is encapsulated in a class, but this could cause problems in the future.
---> 60         observation = cls.neuroelectro_summary_observation(neuron)
     61         tests += [cls(observation)]
     62 
~/neuronunit/neuronunit/tests/base.py in neuroelectro_summary_observation(cls, neuron)
    104                                                      # NeuroElectro ontology.
    105             )
--> 106         reference_data.get_values(quiet=not cls.verbose) # Get and verify summary data
    107                                     # from neuroelectro.org.
    108 
~/neuronunit/neuronunit/neuroelectro.py in get_values(self, params, quiet)
    262     def get_values(self, params=None, quiet=False):
    263         data = super(NeuroElectroSummary, self).get_values(params=params,
--> 264                                                            quiet=quiet)
    265         if data:
    266             self.neuron.name = data['n']['name']
~/neuronunit/neuronunit/neuroelectro.py in get_values(self, params, quiet)
    188             self.json_object = json.loads(db[identifier])
    189         else:
--> 190             self.get_json(params=params, quiet=quiet)
    191             if DUMP:
    192                 db[identifier] = json.dumps(self.json_object)
~/neuronunit/neuronunit/neuroelectro.py in get_json(self, params, quiet)
    158             except:
    159                 if hasattr(e,'reason'):
--> 160                     raise NeuroElectroError(e.reason)
    161             raise NeuroElectroError("NeuroElectro.org appears to be down.")
    162             #print "Using fake data for now."
NeuroElectroError: [Errno -2] Name or service not known

Describe a .sciunit file for pointing to notebooks

A .sciunit file in the root of a repository should point to the location of notebooks to be turned into .py files and run by Travis-CI or any other automated testing software. This file should also be usable by some sort of sciunit.py script that would get installed with sciunit.
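
A hypothetical sketch of what such a file and its reader could look like; neither the format nor the key names are specified in the issue, so an INI-style layout readable by configparser is assumed.

    import configparser

    # Hypothetical contents of a .sciunit file at the repo root.
    EXAMPLE = """
    [notebooks]
    paths = docs/chapter1.ipynb
        docs/chapter2.ipynb
    """

    config = configparser.ConfigParser()
    config.read_string(EXAMPLE)
    notebook_paths = config['notebooks']['paths'].split()
    # A sciunit.py script installed with sciunit (or a CI job) could iterate over
    # these paths, convert each notebook to a .py file, and execute it.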

Multidimensional run arguments

Need to be able to pass more than one test and more than one model and get a matrix of results out.

This matrix needs methods for showing the results on the console as appropriately formatted plain text.
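
A sketch of the desired behavior rather than an existing API; test.judge(model), test.name, and model.name are standard sciunit attributes, but the matrix construction and the plain-text rendering below are illustrative.

    def judge_matrix(tests, models):
        # One row per test, one column per model.
        matrix = [[test.judge(model) for model in models] for test in tests]

        # Plain-text rendering for the console.
        header = ''.join('%20s' % model.name for model in models)
        print('%20s%s' % ('', header))
        for test, row in zip(tests, matrix):
            cells = ''.join('%20s' % str(score) for score in row)
            print('%20s%s' % (test.name, cells))
        return matrix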

Details in README

Need more details, links, and examples in the README file. Possibly don't use Markdown.
