NEURON and neuroConstruct versions of Alon Korngreen's pyramidal neuron model
See http://www.opensourcebrain.org/projects/almog-korngreen-pyramidal-neuron
Porting Alon Korngreen's pyramidal neuron model to NeuroML.
In the original (ModelDB) file params.hoc, line 64:
ca_qh=2
should instead read
cah_qh=2
This misnaming is innocuous, since the qh parameter is set to 2 in ca_h.mod and is not changed elsewhere.
The original .hoc files make heavy use of parameters defined in the .mod files. Even if these parameters are declared in ChannelML as <parameter> elements, the translation back to .mod adds prefixes that make them impossible to access from the original .hoc files. This is particularly relevant for the "shifts" in cah.
There is a lot of redundancy in the parametrized values of the generated .mod files (possibly introduced by the automatic conversion from ChannelML to NeuroML2; see the duplicate Constants in all ComponentTypes), which may contribute to #3.
The original .mod files apply a hardcoded *(1e-4) scaling factor to the maximum conductance. What would be the best way to capture that in the ChannelML transcriptions?
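For context, such a factor usually converts a conductance density from pS/µm² (a common unit in published parameter tables) to the S/cm² that NEURON expects; this is an assumption about the intent of the factor, sketched below:

```python
# Sketch (assumption): the hardcoded 1e-4 factor converts pS/um^2
# to S/cm^2, the conductance-density unit NEURON expects.
# 1 pS = 1e-12 S and 1 um^2 = 1e-8 cm^2, so 1 pS/um^2 = 1e-4 S/cm^2.

def ps_per_um2_to_s_per_cm2(g):
    """Convert a conductance density from pS/um^2 to S/cm^2."""
    return g * 1e-12 / 1e-8  # equivalent to g * 1e-4

print(ps_per_um2_to_s_per_cm2(100.0))  # ~0.01 S/cm^2
```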
q10 should be a parameter in iH, not hardcoded (it is changed from the hoc). As of v1.8.1, ChannelML does not allow using a parameter for the q10_factor attribute of q10_settings.
When the .mod files generated from ChannelML are used in the original model (after renaming the appropriate parameters: gmax->gbar, permeability->pbar, and the shifts), the simulations run much slower. The generated .mod files do declare far more RANGE and ASSIGNED variables, as well as intermediate variables, but I would expect those to be optimised away during compilation.
For kslow, there is a q10 correction for the a kinetics, but none for b and b1.
Also, there appear to be missing parentheses in the correction for beta, leaving a6 outside the scaling.
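To illustrate the parenthesization issue with made-up values (a4, a5, a6, qt, and the exponential form here are purely hypothetical; the real expressions are in the kslow mod file), the two readings give different rates:

```python
import math

# Hypothetical parameter values, for illustration only; the actual
# a4, a5, a6 and the functional form of beta live in the kslow mod file.
a4, a5, a6 = 0.02, 40.0, 0.005
qt = 2.3       # q10 temperature correction factor
v = -20.0      # membrane potential, mV

# Correction applied to the whole rate, as presumably intended:
beta_correct = (a4 * math.exp(-v / a5) + a6) * qt

# With the missing parentheses, a6 is left outside the correction:
beta_buggy = a4 * math.exp(-v / a5) * qt + a6

# The two expressions differ by a6 * (qt - 1).
print(beta_correct, beta_buggy)
```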
After replacing the relevant parameters in the mod file with shift/shifth (which are changed by the .hoc files in the original model), the original model only runs with variable-stepsize integration; otherwise a scopmath error is raised at initialization. I suspect this may be related to #3.
In the original mod file SK.mod, tau is incorrectly multiplied by the q10 factor instead of divided by it. This traces back to an earlier version of the mechanism from Deister et al. 2009.
I have contacted Alon Korngreen, and he is now aware of this issue.
The NeuroML version of SK implements the correct q10 scaling.
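The correct scaling divides the time constant by the q10 correction, so kinetics speed up (tau shrinks) as temperature rises; a minimal sketch, with illustrative reference temperature and q10 values:

```python
def q10_scaled_tau(tau, temp_c, ref_temp_c=23.0, q10=3.0):
    """Scale a time constant from ref_temp_c to temp_c.

    Correct form: tau is *divided* by the q10 correction, so tau
    shrinks (kinetics speed up) as temperature rises. The reference
    temperature and q10 value here are illustrative, not taken
    from SK.mod.
    """
    correction = q10 ** ((temp_c - ref_temp_c) / 10.0)
    return tau / correction

# 10 C above the reference with q10 = 3, tau is 3x smaller:
print(q10_scaled_tau(30.0, 33.0))  # -> 10.0
```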
In the original model, in params.hoc, line 63:
cah_qm =2
is set but never added to any section, i.e. this parameter keeps its value from the mod file (qm=4).
In the original (ModelDB) model, model.hoc defines
vshift_ca
vshift_na
but they are not used by the mechanisms.
When translating kslow to ChannelML, my first natural choice was to use three gates a, b, b1, matching what is expressed in the original mod file. However, the generated mod computed the fraction as (0.5b * 0.5b1) instead of the expected (0.5b + 0.5b1). Making b1 a second open state of a single gate b generates the fractional conductance correctly. Is that the expected behaviour in NeuroML? Is it documented?
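The distinction matters numerically: independent gates multiply, while two open states of one gate sum, and the product badly underestimates the conductance whenever both fractions are below 1. A sketch with illustrative gate values (b and b1 here are arbitrary, not computed from the kslow kinetics):

```python
# Illustrative open-state fractions; the real b and b1 would come
# from the kslow gate kinetics.
b, b1 = 0.6, 0.2

# Expected fractional conductance: weighted sum of two open states
# of a single gate.
frac_sum = 0.5 * b + 0.5 * b1

# What the generated mod computed when b and b1 were declared as
# two independent gates (gate variables multiply in NEURON/NeuroML):
frac_product = (0.5 * b) * (0.5 * b1)

print(frac_sum, frac_product)  # ~0.4 vs ~0.03
```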