gustavoem / signetms

SigNetMS stands for Signaling Network Model Selection
License: GNU General Public License v3.0
Currently, our lognormal jump is buggy: we can't actually control its mean and variance, which is essential for MCMC jumping distributions.
Pre-requisite: issue #9
This looks like a good python library for this: https://seaborn.pydata.org/tutorial/distributions.html
We should be able to define the prior of a parameter inside the xml model definition.
According to the Supplementary Materials, the proposal distribution should be log-normal. This implies that our jumping distributions are not symmetric, since when jumping from theta = x the mean of the proposal should be x.
When proposal distributions are not symmetric, we should use the Metropolis-Hastings algorithm for sampling instead of the Metropolis algorithm.
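Concretely, the asymmetry can be handled by including the proposal density ratio in the acceptance probability. A minimal sketch, not SigNetMS code (the function names and the proposal parameterization are assumptions):

```python
import math
import random

def lognormal_propose(theta, sigma=0.5):
    """Sample a proposal whose log-mean is log(theta)."""
    return theta * math.exp(random.gauss(0.0, sigma))

def lognormal_logpdf(x, given, sigma=0.5):
    """log q(x | given) for a lognormal centered, in log space, on given."""
    z = (math.log(x) - math.log(given)) / sigma
    return -math.log(x * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z

def mh_log_acceptance(log_target, old, new, sigma=0.5):
    """Metropolis-Hastings log acceptance probability: the Metropolis
    ratio plus the Hastings correction q(old | new) / q(new | old)."""
    log_ratio = (log_target(new) - log_target(old)
                 + lognormal_logpdf(old, new, sigma)
                 - lognormal_logpdf(new, old, sigma))
    return min(0.0, log_ratio)
```

For this particular proposal the Hastings correction reduces to new / old, which is exactly the factor a plain Metropolis ratio would miss.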
ODEintWarning: Excess work done on this call (perhaps wrong Dfun type)
Should we draw different starting values for each chain?
"To avoid computational overflows and underflows, one should compute with the logarithms of posterior densities whenever possible" (Bayesian Data Analysis).
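As a sketch of that advice, with an assumed Gaussian error model: summing log-densities stays finite where the product of the raw densities underflows to zero.

```python
import math

def log_gaussian_density(y, predicted, sigma):
    """Log-density of one Gaussian observation."""
    z = (y - predicted) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))

def log_likelihood(observations, predictions, sigma):
    """Sum of log-densities; the tiny product itself is never formed."""
    return sum(log_gaussian_density(y, f, sigma)
               for y, f in zip(observations, predictions))
```

With 200 observations each five standard deviations from the prediction, multiplying the raw densities underflows to exactly 0.0, while the summed log-likelihood is a perfectly usable finite number.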
Currently, we use a lognormal distribution to sample the jump from one theta to another. That's causing an overflow, because we consider that the jump is X_0 - mean(X), where X_0 is a sampled value and mean(X) is the mean of the distribution of X_0. If X_0 is lognormal, then mean(X) can easily become a very large number.
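The blow-up is easy to reproduce: the mean of a lognormal with log-space parameters mu and sigma is exp(mu + sigma^2 / 2), so it grows extremely fast in sigma (a small illustration, not project code):

```python
import math

def lognormal_mean(mu, sigma):
    """Closed-form mean of a lognormal: exp(mu + sigma**2 / 2)."""
    return math.exp(mu + 0.5 * sigma * sigma)

# With mu = 0 the median is exp(0) = 1, yet the mean explodes with sigma:
# sigma = 1  -> mean ~ 1.65
# sigma = 10 -> mean ~ 5.2e21
```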
Currently, you can only evaluate the likelihood of the state of a variable of the system. Alternatively, the user may want to evaluate an expression, such as ERKPP / ERK instead of just ERK.
Maybe use some tool to automatically generate the documentation.
We currently have a very ad hoc treatment of compartments, removing them from the formulas when they have specific names.
We are currently using p(y | theta) as the target function. Actually, our target function is p(theta | y).
Even though we can't calculate p(theta | y) directly, we are able to calculate the ratio p(theta_A | y) / p(theta_B | y).
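By Bayes' theorem, p(theta | y) = p(y | theta) p(theta) / p(y), so the intractable evidence p(y) cancels in the ratio. A minimal sketch (function names are hypothetical):

```python
def log_posterior_ratio(log_likelihood, log_prior, theta_a, theta_b):
    """log [ p(theta_a | y) / p(theta_b | y) ]; the evidence p(y),
    common to both posteriors, cancels and is never computed."""
    return (log_likelihood(theta_a) + log_prior(theta_a)
            - log_likelihood(theta_b) - log_prior(theta_b))
```

This ratio is all a Metropolis-style sampler needs, which is why the posterior can be targeted without ever normalizing it.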
Observation error, sigma, is a parameter that should also be estimated on theta chains.
On some models the prior probability underflows to zero. We should calculate the log of the probability instead.
Sometimes the marginal likelihood can fail with an exception and leave a ProcessPool open in parallel_map.py.
Commit ed54dc3 provides a quick-and-dirty fix.
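A cleaner fix is to guarantee pool shutdown with try/finally. A minimal sketch; ThreadPool here is a thread-backed stand-in so the example stays self-contained, but the same pattern applies to the process pool in parallel_map.py (the function name is hypothetical):

```python
from multiprocessing.pool import ThreadPool  # stand-in for a process pool

def safe_parallel_map(func, items, workers=2):
    """Map func over items, guaranteeing the pool is closed even when a
    worker raises; the exception still propagates to the caller."""
    pool = ThreadPool(workers)
    try:
        return pool.map(func, items)
    finally:
        pool.close()
        pool.join()
```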
The current implementation erroneously assumes that the time points of solutions start at time zero.
On model goodwin3.xml there's a compartment in every reaction rate. I'm not sure what that means.
According to BIBm paper, experiment error can be Gamma (2.0, 3333.0) distributed, but we are not sure if it should be sampled only once or every time we calculate a likelihood.
Param priors do not have correct names!
Docstrings are not really following any pattern.
On ODEs, there should be a method that prepares the system, creating the sympy autowrap of the system and its Jacobian. System evaluation should only be possible after calling this initialization method.
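A minimal sketch of such a guard, using Python's built-in compile/eval as a stand-in for the sympy autowrap step (the class and method names are hypothetical):

```python
class ODESystem:
    """Evaluation is refused until prepare() has built the compiled
    right-hand side (in SigNetMS this would be the sympy autowrap of
    the system and its Jacobian)."""

    def __init__(self, rate_exprs):
        self._rate_exprs = rate_exprs  # rate expressions as strings
        self._compiled = None

    def prepare(self):
        # Stand-in for autowrapping the system and Jacobian.
        self._compiled = [compile(e, "<rate>", "eval")
                          for e in self._rate_exprs]

    def evaluate(self, state):
        if self._compiled is None:
            raise RuntimeError("call prepare() before evaluating the system")
        return [eval(c, {}, dict(state)) for c in self._compiled]
```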
According to supplementary materials: "The proposal distribution employed in the initialization stage is Normal on a logarithmic scale, and we were adapting the variance of the proposal distribution according to a local acceptance rate measured on each 1,000 of steps, keeping this rate between 0.25 and 0.4".
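That adaptation scheme could be sketched as follows; only the 1,000-step window and the 0.25-0.4 band come from the quote, while the multiplicative factor and the function name are assumptions:

```python
def adapt_proposal_sigma(sigma, accepted, window=1000, low=0.25, high=0.4,
                         factor=1.1):
    """Rescale the proposal standard deviation from the acceptance rate
    observed over the last window of steps."""
    rate = accepted / window
    if rate < low:
        return sigma / factor   # too few acceptances: make smaller jumps
    if rate > high:
        return sigma * factor   # too many acceptances: make bolder jumps
    return sigma
```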
Girolami might have used this estimator instead of the estimator used on BIBm.
According to BIBm paper, Michaelis constants should have a Gamma (2.0, 3333.0) prior.
We have created a few statistical tests that are very time-consuming to run every time. It may be more suitable to have a small test set that does not contain many statistical tests.
The current jumps do not have an average of zero (the chain's values keep increasing and never come back).
Some models from bioinformatics define lambda functions that are used to describe the rate of reactions.