
dMod's People

Contributors

burgerga, dkaschek, dlill, malenkamader, marcusrosenblatt, mbueltmann, severinbang, svenjakemmer, vandensich, wmader


dMod's Issues

Modify returned "mappings"

In "*.fn ": Take care for correct mappings such that e.g. a composed prediction function can be added with another (composed) prediction function.

New class "odemodel"

Hopefully final new class: odemodel() replaces generateModel() and is a constructor for the odemodel class. The print() function shows the DLL names as well as equations, states and parameters.

ATTENTION: Do the following replacements in your code:

  1. generateModel(...) -> odemodel(...)
  2. Xs(model$func, model$extended, ...) -> Xs(model, ...)
  3. Generate a model without sensitivities by odemodel(equations, deriv = FALSE, ...)
  4. Xf(func, ...) -> Xf(model, ...)
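A minimal before/after sketch of the renaming (the equation object f and any further arguments are placeholders):

# old API
# model <- generateModel(f)
# x <- Xs(model$func, model$extended)

# new API
model <- odemodel(f)                         # with sensitivities
model_nosens <- odemodel(f, deriv = FALSE)   # without sensitivities
x <- Xs(model)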

"intern" in compile

Would there be majority support for setting intern = TRUE in compile() where system() is called? Then the whole compiler output would no longer be printed if one does not want it.
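A minimal sketch of the idea, assuming compile() assembles the compiler call into a string cmd and has access to a verbose flag:

# capture the compiler output instead of printing it
out <- system(cmd, intern = TRUE)   # returns the output as a character vector
if (verbose) cat(out, sep = "\n")   # only print when explicitly requested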

data argument in normL2

Since the introduction of the possibility to specify a lower limit of quantification (lloq), the data argument has to contain the column lloq. When it is of type datalist, everything works fine. However, to ensure backward compatibility, I would suggest to

  1. Either automatically convert the data argument by as.datalist() within normL2 or
  2. output a warning when the argument is not of type datalist and no lloq column was provided

It took me a while to figure this out, so this could be helpful for others ;-)
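A rough sketch of option 1, assuming normL2 can simply coerce its data argument (and combining it with the warning from option 2):

# inside normL2(): coerce plain data to a datalist so the lloq handling applies
if (!inherits(data, "datalist")) {
  warning("'data' is not a datalist; converting with as.datalist().")
  data <- as.datalist(data)
}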

"+.fn" with many individual parameters

If there are many individual parameters, it is unnecessary to construct zero-derivatives for these parameters in all the conditions where the individual parameter is not needed.
One solution is to construct individual "fn"s and add their results rather than the "fn"s themselves.

I would prefer a solution in which the "+.fn" operator checks whether an individual parameter is required in a condition and passes only the required parameters.

Example: obj = obj_a(p_a) + obj_b(p_b) currently does
obj(c(p_a, p_b)) = obj_a(c(p_a, p_b)) + obj_b(c(p_a, p_b)).
It would be better to have obj(c(p_a, p_b)) = obj_a(p_a) + obj_b(p_b)

The problem is function concatenation "*.fn", which requires the zero-derivatives to be present, since a parameter could show up in a later function, such as g. Any ideas here?
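A very rough sketch of the subsetting idea (the operator name and the use of getParameters() on the summands are assumptions, and the alignment of derivatives to the full parameter set c(p_a, p_b) is left out, which is exactly the open question above):

# hypothetical "+" that passes each summand only the parameters it declares
add_objfn_sketch <- function(obj_a, obj_b) {
  function(pars, ...) {
    res_a <- obj_a(pars[intersect(names(pars), getParameters(obj_a))], ...)
    res_b <- obj_b(pars[intersect(names(pars), getParameters(obj_b))], ...)
    res_a + res_b   # derivatives would still need padding to c(p_a, p_b)
  }
}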

Sensitivities and forcings

When forcings are in use, dMod somehow has problems with propagating derivatives.

In the output of the last three rows, pay attention to out1.A and out1.k of the three different observation functions.

I'll start working on this on Tuesday.


## Generate a compiled ODE model from an equation vector
## The model will not return sensitivities for "switch"
## Files will be generated in your working directory!
library(magrittr)
library(dMod)


## Generate the same model from an equation list
f <- addReaction(NULL, from = "", to = "A", rate = "switch*F", description = "production")
f <- addReaction(f   , from = "A", to = "", rate = "k*A", description = "degradation")
print(f)

model <- odemodel(f, forcings = "F", fixed = "switch")
print(model)


# create forcings
forc1 <- data.frame(name = "F", time = seq(0,5, 0.1), value = sin(seq(0,5,0.1)))
forc2 <- data.frame(name = "F", time = seq(0,5, 0.1), value = exp(-seq(0,5,0.1)))
forc3 <- data.frame(name = "F", time= 0,              value = 0.1)


x <- Xs(model, forc1, condition = "forc1") + Xs(model, forc2, condition = "forc2") + Xs(model, forc3, condition = "forc3")


g <- Y(c(out1 = "F * A", out2 = "F", out3 = "A"), x)
g2 <- Y(c(out1 = "F * A", out2 = "F", out3 = "A"), as.eqnvec(f))
g3 <- Y(c(out1 = "F * A", out2 = "F", out3 = "A"), as.eqnvec(f), parameters = c("k", "switch"))



times <-  seq(0,5, 0.01)
pars <- c(A = 1, switch = 0.5, k = 0.5)


pred <- (g*x)(times, pars)  
pred2 <- (g2*x)(times, pars)  
pred3 <- (g3*x)(times, pars)  

plot(pred)


# look at sensitivities ----

# helper function to get sensitivities
getSens <- . %>% lapply(. %>% attr("sens"))

pred %>% getSens()  %>% lapply(colnames)
pred2 %>% getSens() %>% lapply(colnames)
pred3 %>% getSens() %>% lapply(colnames)


pred %>% getDerivs()  %>% lapply(colnames)
pred2 %>% getDerivs() %>% lapply(colnames)
pred3 %>% getDerivs() %>% lapply(colnames)


pred %>% getDerivs() %>% lapply(. %>% .[1:2, ] %>% round(4)) %>% .[[3]]
pred2 %>% getDerivs() %>% lapply(. %>% .[1:2, ] %>% round(4)) %>% .[[3]]
pred3 %>% getDerivs() %>% lapply(. %>% .[1:2, ] %>% round(4)) %>% .[[3]]

c.datalist

A good alias for "+.datalist" would be c.datalist, which could also take more than two arguments.
I will do this soon myself; I'm just abusing GitHub issues as a to-do list, sorry for the spam.
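A minimal sketch, assuming "+.datalist" already combines two datalists:

# fold the existing binary "+" over any number of datalists
c.datalist <- function(...) Reduce(`+`, list(...))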

RJSONIO issue in steady state calculation

When using the steadyStates functionality what can/will happen at some point is

> steadyStates(f, file="steadyStates")
Error in .Call("R_fromJSON", content, as.integer(sum(simplify)), nullValue,  : 
  "R_fromJSON" not resolved from current namespace (RJSONIO)

It can be circumvented by restarting your R session. It is not a bug in dMod, but maybe it could be added to the steadyStates documentation. The bug is caused by RJSONIO in rPython (see more details here: r-lib/devtools#427; it seems even Hadley Wickham gave up on it), but since the latest rPython release is from 2015 I do not see this getting fixed.

So an alternative solution would be switching to the reticulate package developed by the RStudio people. It works nicely with different Python environments, and it even has guidelines on how to use it in your R package and how to include it in unit testing: https://rstudio.github.io/reticulate/articles/package.html

Feature request: Include ordering column in as.parframe

Currently I'm using mstrust with 2000 fits, and when I get the results back I have no clue what range of values I'm getting. So sometimes when I do

plotValues(myframe, tol = 0.01, value < 100)
# or
plotPars(myframe, tol = 0.01, value < 100)

I get nothing, and sometimes I get an error because I get everything :P

Often I just want to have a look at the best 10 or 50, so what I'm doing now is

plotValues(myframe %>% rowid_to_column("rank"), tol = 0.01, rank <= 20)

Maybe it is useful to include this column already in as.parframe?

Which function goes into which file

Please put generics, e.g. as.parvec(), as.prdlist(), etc., into the corresponding R file, i.e. parClass.R or prdClass.R.

I would also like to collect general methods such as print(), summary(), etc. in these files. Other methods that do something "special" can go into other R files. That's my idea, at least.

compartments in steady state constraints

In some of my projects I need different compartments with different volumes. The volume information is not stored in the model CSV but in the equation list. Can we implement this in the steady state tool?

boolean argument for insert() and define()

Hi folks ;-)

the argument conditionMatch for insert() and define() works pretty well in most cases. I wondered why we don't (additionally) provide the possibility to specify a boolean argument based on the contents of covtable. The column names of covtable could be used as identifiers.

I am thinking of something like
trafo <- insert(trafo, "x~x_scale", conditionMatch=NULL, newArgument=(!is.na(siRNA) & 2ndStimulus > 10), x=c("scaleSTAT123"), scale = scale),

where siRNA and 2ndStimulus are some of the column names in covtable.

I am not yet sure about a clever implementation, since newArgument should probably be provided as a ... argument.

For me, something like this would be very helpful and more intuitive than conditionMatch. But I also see advantages in conditionMatch. It probably depends on how you set up your model. So let me know what you think of this idea!?

Best,
Marcus

deriv = F by default

Hey,
I'd like to set the default of the deriv-arguments of parfn, prdfn, obsfn, *.fn, +.fn to FALSE, and keep deriv = TRUE only for objfn-like objects.
Is anyone against that?

Save all results of msnarrow in one fitlist.

Right now, the fitlist returned from msnarrow only holds the results of its last call to mstrust. It might be a good idea to return a fitlist that holds the results of all fits.

vcov defined in dMod and stats

Hi,

I do not know when this was implemented, but dMod provides a function vcov that masks the standard stats::vcov.
This can lead to problems for "good old blotIt" (not blotIt2) functions. I thought this could be of general interest, in case your code outlives the time your project takes.... ;-)

Is it possible to rename dMod::vcov or to use a different class?

Best,
Marcus

plotCombined(), plotData(), plotPrediction()

When conditions are numeric values or can be interpreted as such, the plot functions do not work anymore. The problem seems to have appeared with the introduction of dplyr in the plot functions.

"[.parframe"

This lunacy has to end. I know, the change has to be implemented carefully, but it's very necessary to give this subset operator back its natural behavior of data.frame subsetting. Who is with me?

Options for constraintExp()

Instead of "sigma" and "k" let the user define "limit" and "borderwidth" where
"limit" is defined by the parameter distance where the prior exceeds qchisq(p = 0.95, df = 1)
"borderwidth" is the distance from limit to the parameter where the prior is below a threshold, e.g. 0.1

getParameters.combined

#There is a bug in getParameters(g*x)
library(dplyr) #for pipe
library(dMod)

# Model definition (text-based, scripting part)

f <- NULL %>%
  addReaction("A", "B", "k1*A", "translation") %>%
  addReaction("B", "", "k2*B", "degradation") %>%
  as.eqnvec()

events <- eventlist(var = "A", time = 5, value = "A_add", method = "add")

x <- odemodel(f, events = events) %>% Xs

g <- Y(c(Bobs = "s1*B"), x, compile = T, modelname = "obsfn")

conditions <- c("a", "b")

There is a bug in getParameters(g*x): it does not get the parameters from g.

getParameters((g*x))
union(getParameters(g), getParameters(x))

plotResiduals does not plot predicted vs observed values

I am fitting an ODE model with three different conditions to a set of data. I thought the function plotResiduals generates the classical "predicted vs observed" plot, but when using it I obtained only one value per condition, which I did not find informative at all. A function plotting the individual weighted residuals and the predicted vs observed values would be very useful.

conditional parameter value

Hi there, I am working on a simple piecewise ODE model. I would like to set a condition such that a parameter a = 0 if t <= t_on and a = value otherwise. I would also like to estimate t_on and a. Is there any way I can define the ODE like that?
Thanks,

Paola
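One possible workaround, sketched below: treat a as an additional state with zero dynamics and switch it with an event, similar to the eventlist() example further up in this tracker. That the event time can be given as a parameter name (t_on) to make it estimable is an assumption here and should be checked.

library(dMod)

# a enters the model as a state with da/dt = 0, starting at 0,
# and is replaced by a_value at time t_on via an event
f <- addReaction(NULL, from = "", to = "x", rate = "a*k_prod", description = "production driven by a")
f <- addReaction(f, from = "x", to = "", rate = "k_deg*x", description = "degradation")
f <- addReaction(f, from = "", to = "a", rate = "0", description = "a is constant between events")

events <- eventlist(var = "a", time = "t_on", value = "a_value", method = "replace")
model <- odemodel(f, events = events)
x <- Xs(model)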

github lags behind CRAN

I don't know if this is intended, but the GitHub DESCRIPTION shows version 0.4.3, while on CRAN we have 1.0.0.

confusion over chi^2 and likelihood

Hi,

Background of the issue: At the moment, normL2 outputs chi^2 for cases without an error model and the log-likelihood (llh) for cases with one. For the implementation of the PEtab test functions we adapted our normL2 function to be able to output what is requested. It turned out that the adaptation was not working with the latest master pushes by Daniel2.

Of course we can adapt with the new version again. But this topic seems to be of general importance.

For the PEtab tests we need a possibility to output both chi^2 and llh independent of the availability of an error model.

Suggestion by Daniel2 and me: add an additional flag chi2 with default FALSE to normL2. Thus, by default the log-likelihood is output, and you get chi^2 with chi2 = TRUE.

This would of course mean that running old scripts with the new version gives a different objective value, which might be confusing. But shouldn't we all have saved the package versions together with the scripts and results!? ;-)
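For reference (a standard Gaussian result, not dMod-specific): with data y, prediction f and error sigma, the two quantities are related by

# chi2    = sum(((y - f)/sigma)^2)
# -2*logL = chi2 + sum(log(2*pi*sigma^2))

so reporting one does not lose the other as long as sigma is available.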

Best,
Marcus

compile(..., purge = FALSE)

I propose to introduce a purge argument that automatically removes the .c and .o files. The default could be FALSE, so the original behaviour of the function would be untouched.

If no one disagrees, I'll introduce it, probably on Sunday.
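A rough sketch of the proposed behaviour, assuming compile() knows the base name of its generated files as modelname:

# at the end of compile(): optionally clean up intermediate files
if (purge) {
  leftovers <- paste0(modelname, c(".c", ".o"))
  file.remove(leftovers[file.exists(leftovers)])
}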

Symbolic sensitivities bug, if all states were fixed

prodSymb() produces an error when dv = NULL. This happens when the user tries to fix all states of a given model a priori (argument fixed in odemodel). The same probably holds for the parameters.

Use an if statement and output a warning!?

mstrust() center could be list

It would be a nice feature to allow more than one center in mstrust(). It could be helpful to write additional as.parframe methods for the classes matrix, list and data.frame. Then center could be a parframe.

Provide a function to strip attributes from a fitlist

The fit function mstrust returns a fitlist with attributes attached that are only needed to investigate the fit itself. Usually, the fitlist is interesting because of the fit parameter values it holds. This function should strip any extra attributes from the fitlist.
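A possible sketch (the list of attributes to keep is a guess; the real parframe/fitlist classes may need more care):

# keep only the attributes needed for parframe/data.frame-like behaviour
stripFitlist <- function(fitlist) {
  keep <- c("names", "row.names", "class", "parameters", "metanames", "obj.attributes")
  attributes(fitlist) <- attributes(fitlist)[intersect(names(attributes(fitlist)), keep)]
  fitlist
}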

IQRtools

Hi everyone,

the newly introduced dependency IQRtools can be found here:

http://iqrtools.intiquan.com/

Best, Daniel

PS: Daniel, would you add Svenja? Then she can also read all the issues. Her GitHub name is svkemmer.

Unit testing objects with environments

I tried converting the plotting example into a unit test, which failed because ggplots carry some environments with them, which change each time R is restarted.
This problem also appears when trying to compare dMod objects coming from function factories such as Xs() and Y().

For the plotting, I think of solving it with the following function and testing the hashes of the results of these calls:

representatives_from_ggplot <- function(myplot) {
  # extract objects of the plot which don't depend on environments created when invoking the plot
  nlayers <- length(myplot[["layers"]])
  lapply(1:nlayers, function(i) {
    ggplot2::layer_data(myplot, i)
  })
}

I'm not sure, however, whether ggplot creates the same layer data on every machine, which is why I'm hesitant to push it.

In general, however, it would be nice to have a function strip_environments which strips the environments off the dMod objects to create session-independent hashes.
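A sketch of one way to sidestep environments for hashing (the helper name and scope are hypothetical, not part of dMod):

# represent a dMod function by its deparsed body plus its non-environment attributes,
# so the enclosing environments do not enter the hash
strip_environments <- function(f) {
  stopifnot(is.function(f))
  attrs <- attributes(f)
  attrs <- attrs[!vapply(attrs, is.environment, logical(1))]
  list(body = deparse(f), attributes = attrs)
}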

Advent calendar

Daniel and I had this idea yesterday, which was rather a joke that time, but it could be actually a nice idea and a good push forward towards a comprehensive documentation:
Who would be in for preparing little advent calendar doors for documentation? One thing that often came up during the workshop was that people didn't even know which functions exist, so the advent calendar could consist of little snippets like "did you know that you can create implicit parameter transformations?" or "a brief overview of all pipe-friendly functions". The goal would be to have an overview rather than deep-diving vignettes.
If we split the work, it could be done by each of us with manageable effort. What do you think?

Generate fluxes in generateEquations

Intended structure: return a list mylist with names(mylist) = species. For each species, write the influxes and effluxes as character strings. Hint: pay attention to volumes.
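A tiny sketch of the intended return value (species names and flux strings are purely illustrative; volume handling is omitted):

mylist <- list(
  A = list(influxes = c("k_prod*F"), effluxes = c("k_deg*A")),
  B = list(influxes = c("k_deg*A"),  effluxes = character(0))
)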

Incorporation of steadyState() and quasiSteadyState()

I am planning to adapt our steadyState function and to also provide a Python-based quasiSteadyState function for dMod.
Is anyone currently using steadyStates() at all? If yes, what are you missing (except for compartments, which Daniel already mentioned)? If no, why not? ;-)

rjson dependency missing from DESCRIPTION

 > p_install_gh("dkaschek/dMod")
Using github PAT from envvar GITHUB_PAT
Downloading GitHub repo dkaschek/dMod@master

✓  checking for file ‘/tmp/RtmpdVALyk/remotes12aa4f69cb8/dkaschek-dMod-31f1e19/DESCRIPTION’ (353ms)
─  preparing ‘dMod’:
✓  checking DESCRIPTION meta-information ...
─  checking for LF line-endings in source and make files and shell scripts
─  checking for empty or unneeded directories
─  looking to see if a ‘data/datalist’ file should be added
─  building ‘dMod_1.1.0.tar.gz’
   
Installing package into ‘/home/gerhard/R/x86_64-pc-linux-gnu-library/3.6’
(as ‘lib’ is unspecified)
* installing *source* package ‘dMod’ ...
** using staged installation
** R
** data
** inst
** byte-compile and prepare package for lazy loading
Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : 
  there is no package called ‘rjson’
Calls: <Anonymous> ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart
Execution halted
ERROR: lazy loading failed for package ‘dMod’
* removing ‘/home/gerhard/R/x86_64-pc-linux-gnu-library/3.6/dMod’
* restoring previous ‘/home/gerhard/R/x86_64-pc-linux-gnu-library/3.6/dMod’
installation of package ‘/tmp/RtmpdVALyk/file12aa380ba085/dMod_1.1.0.tar.gz’ had non-zero exit status
The following packages were installed:
dMod

Unexpected behavior in insert()

Hi,

actually, you might also call this a bug. The insert function does not put brackets around the newly inserted expression, as used to be the case in replaceSymbols and repar.

Take the following example:
test <- c(A = "f/g")
and look at
insert(test, x~x*ratio_x, x=c("f","g"))

The result is f*ratio_f/g*ratio_g and not (f*ratio_f)/(g*ratio_g) as one would expect.

I am sure this was not intended!?

Best,
Marcus

getParameters()

Add a conditions argument to obtain just the parameters required for the requested conditions.
