
Comments (12)

CiaranWelsh commented on August 16, 2024

Hey Niels,

Sorry for the late response. To load steady-state data (if memory serves me correctly), you just omit the time column. Steady state is actually the default: when the COPASI input file has a time column, pycotools picks up on it and configures the experiment as a time course instead.

So for this model:

import os, glob
import pandas, numpy
import matplotlib.pyplot as plt
import seaborn
from pycotools3 import model, tasks, viz
seaborn.set_context(context='talk')         # set seaborn context for formatting output of plots

## Choose a directory for our model and analysis. Note this can be anywhere.
working_directory = os.path.abspath('')

## In this model, A gets reversibly converted to B but the backwards reaction is additionally regulated by C.
## B is reversibly converted into C.
antimony_string = """
model simple_parameter_estimation()
    compartment Cell = 1;

    A in Cell;
    B in Cell;
    C in Cell;

    // reactions
    R1: A => B ; Cell * k1 * A;
    R2: B => A ; Cell * k2 * B * C;
    R3: B => C ; Cell * k3 * B;
    R4: C => B ; Cell * k4 * C;

    // initial concentrations
    A = 100;
    B = 1;
    C = 1;

    // reaction parameters
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
end
"""

You could try a data file that looks like:

A, B, C
4, 5, 6

Remember to include any independent variables you have by specifying '_indep' after the name (so A_indep here, for example).
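Putting it together, here is a minimal sketch of a steady-state estimation, assuming mod from the loada call above (the data values and filename are made up):

# write a steady-state data file: no time column, so pycotools
# treats each row as a steady-state experiment
data = pandas.DataFrame({'A': [4], 'B': [5], 'C': [6]})
data_file = os.path.join(working_directory, 'ss_data.csv')
data.to_csv(data_file, index=False)

# configure and run a parameter estimation against the steady-state data
with tasks.ParameterEstimation.Context(mod, data_file,
                                       context='s', parameters='g') as context:
    context.set('separator', ',')
    context.set('run_mode', True)
    config = context.get_config()

pe = tasks.ParameterEstimation(config)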

Let me know if you run into trouble,

Ciaran

Niels86 commented on August 16, 2024

Dear Ciaran,

Thank you for your response. I still have some problems.
Object names that are not metabolite concentrations are not recognized by PycoTools in the parameter estimation.
For each sample in the steady-state data used for parameter estimation, I want to set some model parameters that represent enzyme concentrations, and also provide some reaction flux measurements. I tried to refer to these model objects by their antimony names, but the parameter and flux columns in my training data are skipped, i.e. not recognized, as can be seen from the warnings:

skipping variable "re01_PTS_Glc_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re02_PGI_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re03_PFK_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re04_FBA_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re05_GAPDH" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re06_ENO_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re07_PYK_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re08_LDH_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re09_PDH_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re10_PTA_ACK_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "flux(re01)" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "flux(re08)" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "flux(re10)" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"
skipping variable "re11_NOXE_indep" as it was not found in model "Model(name=default_name_1, time_unit=s, volume_unit=l, quantity_unit=mol)"

CiaranWelsh commented on August 16, 2024

Hi Niels,

These messages usually appear when the names in the experimental data files do not match the names in your model. However, if your names do match and you are still seeing these warnings, you may have stumbled on a bug. If you like, you can send through the relevant information (scripts and data files) and I'll see if I can reproduce the problem.

Ciaran

Niels86 commented on August 16, 2024

Thanks, I submitted my code, data and models to you by email.

CiaranWelsh commented on August 16, 2024

Hi Niels,

Unfortunately I have some bad news.

You are trying to have pycotools map fluxes, and I haven't built this into the software. You can only map local parameters, global quantities, initial concentrations and compartment volumes with pycotools. The only reason is that I've never needed to map a flux, so I forgot to include it. I'll log this as a feature request, though I don't have time to implement it right now.

Another issue is that I'm not sure what you are trying to map with labels that look like re01_PTS_Glc_indep. Are these also fluxes? If so, as I said, we can't map these yet, but if they are something else, name them appropriately and they should be mapped.

I have also had trouble including model4 (see below) and data file 4 in the configuration. I'm not sure whether this is an inaccuracy in the model configuration or an actual bug (I suspect the former), but again I don't have time to dig right now.

I didn't know which of your Python files or notebooks you were using, so I jotted down a quick script to test PyCoTools' behaviour (see bottom). Although this is not exactly what you wanted, I hope it can be of help as a reference.

One solution is to use the script below to configure as much of the models as you can, then switch to the manual method and configure the rest by hand in COPASI:

1. Create an output table (in reports, under output configurations) containing all your estimated parameters and the objective function value. In fact, pycotools should already have done this for you; the correct report is called parameter_estimation.
2. Instead of running via the parameter estimation task, run via the scan task: in each model, configure a repeat scan item, choose the number of repeats, add the report, give it a filename, and make sure parameter_estimation is the subtask.
3. Tick the executable button (top right corner of each COPASI task) in the scan task, and in none of the others. This makes the parameter estimation executable via command line COPASI.
4. Run the parameter estimations back to back with CopasiSE <filename>.

This is in essence what pycotools does for you. If you do this for each model, it is fairly easy to write a small script to execute them together on a cluster, or in the background at the same time; take a look at os.system and multiprocessing (see the sketch below).
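Here is a rough sketch of that last step, using subprocess and multiprocessing, and assuming CopasiSE is on your PATH and only the scan task is flagged executable in each file (the filenames are illustrative):

import subprocess
from multiprocessing import Pool

cps_files = ['model.cps', 'model_ATPase.cps', 'model_LDH_inhibition_by_O2.cps']

def run_copasi(cps_file):
    # CopasiSE runs whichever tasks are marked executable in the file
    subprocess.run(['CopasiSE', cps_file], check=True)

if __name__ == '__main__':
    # run all parameter estimations at the same time
    with Pool(processes=len(cps_files)) as pool:
        pool.map(run_copasi, cps_files)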

Sorry that I can't be of more help right now. Good luck,

Ciaran

# save this code in a python file in your 'PyCoTools' folder 
from pycotools3 import model, viz, tasks
import os

#directories
working_directory = os.path.dirname(__file__)
models_dir = os.path.join(working_directory, 'Models')

# model files
model1 = os.path.join(models_dir, 'model.cps')
model2 = os.path.join(models_dir, 'model_ATPase.cps')  # had to remove the space in the filename
model3 = os.path.join(models_dir, 'model_LDH_inhibition_by_O2.cps')
model4 = os.path.join(models_dir, 'model_NoxE_LDH_inhibition_by_O2.cps')

all_models = [model1, model2, model3, model4]
for i in all_models:
    if not os.path.isfile(i):
        raise FileNotFoundError(i)

# load into pycotools objects
for i, m in enumerate(all_models):
    all_models[i] = model.Model(m)

for i in all_models:
    if not isinstance(i, model.Model):
        raise TypeError(i)


# data files
data_file1 = os.path.join(working_directory, '1_ss_data.csv')
data_file2 = os.path.join(working_directory, '2_ss_training_data_ATPase.csv')
data_file3 = os.path.join(working_directory, '3_ss_training_data_O2_inhb_LDH.csv')
data_file4 = os.path.join(working_directory, '4_ss_training_data_noxE.csv')

all_data_files = [data_file1, data_file2, data_file3, data_file4]
for i in all_data_files:
    if not os.path.isfile(i):
        raise FileNotFoundError(i)


# There seems to be a problem with identifying "(NOXE).NOXE" in model 4. Removing model 4 for now
# My guess is that you'll want to be more selective about which parameters to estimate
with tasks.ParameterEstimation.Context([model1, model2, model3], all_data_files,
                                       context='s', parameters='l') as context:
    context.set('separator', ',')
    context.set('run_mode', False)
    config = context.get_config()

pe = tasks.ParameterEstimation(config)

# the configured model objects, accessible by model name
print(pe.models)
mod1 = pe.models.model.model
mod2 = pe.models.model_ATPase.model
mod3 = pe.models.model_LDH_inhibition_by_O2.model


mod2.open()

# note: on looking at the parameter estimation task from mod2.open, it looks like there is a mistake
#  somewhere in data file 4.

CiaranWelsh commented on August 16, 2024

Last thing Niels, regarding the re01_PTS_Glc_indep variables: it sounds like you want global quantities. Create your global quantity and map it to the reaction parameter that you want, then use the name of the global quantity in the data file (followed by _indep if you need it) to map it as a global quantity with pycotools.

In fact, I tend to use only global quantities; I find local parameters like (re1).k1 quite annoying. Using globals also puts all your parameters in a single place for you to play about with.
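For illustration, a hedged antimony sketch (the names and values are invented): re01_PTS_Glc is a top-level parameter, which COPASI imports as a global quantity, so a data column called re01_PTS_Glc_indep can be mapped straight onto it:

enzyme_model = """
model global_quantity_example()
    compartment Cell = 1;
    Glc in Cell; G6P in Cell;

    // enzyme level as a global quantity, usable directly in the rate law
    re01_PTS_Glc = 1;
    k1 = 0.1;

    R1: Glc => G6P ; Cell * re01_PTS_Glc * k1 * Glc;

    Glc = 100; G6P = 0;
end
"""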

CiaranWelsh commented on August 16, 2024

Hi Niels,

The easy way here is to define your model using antimony and then load the model into PyCoTools. Tellurium doesn't have the concept of local parameters and therefore all kinetic parameters are automatically global.

If you already have a COPASI model defined, export the SBML and then use tellurium.sbmlToAntimony to get the antimony string without too much difficulty.
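A minimal sketch of that round trip, assuming you have exported model.sbml from COPASI (the filename is illustrative):

import tellurium as te

with open('model.sbml') as f:
    sbml_string = f.read()

antimony_string = te.sbmlToAntimony(sbml_string)
print(antimony_string)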

But yes, a local to global feature could be a useful utility.

Ciaran

CiaranWelsh commented on August 16, 2024

That does look 'roundabout'. I've just remembered the model.Model.to_antimony() method. It's been a while since I've tested it, but it should return the antimony string.
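If it still works, usage should be as simple as this sketch ('model.cps' is a placeholder for your own file):

from pycotools3 import model

ant = model.Model('model.cps').to_antimony()
print(ant)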
