quaquel / emaworkbench

Workbench for performing exploratory modeling and analysis.

License: BSD 3-Clause "New" or "Revised" License

Topics: modeling, python, simulation

emaworkbench's Introduction


Exploratory Modeling workbench

Exploratory Modeling and Analysis (EMA) is a research methodology that uses computational experiments to analyze complex and uncertain systems (Bankes, 1993). That is, exploratory modeling aims to offer computational decision support for decision making under deep uncertainty and for robust decision making.

The EMA workbench aims to provide support for performing exploratory modeling with models developed in various modeling packages and environments. Currently, the workbench offers connectors to Vensim, NetLogo, Simio, Vadere, and Excel.

The EMA workbench offers support for designing experiments, performing them (including parallel processing, both on a single machine and on clusters), and analyzing the results. To get started, take a look at the high-level overview, the tutorial, or dive straight into the details of the API.

The EMA workbench is currently under development at Delft University of Technology. If you would like to collaborate, open an issue or discussion, or contact Jan Kwakkel.

Documentation

Documentation for the workbench is available at Read the Docs, including an introduction to Exploratory Modeling, tutorials, and documentation on all the modules and functions.

There are also many example models available at ema_workbench/examples, both for pure Python models and for the different connectors. Release notes for each new version are available at CHANGELOG.md.

Installation

The workbench is available from PyPI, and currently requires Python 3.9 or newer. It can be installed with:

pip install -U ema_workbench

To also install some recommended packages for plotting, testing and Jupyter support, use the recommended extra:

pip install -U ema_workbench[recommended]

There are many more options for installing the workbench, including installing connector packages, editable installs for development, installs of custom forks and branches, and more. See Installing the workbench in the docs for all options.
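For example, an editable install for development (a sketch, assuming a local clone of the repository) typically looks like:

git clone https://github.com/quaquel/EMAworkbench.git
cd EMAworkbench
pip install -e .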

Contributing

We greatly appreciate contributions to the EMA workbench! Reporting issues such as bugs or unclarities in the documentation, opening a pull request with code or documentation improvements, or opening a discussion with a question, suggestion, or comment all help us a lot.

Please check CONTRIBUTING.md for more information.

License

This repository is licensed under the BSD 3-Clause License. See LICENSE.md.

emaworkbench's People

Contributors

deepsource-autofix[bot], dependabot[bot], eb4890, ewouth, floristevito, github-actions[bot], irene-sophia, jamesphoughton, jasonrwang, jeffreydillonlyons, jpn--, marcjaxa, mikhailsirenko, pre-commit-ci[bot], quaquel, rhysits, robcalon, steipatr, thesethtruth, tristandewildt, willu47, wlauping


emaworkbench's Issues

Cannot use analysis tools: Feature Scoring

I am running a NetLogo model through the EMA Workbench, and I am able to output a correct 3D array. However, feature scoring requires a 2D array. See the error below:

File ".../venv/lib/python3.6/site-packages/sklearn/utils/validation.py", line 565, in check_array % (array.ndim, estimator_name)) ValueError: Found array with dim 3. Estimator expected <= 2.

Using a (very) slightly modified version of your example from the documentation, this is what I am attempting to run:

from ema_workbench.analysis import feature_scoring
import matplotlib.pyplot as plt
import seaborn as sns

experiments, outcomes = results_output
fs = feature_scoring.get_feature_scores_all(experiments, outcomes)
sns.heatmap(fs, cmap='viridis', annot=True)
plt.show()

I am running into a similar issue when trying to use PRIM or dimensional stacking. Do you have a solution for this? Am I running the model incorrectly?

I have forked your project. For my exact implementation please see this file.
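A common workaround, sketched here under the assumption that each outcome is a time series whose first axis indexes experiments, is to reduce every outcome to a scalar (for example, its final value) before scoring:

from ema_workbench.analysis import feature_scoring
import matplotlib.pyplot as plt
import seaborn as sns

experiments, outcomes = results_output

# reduce each (n_experiments, n_timesteps) outcome to its final value,
# giving the 2d input that feature scoring expects
outcomes_2d = {name: values[:, -1] for name, values in outcomes.items()}

fs = feature_scoring.get_feature_scores_all(experiments, outcomes_2d)
sns.heatmap(fs, cmap='viridis', annot=True)
plt.show()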

Logging in v2 is slow

Logging in the v2 branch is very slow, as it calls into "inspect" multiple times for every debug message, regardless of whether debug logging is enabled.
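The standard-library remedy is to guard the expensive work behind Logger.isEnabledFor; a minimal sketch, not the workbench's actual code:

import inspect
import logging

logger = logging.getLogger("EMA")

def log_call_site():
    # only pay for the expensive inspect call when DEBUG is actually enabled
    if logger.isEnabledFor(logging.DEBUG):
        frame = inspect.stack()[1]
        logger.debug("called from %s:%s", frame.filename, frame.lineno)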

upcoming jpype release

Hi,

I'd like to invite you to test the upcoming jpype version before we push the release to PyPI, so your users don't suffer from the API changes we had to make. For instance, strings are no longer automatically converted from Java to Python unless explicitly desired.

pip install git+https://github.com/jpype-project/jpype@master should install a recent version.

Thanks for checking and reporting back to us.

Best regards

OrdinalParameter

Consider adding an ordinal parameter; this would sit between an IntegerParameter and a CategoricalParameter. Basically, you pass an ordered collection of values.
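A hypothetical sketch of what such a parameter could look like, assuming it subclasses the existing CategoricalParameter and simply preserves the order of the values passed in (this class does not exist in the workbench):

from ema_workbench import CategoricalParameter

class OrdinalParameter(CategoricalParameter):
    """Hypothetical: categories with a meaningful order, e.g. low < medium < high."""

    def __init__(self, name, values):
        # values is an ordered collection; the given order is preserved
        super().__init__(name, values)
        self.ordered_values = tuple(values)

ordered = OrdinalParameter("service_level", ("low", "medium", "high"))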

NetLogo 6.0.4: cannot set up chooser to a specific value in EMA using CategoricalParameter or Constant

The chooser is called "global-search-strategy", and its value can be chosen from three different global variables.

The NetLogo code works perfectly when run in NetLogo itself. However, I set up the experiments using the Exploratory Modeling Workbench (EMA), which is built on pyNetLogo and jpype, and when I try to overwrite the default value of global-search-strategy, it causes an error. If I use the default (the last saved value of global-search-strategy) without overwriting it, everything works just fine.

I am not exactly sure which part of the NetLogo code causes the problem, since I set it up in EMA and it works just fine in pure NetLogo.

I am not exactly sure whether it is the same type of issue as #818.

Screenshots of the NetLogo setup code, the corresponding Python code, and the resulting Python error were attached to the original issue.

Any help would be useful, thanks!
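One possible culprit, offered here as an assumption rather than a confirmed diagnosis: a NetLogo chooser that holds strings must be set to a quoted string, so the Python-side value needs embedded quotes (the strategy names below are placeholders):

from ema_workbench import CategoricalParameter

# note the embedded double quotes: NetLogo then receives
#   set global-search-strategy "strategy-a"
# rather than a bare (undefined) symbol
model.uncertainties = [
    CategoricalParameter(
        "global-search-strategy",
        ['"strategy-a"', '"strategy-b"', '"strategy-c"'],
    )
]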

Nan and integers

Integer dtypes cannot be assigned a NaN. This means that when running across the union of the uncertainties of multiple model structures, the integer columns won't contain NaNs, but rather the arbitrary defaults left by np.empty.
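A two-line illustration of the underlying numpy behavior:

import numpy as np

a = np.empty(3, dtype=int)  # arbitrary uninitialized integers, not NaN
a[0] = np.nan               # raises ValueError: cannot convert float NaN to integer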

wondering about how to define constraints in directed search

I have read through the docs and found little information about how to specify constraints in a constrained optimization problem.

The only explanation about specifying constraints in the manual is that "A constraint can be applied to the model input parameters (both uncertainties and levers), and outcomes. A constraint is essentially a function that should return the distance from the feasibility threshold. The distance should be 0 if the constraint is met."

An example is given for the lake pollution problem:
constraints = [Constraint("max pollution", outcome_names="max_P", function=lambda x: max(0, x - 1))]

My understanding of this constraint is that the "max_P" objective should be smaller than (or equal to) 1?
I did not find any information about the Constraint class in either the docs or the GitHub repository, so I guess the outcome_names="max_P" parameter tells the optimization algorithm that this constraint applies to the "max_P" objective (outcome)? Therefore, if I want to define constraints on uncertainties or levers, should the corresponding parameters be uncertainty_names="an uncertainty name" or lever_names="a lever name"?
As to the function parameter, if I want to define a constraint a <= x <= b, how should I write the function? Also, how should I define equality constraints like sum(wj) = 1, as required by the lake model? Another question: is it possible to use a user-defined function instead of a lambda expression to define the constraints (function=a_user_defined_function)?
And should multiple constraints be defined as follows?
constraints = [Constraint(...), Constraint(...), Constraint(...), ...]
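Going only by the distance semantics quoted above, a two-sided bound a <= x <= b and an equality constraint can both be written as distance functions. A hedged sketch: in recent versions the keyword for constraining inputs appears to be parameter_names (rather than uncertainty_names/lever_names), the lever names w0..w2 are hypothetical, and it is assumed the function receives one argument per listed name:

from ema_workbench import Constraint

a, b = 0.5, 1.5

constraints = [
    # a <= max_P <= b: distance is how far the value falls outside [a, b]
    Constraint("bounded pollution", outcome_names="max_P",
               function=lambda x: max(0, a - x, x - b)),
    # equality sum(w_j) = 1 over levers: distance is |sum - 1|
    Constraint("weights sum to one",
               parameter_names=["w0", "w1", "w2"],
               function=lambda *w: abs(sum(w) - 1)),
]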

Thanks very much if you can give some instructions!!!
Best regards,
Jyang

prim results in ipython notebook

The current way of printing prim's tables to stdout does not produce very readable tables when viewing them in an IPython notebook. Moreover, it might be better to transpose the table, so that the boxes are columns and the uncertainties are rows.

Morris sampler __init__()

The initialization function of the Morris sampler requires the positional arguments num_levels and grid_jump, while it is called with no arguments in perform_experiments(), like:

sampler = SAMPLERS[uncertainty_sampling]()

SALib's morris.sample() does not require these inputs, though, since num_levels defaults to 4 and grid_jump is not even an argument of that function. Therefore, the Morris sampler of ema_workbench does not need these inputs... or the function call in perform_experiments() should cover the Morris case.

'NoneType' object has no attribute 'Real'

I get error messages when I implement "Directed search with the Exploratory Modeling workbench" following the documentation. (https://emaworkbench.readthedocs.io/en/latest/indepth_tutorial/directed-search.html)

input:

from ema_workbench import (RealParameter, ScalarOutcome, Constant, Model)
from dps_lake_model import lake_model

model = Model('lakeproblem', function=lake_model)
.......
#________________________________________________________
from ema_workbench import MultiprocessingEvaluator
from ema_workbench import ema_logging

ema_logging.log_to_stderr(ema_logging.INFO)

with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.optimize(nfe=250, searchover='levers',
                                 epsilons=[0.1,]*len(model.outcomes),
                                 constraints=constraints)
#___________________________________________________________

Error message:
[MainProcess/INFO] pool started
[MainProcess/INFO] terminating pool

AttributeError Traceback (most recent call last)
in ()
8 results = evaluator.optimize(nfe=250, searchover='levers',
9 epsilons=[0.1,]*len(model.outcomes),
---> 10 constraints=constraints)

C:\Anaconda3\lib\site-packages\ema_workbench\em_framework\evaluators.py in optimize(self, algorithm, nfe, searchover, reference, constraints, **kwargs)
179 return optimize(self._msis, algorithm=algorithm, nfe=int(nfe),
180 searchover=searchover, evaluator=self,
--> 181 reference=reference, constraints=constraints, **kwargs)
182
183 def robust_optimize(self, robustness_functions, scenarios,

C:\Anaconda3\lib\site-packages\ema_workbench\em_framework\evaluators.py in optimize(models, algorithm, nfe, searchover, evaluator, reference, convergence, constraints, **kwargs)
503
504 problem = to_problem(models, searchover, constraints=constraints,
--> 505 reference=reference)
506
507 # solve the optimization problem

C:\Anaconda3\lib\site-packages\ema_workbench\em_framework\optimization.py in to_problem(model, searchover, reference, constraints)
119
120 '''
--> 121 _type_mapping = {RealParameter: platypus.Real,
122 IntegerParameter: platypus.Integer,
123 CategoricalParameter: platypus.Permutation}

AttributeError: 'NoneType' object has no attribute 'Real'
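For context (an inference from the traceback rather than part of the original report): platypus being None at the point where platypus.Real is looked up suggests the optional optimization dependency is not installed. It is distributed on PyPI as platypus-opt:

pip install platypus-opt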

Issue with Python 3.6

Hello all,

Today I re-installed the latest Anaconda 5.0.1 for Windows (Python 3.6) and the EMA Workbench. I found that my EMA code can no longer run... I am not sure whether the following error is caused by Python or by something else...

Thanks,
Frank

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline

import mpld3

fig = box1.show_tradeoff()
fig.set_size_inches((10, 10))
mpld3.display()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-9-75d427415809> in <module>()
      8 fig = box1.show_tradeoff()
      9 fig.set_size_inches((10, 10))
---> 10 mpld3.display()

~\Anaconda3\lib\site-packages\mpld3\_display.py in display(fig, closefig, local, **kwargs)
    303     if closefig:
    304         plt.close(fig)
--> 305     return HTML(fig_to_html(fig, **kwargs))
    306 
    307 

~\Anaconda3\lib\site-packages\mpld3\_display.py in fig_to_html(fig, d3_url, mpld3_url, no_extras, template_type, figid, use_http, **kwargs)
    249                            d3_url=d3_url,
    250                            mpld3_url=mpld3_url,
--> 251                            figure_json=json.dumps(figure_json, cls=NumpyEncoder),
    252                            extra_css=extra_css,
    253                            extra_js=extra_js)

~\Anaconda3\lib\json\__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    236         check_circular=check_circular, allow_nan=allow_nan, indent=indent,
    237         separators=separators, default=default, sort_keys=sort_keys,
--> 238         **kw).encode(obj)
    239 
    240 

~\Anaconda3\lib\json\encoder.py in encode(self, o)
    197         # exceptions aren't as detailed.  The list call should be roughly
    198         # equivalent to the PySequence_Fast that ''.join() would do.
--> 199         chunks = self.iterencode(o, _one_shot=True)
    200         if not isinstance(chunks, (list, tuple)):
    201             chunks = list(chunks)

~\Anaconda3\lib\json\encoder.py in iterencode(self, o, _one_shot)
    255                 self.key_separator, self.item_separator, self.sort_keys,
    256                 self.skipkeys, _one_shot)
--> 257         return _iterencode(o, 0)
    258 
    259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,

~\Anaconda3\lib\site-packages\mpld3\_display.py in default(self, obj)
    136             numpy.float64)):
    137             return float(obj)
--> 138         return json.JSONEncoder.default(self, obj)
    139 
    140 

~\Anaconda3\lib\json\encoder.py in default(self, o)
    178         """
    179         raise TypeError("Object of type '%s' is not JSON serializable" %
--> 180                         o.__class__.__name__)
    181 
    182     def encode(self, o):

TypeError: Object of type 'ndarray' is not JSON serializable

optimization and memory issues

The optimization runs slower over time. The cause is that every new population member is checked against all previous members to assess whether it has already been evaluated. This is smart when evaluating a member is expensive. However, with very fast models and a large population, this lookup can become more time consuming than simply re-evaluating the member. In such a case, you would like to be able to skip the lookup.

Stack graph

figure_types['energy_types_stack'] = ['Relative share oil GE',
                                      'Relative share gas GE',
                                      'Relative share coal GE',
                                      'Relative share hydro GE',
                                      'Relative share nuclear GE',
                                      'Relative share renewables GE']
outcomes_of_interest = figure_types[figure_type]
cases_copy = outcomes
y_maximum = 1
last_line = np.zeros(time.shape[0])
fig1 = plt.figure()  # figsize=(16, 3)
ax1 = fig1.add_subplot(111)

for j in range(len(outcomes_of_interest)):
    outcome = outcomes_of_interest[j]
    running_max = np.max(last_line + cases_copy[outcome][index])
    ax1.plot(time,
             last_line + cases_copy[outcome][index],
             COLOR_LIST[j],
             label=ylabels[outcome])
    ax1.fill_between(time,
                     last_line,
                     last_line + cases_copy[outcome][index],
                     facecolor=COLOR_LIST[j],
                     label=ylabels[outcome],
                     alpha=0.5)
    last_line = last_line + outcomes[outcome][index]
    handles, labels = ax1.get_legend_handles_labels()
    leg = ax1.legend(handles[::-1],
                     labels[::-1],
                     loc=0,
                     prop={'family': 'calibri', 'size': 12})

Formatting issue

The EMA framework is very good indeed. But to enable more flexible formatting, it would help to add some formatting parameters. For example, a format parameter could be added to PrimBox._inspect_graph (in prim.py) so that users can easily customize the number format in their PRIM box inspection figures.

Improve documentation of result structure?

I would find it helpful if the structure of the results were explained in more detail, maybe with some further examples of how to access different aspects of the results.

What I could find so far:

A. https://emaworkbench.readthedocs.io/en/latest/indepth_tutorial/general-introduction.html?highlight=experiments%5B%27policy

By default, the return of perform_experiments is a tuple of length 2. The first item in the tuple is the experiments. The second item is the outcomes. Experiments and outcomes are aligned by index. The experiments are stored in a numpy structured array, while the outcomes are a dict with the name of the outcome as key, and the values are in a numpy array.

B. https://emaworkbench.readthedocs.io/en/latest/ema_documentation/em_framework/evaluators.html
returns the experiments as a numpy recarray, and a dict with the name of an outcome as key and the associated scores as a numpy array. Experiments and outcomes are aligned on index.

If someone is quite new to Python, like me, it might be surprising that "an array can be used as a dictionary":

experiments['policy']

=> It might be helpful to show that the column titles of the recarray can be accessed with
experiments.dtype.names and that it has, for example, the following structure:

('x1', 'x2', 'scenario_id', 'policy', 'model')

It might also be useful to show how to get all outcomes for the first experiment and how to plot them, e.g. for time series results:

import pandas as pd
from matplotlib import pyplot as plt

keyList = list(outcomes.keys())
outcomeOfFirstExperiment = {key: values[0] for key, values in outcomes.items()}

df = pd.DataFrame(outcomeOfFirstExperiment)  
df.plot(x=keyList[0])
plt.show()   

In my opinion, such small examples might make it easier to understand the structure of the results.

Related: #54

How to publish Vensim model to be compatible with EMA Workbench?

I am able to run the Vensim example with model.vpm. However, I am not able to adapt the example for my needs.

If I just open the model.mdl file in Vensim DSS 6.0b and re-publish it as model.vpm, I already get different behavior: when running the experiments, Vensim shows a dialog
"New Name for Dataset", asking me to select some file.

=> What are the correct options for the publishing dialog of Vensim, so that I get a compatible *.vpm file?
=> Anything else I need to consider when creating the *.vpm file? Is it, for example, OK to use variable names like "Änderung Umsetzungsrate InvPro", including umlauts and spaces?

(Screenshot of the Vensim publishing dialog.)

Should the Settings check box be deactivated? Should Additional files be empty?
Should the package type be different from Model?
etc.

plotting and log scaling

The plotting functions return a dict with the axes instances. However, if you log-scale these instances, it appears that the scaling on the figure differs from the scaling on the density plot. At the moment these axes are not shared, which they should be.

documentation structural fixes needed

issue: get only one solution when running evaluator.optimize()

When I run the following code in JupyterLab, I find only one solution every time. However, if I run it in an IDE, I find multiple solutions as described in your documentation, no matter which version of the EMA workbench I use.

from ema_workbench import Constraint
from ema_workbench import MultiprocessingEvaluator
from ema_workbench import ema_logging

constraints = [Constraint("max pollution", outcome_names="max_P",
                          function=lambda x: max(0, x - 1))]

with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.optimize(nfe=250, searchover='levers',
                                 epsilons=[0.1,]*len(model.outcomes),
                                 constraints=constraints)

have a regression and classification mode for prim analogous to CART

q-p values are not properly defined if y is not a binary vector. It would make sense to detect whether y is binary and, if it is not, to set prim to regression mode explicitly. In turn, the q-p value calculation should then switch to using a one-sided t-test (I guess) instead of a binomial test.

interactive prim

at RAND, prim is typically used in an interactive mode. The current implementation of prim in the workbench does not allow for this type of use.

A first modification that provides insight into the peeling trajectory is present in the sandbox under prim enhancements, but further work is needed to make this into an interactive way of using prim

show_pairs_scatter for prim

A question on the show_pairs_scatter feature when used with prim: in the current implementation, the pairs plot figure is drawn with the current box limits overlaid on a scatterplot of the points. The points plotted are all of the points, regardless of whether the box is the first box using all the data or a subsequent box using a subset of the data. It seems to me that it would be valuable to be able to plot only the yi_initial points for the current box, not the full dataset every time. Am I interpreting this right? Or is there something about how prim is used that I am missing here?

If this is a reasonable thing, it's an easy change, which I can do and submit a PR.

Connector for Repast Simphony

On the EMA homepage it is stated:

Future plans include support for Netlogo and Repast
http://simulation.tbm.tudelft.nl/ema-workbench/contents.html

While there is already a connector for NetLogo, I could not find information about a connector for Repast.

=>Is somebody already working on that? (Tried and failed for some reason?)

Some Links about controlling Repast:

Zipping over scenarios and policies

For generating a design of experiments, the workbench currently iterates over scenarios and policies, creating a set of runs from the itertools.product of these two collections. So if there are 5 policies and 10 scenarios, there are 50 experiments. This is a reasonable approach for most EMA applications.

However, for running experiments to generate a meta-model, it is more efficient to "zip" these: make 50 draws for both policies and scenarios, pair them up, and run 50 experiments.

From the perspective of meta-model development, there is no difference at all between uncertainties and levers -- we merely seek to build a replacement black box for each model structure that converts inputs (both scenarios and policies) to outputs. To build the meta-model, it is often more efficient to have maximum variation in each of the input parameters. By providing a zip-over process, we can increase the variability in the inputs for the same number of experiments.
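A minimal illustration of the difference in plain Python (not the workbench API):

from itertools import product

scenarios = [f"scenario_{i}" for i in range(50)]
policies = [f"policy_{i}" for i in range(50)]

# current behavior: full factorial, 50 * 50 = 2500 experiments
factorial = list(product(scenarios, policies))

# proposed zip-over: pair the draws, 50 experiments
zipped = list(zip(scenarios, policies))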

Vensim connector

I am trying to run a Vensim model using the multiprocessing evaluator. The model reads data from an Excel file. It causes an error in run_model, traced back to "OSError: exception: access violation reading 0x00000000". This is caused by trying to read from Excel files with the same name in the cloned directories. I have no problem opening/reading Excel files with the same name in multiprocessing on their own, so I think the problem lies in the Vensim/Excel connection. However, the Vensim people say that the normal Vensim DLL cannot be used for multiprocessing, which is obviously not true. Any idea about the cause of this problem?

prim and nans

If there are NaNs in the experiment array, prim runs into problems. For example, the real peel calculation using mquantiles starts to return NaNs.

Sequencing of runs in results file

When using parallel processing, the sequencing of runs saved in the results file does not necessarily match the sequence of sampled experiments (potentially causing issues with Sobol analysis, etc.)

Issue with Python 3.7

When initializing a MultiprocessingEvaluator instance an error occurs:

in initialize
    self._pool._taskqueue.maxsize = self._pool._processes * 5
AttributeError: '_queue.SimpleQueue' object has no attribute 'maxsize'

Caused by the change from queue.Queue() to queue.SimpleQueue() in the multiprocessing package: python/cpython#5216

Please add link to api doc for perform_experiments

In the python tutorial it says

The function perform_experiments() takes the model we just specified and will execute 100 experiments. By default, these experiments are generated using a Latin Hypercube sampling, but Monte Carlo sampling and Full factorial sampling are also readily available. Read the documentation for perform_experiments() for more details.

But where can that documentation be found? If I search the docs for "perform_experiments" I get some results, but not the actual documentation for perform_experiments.
=>Please add a direct link to the corresponding doc in the tutorial.

EMAworkbench and Rhodium

Is the EMA workbench an update of Rhodium? Would it be possible to briefly describe the difference between the two in README.md? Thanks!

conversion error in uncertainty related parameter specification

A very small issue related to specifying the names of uncertainty related parameters:

When I specify model uncertainty parameters with Parameter(name='X', variable_name='Y'), the evaluation of my experiments results in "ValueError: cannot convert float NaN to integer".
The error is resolved when I don't try to assign a different header name to the variable. It is also only related to the model uncertainties, as the name/variable_name specification does work when I specify model levers.
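For reference, the kind of specification the report describes looks roughly like this ('X' and 'Y' are the issue's own placeholder names):

from ema_workbench import RealParameter

# expose the model variable 'Y' under the experiment-table name 'X'
model.uncertainties = [RealParameter('X', 0.0, 1.0, variable_name='Y')]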

merge_results function in utilities: throws type error at EMA debug

In utilities.py, at code line 318:

#debug("merged shape: %s" % merged_value.shape)

TypeError: not all arguments converted during string formatting

The old-style % formatting (using % codes) should probably be replaced with the new-style {} formatting and the .format() method throughout the whole workbench.

A fix that resolves the encountered error:
debug("merged shape: {}".format(merged_value.shape))

logging and ipython notebook

For some reason, every time you run a log-to-stderr command, an additional line is logged with the same message. It appears as if a new logger is created every time, rather than the default logger being returned.
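The usual fix for duplicated log lines in notebooks is to attach the handler only once; a sketch using the standard library, not the workbench's actual implementation:

import logging

def log_to_stderr(level):
    logger = logging.getLogger("EMA")
    logger.setLevel(level)
    # guard: only attach a handler if none exists yet,
    # so repeated calls do not duplicate output
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter("[%(levelname)s] %(message)s"))
        logger.addHandler(handler)
    return logger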

Add control for random state

Are there any plans to include "random state" interface for any of the sampling functions (i.e. to guarantee stable output for testing)? Or: was this considered and explicitly not done for some reason?

If this isn't contraindicated, would a pull request that implements this only partially be of interest? There are a lot of places where random state might be useful, and I don't have time to search through the whole codebase and find/implement them all, but I'd be willing to do so for a handful of "low hanging fruit" places that would be useful for the project I am working on. (I am going to do this anyhow; I just want to know whether I should isolate it as an independent PR.)

TypeError: real_decorator() takes 1 positional argument but 2 were given

After updating to version 2, I get the following error when using my custom Repast connector, which worked with version 1. I don't understand the error. Any hint as to what I could do?

[EMA.ema_workbench.em_framework.evaluators/INFO/MainProcess] performing 10 scenarios * 1 policies * 1 model(s) = 10 experiments
[EMA.ema_workbench.em_framework.evaluators/INFO/MainProcess] performing experiments sequentially
[EMA.ema_workbench.em_framework.experiment_runner/ERROR/MainProcess] real_decorator() takes 1 positional argument but 2 were given
Traceback (most recent call last):
File "D:\EclipsePython\App\WinPython\python-3.7.2.amd64\lib\site-packages\ema_workbench\em_framework\experiment_runner.py", line 84, in run_experiment
model.run_model(scenario, policy)
File "D:\EclipsePython\App\WinPython\python-3.7.2.amd64\lib\site-packages\ema_workbench\util\ema_logging.py", line 158, in wrapper
res = func(*args, **kwargs)
File "D:\EclipsePython\App\WinPython\python-3.7.2.amd64\lib\site-packages\ema_workbench\em_framework\model.py", line 332, in run_model
super(SingleReplication, self).run_model(scenario, policy)
File "D:\EclipsePython\App\WinPython\python-3.7.2.amd64\lib\site-packages\ema_workbench\util\ema_logging.py", line 158, in wrapper
res = func(*args, **kwargs)
File "D:\EclipsePython\App\WinPython\python-3.7.2.amd64\lib\site-packages\ema_workbench\em_framework\model.py", line 178, in run_model
self.model_init(policy)
TypeError: real_decorator() takes 1 positional argument but 2 were given

Combination of levers (policy) non-unique

I am running an EMA experiment with a relatively small number of levers and policy evaluations. See code below.

# set levers
model.levers = [CategoricalParameter('summer_target', (-0.2, 0.0, 0.1) ),
                CategoricalParameter('winter_target', (-0.55, -0.4) ) ]

# run EMA evaluator
with MultiprocessingEvaluator(model,n_processes=10) as evaluator:
    results = evaluator.perform_experiments(scenarios=20, policies=4)

The problem, however, is that the policy evaluations are not unique. Namely, I get the following policies:

0     {'winter_target': -0.4, 'summer_target': -0.2}
20    {'winter_target': -0.55, 'summer_target': 0.1}
40     {'winter_target': -0.4, 'summer_target': 0.0}
60    {'winter_target': -0.55, 'summer_target': 0.1} 

Policies at index 20 and 60 are the same, whereas other combinations of levers, such as

{'winter_target': -0.4, 'summer_target': 0.1}

or

{'winter_target': -0.55, 'summer_target': -0.2} 

are not evaluated.
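One way to guarantee unique lever combinations, sketched here using the public Policy class instead of random lever sampling:

from itertools import product
from ema_workbench import MultiprocessingEvaluator, Policy

summer = (-0.2, 0.0, 0.1)
winter = (-0.55, -0.4)

# enumerate all 3 * 2 = 6 unique combinations explicitly
policies = [Policy(f"policy_{i}", summer_target=s, winter_target=w)
            for i, (s, w) in enumerate(product(summer, winter))]

with MultiprocessingEvaluator(model, n_processes=10) as evaluator:
    results = evaluator.perform_experiments(scenarios=20, policies=policies)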

Unable to use outcome as non-time series in NetLogo

When an outcome is not a time series, the following code does not work:
Outcome("variable", time=False)
When this code is used, the variable is not saved at all.

Example of a situation: exporting the final value of a list in NetLogo that saved a certain variable.
The variable thus reads [x1 x2 x3 ... xn].
When exported as a time series, the output contains a large number of empty lists and then every form the list takes. I would like to be able to export this list just once, at the end of the run.

Thus, only after the go function has done its repetitions should this outcome be exported.

How to log to stdout instead stderr?

I am new to the EMA workbench and am trying to use it with Eclipse PyDev. When I log the progress, I get red error output in the console.

I would like to use

ema_logging.log_to_stdout(ema_logging.INFO)

instead of

ema_logging.log_to_stderr(ema_logging.INFO)

However, there is not such a method.

=>How can I log progress to stdout instead of stderr?
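Assuming the workbench's loggers live under the "EMA" name (as the log lines elsewhere in this issue list suggest), they are ordinary standard-library loggers, so a stdout handler can be attached directly; a sketch:

import logging
import sys

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("[%(processName)s/%(levelname)s] %(message)s"))

logger = logging.getLogger("EMA")  # assumed root logger name
logger.addHandler(handler)
logger.setLevel(logging.INFO)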


Non-uniform distributions for uncertainties

It would be desirable to be able to specify non-uniform distributions for uncertainty parameters. The use case is less compelling for lever parameters, but given the common class definition, I see no reason not to allow it.

A complication with incorporating this into the current interface is that the Parameter class currently explicitly calls for lower and upper bounds in the constructor, but not all distributions have such bounds.

A proposed solution: give the bounds default values of 'None' and add a dist argument to the Parameter class, which accepts an object of type scipy.stats.rv_frozen. Upper and lower bounds can be inferred from the dist argument using dist.ppf(0) and dist.ppf(1). Then change the code in samplers.py to draw from the frozen distribution. It may be easier to generate a frozen distribution (by default a uniform) for every parameter on construction even if the dist argument is not given, although I'm not sure how much overhead this will create in running the workbench.
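A minimal sketch of this proposal (illustrative only, not the actual Parameter class):

import scipy.stats as stats

class DistParameter:
    """Sketch: bounds inferred from a frozen scipy.stats distribution."""

    def __init__(self, name, lower_bound=None, upper_bound=None, dist=None):
        if dist is None:
            # default to a uniform distribution over the given bounds
            dist = stats.uniform(lower_bound, upper_bound - lower_bound)
        self.name = name
        self.dist = dist
        # infer the bounds from the frozen distribution
        self.lower_bound = dist.ppf(0)
        self.upper_bound = dist.ppf(1)

    def sample(self, size):
        return self.dist.rvs(size=size)

p = DistParameter("x1", dist=stats.norm(loc=0, scale=1))
print(p.lower_bound, p.upper_bound)  # -inf inf for an unbounded distribution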

And: for some (many) distributions, the bounds may be infinite. Allowing infinite bounds opens up new challenges in other parts of the code (notably visualizations) that would need to be overcome. For example, a large latin hypercube sample from an unbounded distribution with a long tail would contain a small number of very extreme values, and automatic range detection for figures that does not account for these outliers would make the outputs unusable. It is not clear whether other errors may be introduced elsewhere as well.

Possible methods to address infinite bounds:

  1. Let it go, user beware.
  2. Check for infinite bounds on Parameter construction, and either disallow via an exception or throw a warning.
  3. Generate a wrapper that creates a truncated distribution when desired. We can use the lower and upper bound arguments from the constructor to define the truncated range, and adjust the ppf method to draw from the truncated range only -- ideally all the methods of a scipy.stats distribution would be there too, but if we only want to make random draws from a truncated distribution that shouldn't be too hard.
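Option 3 is straightforward with inverse-transform sampling; a sketch:

import numpy as np
import scipy.stats as stats

def truncated_rvs(dist, lower, upper, size, seed=None):
    """Draw from a frozen scipy distribution truncated to [lower, upper]."""
    rng = np.random.default_rng(seed)
    # map the truncation bounds into probability space, sample uniformly
    # between them, and invert through the ppf
    u = rng.uniform(dist.cdf(lower), dist.cdf(upper), size)
    return dist.ppf(u)

draws = truncated_rvs(stats.norm(0, 1), lower=-2, upper=2, size=1000)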
