
epa141a_open's Introduction

EPA1361 Model-based Decision-making

Welcome to the EPA1361 code repository. This repository contains all the assignments, model answers (published every two weeks), the final project, and the models and data that support them.

This repository is part of the EPA1361 Model-based Decision-making course of the Engineering and Policy Analysis (EPA) master's program at Delft University of Technology.

Contents

Requirements

This repository is tested on Python 3.11. It has the same dependencies as the EMAworkbench (see installation guide). Furthermore, it uses seaborn for many of the plots.

pip install -U "ema_workbench[recommended]" seaborn
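After installing, a quick sanity check (a minimal sketch; not part of the course material) can confirm you are on the Python version the repository is tested against:

```python
import sys

# The repository is tested on Python 3.11; warn when running anything else.
major, minor = sys.version_info[:2]
if (major, minor) == (3, 11):
    print("Python version matches the tested 3.11")
else:
    print(f"Warning: repository tested on Python 3.11, running {major}.{minor}")
```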

Also check out the Software section on Brightspace.

epa141a_open's People

Contributors

quaquel


epa141a_open's Issues

Two issues in assignment 4

  1. When using the %matplotlib notebook magic, I don't get interactive graphs. Instead, in PyCharm, the graphs stay empty. And in Jupyter (in-browser), the graphs and control buttons appear, but I still cannot interact with them (click actions don't do anything).
  2. Running the_box.show_tradeoff() displays two identical graphs instead of just one.

Python 3.11.3 / PyCharm 2023.1 / macOS 13.3.1

`Optimization Moro.ipynb` has multiple issues that prevent it from running

The Optimization Moro.ipynb notebook crashes in multiple different places, preventing the notebook from running all the way through. So far we've encountered the following errors:

  1. The first MultiprocessingEvaluator fails because uncertainty_sampling expects a Sampler instance, not the string 'mc'.
  2. The same MultiprocessingEvaluator then fails a second time with a NumPy error, 'numpy.ndarray' object has no attribute 'uncertainties', because a tuple is passed where a Model is expected.
    Change cell 5 to model, _ = get_model_for_problem_formulation(1) so that only the Model is stored in model, not the whole tuple.
  3. Cell 11 then crashes because pandas and seaborn are not imported.
  4. Only then do you reach cell 21 (line 60 in the Git version), where evaluator.robust_optimize() appears to run endlessly.
    Edit: it does run, just quite slowly, and in the meantime it updates neither the progress bar nor the expected time.
  5. Cell 24 raises a KeyError: 'policy', because the experiments DataFrame does not contain a 'policy' column.
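The fix for point 2 is just tuple unpacking. A minimal sketch of the pattern, using a stand-in for the notebook's get_model_for_problem_formulation (assumed here to return a (model, planning_steps) tuple; the dictionary is a placeholder for the real Model object):

```python
def get_model_for_problem_formulation(problem_formulation_id):
    """Stand-in for the notebook's helper, which returns a Model plus extras."""
    model = {"name": f"dikesnet_pf{problem_formulation_id}"}  # placeholder for a Model
    planning_steps = 3
    return model, planning_steps

# Buggy: `model` is the whole tuple, so downstream code that expects a Model
# (e.g. anything reading model.uncertainties) fails.
model = get_model_for_problem_formulation(1)
assert isinstance(model, tuple)

# Fixed: unpack the return value and keep only the Model.
model, _ = get_model_for_problem_formulation(1)
assert not isinstance(model, tuple)
```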

We also tried running it with EMAworkbench 2.0.9, which was the release at the time of last year's course. That only prevents the first error (the Sampler one); all the others still had to be fixed.

There might be more errors after the fourth one, but we couldn't get past it. It gives no error messages; it just keeps running endlessly, with both the MultiprocessingEvaluator and the SequentialEvaluator. It might be related to #5, or maybe not. It simply runs for a very long time without updating the progress bar. A fifth error is encountered afterwards.

@quaquel could you check if you can run the whole notebook and upload the fixed version?


On a bit of a broader note, I really don't mind a bit of debugging, but at this point we have poured dozens of person-hours into debugging the code and notebooks, time we cannot spend on modelling in support of decision making. We feel directed search is an integral part of this process and would love to focus on that instead of debugging. I think this course could provide far more value and learning experience to students if debugging the examples and source code were a (much) smaller part of it.

Still issues with upstream problem

We've noticed some issues with the upstream problem since the last commit/push. Certain outcomes are summed, with no option to analyse them individually. Could you please check this? Perhaps I am mistaken.

The `Problem Formulations.ipynb` notebook has weird Policy data retention issues, potentially corrupting results

@anneheijbroek and I have been breaking our necks over this issue for the better part of a day. There seem to be some data-retention quirks in the dike_model object, and especially in its policies (dike_model.policy), in the Problem Formulations.ipynb notebook.

In cell 8 the dike_model is first run with 4 randomly generated policies. Before this run, the dike_model.policy attribute is empty; after it, the attribute holds a Policy() object containing a dictionary.

Now the problem is that those 4 generated policies, now stored in dike_model.policy, are carried over into the second run with the custom-defined policies. And that breaks all kinds of things.

The obvious solution is removing the first run with the generated policies, but then the model crashes in dike_model_function.py, where the progressive_height_and_costs function expects each dike to have a dike-increase key-value pair ready. It crashes on https://github.com/quaquel/epa1361_open/blob/cbc0e34a78cd73898c1a11a6a483ed84ef4cc114/final%20assignment/dike_model_function.py#L94

So here's the issue: in its current form, the notebook seems to reuse the old, generated dike_model.policy dictionary keys f'DikeIncrease {s}' for every dike that wasn't defined by hand, so the results are all over the place.

@quaquel am I doing something totally wrong, or is this indeed unexpected behavior?

How to reproduce

In the Problem Formulations.ipynb notebook:

  • Read out dike_model.policy.data after the first and second run, and notice how the values are (largely) the same, or
  • Remove policies=4 from the first run (cell 8) and note that the second run crashes with a KeyError, or
  • Remove cells 8 and 9 altogether and note that the second run crashes with a KeyError

Note: if the MultiprocessingEvaluator keeps running endlessly, replace it with the SequentialEvaluator to see the crash.
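One defensive workaround (an assumption on our side, not code from the notebook) is to pad each hand-defined policy dictionary with explicit zeros for every f'DikeIncrease {s}' key the model expects, so stale generated values can never leak in. The key format below follows the f'DikeIncrease {s}' pattern described above; the real model may additionally prefix keys with a dike name:

```python
def pad_policy(partial_policy, n_steps=3):
    """Return a copy of a hand-defined policy dict with every
    'DikeIncrease {s}' key present, defaulting to 0 (no heightening).
    Hypothetical helper; key names are assumed, not taken from the model code."""
    full = dict(partial_policy)
    for s in range(n_steps):
        full.setdefault(f"DikeIncrease {s}", 0)
    return full

custom = {"DikeIncrease 0": 5}  # hand-defined policy, missing steps 1 and 2
padded = pad_policy(custom)
# padded now has keys for steps 1 and 2 with value 0; step 0 keeps its value 5
print(padded)
```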

assignment 4, inspect_tradeoff shows deprecation warning

/Users/brittreddingius/Documents/pythonProject2/lib/python3.11/site-packages/altair/utils/deprecation.py:65: AltairDeprecationWarning: 'selection_single' is deprecated.  Use 'selection_point'
  warnings.warn(message, AltairDeprecationWarning, stacklevel=1)

SchemaValidationError                     Traceback (most recent call last)
File ~/Documents/pythonProject2/lib/python3.11/site-packages/altair/vegalite/v5/api.py:843, in TopLevelMixin.to_dict(self, *args, **kwargs)
    838 kwargs["context"] = context
    840 # TopLevelMixin instance does not necessarily have to_dict defined
    841 # but due to how Altair is set up this should hold.
    842 # Too complex to type hint right now
--> 843 dct = super(TopLevelMixin, copy).to_dict(*args, **kwargs)  # type: ignore[misc]
    845 # TODO: following entries are added after validation. Should they be validated?
    846 if is_top_level:
    847     # since this is top-level we add $schema if it's missing

File ~/Documents/pythonProject2/lib/python3.11/site-packages/altair/utils/schemapi.py:814, in SchemaBase.to_dict(self, validate, ignore, context)
    807         self.validate(result)
    808     except jsonschema.ValidationError as err:
    809         # We do not raise `from err` as else the resulting
    810         # traceback is very long as it contains part
    811         # of the Vega-Lite schema. It would also first
    812         # show the less helpful ValidationError instead of
    813         # the more user friendly SchemaValidationError
--> 814         raise SchemaValidationError(self, err) from None
    815 return result

SchemaValidationError: `VConcatChart` has no parameter named 'selection'

Existing parameter names are:
vconcat      center     description   params    title       
autosize     config     name          resolve   transform   
background   data       padding       spacing   usermeta    
bounds       datasets                                       

See the help for `VConcatChart` to read the full description of these parameters

Upstream dike_model performance improvement

I just made a few model changes that make the model about 2x faster (everything tested and validated). Is the dike_model still being used somewhere in production or academic work? In that case I can share the code in advance.

If this model is used in next year's course I will open a PR at the end of the quarter; it will save students a lot of time.
