
gEconpy's People

Contributors

jessegrabowski

Forkers

lukasgrahl

gEconpy's Issues

All simulations should return data in `xarrays`

Model simulations, for example the .simulate and .impulse_response_function methods, currently return pandas DataFrames with awkward MultiIndexes. These functions should be refactored to return xarray objects instead.
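
As a rough sketch of what the target return type could look like (the dimension and coordinate names below are assumptions, not the current API):

import numpy as np
import xarray as xr

# Hypothetical IRF output: 40 periods, 3 variables, 2 shocks.
# Dimension and coordinate names are illustrative only.
irf = xr.DataArray(
    np.zeros((40, 3, 2)),
    dims=["time", "variable", "shock"],
    coords={
        "time": np.arange(40),
        "variable": ["Y", "C", "K"],
        "shock": ["epsilon_A", "epsilon_B"],
    },
)

# Selecting a variable/shock pair becomes a labeled lookup instead of a
# multi-index slice on a DataFrame:
irf.sel(variable="Y", shock="epsilon_A")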

Add more tests of Kalman filters

There currently aren't enough tests of Kalman filtering on model objects. The only test right now is test_extract_system_matrices, which is failing.

Numba-accelerate functions generated by `SteadyStateSolver`

SteadyStateSolver generates three sp.lambdify functions that need to be called repeatedly during estimation: f_ss, f_ss_resid, and f_jac_ss. These should be wrapped with numba.njit for maximum performance in the estimation loop.

The function ss_func returned by SteadyStateSolver.solve_steady_state should also be jitted if possible. This will require some refactoring, as it uses SymbolDictionaries and calls scipy.optimize.root, neither of which is supported by numba.

If a complete analytic steady state is available, it should be easier to return a jitted function.
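
A minimal sketch of the njit wrapping described above, assuming the lambdified function is a plain scalar numpy expression (the residual below is made up for illustration; the real f_ss_resid and f_jac_ss return vectors and matrices and may need more care, since lambdify can emit nested-list constructions numba does not accept):

import numba
import sympy as sp

# A made-up scalar residual in one variable and one parameter.
k, alpha = sp.symbols("k alpha")
resid = k**alpha - 0.1 * k

# sp.lambdify emits an ordinary Python function built from numpy calls,
# which numba can usually compile in nopython mode.
f_resid = sp.lambdify((k, alpha), resid, modules="numpy")
f_resid_jit = numba.njit(f_resid)

f_resid_jit(2.0, 0.33)  # first call compiles; subsequent calls are fast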

Add support for LaTeX printing of models

Users should be able to label equations and variables in the GCN file somehow. Here is how it could look, using Python-style decorators:

@name: Law of Motion of Capital
K[] = (1 - delta) * K[-1] + I[];

Variables could also be given shorthand representations to prevent descriptive (but long) variable names from being printed to LaTeX:

shocks
{
    @shortname: epsilon[]
    epsilon_technology[];
};

Then, of course, the user should be able to call some kind of print_latex_model function to get a nice LaTeX table with all model equations, steady-state values, priors, etc.
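
On the printing side, sympy already has a LaTeX printer, so a minimal sketch of what print_latex_model could do per equation might look like this (the symbols and the table-row layout are illustrative only):

import sympy as sp

# Illustrative only: render one labeled equation as a LaTeX table row.
K, K_lag, I, delta = sp.symbols("K_t K_{t-1} I_t delta")
eq = sp.Eq(K, (1 - delta) * K_lag + I)

name = "Law of Motion of Capital"
print(f"{name} & ${sp.latex(eq)}$ \\\\")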

Add indexing syntax to GCN parser for construction of multiple similar blocks

The gEcon R package allows for (something like) the following:

<i=(1,3)>
block HOUSEHOLD_<i>
{
    identities
        {
            C_<i>[] + I_<i>[] = r[] * K_<i>[-1] + w[] * L_<i>[] + Div_<i>[];
        };
};

This will automatically generate three household blocks with indices 1, 2, and 3, along with the associated variables. It also allows automatic summation over indexed variables, like:

Div[] = Sum<i>(Div_<i>[]);

This would be a nice feature when thinking about, e.g., multiple sectors.
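
One possible (and deliberately naive) implementation sketch is to expand the template purely at the string level before the GCN parser runs; handling Sum<i>(...) would need real parser support:

# Illustrative only: expand <i=(1,3)> by string substitution before parsing.
template = """
block HOUSEHOLD_<i>
{
    identities
    {
        C_<i>[] + I_<i>[] = r[] * K_<i>[-1] + w[] * L_<i>[] + Div_<i>[];
    };
};
"""

expanded = "\n".join(template.replace("<i>", str(i)) for i in range(1, 4))
print(expanded)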

Re-write analytic steady state finder

In #1, I greatly simplified the steady-state solver. In doing so, I also removed the heuristic solvers that did "shallow" checks for analytic steady-state solutions (e.g. r[ss] from the RBC Euler equation, or A[ss] = 1 from laws of motion of technology). These should be re-added as part of a more robust system that can:

  • Recursively search for a reduced-form steady state in parameters only, with
  • User-defined limits on depth (number of recursions) and compute budget (max time spent in sp.solve), and
  • Write the results into a GCN file steady_state block, so the search doesn't need to be run multiple times.
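
A rough sketch of the recursive search, assuming the steady-state equations are available as sympy expressions (the helper name and stopping rules are illustrative, and a real version would also need a per-call time budget for sp.solve):

import sympy as sp

def reduce_steady_state(equations, variables, max_depth=3):
    # Repeatedly look for equations with a single unsolved variable and
    # eliminate it with sp.solve, up to max_depth passes.
    solution = {}
    for _ in range(max_depth):
        progress = False
        for eq in list(equations):
            unsolved = [v for v in variables if v not in solution and eq.has(v)]
            if len(unsolved) == 1:
                sol = sp.solve(eq.subs(solution), unsolved[0], dict=True)
                if sol:
                    solution[unsolved[0]] = sol[0][unsolved[0]]
                    equations.remove(eq)
                    progress = True
        if not progress:
            break
    return solution

# Example: A[ss] = 1 and the RBC Euler condition for r[ss] are found immediately.
A, r, beta, delta = sp.symbols("A_ss r_ss beta delta", positive=True)
eqs = [sp.Eq(A, 1), sp.Eq(r, 1 / beta - (1 - delta))]
reduce_steady_state(eqs, [A, r])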

`model.steady_state` fails when user provides a partial steady state

When a partial steady state is declared in the GCN file, the model.steady_state() function raises an error. This is because the provided relationships eliminate input variables without eliminating model equations, and scipy.optimize.root can only handle square problems (maps from R^n to R^n).
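
One possible fix, sketched here with made-up equations: substitute the user-supplied relationships into the residual function and hand the resulting (possibly overdetermined) system to a least-squares solver instead of scipy.optimize.root:

import numpy as np
from scipy import optimize

# Illustrative only: two model equations in (x, y), with the user's
# steady_state block pinning down y = 2 * x. After substitution there are
# two equations but only one free unknown, so optimize.root (square systems
# only) fails, while a least-squares solver does not.
def residuals(free_vars):
    x = free_vars[0]
    y = 2.0 * x                       # user-provided relationship
    return np.array([x + y - 3.0,     # equation 1
                     x * y - 2.0])    # equation 2

result = optimize.least_squares(residuals, x0=np.array([0.5]))
print(result.x)  # [1.0], which implies y = 2.0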

Parameters defined as equations of other parameters are interpreted as calibrating constraints

Hi Jesse,

I was trying to load an already log-linearized model, and when reading in my GCN file I came across the error message below. I was wondering whether this is due to the GCN specification or possibly something else. The GCN file can be found at:
https://github.com/lukasgrahl/memoire1/blob/b3e2901e96936badc71a61cee8b097f2b3de85cc/model_files/log_lin2.gcn

Thanks for your help!

KeyError                                  Traceback (most recent call last)
----> 1 mod.solve_model(model_is_linear=True)

File gEconpy\classes\model.py:451, in gEconModel.solve_model(self, solver, not_loglin_variable, order, model_is_linear, tol, max_iter, verbose, on_failure)
    448 steady_state_dict = self.steady_state_dict
    450 if self.build_perturbation_matrices is None:
--> 451     self._perturbation_setup(not_loglin_variable, order, model_is_linear, verbose, bool)
    453 A, B, C, D = self.build_perturbation_matrices(
    454     **param_dict.to_string(), **steady_state_dict.to_string()
    455 )
    456 _, variables, _ = self.perturbation_solver.make_all_variable_time_combinations()

File gEconpy\classes\model.py:595, in gEconModel._perturbation_setup(self, not_loglin_variables, order, model_is_linear, verbose, return_F_matrices, tol)
    592 if variable.base_name in not_loglin_variables:
    593     continue
--> 595 if abs(steady_state_dict[variable.to_ss().name]) < tol:
    596     not_loglin_variables.append(variable.base_name)
    597     close_to_zero_warnings.append(variable)

KeyError: 'Y_ss'

PS: adding it as code would have ruined the indents

Numba linear algebra bindings are terrible

The linear algebra overrides in numba_linalg are very poorly done. Because they are registered via an entry point in setup.py, they cause warnings and other unexpected behavior even when gEconpy is not imported. The entry point should not be used; instead, the overrides can be imported locally, only where they are needed (during estimation).

In addition, one or more of these functions causes the module to load extremely slowly. I need to figure out which one is responsible and how to fix it.
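
A sketch of the local-import pattern (the module path and function name below are hypothetical stand-ins):

# Illustrative only: register the numba overrides at the point of use rather
# than via a setup.py entry point, so `import gEconpy` stays side-effect free.
def build_loglike_function(model):
    # Hypothetical import path; importing the module registers the overloads.
    from gEconpy.numba_linalg import overloads  # noqa: F401
    ...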

Re-do example notebooks

I left some personal data-preprocessing code in the example notebooks that causes them not to run "out of the box". That needs to be fixed.

Implement higher order linear approximations

Currently only a first-order (linear) approximation is available, whereas Dynare allows approximation up to third order. Figure out how to use SymPy to compute a multivariate Taylor series expansion to arbitrary order, and implement the requisite solution algorithms.
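
A minimal sketch of the SymPy side for order two, using a toy two-variable function rather than actual model equations (extending this to arbitrary order and to the full equation system at the steady state is the real work):

import sympy as sp

# Illustrative only: second-order Taylor expansion of f(x, y) around (x0, y0).
x, y, x0, y0 = sp.symbols("x y x0 y0")
f = sp.exp(x) * sp.log(1 + y)

point = {x: x0, y: y0}
f0 = f.subs(point)
grad0 = [sp.diff(f, v).subs(point) for v in (x, y)]
hess0 = sp.hessian(f, (x, y)).subs(point)

dx, dy = x - x0, y - y0
taylor2 = (
    f0
    + grad0[0] * dx + grad0[1] * dy
    + sp.Rational(1, 2)
    * (hess0[0, 0] * dx**2 + 2 * hess0[0, 1] * dx * dy + hess0[1, 1] * dy**2)
)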

PyMC Integration

Ideally, all model estimation should be handled by PyMC. I'm currently using emcee because gradients aren't available for several steps in the solution process, including:

  1. Log linearization
  2. BK condition check
  3. Kalman filter

Statsmodels uses numerical approximation of the gradients for (3), which I have found to be very unstable. Gradients are available if a Scan Op is used, but I have found this to be extremely slow (see here). (2) requires a QZ decomposition, which is non-differentiable. (1) is differentiable if it is solved via cycle reduction, but that also requires a Scan.

Perhaps a JAX scan could be faster?
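
For a sense of what that would look like, here is a minimal Kalman filter log-likelihood written with jax.lax.scan, using a made-up scalar local-level model rather than the actual state-space matrices; jax.grad then differentiates through the whole loop:

import jax
import jax.numpy as jnp

def kalman_loglike(params, observations):
    # Illustrative scalar state space: y_t = x_t + eps_t, x_t = a * x_{t-1} + eta_t.
    a, obs_var, state_var = params

    def step(carry, y):
        mean, var = carry
        # Predict
        mean_pred = a * mean
        var_pred = a * var * a + state_var
        # Update
        resid = y - mean_pred
        resid_var = var_pred + obs_var
        gain = var_pred / resid_var
        mean_new = mean_pred + gain * resid
        var_new = (1.0 - gain) * var_pred
        ll = -0.5 * (jnp.log(2.0 * jnp.pi * resid_var) + resid**2 / resid_var)
        return (mean_new, var_new), ll

    init = (jnp.asarray(0.0), jnp.asarray(1.0))
    _, loglikes = jax.lax.scan(step, init, observations)
    return loglikes.sum()

grad_fn = jax.grad(kalman_loglike)  # gradients w.r.t. (a, obs_var, state_var)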

Improve documentation

Add some kind of auto-building documentation site (Sphinx?), with more examples and how-to guides. Consider an example gallery that collects different use cases.

Improve construction of Lagrangians

Currently, Lagrangians are required to be written as Bellman equations of the form:

U[] = u[] + beta * E[][U[1]];

The right-hand side can only include a discount factor and a single expectation, which must be of the left-hand-side variable. Trying to include other variables, for example a stochastic discount factor Q[] = beta * E[][lambda[1]] / lambda[], will raise an exception. I forget why I did it this way, but it needs to be revisited.

njit the estimation pipeline end-to-end

The main gEconModel is currently not jittable and cannot be rewritten as a jitclass because of its heavy dependence on sympy.

During estimation, however, I want the main loop of steady state -> log linearization -> BK check -> Kalman Filter to be entirely jit-compiled. This is currently accomplished using a set of functions in estimation.estimation_utilities that extract the necessary components from the model and jit them all separately.

I think it would be better to instead "compile" the model to a jitclass that holds all the necessary functionality. This would also clean up the codebase, which is currently fragmented across cluttered "utility" files.
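
A minimal sketch of what such a compiled model might look like (the attributes and the simulate method here are hypothetical, not the current API):

import numpy as np
from numba import float64
from numba.experimental import jitclass

# Illustrative only: a jitclass holding the pieces the estimation loop needs.
spec = [
    ("T", float64[:, ::1]),  # transition matrix from the linearized solution
    ("R", float64[:, ::1]),  # shock impact matrix
]

@jitclass(spec)
class CompiledModel:
    def __init__(self, T, R):
        self.T = T
        self.R = R

    def simulate(self, shocks):
        # Iterate the linear law of motion forward, given an (n_steps, n_shocks) array.
        n_steps = shocks.shape[0]
        state = np.zeros(self.T.shape[0])
        out = np.zeros((n_steps, self.T.shape[0]))
        for t in range(n_steps):
            state = self.T @ state + self.R @ shocks[t]
            out[t] = state
        return out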

Improve test coverage

The current test suite is written with unittest, but pytest seems to be universally preferred among the scientific packages I follow. It will also give access to pytest-cov, which will complement #21.

While I'm at it, I would like to add hypothesis as a dependency and rewrite the tests using that framework. This should ensure more robust coverage of core functionality (especially the parser).
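
A sketch of what a hypothesis property test for the parser could look like; parse_gcn, its import path, and the attribute it returns are hypothetical stand-ins for whatever the parser module actually exposes:

from hypothesis import given, strategies as st

# Hypothetical stand-in for the real parser entry point.
from gEconpy.parser import parse_gcn

@given(st.text(alphabet="abcdefghijklmnopqrstuvwxyz", min_size=1, max_size=10))
def test_variable_names_survive_parsing(name):
    # Any lowercase name used in a block should come back out of the parser.
    gcn = f"block HOUSEHOLD {{ identities {{ {name}[] = 1; }}; }};"
    model = parse_gcn(gcn)
    assert name in model.variable_names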
