
Probabilistic numerical finite differences. Compute finite difference weights and differentiation matrices on scattered data sites and with out-of-the-box uncertainty quantification.

Home Page: https://probfindiff.readthedocs.io

License: MIT License

Topics: scientific-computing, finite-difference-methods, partial-differential-equations, probabilistic-numerics

probfindiff's Introduction

probfindiff: Probabilistic numerical finite differences


Traditional finite difference schemes are absolutely crucial for scientific computing. If you like uncertainty quantification, transparent algorithm assumptions, and next-level flexibility in your function evaluations, you need probabilistic numerical finite differences.

Why?

Because traditional finite difference coefficients implicitly assume that the function to be differentiated is a polynomial and is evaluated on an equidistant grid. Is your function really a polynomial? Are you satisfied with evaluating your function on equidistant grids? If not, read on.

In a nutshell

Traditional, non-probabilistic finite difference schemes fit a polynomial to function evaluations and differentiate the polynomial to compute finite difference weights (Fornberg, 1988). But why use a polynomial? If we use a Gaussian process, we get uncertainty quantification, scattered point sets, transparent modelling assumptions, and many more benefits for free!
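For intuition, here is a minimal sketch of that idea, assuming an exponentiated quadratic kernel and hand-derived kernel derivatives (illustrative Gaussian-process conditioning, not probfindiff's actual implementation):

import jax.numpy as jnp

def k(x, y, scale=1.0):
    # Exponentiated quadratic (RBF) kernel.
    return jnp.exp(-((x - y) ** 2) / (2.0 * scale**2))

def dk_dx(x, y, scale=1.0):
    # d/dx k(x, y): cross-covariance between f'(x) and f(y).
    return -(x - y) / scale**2 * k(x, y, scale)

def d2k_dxdy(x, y, scale=1.0):
    # d^2/(dx dy) k(x, y): prior covariance of the derivative.
    return (1.0 / scale**2 - (x - y) ** 2 / scale**4) * k(x, y, scale)

x0, dx = 0.0, 0.2
xs = x0 + jnp.array([-dx, 0.0, dx])  # central stencil around x0

K = k(xs[:, None], xs[None, :])   # Gram matrix of the function evaluations
LK = dk_dx(x0, xs)                # cross-covariance of f'(x0) with f(xs)
LLK = d2k_dxdy(x0, x0)            # prior variance of f'(x0)

weights = jnp.linalg.solve(K, LK)              # finite difference weights
variance = LLK - LK @ jnp.linalg.solve(K, LK)  # posterior uncertainty

f = lambda x: (x - 1.0) ** 2
print(weights @ f(xs), variance)  # estimate of f'(0), plus an error estimate

The weights are the posterior-mean coefficients of the Gaussian process conditioned on the function evaluations; the variance quantifies the discretisation error.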

How?

Probabilistic numerical finite difference schemes can be applied to function evaluations as follows.

>>> import jax
>>> import jax.numpy as jnp
>>> from jax.config import config
>>>
>>> import probfindiff
>>>
>>> config.update("jax_platform_name", "cpu")
>>>
>>> scheme, xs = jax.jit(probfindiff.central)(dx=0.2)
>>> f = lambda x: (x-1.)**2.
>>> dfx, cov = jax.jit(probfindiff.differentiate)(f(xs), scheme=scheme)
>>> print(jnp.round(dfx, 1))
-2.0
>>> print(jnp.round(jnp.log10(cov), 0))
-7.0
>>> print(isinstance(scheme, tuple))
True
>>> print(jnp.round(scheme.weights, 1))
[-2.5  0.   2.5]

See the documentation (https://probfindiff.readthedocs.io) for more examples.

Installation

pip install probfindiff

This assumes that JAX is already installed. If not, use

pip install probfindiff[cpu]

to combine probfindiff with jax[cpu].

With all dev-related setups:

pip install probfindiff[ci]

Features

  • Forward, backward, central probabilistic numerical finite differences
  • Finite differences on arbitrarily scattered point sets and in high dimensions
  • Finite difference schemes for observation noise
  • Symmetric and unsymmetric collocation ("Kansa's method")
  • Polynomial kernels, exponentiated quadratic kernels, and an API for custom kernels
  • Partial derivatives, Laplacians, divergences, compositions of differential operators, and an API for custom differential operators
  • Tutorials that explain how to use all of the above
  • Compilation, vectorisation, automatic differentiation, and everything else that JAX provides.

Check the tutorials in the documentation (https://probfindiff.readthedocs.io) for examples.

Background

The technical background is explained in the paper:

@article{kramer2022probabilistic,
  title={Probabilistic Numerical Method of Lines for Time-Dependent Partial Differential Equations},
  author={Kr{\"a}mer, Nicholas and Schmidt, Jonathan and Hennig, Philipp},
  journal={AISTATS},
  year={2022}
}

Please consider citing it if you use this repository for your research.

Related work: finite difference methods

Finite difference schemes are not new, obviously.

Python

Julia

Distinction

probfindiff does many things similarly to the packages above, but differs in more than one respect:

  • We do probabilistic numerical finite differences. Essentially, this encompasses a Gaussian process perspective on radial-basis-function-generated finite differences (provided by "RBF" above). As such, unlike all of the above, we treat uncertainty quantification and modelling as first-class citizens.
  • probfindiff uses JAX, which brings with it automatic differentiation, JIT-compilation, GPUs, and more.
  • probfindiff does not evaluate functions. Most of the packages above have an API differentiate(fn, x, scheme), whereas we use differentiate(fn(x), scheme.weights). We choose the latter because implementations simplify (we pass around arrays instead of callables), efficiency improves (it becomes obvious which quantities to reuse across multiple applications), and because users know best how to evaluate their functions (for example, whether the function is vectorised). See the sketch below.
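For example, with the API from the example above, a scheme can be constructed once and reused for several functions evaluated on the same stencil:

import jax.numpy as jnp
import probfindiff

# Build the scheme once; reuse it for every function on the same stencil.
scheme, xs = probfindiff.central(dx=0.1)

dsin, _ = probfindiff.differentiate(jnp.sin(xs), scheme=scheme)
dexp, _ = probfindiff.differentiate(jnp.exp(xs), scheme=scheme)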

probfindiff's People

Contributors

girijakar, pnkraemer, schmidtjonathan


probfindiff's Issues

Partial derivative tutorial

The "Partial derivatives" tutorial shows how to compute partial derivatives, not just at a single point but at many points at once. This is not really what it should be.

`grid` or `xs`?

The method is called from_grid, but the grid variable is usually referred to as xs. While this may be okay for tutorial notebooks, it is a bit inconsistent as part of the API.

Readme typos

The readme sometimes talks about pnfindiff where it should reference probfindiff. This needs to be fixed.

Workflows

Currently, the CI is run in the workflow via tox.
It would be useful to run the different actions separately, so that feedback on which part is broken becomes available earlier.

Desired features

All of the following should be possible with pnfindiff at some point in the future, ideally documented in example notebooks.

Basic scalar derivatives:

Evaluate the numerical derivative f'(x_0) of a function f: R -> R at point x_0

  • Central, forward, backward schemes
  • Variable derivative order
  • Variable method order
  • Numerical noise
  • Custom grid
  • Batching over x_0
  • Batching over x_0 with a non-uniform grid
  • Batching over f
  • Polynomial vs exponentiated quadratic kernel

Basic multivariate derivatives

Evaluate the numerical derivative (D f) (x_0) of a function f: R^n -> R^m at point x_0

  • Partial derivatives via schemes
  • Gradients via a tensor-product approach and using all of the above (see the sketch after this list)
  • Jacobians and Hessians via a tensor-product approach and using all of the above
  • Gradients, via an n-dimensional prior
  • (Mixed) partial derivatives for n-dimensional priors
  • Jacobians (forward and reverse mode?), Hessians, (JVPs?) via an n-dimensional prior
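A rough sketch of the tensor-product idea (plain JAX, hypothetical; the weights are taken from the 1-D central example in the README above): contracting 1-D weights along one axis of a grid of evaluations yields the corresponding partial derivative.

import jax.numpy as jnp

weights = jnp.array([-2.5, 0.0, 2.5])  # 1-D central weights for dx = 0.2
stencil = jnp.array([-0.2, 0.0, 0.2])

f = lambda x, y: x**2 + 3.0 * y        # toy function f: R^2 -> R
x0, y0 = 1.0, 2.0
fx = f(x0 + stencil[:, None], y0 + stencil[None, :])  # (3, 3) evaluations

# Contract the weights along one axis; the other axis stays at its centre.
df_dx = weights @ fx[:, 1]  # approximately 2 * x0 = 2.0
df_dy = weights @ fx[1, :]  # approximately 3.0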

Advanced features

  • Uncertainty calibration
  • Control over stencils -> which point is on boundary, etc. (https://github.com/maroba/findiff)
  • Computation of (L f)(Xn) with FD. This is not the same as batched evaluation because, for instance, boundary nodes may use forward schemes while central nodes use central schemes (https://numpy.org/doc/stable/reference/generated/numpy.gradient.html); see the sketch after this list.
  • Support for symbolic computation
  • Support for sparse matrices
  • A bunch of kernels: Matérn, (inverse) multiquadrics, thin-plate splines
  • Boundary conditions?
  • Advanced differential operators: Divergence, Laplacian, curl
  • Polar coordinates
  • complex differentials? integer-valued derivatives? (e.g. grad(f, allow_int=True, holomorphic=True))
  • Unsymmetric and symmetric collocation (with examples and docs)
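Regarding (L f)(Xn) on a full grid, here is a non-probabilistic sketch of the scheme-mixing that numpy.gradient performs (one-sided differences at the boundaries, central differences in the interior):

import jax.numpy as jnp

def gradient_1d(fx, dx):
    # Forward/backward differences at the boundary nodes, central
    # differences in the interior -- mimicking numpy.gradient.
    interior = (fx[2:] - fx[:-2]) / (2.0 * dx)
    left = (fx[1] - fx[0]) / dx
    right = (fx[-1] - fx[-2]) / dx
    return jnp.concatenate([jnp.array([left]), interior, jnp.array([right])])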

tox + sphinx-autodoc-typehints

To make tox compile the docs appropriately, I needed to repeat the sphinx-autodoc-typehints requirement in tox.ini. This is not optimal and should be resolved in the future.

Easier access to custom kernels

Currently, it is possible to pass custom kernels to the functions, but if a user has access to efficient kernel derivatives, those are hard to make good use of.

Polynomial kernels

It would be useful to have polynomial kernels, for example to sanity-check that the familiar coefficients are returned, but also for other reasons.
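A hypothetical version of that sanity check: with a polynomial kernel whose order matches the stencil, the GP-conditioning weights should reduce to the classical polynomial-fit coefficients.

import jax.numpy as jnp

def polynomial(x, y, order=2):
    # Polynomial kernel; its feature space contains polynomials up to
    # degree `order`, so conditioning reduces to a polynomial fit.
    return (1.0 + x * y) ** order

def d_polynomial(x, y, order=2):
    # d/dx of the kernel: cross-covariance between f'(x) and f(y).
    return order * y * (1.0 + x * y) ** (order - 1)

xs = jnp.array([-0.2, 0.0, 0.2])
K = polynomial(xs[:, None], xs[None, :])
LK = d_polynomial(0.0, xs)
print(jnp.linalg.solve(K, LK))  # should match the classical [-2.5, 0., 2.5]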

Vector-valued differential operators

It would be quite incredible if one could make the finite difference coefficients handle vector-valued functions/operators well. For example, the Jacobian.

One difficulty will be that the number of dimensions could become quite awkward...

1d coefficient API

The methods in coefficients_1d receive "x", but no neighbours. Instead, they assemble neighbours via dx and the specified order.
In light of this, they should not require "x" to be passed; it is meaningless.

Rename package?

The findiff part is nice and clear, but the pn bit will be a little hard to remember for the uninitiated.

Alternatives could be

  • probnum_findiff (Second place, and might have been the winner if underscores in package names were not discouraged)
  • probfindiff (Winner, because it is quite easy to remember. The "num" is implied in "findiff" :))
  • fdx
  • findiffpy
  • numdiff
  • pfindiff
  • probnumfindiff (Third place; lost because it is too long. The "num" puts its length significantly ahead of established packages like setuptools or sqlalchemy)

Public API

Let's work on the clarity and usability of the API. For example:

# ---------------------------------------------------
# Top-level interface, which mirrors JAX's autodiff and, e.g., np.gradient(), np.diff().
# Essentially, functions in here only chain coefficients.apply() with coefficients.jac()
# ---------------------------------------------------
pnfindiff.gradient(fx, axis, k=exp_quad(), acc=2)
pnfindiff.jac(fx, axis, acc)
pnfindiff.laplace(fx, axis, acc)
# (...)

# ---------------------------------------------------
# Implementation of common derivatives.
# Every grad-and-vmap-ready kernel 'k' can be used, for example those in the utils.
# Points to `collocation`
# ---------------------------------------------------
dfx, base_unc = pnfindiff.coefficients.apply(f, weights=(w, sc), indices=idcs)
dfx, base_unc = pnfindiff.coefficients.apply_along_axis(f, weights=(w, sc), axis=1)
(w, sc), idcs, info = pnfindiff.coefficients.derivative(*, xs, k, acc=2)
(w, sc), idcs, info = pnfindiff.coefficients.derivative_higher(*, order, xs, k=None, acc=2)
(w, sc), idcs, info = pnfindiff.coefficients.gradient(xs, k, acc=2)
(w, sc), idcs, info = pnfindiff.coefficients.laplace(xs, k, acc=2)
(w, sc), idcs, info = pnfindiff.coefficients.divergence(xs, k, acc=2)
(w, sc), idcs, info = pnfindiff.coefficients.jac(xs, k, acc=2)
(w, sc), idcs, info = pnfindiff.coefficients.approx(L, xs, k, *, order=2, acc=2)  # the magic stuff. maybe move to collocation?


# ---------------------------------------------------
# Calibration API
# ---------------------------------------------------
output_scale = pnfindiff.stats.mle_output_scale(f, weights=(w, sc), info=info)
logpdf = pnfindiff.stats.logpdf(f, weights=(w, sc), info=info)

# ---------------------------------------------------
# API for 1d coefficients. 
# This is like in FinDiff, but for PNFinDiff, 
# I suppose such a module has only didactic purposes.
# Stack copies of coefficients for apply()-conformity.
# Points to `collocation`
# ---------------------------------------------------
(w, sc), idx, info = pnfindiff.coefficients_1d.forward(dx, deriv, acc)
(w, sc), idx, info = pnfindiff.coefficients_1d.center()
(w, sc), idx, info = pnfindiff.coefficients_1d.backward()
(w, sc), idx, info = pnfindiff.coefficients_1d.from_offset()

# ---------------------------------------------------
# Collocation implementations. This is where the magic happens, but only a few will look at it.
# ---------------------------------------------------
pnfindiff.collocation.prepare_gram(x, xs, ks)
pnfindiff.collocation.unsymmetric(K, LK, LLK)

# ---------------------------------------------------
# Simplify our life via AD utility functions
# ---------------------------------------------------
pnfindiff.utils.autodiff.deriv_scalar(fun, **kwargs)
pnfindiff.utils.autodiff.div(fun, **kwargs)
pnfindiff.utils.autodiff.laplace(fun, **kwargs)
pnfindiff.utils.autodiff.compose(L1, L2)

# ---------------------------------------------------
# ... and kernel utility functions
# ---------------------------------------------------
pnfindiff.utils.kernel.batch_gram(k)
k, lk, llk = pnfindiff.utils.kernel.differentiate(k)
pnfindiff.utils.kernel.zoo.polynomial()
k, lk, llk = pnfindiff.utils.kernel.zoo.exp_quad()

Calibration

It should be made really easy to calibrate the uncertainties. At the moment, this is a bit cumbersome.
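One standard route under the GP model is the closed-form maximum-likelihood estimate of a global output scale; a hypothetical sketch, not an existing probfindiff interface:

import jax.numpy as jnp

def mle_output_scale(fx, K):
    # For a prior f ~ GP(0, sigma^2 * k) with Gram matrix K, the
    # log-likelihood of the evaluations fx is maximised at
    # sigma^2 = fx^T K^{-1} fx / n; the FD covariance rescales by sigma^2.
    n = fx.shape[0]
    return jnp.sqrt(fx @ jnp.linalg.solve(K, fx) / n)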

Numerical noise

It would be good to have an interface for computing finite difference samples based on noisy observations of a function.
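In the GP formulation, one conventional way to model noisy evaluations is a diagonal "nugget" on the Gram matrix; a hypothetical sketch (names invented for illustration):

import jax.numpy as jnp

def weights_with_noise(K, LK, noise_variance):
    # Observation noise enters as a diagonal perturbation of the Gram
    # matrix before solving for the finite difference weights.
    K_noisy = K + noise_variance * jnp.eye(K.shape[0])
    return jnp.linalg.solve(K_noisy, LK)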

Coefficient variable

At the moment, both coeffs and coefficients are used throughout the repo. This seems a bit inconsistent.

Boundary conditions

It would be great to have a neat way of handling boundary conditions. I am not 100% sure what the API should be.
