hdembinski / jacobi
Numerical derivatives for Python
License: MIT License
When passing the covariance as a numpy array, the Indexable type generates linter warnings of this kind:

Argument of type "NDArray[float64]" cannot be assigned to parameter "cov" of type "float | Indexable[float] | Indexable[Indexable[float]]" in function "propagate"
  Type "NDArray[float64]" cannot be assigned to type "float | Indexable[float] | Indexable[Indexable[float]]"
    "NDArray[float64]" is incompatible with "float"
    "NDArray[float64]" is incompatible with "Indexable[float]"
    "NDArray[float64]" is incompatible with "Indexable[Indexable[float]]"

Pylance [reportGeneralTypeIssues](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportGeneralTypeIssues)
It depends on the function. For example, I have two functions:

def fun1(x):
    # some calculations that estimate res (res is an np.ndarray)
    return res

def fun2(x):
    return res

I don't want to call fun1 again, because that repeats the calculation (I already called fun1 earlier in my code), so I store res and define fun2. Why does the result depend on the calculations inside fun1? The res values are equal in fun1 and fun2.

jac_mat = jacobi(fun1, x)[0]  # gives me a matrix with non-zero values
jac_mat = jacobi(fun2, x)[0]  # gives me a matrix with zero values
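This behaviour follows from the definition of the derivative rather than from anything jacobi-specific: fun2 ignores its argument, so every partial derivative is exactly zero. A plain central-difference sketch (the concrete fun1 below is hypothetical, standing in for the poster's function) shows the same effect:

```python
import numpy as np

def central_diff_jacobian(f, x, h=1e-6):
    # simple central-difference Jacobian (illustration, not jacobi's algorithm)
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(f(x))
    jac = np.empty((y0.size, x.size))
    for i in range(x.size):
        xp = x.copy(); xp[i] += h
        xm = x.copy(); xm[i] -= h
        jac[:, i] = (np.asarray(f(xp)) - np.asarray(f(xm))) / (2 * h)
    return jac

def fun1(x):
    # hypothetical stand-in for "some calculations"
    return np.array([x[0] ** 2 + x[1], x[0] * x[1]])

x = np.array([1.0, 2.0])
res = fun1(x)  # result remembered from an earlier call

def fun2(x):
    return res  # ignores x, so all partial derivatives are zero

j1 = central_diff_jacobian(fun1, x)  # non-zero matrix
j2 = central_diff_jacobian(fun2, x)  # all zeros: perturbing x changes nothing
```

Any derivative estimator perturbs x and observes the change in the output; a function that returns a cached constant never changes, so its Jacobian is the zero matrix.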
Test and improve support for Numpy masked arrays.
Jacobi raises a ValueError if a numpy.ma.core.MaskedConstant is encountered as an input.
Often, one has a function that takes independent arguments. Propagating uncertainty through such a function is currently a bit cumbersome: one has to combine the arguments into a single vector and the covariances into a joint block covariance matrix. propagate could be made more flexible to make this easier. I am not sure what the signature should look like, though.
def func(a, b):
    ...

a = [...]
b = [...]
cov_a = [...]
cov_b = [...]

# option 1
propagate(func, a, b, cov_a, cov_b)
# option 2
propagate(func, a, cov_a, b, cov_b)
# option 3
propagate(func, (a, cov_a), (b, cov_b))
Options 1 and 2 are natural extensions of the current calling convention, just the order of values and covariances differ. I am not sure what is most natural. I am leaning towards option 2.
Option 3 requires more typing and is not really much safer.
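For reference, the current workaround described above (one packed vector, one block covariance matrix) can be sketched like this; func and the numbers are hypothetical:

```python
import numpy as np

def func(a, b):
    # hypothetical function of two independent inputs
    return a * b

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
cov_a = np.diag([0.10, 0.10])
cov_b = np.diag([0.20, 0.20])

# pack the arguments into one vector and the covariances into
# a joint block-diagonal covariance matrix
x = np.concatenate([a, b])
cov = np.zeros((x.size, x.size))
cov[: a.size, : a.size] = cov_a
cov[a.size :, a.size :] = cov_b

def wrapped(x):
    # unpack the vector back into the original arguments
    return func(x[: a.size], x[a.size :])

# then: y, ycov = propagate(wrapped, x, cov)
```

A multi-argument signature would let propagate do this packing internally.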
A challenging test case is the function y = np.log(1 + a1 * x + a2 * x**2)
with
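For this function the analytic Jacobian with respect to the parameters is easy to write down and compare against a finite difference: dy/da1 = x / (1 + a1*x + a2*x**2) and dy/da2 = x**2 / (1 + a1*x + a2*x**2). The parameter values below are hypothetical, chosen only for illustration (the concrete values of the test case are not shown here); the case is presumably challenging because the log's argument can approach zero, where the derivatives blow up:

```python
import numpy as np

# hypothetical parameter values for illustration only
a1, a2 = 0.5, -0.1
x = np.linspace(0.1, 1.0, 5)

def y(a1, a2, x):
    return np.log(1 + a1 * x + a2 * x**2)

# analytic partial derivatives with respect to a1 and a2
denom = 1 + a1 * x + a2 * x**2
dy_da1 = x / denom
dy_da2 = x**2 / denom

# quick central-difference cross-check of dy/da1
h = 1e-6
num_da1 = (y(a1 + h, a2, x) - y(a1 - h, a2, x)) / (2 * h)
```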
From discussion with @mdhaber
It looks like step[0] is a relative (to the value of x) initial step size and step[1] is a reduction factor for each iteration. However, when diagonal=True, step[0] effectively becomes an absolute step due to the way the wrapper function works.
@HDembinski, Jun 26, 2023
Yes, that's a speed trade-off. I am assuming here that the x-values are roughly of equal scale and don't vary a lot in magnitude. If they do, then diagonal=True should not be used. This needs to be properly documented at the very least.
The ideal solution in my view would be an algorithm that first groups x-values of similar magnitude together in blocks and then does the calculation using the same absolute step for those blocks. The speed of jacobi comes from doing array calculations as much as possible. Such an algorithm would give the same result as diagonal=True in the ideal case and would fall back to the slow "one-x value at a time" worst case if necessary.
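The grouping step of that idea can be sketched as follows; this is an illustration of the proposed algorithm, not code from jacobi:

```python
import numpy as np

def magnitude_blocks(x):
    # group indices of x by order of magnitude, so that each block
    # could share one absolute step size (assumes nonzero entries)
    x = np.asarray(x, dtype=float)
    mag = np.floor(np.log10(np.abs(x))).astype(int)
    blocks = {}
    for i, m in enumerate(mag):
        blocks.setdefault(m, []).append(i)
    return blocks

x = [1.0, 2.0, 3000.0, 0.002, 4500.0]
blocks = magnitude_blocks(x)
# -> {0: [0, 1], 3: [2, 4], -3: [3]}
```

Each block can then be differentiated as one array calculation; in the worst case of all-distinct magnitudes this degrades to one x-value at a time, as described above.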
Hi. First, thank you for your nice library! I have questions about Jacobian matrix calculation.
I solve a minimization problem with scipy.minimize (Nelder-Mead); my residual function is
np.mean(np.power(y_fact - y_predicted, 2)),
and I want to calculate the Jacobian matrix to compute a confidence interval for each optimal parameter (params):
y_predicted = y_predicted(params, x).
Your library is based on DERIVEST, and there is an example in the code:
When I do something like this:
I got not a matrix but a vector with shape (2,).
Is it possible to get a Jacobian matrix like this, where f(P, x) --> y_predicted(params, x) and a is each parameter from params?
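A vector of shape (2,) suggests that a scalar-valued function (such as the residual) was differentiated. To get the full matrix dy_i/dp_j, one would instead differentiate a closure over params that returns the whole y_predicted vector; with jacobi that would look something like jacobi(lambda p: y_predicted(p, x), params). A self-contained central-difference sketch of the same idea (the exponential model below is hypothetical):

```python
import numpy as np

def y_predicted(params, x):
    # hypothetical model standing in for the poster's y_predicted(params, x)
    a, b = params
    return a * np.exp(-b * x)

x = np.linspace(0.0, 1.0, 5)
params = np.array([2.0, 0.5])

def jac(f, p, h=1e-6):
    # central-difference Jacobian of f with respect to p
    # (illustration, not jacobi's algorithm)
    y0 = np.asarray(f(p))
    out = np.empty((y0.size, p.size))
    for j in range(p.size):
        pp = p.copy(); pp[j] += h
        pm = p.copy(); pm[j] -= h
        out[:, j] = (np.asarray(f(pp)) - np.asarray(f(pm))) / (2 * h)
    return out

# differentiate the model output, not the scalar residual
J = jac(lambda p: y_predicted(p, x), params)  # shape (len(x), len(params))
```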
Hello Hans,
Thank you for this project, we needed something like this for a long time and we had tried to implement it ourselves privately, but given our limited programming skills our implementations were far from what you have done.
However, I have a suggestion. Many of us are trying to get away from ROOT, but there are things like
https://root.cern.ch/doc/master/classTEfficiency.html
which are constantly needed to calculate efficiency errors. I wonder whether Jacobi is a good place to add this feature. From what I saw in the documentation, it is not implemented yet.
Cheers.
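For context, the simplest version of what TEfficiency computes is a binomial efficiency with its uncertainty. A normal-approximation sketch is below; note that TEfficiency itself also offers more robust intervals (e.g. Clopper-Pearson), which this toy calculation does not reproduce:

```python
import numpy as np

def efficiency_error(k, n):
    # binomial efficiency k/n with normal-approximation uncertainty;
    # breaks down for k near 0 or n, where interval methods are needed
    eff = k / n
    err = np.sqrt(eff * (1.0 - eff) / n)
    return eff, err

eff, err = efficiency_error(80, 100)
# -> eff = 0.8, err = 0.04
```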