anucvml / ddn
Deep Declarative Networks
License: MIT License
Implement and test gradients for r and c in the optimal transport layer as H^{-1}A^T(AH^{-1}A^T)^{-1}(AH^{-1}B - C) - H^{-1}B, for appropriate B and C.
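For reference, a minimal numerical sketch of that expression using torch.linalg; the shapes and the random H, A, B, C below are placeholders (the real B and C come from the DDN derivation), so this only checks that the algebra composes:

import torch

# Dy = H^{-1} A^T (A H^{-1} A^T)^{-1} (A H^{-1} B - C) - H^{-1} B
# Placeholder shapes: H (n, n) positive definite, A (p, n), B (n, m), C (p, m).
n, p, m = 5, 2, 3
H = torch.randn(n, n)
H = H @ H.t() + n * torch.eye(n)  # make H positive definite
A = torch.randn(p, n)
B = torch.randn(n, m)
C = torch.randn(p, m)

Hinv_At = torch.linalg.solve(H, A.t())  # H^{-1} A^T without forming H^{-1}
Hinv_B = torch.linalg.solve(H, B)       # H^{-1} B
S = A @ Hinv_At                         # A H^{-1} A^T
Dy = Hinv_At @ torch.linalg.solve(S, A @ Hinv_B - C) - Hinv_B
print(Dy.shape)  # (n, m)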
I have an optimization problem with two target variables in the following form:

[equation]

where

[variable definitions]

Thank you in advance for your answer.
Update the code to use the linear algebra routines in torch.linalg, and support PyTorch v1.9.0 and above only.
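As a sketch of the kind of migration implied, assuming the usual mapping from the deprecated routines to their torch.linalg counterparts in v1.9+:

import torch

A = torch.randn(4, 4)
A = A @ A.t() + 4 * torch.eye(4)  # positive definite example matrix
b = torch.randn(4, 1)

L = torch.linalg.cholesky(A)            # was: torch.cholesky(A)
x = torch.linalg.solve(A, b)            # was: torch.solve(b, A)
ls = torch.linalg.lstsq(A, b).solution  # was: torch.lstsq(b, A)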
Evaluate using QR decomposition for the least squares PyTorch node instead of inverting A^TA.
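A minimal sketch of the comparison, assuming the node currently solves the normal equations; QR avoids forming A^T A, whose condition number is the square of A's:

import torch

A = torch.randn(100, 8)
b = torch.randn(100, 1)

# Normal equations: solve (A^T A) x = A^T b.
x_ne = torch.linalg.solve(A.t() @ A, A.t() @ b)

# QR alternative: A = QR, then solve the triangular system R x = Q^T b.
Q, R = torch.linalg.qr(A)  # reduced QR: Q is (100, 8), R is (8, 8)
x_qr = torch.linalg.solve_triangular(R, Q.t() @ b, upper=True)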
Hello,
I'm using the released code of your pnp_node.py. Since my inputs are batched points, each with a different pose, I would like to use this operation:
# Alternatively, disentangle batch element optimization:
# for i in range(p2d.size(0)):
#     Ki = K[i:(i+1), ...] if K is not None else None
#     theta[i, :] = self._run_optimization(p2d[i:(i+1), ...], p3d[i:(i+1), ...],
#                                          w[i:(i+1), ...], Ki, y=theta[i:(i+1), ...])
However, I find that the upper-level function does not update the w value.
I printed theta.grad to check whether the gradient is calculated, and found that theta[i:(i+1),...].grad is None.
Maybe when the optimization is done, the slice or copy ops do not carry over the grad value.
Is there any way to solve this problem?
I would very much appreciate your advice.
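For what it's worth, a minimal illustration of why the sliced .grad reads as None: .grad is only populated on leaf tensors, and a slice like theta[i:(i+1), ...] is a view created by an op, so it is not a leaf. This sketch is generic PyTorch, not the pnp node itself:

import torch

theta = torch.zeros(4, 6, requires_grad=True)  # leaf tensor
loss = (theta[0:1, ...] ** 2).sum()
loss.backward()

print(theta[0:1, ...].grad)  # None: the slice is a non-leaf view
print(theta.grad[0:1, ...])  # the gradient lives on the full leaf tensor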
Hello, I am currently working on a problem where the cost matrix M of shape (D, D) is essentially fixed, while r and c are batched, with shapes (B, D) and (C, D) respectively. Is there a way to adapt the layer so that it computes an OT loss of shape (B, C)? Thank you very much.
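One possible workaround, sketched under the assumption that the layer exposes a batched loss taking (M, r, c) with matching batch sizes (the ot_loss callable below is a hypothetical stand-in, not the repo's API), is to tile r and c into all B*C pairs against the shared M:

import torch

def pairwise_ot_losses(ot_loss, M, r, c):
    # ot_loss: hypothetical batched loss taking M (N, D, D), r (N, D),
    # c (N, D) and returning (N,) losses.
    B, D = r.shape
    Cn = c.shape[0]
    r_rep = r.unsqueeze(1).expand(B, Cn, D).reshape(B * Cn, D)
    c_rep = c.unsqueeze(0).expand(B, Cn, D).reshape(B * Cn, D)
    M_rep = M.unsqueeze(0).expand(B * Cn, D, D)
    return ot_loss(M_rep, r_rep, c_rep).reshape(B, Cn)

# Purely illustrative stand-in loss, just to check the shapes:
M = torch.rand(5, 5)
r = torch.softmax(torch.randn(3, 5), dim=-1)
c = torch.softmax(torch.randn(4, 5), dim=-1)
dummy = lambda M, r, c: (M * r.unsqueeze(2) * c.unsqueeze(1)).sum(dim=(1, 2))
print(pairwise_ot_losses(dummy, M, r, c).shape)  # torch.Size([3, 4])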
Could you please provide a tutorial on perspective-n-point with your DDNs?
Hi,
I have an equality-constrained problem which I solve with RANSAC and then refine, in a similar fashion to your pnp node, with an auxiliary objective function (as the RANSAC objective is typically non-differentiable); I approximately enforce the constraints by adding them to the objective only during the refinement. This seems to work well and my constraints are still fulfilled after refining. However, I still get objective gradients which cannot be solved exactly from my constraints:
UserWarning: Non-zero Lagrangian gradient at y:
[15.481806 -9.70834 -7.652554 18.65691 3.6125593 11.075308
0.03811455 11.670857 13.675308 ]
fY: [ 2.615292 5.0672874 -7.8673334 57.839783 12.556461 29.84853
-5.591362 1.9208729 -3.0231378]
It can be seen that LY is smaller than fY, but not 0. Have you had any similar experiences? Are there optimization tricks that could be employed here? I should note that my constraints are overspecified and could be reduced, but I'm not sure whether that would help.
I suppose the issue could also come from the fact that my constraints are only approximately satisfied after my optimization, though they are very close to fulfilled (within about 1e-8).
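For context, a small sketch of the stationarity check behind that warning (the names fY and DhY are placeholders for the objective gradient and constraint Jacobian at y): the multipliers are the least-squares solution of fY ≈ DhY^T λ, and the warning reports the leftover residual, which can only be non-zero when fY is not in the row space of DhY:

import torch

n, p = 9, 4  # n variables, p (possibly dependent) constraints
fY = torch.randn(n)      # objective gradient at y
DhY = torch.randn(p, n)  # constraint Jacobian at y

lam = torch.linalg.lstsq(DhY.t(), fY.unsqueeze(1)).solution.squeeze(1)
LY = fY - DhY.t() @ lam  # Lagrangian gradient residual
print(LY.norm())         # ~0 iff y is exactly stationary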
Hi there, I really appreciate your work and I see big potential in it.
I'm trying to embed a DDN layer into my work, and I'm having trouble with it, which has been bothering me for weeks. It seems that my implementation of a DDN layer does not properly backpropagate gradients.
In detail, I want to use DDN to perform a least-squares minimization, say,

[equation]

I'm not sure if I implemented it the right way. When I implement the solve method, do I have to detach all the input variables? When I call the solve method, do I have to put it inside torch.no_grad()? And do I have to manually call y.requires_grad_() after y is solved? I have tried it with and without each of the above, but it did not seem to work properly; I think I must have missed something.
Looking forward to your reply.
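In case it helps to compare against a working pattern, here is a minimal least-squares node sketch, assuming the ddn.pytorch.node API (AbstractDeclarativeNode plus DeclarativeLayer) and following the pattern in the repo's examples: solve detaches its inputs, runs under torch.no_grad(), and returns the detached solution; re-enabling gradients on y for the declarative gradient is handled by the layer, not by the user:

import torch
from ddn.pytorch.node import AbstractDeclarativeNode, DeclarativeLayer

class LeastSquaresNode(AbstractDeclarativeNode):
    # Solves y* = argmin_y ||A y - x||^2 for a fixed matrix A of shape (m, n).
    def __init__(self, A):
        super().__init__()
        self.A = A

    def objective(self, x, y):
        # Written in differentiable PyTorch ops; one value per batch element.
        residual = torch.einsum('mn,bn->bm', self.A, y) - x
        return (residual ** 2).sum(dim=-1)

    def solve(self, x):
        # Detach inputs and solve without tracking the solver's graph.
        with torch.no_grad():
            y = torch.linalg.lstsq(self.A, x.detach().t()).solution.t()
        return y.detach(), None  # (solution, optional context)

A = torch.randn(8, 4)
layer = DeclarativeLayer(LeastSquaresNode(A))
x = torch.randn(2, 8, requires_grad=True)
y = layer(x)
y.sum().backward()
print(x.grad.shape)  # gradients flow back through the declarative layer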
PyTorch is (slowly) introducing vmap: https://pytorch.org/docs/master/generated/torch.vmap.html?highlight=vmap#torch.vmap
When this feature becomes stable, it seems like a great addition for Jacobian calculation, probably giving additional performance.
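A sketch of what that could look like for the batched Jacobians here, using the composable-transform API that now lives in torch.func (in older releases the same functions came from functorch):

import torch
from torch.func import jacrev, vmap  # functorch.{jacrev, vmap} in older releases

W = torch.randn(4, 3)

def f(x):  # per-sample map: (4,) -> (3,)
    return torch.tanh(x @ W)

xs = torch.randn(8, 4)     # batch of 8 samples
jac = vmap(jacrev(f))(xs)  # batched Jacobian without a Python loop
print(jac.shape)           # torch.Size([8, 3, 4])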
The for-loop involving
Lines 138 to 145 in 1240a69
and
Lines 254 to 256 in 1240a69
can simply be replaced with:
fXY = lambda x: self._batch_jacobian(fY, x)
gradients = []
for x, size in zip(xs, xs_sizes):
    if x.requires_grad:
        gradient = torch.einsum('byx,by->bx', fXY(x), u).reshape(size)
        gradients.append(gradient)
    else:
        gradients.append(None)
I am not able to see any issues with this solution. There would need to be some changes to the constrained nodes as well, however, to accommodate it. Perhaps there is something I'm missing; please let me know if that's the case.
Error related to Cholesky factorisation in optimal_transport.py, here at line 144. Adding a small constant eps does not solve the problem, even at eps=1e-1.
OS: Ubuntu 20.04.1 LTS
PyTorch: 1.7.1
CUDA Toolkit: 10.2.89
Python: 3.7.9
Data: data.zip
Code:
import torch
src = 'data'
data = torch.load(src)
r = data['r']
W = data['W']; H = data['H']
P = data['P']; PdivC = data['PdivC']
# A small constant
eps = data['eps']
# Scale the constant if needed
# eps *= 1000
print(eps.max())
# Raises the Cholesky error: the matrix is not numerically positive definite.
M = torch.diag_embed(r[:, 1:H]) - torch.einsum("bij,bkj->bik", P[:, 1:H, 0:W], PdivC) + eps
block_11 = torch.cholesky(M)
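One generic fallback, sketched here rather than taken from the ddn codebase, is to symmetrise the matrix (einsum round-off can break exact symmetry) and retry with growing diagonal jitter:

import torch

def safe_cholesky(M, jitter=1e-6, max_tries=6):
    # Symmetrise first, then retry Cholesky with increasing diagonal jitter.
    M = 0.5 * (M + M.transpose(-2, -1))
    identity = torch.eye(M.size(-1), dtype=M.dtype, device=M.device)
    for i in range(max_tries):
        try:
            return torch.cholesky(M + (jitter * 10 ** i) * identity)
        except RuntimeError:
            continue
    raise RuntimeError('matrix not positive definite even with jitter')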