juliasmoothoptimizers / LDLFactorizations.jl
Factorization of Symmetric Matrices
License: GNU Lesser General Public License v3.0
using LinearAlgebra, SparseArrays, LDLFactorizations

A = sprand(Float64, 10, 5, 0.25)
n, m = size(A)
λ = 1.0e-3
K1 = triu([sparse(1.0I,n,n) A; A' -λ^2 * I])
S1 = ldl(Symmetric(K1, :U))
K2 = [sparse(1.0I,n,n) A; A' -λ^2 * I]
S2 = ldl(Symmetric(K2, :U))
b = ones(m+n)
norm(S1 \ b - S2 \ b) # not zero
Hi! I think this may be a method error. Do we still need L521-L577?
For example, METIS provides both the permutation and its inverse, so it’s not necessary to recompute it in the symbolic analysis.
It would be convenient for the user to call the symbolic and numerical phases explicitly without having to manage the temporary storage.
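A sketch of what a user-facing two-phase API could look like (the names ldl_analyze and ldl_factorize! are illustrative here, not a documented interface):

using LDLFactorizations, SparseArrays, LinearAlgebra
A = sprand(10, 10, 0.25); A = A + A' + 10I   # symmetric positive definite test matrix
S = ldl_analyze(A)       # symbolic phase: ordering, elimination tree, storage
ldl_factorize!(A, S)     # numeric phase: fills L and D in S's preallocated arrays
x = S \ ones(10)
# if only the numerical values of A change, rerunning ldl_factorize! suffices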
The positive_semidefinite example in test_real.jl appears to fail the invariant permute(A, P, P) == (L + I) * D * (L + I)'. The example is:
A = [
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 5.0 0.0
2.0 4.0 5.0 -2.0 4.0 1.0 2.0 2.0 2.0 0.0
0.0 0.0 0.0 0.0 1.0 9.0 9.0 1.0 7.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 3.0 2.0 1.0 4.0 3.0 1.0 0.0 0.0 7.0
-3.0 8.0 0.0 0.0 0.0 0.0 -2.0 0.0 0.0 1.0
0.0 0.0 0.0 5.0 7.0 9.0 0.0 2.0 7.0 1.0
3.0 2.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 2.0
0.0 0.0 0.0 0.0 -3.0 -4.0 0.0 0.0 0.0 0.0
]
ms = SparseMatrixCSC{Float64, Int64}(A*A')
10×10 SparseMatrixCSC{Float64, Int64} with 78 stored entries:
16.0 20.0 8.0 28.0 4.0 ⋅ ⋅ 28.0 12.0 ⋅
20.0 25.0 10.0 35.0 5.0 ⋅ ⋅ 35.0 15.0 ⋅
8.0 10.0 78.0 47.0 2.0 43.0 22.0 45.0 28.0 -16.0
28.0 35.0 47.0 214.0 7.0 47.0 -17.0 140.0 35.0 -39.0
4.0 5.0 2.0 7.0 1.0 ⋅ ⋅ 7.0 3.0 ⋅
⋅ ⋅ 43.0 47.0 ⋅ 90.0 26.0 67.0 24.0 -24.0
⋅ ⋅ 22.0 -17.0 ⋅ 26.0 78.0 1.0 7.0 ⋅
28.0 35.0 45.0 140.0 7.0 67.0 1.0 209.0 29.0 -57.0
12.0 15.0 28.0 35.0 3.0 24.0 7.0 29.0 36.0 ⋅
⋅ ⋅ -16.0 -39.0 ⋅ -24.0 ⋅ -57.0 ⋅ 25.0
Now let's compute the LDL' factorization and check:
LDLT = ldl(ms)
The left-hand side of the identity is
permute(ms, LDLT.P, LDLT.P)
10×10 SparseMatrixCSC{Float64, Int64} with 78 stored entries:
25.0 ⋅ -24.0 ⋅ ⋅ -16.0 -39.0 ⋅ -57.0 ⋅
⋅ 78.0 26.0 ⋅ ⋅ 22.0 -17.0 ⋅ 1.0 7.0
-24.0 26.0 90.0 ⋅ ⋅ 43.0 47.0 ⋅ 67.0 24.0
⋅ ⋅ ⋅ 16.0 20.0 8.0 28.0 4.0 28.0 12.0
⋅ ⋅ ⋅ 20.0 25.0 10.0 35.0 5.0 35.0 15.0
-16.0 22.0 43.0 8.0 10.0 78.0 47.0 2.0 45.0 28.0
-39.0 -17.0 47.0 28.0 35.0 47.0 214.0 7.0 140.0 35.0
⋅ ⋅ ⋅ 4.0 5.0 2.0 7.0 1.0 7.0 3.0
-57.0 1.0 67.0 28.0 35.0 45.0 140.0 7.0 209.0 29.0
⋅ 7.0 24.0 12.0 15.0 28.0 35.0 3.0 29.0 36.0
but the right-hand side is
(LDLT.L+I) * LDLT.D * (LDLT.L+I)'
10×10 SparseMatrixCSC{Float64, Int64} with 16 stored entries:
25.0 ⋅ -24.0 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ 78.0 26.0 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
-24.0 26.0 90.0 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 16.0 20.0 ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 20.0 25.0 ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ 0.0 ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0
We can see that they agree on the leading 5×5 block but disagree elsewhere. This is probably because D has too many zeros on the diagonal:
LDLT.D
10×10 Diagonal{Float64, Vector{Float64}}:
25.0 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ 78.0 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ 58.2933 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 16.0 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ 0.0 ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0
The factorization works with the upper triangle of the input matrix, not the lower triangle as stated in the README. Thanks to @Goysa for reporting this error.
It would be useful to have an option to modify the factorization so that nonpositive pivots are made positive, i.e., to compute a sparse modified Cholesky factorization. More generally, we could compute a modified sparse LDL factorization by ensuring that pivots are not too close to zero while preserving their sign.
A first modified Cholesky could be that of Gill, Murray and Wright, e.g., https://link.springer.com/article/10.1007/s10107-007-0177-6, Section 2.1.
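The pivot-modification rule itself is simple to state. A minimal sketch follows (a real modified factorization would apply this during elimination, since each pivot affects later columns; the assumption here is that the pivots are reachable as a vector such as S.d):

# clamp pivots away from zero while preserving their sign
function modify_pivots!(d::AbstractVector{T}, δ::T) where {T <: Real}
    for i in eachindex(d)
        if abs(d[i]) < δ
            d[i] = d[i] < 0 ? -δ : δ   # push tiny pivots to ±δ
        end
    end
    return d
end
# usage sketch: S = ldl(A); modify_pivots!(S.d, sqrt(eps(Float64)))
# a Gill-Murray-Wright-style modified Cholesky would instead force every
# pivot positive, e.g. d[i] = max(abs(d[i]), δ)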
cc @Goysa
Sometimes the sparsity structure of a problem can change during nonlinear optimization. It would be useful to be able to update the symbolic analysis of the matrix in place, to avoid reallocating all the memory when this happens.
Would you be open to a PR implementing this?
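For concreteness, the requested workflow might look like this (ldl_analyze and ldl_factorize! as in the two-phase sketch earlier; ldl_analyze! is hypothetical):

S = ldl_analyze(A)      # symbolic analysis for A's current sparsity pattern
# ... the pattern of A changes during the nonlinear optimization ...
ldl_analyze!(S, A)      # hypothetical: redo the analysis in place, reusing S's arrays
ldl_factorize!(A, S)    # numeric factorization into the existing storage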
We use an LDL' factorization in Convex.jl to check for PSD-ness of real- and complex-valued hermitian matrices.
Being able to use this package with complex values and arbitrary arithmetic would greatly simplify things for us.
Just chased a type instability in some code back to the getproperty method for LDLFactorizations here. Using getfield instead is a workaround, but it would be nice if the dot-property syntax were type-stable, or if there were type-stable "getter" functions for the factorization's fields.
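To illustrate the workaround and the kind of getter meant here (assuming the diagonal is stored in a field literally named d; adjust to the actual field names):

using LDLFactorizations, SparseArrays, LinearAlgebra
S = ldl(sparse(Float64[4 1; 1 3]))
d1 = S.d                # goes through the overloaded getproperty, reported type-unstable here
d2 = getfield(S, :d)    # workaround: bypasses getproperty
# a possible type-stable getter the package could export:
diag_of_D(S::LDLFactorizations.LDLFactorization) = getfield(S, :d)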
Please don't consider this a bashing of a good effort to equip Julia with a general native solver. It can be useful to some people, no doubt. But for sufficiently large problems, these factorizations are hopeless.
Example: a 3D model of a rectangular cantilever block, with roughly 100k equations and 7M nonzeros.
Solver                 Time
--------------------   ----------------
LDLFactorizations      1414 seconds
SuiteSparse ldlt       1477 seconds
Skyline ldlt            331 seconds
SuiteSparse cholesky     11 seconds (!!!)
That is a two-orders-of-magnitude slowdown! This is not a case of the implementation being too slow; it is a case of the algorithm not being fast enough.
My point is: we really need a native implementation of one of the "supernodal" factorizations. It is not enough to have access to HSL, MUMPS, or SuiteSparse through dynamically loaded libraries; that leads to duplicated memory and prevents these methods from reaching their full potential in Julia.
It would be interesting to use vectorization macros such as @simd or @avx to increase the speed of this package and, if possible, to add GPU support. However, this would probably require changing most of the code, since it uses constructs such as continue and break, and loops whose iterations are not independent.
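For illustration, here is the kind of loop @simd can annotate, as opposed to the dependent loops in the factorization kernels (a generic example, not code from this package):

# independent iterations: safe to annotate with @simd
function axpy_simd!(y::Vector{Float64}, a::Float64, x::Vector{Float64})
    @inbounds @simd for i in eachindex(x, y)
        y[i] += a * x[i]
    end
    return y
end
# a loop that breaks early, skips iterations with continue, or carries a
# dependency between iterations (as in sparse triangular solves) cannot be
# annotated this way without restructuring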
A recent conversation in the #intervals channel on Slack asks:
is it reasonably easy to do?
bottine 01:39
I'm wondering how much work would be needed to be able to plug interval types into tulip
Jeffrey Sarnoff 2 hours ago
:wink: ? ask @mtanneau
Slackbot 2 hours ago
OK! I’ve invited @mtanneau to this channel.
Mathieu Tanneau 2 hours ago
Hmm :male-detective:
Mathieu Tanneau 2 hours ago
Off the top of my head, to support computations in arithmetic T,
it would need the following methods to be implemented (the list is non-exhaustive):
eps(T) to compute tolerances
sqrt(::T) to be used within factorizations
abs(::T) for some tolerances & step size computation
classical operations +, *, -, /
binary comparisons like >= and <=
Some norm(::Vector{T}, Inf) computations
Mathieu Tanneau 1 hour ago
The first step would be trying to compute an LDLt factorization of
a symmetric positive definite matrix with Interval eltype, using LDLFactorizations.jl.
If that fails badly and can't be resolved, then there's little chance Tulip would work.
Jeffrey Sarnoff 1 hour ago
IntervalArithmetic.jl supports all of those arithmetic T ops
(for norm(::Vector{T}, Inf) first do using LinearAlgebra). (edited)
Jeffrey Sarnoff 1 hour ago
@dpsanders do we have LDLt ___ with Interval eltype?
Mathieu Tanneau 30 minutes ago
Damn, I thought LDLFactorizations.jl supported any arithmetic... Turns out it requires T <: Real.
It might be fixable by "just" relaxing the type restrictions in the LDLFactorizations code.
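As a sketch of that first experiment with a generic eltype (BigFloat already satisfies the T <: Real restriction; an Interval eltype is the case that may need the restriction relaxed):

using LDLFactorizations, SparseArrays, LinearAlgebra
A = sparse(BigFloat[4 1; 1 3])   # small symmetric positive definite test matrix
S = ldl(A)                       # exercises the generic arithmetic listed above
b = BigFloat[1, 2]
x = S \ b
norm(A * x - b, Inf)             # residual, computed in extended precision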
In the example below:
julia> using LDLFactorizations
julia> ldl([ 2 1 -83/40;
1 43/20 0
-83/40 0 5]).L
3×3 SparseArrays.SparseMatrixCSC{Float64,Int64} with 2 stored entries:
[3, 1] = -0.415
[3, 2] = 0.465116
The entries (1, 1), (2, 2), and (3, 3) with value 1 are missing. If this is by design, it should perhaps be stated in the README that we don't have L * D * L' but rather (L + I) * D * (L + I)'.
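A quick check of the (L + I) convention on the same matrix (assuming S.D returns the diagonal factor):

using LinearAlgebra   # for I
S = ldl([2 1 -83/40; 1 43/20 0; -83/40 0 5])
(S.L + I) * S.D * (S.L + I)'   # reconstructs the input (up to the permutation S.P, if any applies); S.L * S.D * S.L' does not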
Only single-RHS solves are implemented. Should be simple to solve for multiple RHS simultaneously.
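In the meantime, a column-by-column fallback is straightforward (a sketch; solve_many is a hypothetical helper reusing the single-RHS solve):

using LDLFactorizations
# solve A * X = B one column at a time, reusing a single factorization S = ldl(A)
function solve_many(S, B::AbstractMatrix)
    X = similar(B)
    for j in axes(B, 2)
        X[:, j] = S \ B[:, j]
    end
    return X
end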
Looks like solving for x in Ax = b after the ldl factorization, using the LDLFactorizations object, raises a method error:
LDLT = ldl(A)
x = LDLT \ b
@JuliaRegistrator register()
The README still mentions that both triangles must be supplied as input, which is no longer true.
Hi, I noticed the matrix [0 1; 1 1] in your tests at LDLFactorizations.jl/test/runtests.jl, line 27 in 06e2175. With a different pivot, this matrix has an LDL factorization (see the check below). Matrices that look like this often arise in KKT conditions for the conic problems I am trying to solve. Is there a way to find a pivot that makes a pivoted LDL factorization exist? If not always, then at least fairly reliably?
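For this 2×2 example, swapping rows and columns 1 and 2 is such a pivot: the permuted matrix factors with a unit lower triangular L, as this check (not part of the original report) shows:

using LinearAlgebra
A = [0 1; 1 1]
P = [0 1; 1 0]                # swap rows/columns 1 and 2
L = [1 0; 1 1.0]              # unit lower triangular
D = Diagonal([1.0, -1.0])
L * D * L' == P * A * P'      # true: the permuted matrix admits an LDL' factorization

It should be possible to only supply one triangle of the input matrix, even when a symmetric permutation is applied.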
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers. Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix on this issue. I'll open a PR within a few hours; please be patient!
METIS.jl returns an array of Int32. We should accept any subtype of Integer.
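Until then, widening on the caller's side works (a sketch; Metis.permutation returns the permutation and its inverse as Int32 vectors):

using Metis, SparseArrays
A = sprand(10, 10, 0.3); A = A + A'    # symmetric sparsity pattern
perm, iperm = Metis.permutation(A)     # both Vector{Int32}
perm64 = convert(Vector{Int}, perm)    # widen before passing it to the factorization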
I think that LDLFactorization could be a subtype of LinearAlgebra.Factorization, cf. https://github.com/JuliaLang/julia/blob/master/stdlib/LinearAlgebra/src/factorization.jl. We would benefit from the default Julia interface, and it would start unifying the JSO factorization packages.
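A sketch of what the change amounts to (type parameters and fields elided; only the supertype declaration matters here):

using LinearAlgebra
# before: struct LDLFactorization{T <: Real, Ti <: Integer} ... end
# after:
struct LDLFactorization{T <: Real, Ti <: Integer} <: LinearAlgebra.Factorization{T}
    # ... existing fields unchanged ...
end
# LDLFactorization then inherits generic Factorization behavior, and methods
# such as size, show, and ldiv! plug into the standard interface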