
ldlfactorizations.jl's People

Contributors

abelsiqueira, amontoison, celestineangla, dpo, geoffroyleconte, github-actions[bot], jsobot, juliatagbot, monssaftoukal, nsajko, tgalizzi, tmigot


ldlfactorizations.jl's Issues

possible method error upper triangle

using LinearAlgebra, SparseArrays
using LDLFactorizations

A = sprand(Float64, 10, 5, 0.25)
n, m = size(A)
λ = 1.0e-3
K1 = triu([sparse(1.0I,n,n) A; A' -λ^2 * I])
S1 = ldl(Symmetric(K1, :U))

K2 = [sparse(1.0I,n,n) A; A' -λ^2 * I]
S2 = ldl(Symmetric(K2, :U))

b = ones(m+n)

norm(S1 \ b - S2 \ b) # not zero

Hi! I think this may be a method error. Do we still need L521–L577?

Improve data structure

It would be convenient for the user to call the symbolic and numerical phases explicitly without having to manage the temporary storage.
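The two-phase interface (`ldl_analyze` for the symbolic phase, `ldl_factorize!` for the numerical phase, as documented for recent releases of the package) goes in this direction. A minimal sketch, assuming the input is wrapped as a `Symmetric` upper triangle:

```julia
using LinearAlgebra, SparseArrays
using LDLFactorizations

A0 = sparse([4.0 1.0; 1.0 3.0])
A = Symmetric(triu(A0), :U)  # the factorization reads the upper triangle

S = ldl_analyze(A)           # symbolic phase: ordering and pattern of L
ldl_factorize!(A, S)         # numerical phase: fills the storage allocated in S
x = S \ ones(2)              # S can be refactorized if only the values of A change
```

If only the numerical values of `A` change between iterations, `ldl_factorize!` can be called again with the same `S`, avoiding a second symbolic analysis.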

Issue with low-rank factorizations

The positive_semidefinite example in test_real.jl appears to fail the invariant
$$P A P^T \ = \ (L+I)D(L+I)^T.$$
The example is

A = [
      0.0  0.0  0.0   0.0   0.0   0.0   0.0  0.0  4.0  0.0
      0.0  0.0  0.0   0.0   0.0   0.0   0.0  0.0  5.0  0.0
      2.0  4.0  5.0  -2.0   4.0   1.0   2.0  2.0  2.0  0.0
      0.0  0.0  0.0   0.0   1.0   9.0   9.0  1.0  7.0  1.0
      0.0  0.0  0.0   0.0   0.0   0.0   0.0  0.0  1.0  0.0
      1.0  3.0  2.0   1.0   4.0   3.0   1.0  0.0  0.0  7.0
     -3.0  8.0  0.0   0.0   0.0   0.0  -2.0  0.0  0.0  1.0
      0.0  0.0  0.0   5.0   7.0   9.0   0.0  2.0  7.0  1.0
      3.0  2.0  0.0   0.0   0.0   0.0   1.0  3.0  3.0  2.0
      0.0  0.0  0.0   0.0  -3.0  -4.0   0.0  0.0  0.0  0.0
    ]
ms = SparseMatrixCSC{Float64, Int64}(A*A')
10×10 SparseMatrixCSC{Float64, Int64} with 78 stored entries:
  16.0   20.0    8.0   28.0  4.0     ⋅      ⋅     28.0  12.0     ⋅
  20.0   25.0   10.0   35.0  5.0     ⋅      ⋅     35.0  15.0     ⋅
   8.0   10.0   78.0   47.0  2.0   43.0   22.0   45.0  28.0  -16.0
  28.0   35.0   47.0  214.0  7.0   47.0  -17.0  140.0  35.0  -39.0
   4.0    5.0    2.0    7.0  1.0     ⋅      ⋅      7.0   3.0     ⋅
    ⋅      ⋅    43.0   47.0   ⋅    90.0   26.0   67.0  24.0  -24.0
    ⋅      ⋅    22.0  -17.0   ⋅    26.0   78.0    1.0   7.0     ⋅
  28.0   35.0   45.0  140.0  7.0   67.0    1.0  209.0  29.0  -57.0
  12.0   15.0   28.0   35.0  3.0   24.0    7.0   29.0  36.0     ⋅
    ⋅      ⋅   -16.0  -39.0   ⋅   -24.0     ⋅   -57.0    ⋅    25.0

Now let's do the LDLT decomposition and check:

LDLT = ldl(ms)

The left-hand side of the identity is

permute(ms, LDLT.P, LDLT.P)
10×10 SparseMatrixCSC{Float64, Int64} with 78 stored entries:
  25.0     ⋅   -24.0    ⋅     ⋅   -16.0  -39.0   ⋅   -57.0    ⋅
    ⋅    78.0   26.0    ⋅     ⋅    22.0  -17.0   ⋅     1.0   7.0
 -24.0   26.0   90.0    ⋅     ⋅    43.0   47.0   ⋅    67.0  24.0
    ⋅      ⋅     ⋅    16.0  20.0    8.0   28.0  4.0   28.0  12.0
    ⋅      ⋅     ⋅    20.0  25.0   10.0   35.0  5.0   35.0  15.0
 -16.0   22.0  43.0    8.0  10.0   78.0   47.0  2.0   45.0  28.0
 -39.0  -17.0  47.0   28.0  35.0   47.0  214.0  7.0  140.0  35.0
    ⋅      ⋅     ⋅     4.0   5.0    2.0    7.0  1.0    7.0   3.0
 -57.0    1.0  67.0   28.0  35.0   45.0  140.0  7.0  209.0  29.0
    ⋅     7.0  24.0   12.0  15.0   28.0   35.0  3.0   29.0  36.0

but the right-hand side is

(LDLT.L+I) * LDLT.D * (LDLT.L+I)'
10×10 SparseMatrixCSC{Float64, Int64} with 16 stored entries:
  25.0     ⋅   -24.0    ⋅     ⋅     ⋅     ⋅     ⋅     ⋅     ⋅
    ⋅    78.0   26.0    ⋅     ⋅     ⋅     ⋅     ⋅     ⋅     ⋅
 -24.0   26.0   90.0    ⋅     ⋅     ⋅     ⋅     ⋅     ⋅     ⋅
    ⋅      ⋅     ⋅    16.0  20.0    ⋅     ⋅     ⋅     ⋅     ⋅
    ⋅      ⋅     ⋅    20.0  25.0    ⋅     ⋅     ⋅     ⋅     ⋅
    ⋅      ⋅     ⋅      ⋅     ⋅    0.0    ⋅     ⋅     ⋅     ⋅
    ⋅      ⋅     ⋅      ⋅     ⋅     ⋅    0.0    ⋅     ⋅     ⋅
    ⋅      ⋅     ⋅      ⋅     ⋅     ⋅     ⋅    0.0    ⋅     ⋅
    ⋅      ⋅     ⋅      ⋅     ⋅     ⋅     ⋅     ⋅    0.0    ⋅
    ⋅      ⋅     ⋅      ⋅     ⋅     ⋅     ⋅     ⋅     ⋅    0.0

We can see that they agree on the leading 5×5 block, but disagree afterwards.

This is probably due to the fact that D has too many zeros on the diagonal:

LDLT.D
10×10 Diagonal{Float64, Vector{Float64}}:
 25.0    ⋅       ⋅      ⋅     ⋅     ⋅     ⋅     ⋅     ⋅     ⋅
   ⋅   78.0      ⋅      ⋅     ⋅     ⋅     ⋅     ⋅     ⋅     ⋅
   ⋅     ⋅   58.2933    ⋅     ⋅     ⋅     ⋅     ⋅     ⋅     ⋅
   ⋅     ⋅       ⋅    16.0    ⋅     ⋅     ⋅     ⋅     ⋅     ⋅
   ⋅     ⋅       ⋅      ⋅    0.0    ⋅     ⋅     ⋅     ⋅     ⋅
   ⋅     ⋅       ⋅      ⋅     ⋅    0.0    ⋅     ⋅     ⋅     ⋅
   ⋅     ⋅       ⋅      ⋅     ⋅     ⋅    0.0    ⋅     ⋅     ⋅
   ⋅     ⋅       ⋅      ⋅     ⋅     ⋅     ⋅    0.0    ⋅     ⋅
   ⋅     ⋅       ⋅      ⋅     ⋅     ⋅     ⋅     ⋅    0.0    ⋅
   ⋅     ⋅       ⋅      ⋅     ⋅     ⋅     ⋅     ⋅     ⋅    0.0

Bug in README

The factorization works with the upper triangle of the input matrix, not the lower triangle as stated in the README. Thanks to @Goysa for reporting this error.

Add ldl_analyze!

Sometimes the sparse structure of a problem can change during non-linear optimization. It would be useful to be able to update the symbolic analysis of the matrix in-place, to avoid re-allocating all the memory when this happens.

Would you be open to a PR implementing this?

Support complex-valued matrices?

We use an LDL' factorization in Convex.jl to check for PSD-ness of real- and complex-valued Hermitian matrices.

Being able to use this package with complex values and arbitrary arithmetic would greatly simplify things for us.

`getproperty` is type-unstable

Just chased a type-instability in some code back to the getproperty method for LDLFactorizations here. Using getfield instead is a workaround, but it would be nice if the dot-property syntax was type-stable, or if there were type-stable "getter" functions for the factorization's fields.
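A self-contained toy illustration of the pattern and the `getfield` workaround. The `Fact` struct below is hypothetical, not the package's actual type; it only mimics a factorization whose `getproperty` builds objects on the fly:

```julia
using LinearAlgebra

# Hypothetical container: `d` is stored, `D` is computed on demand.
struct Fact
    L::Matrix{Float64}
    d::Vector{Float64}
end

# A getproperty overload that returns different types depending on the
# Symbol; if constant propagation fails at a call site, inference may
# widen the result to `Any`.
function Base.getproperty(F::Fact, s::Symbol)
    s === :D && return Diagonal(getfield(F, :d))
    return getfield(F, s)
end

# Workaround from the issue: bypass getproperty with getfield, whose
# return type is known from the struct declaration.
lmat(F::Fact) = getfield(F, :L)
```

A typed "getter" function like `lmat` in the package itself would give users a stable accessor without losing the convenient dot syntax.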

Reality check: How useful exactly are SuiteSparse and LDLFactorizations' LDLT?

Please don't consider this a bashing of a good effort to equip Julia with a general native solver.
It can be useful to some people, no doubt. But for sufficiently large problems these factorizations are hopeless.

Example: a 3D model of a rectangular cantilever block. Roughly 100k equations, 7M nonzeros.

LDLFactorizations   SuiteSparse ldlt   Skyline ldlt      SuiteSparse cholesky
------------------------------------------------------------------------------
  1414 seconds        1477 seconds       331 seconds          11 seconds (!!!)

A slowdown of two orders of magnitude!

This is not the case of the implementation being too slow. This is a case of the algorithm not being fast enough.

My point is: we really need to get a native implementation of one of the "supernodal"
factorizations. It is not enough to have access to HSL, MUMPS, or SuiteSparse in dynamically loaded libraries. It leads to duplication of memory, preventing these methods from reaching their full potential in Julia.

Vectorization

It would be interesting to use vectorization macros such as @simd or @avx to speed up this package and, if possible, to add GPU support. However, this would probably require changing most of the code, since it uses statements such as continue and break, and loops whose iterations are not independent.

supporting Intervals

A recent conversation in the #intervals Slack channel asks:
is it reasonably easy to do?

bottine  01:39
I'm wondering how much work would be needed to be able to plug interval types into tulip

Jeffrey Sarnoff  2 hours ago
:wink: ? ask @mtanneau

Slackbot  2 hours ago
OK! I’ve invited @mtanneau to this channel.

Mathieu Tanneau  2 hours ago
Hmm :male-detective:

Mathieu Tanneau  2 hours ago
Off the top of my head, to support computations in arithmetic T,
it would need the following methods to be implemented (the list is non-exhaustive):

eps(T) to compute tolerances
sqrt(::T) to be used within factorizations
abs(::T) for some tolerances & step size computation
classical operations +, *, -, /
binary comparisons like >= and <=
Some norm(::Vector{T}, Inf) computations

Mathieu Tanneau  1 hour ago
The first step would be trying to compute an LDLt factorization of 
a symmetric positive definite matrix with Interval eltype, using LDLFactorizations.jl.
If that fails badly and can't be resolved, then there's little chance Tulip would work.

Jeffrey Sarnoff  1 hour ago
IntervalArithmetic.jl supports all of those arithmetic T ops
 (for norm(::Vector{T}, Inf) first do using LinearAlgebra).

Jeffrey Sarnoff  1 hour ago
@dpsanders do we have LDLt ___ with Interval eltype?

Mathieu Tanneau  30 minutes ago
Damn, I thought LDLFactorizations.jl supported any arithmetic... Turns out it requires T <: Real.
It might be fixable by "just" relaxing the type restrictions in the LDLFactorizations code.

Missing diagonal entries for L

In the example below

julia> using LDLFactorizations
julia> ldl([  2     1     -83/40;
              1    43/20    0
            -83/40  0       5]).L
3×3 SparseArrays.SparseMatrixCSC{Float64,Int64} with 2 stored entries:
  [3, 1]  =  -0.415
  [3, 2]  =  0.465116

The entries (1, 1), (2, 2) and (3, 3) with value 1 are missing.
If this is by design, it should maybe be stated in the README that we don't have L * D * L' but rather (L+I) * D * (L+I)'

Solve for multiple RHS

Only single-RHS solves are implemented. Should be simple to solve for multiple RHS simultaneously.
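A minimal sketch of the column-by-column approach. The helper name `solve_columns` is hypothetical, not part of the package; it works with any factorization object `F` supporting `F \ b` for a vector `b`, and is illustrated here with LinearAlgebra's `lu`:

```julia
using LinearAlgebra

# Hypothetical helper: solve A * X = B one column at a time using an
# existing factorization F of A.
function solve_columns(F, B::AbstractMatrix)
    X = similar(B, Float64)
    for (j, b) in enumerate(eachcol(B))
        X[:, j] = F \ Vector(b)
    end
    return X
end

A = [4.0 1.0; 1.0 3.0]
B = [1.0 2.0; 0.0 1.0]
X = solve_columns(lu(A), B)
```

A native implementation inside the package could instead sweep all columns in each triangular solve, avoiding the per-column overhead of this loop.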

MethodError: no method matching /

Looks like solving for x in Ax = b after an ldl factorization, using the LDLFactorizations object, raises a method error:

LDLT = ldl(A)
x = LDLT \ b

Fix README

The README still mentions that both triangles must be supplied as input, which is no longer true.

Only supply one triangle

It should be possible to only supply one triangle of the input matrix, even when a symmetric permutation is applied.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!
