
juliasmoothoptimizers / quadraticmodels.jl

Data structures for linear and quadratic optimization problems based on NLPModels.jl

License: Other
Language: Julia
Topics: nlpmodels, julia, julia-language, optimization, quadratic-programming

quadraticmodels.jl's Introduction

QuadraticModels


A package to model linear and quadratic optimization problems using the NLPModels.jl data structures.

The problems represented have the form

optimize   c₀ + cᵀ x + ½ xᵀ Q x    subject to   L ≤ Ax ≤ U and ℓ ≤ x ≤ u,

where the square symmetric matrix Q is zero for linear optimization problems.
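As a concrete illustration, the objective above can be evaluated directly for a small problem. This is a standalone sketch in plain Julia, not the package's API; the names are illustrative:

```julia
# Evaluate c0 + cᵀx + ½ xᵀQx for a small convex QP (illustrative data).
Q  = [6.0 2.0; 2.0 5.0]   # square symmetric
c  = [-8.0; -3.0]
c0 = 0.0
x  = [1.0; 1.0]

obj_val = c0 + c' * x + 0.5 * x' * Q * x   # -11.0 + 7.5 = -3.5
# Setting Q ≡ 0 recovers the linear objective c0 + cᵀx.
```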

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

quadraticmodels.jl's People

Contributors

abelsiqueira, amontoison, dpo, geoffroyleconte, github-actions[bot], jsobot, juliatagbot, mohamed82008, monssaftoukal, renanod, sshin23, tmigot


quadraticmodels.jl's Issues

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Issue when initializing a QuadraticModel with different matrix types

 MethodError: no method matching QuadraticModels.QPData(::Int64, ::Vector{Float64}, ::Matrix{Int64}, ::SparseArrays.SparseMatrixCSC{Float64, Int64})
  Closest candidates are:
    QuadraticModels.QPData(::T, ::S, ::M1, ::M2) where {T, S, M1<:Union{AbstractMatrix{T}, LinearOperators.AbstractLinearOperator{T}}, M2<:Union{AbstractMatrix{T}, LinearOperators.AbstractLinearOperator{T}}} at .julia\packages\QuadraticModels\etxWq\src\qpmodel.jl:9
  Stacktrace:
    [1] QuadraticModels.QuadraticModel(c::Vector{Float64}, H::Matrix{Int64}; A::SparseArrays.SparseMatrixCSC{Float64, Int64}, lcon::Vector{Float64}, ucon::Vector{Float64}, lvar::Vector{Float64}, uvar::Vector{Float64}, c0::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
      @ QuadraticModels .julia\packages\QuadraticModels\etxWq\src\qpmodel.jl:166
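The MethodError arises because the `QPData(::T, ::S, ::M1, ::M2)` candidate requires the Hessian and Jacobian storage to share the element type `T`, while here `H` is a `Matrix{Int64}` and `A` is a `Float64` sparse matrix. A minimal sketch of a workaround, promoting the Hessian with Base's `float` before constructing the model (the commented `QuadraticModel` call mirrors the one in the report):

```julia
H  = [6 2; 2 5]   # Matrix{Int64}, as in the error
Hf = float(H)     # promote to Matrix{Float64}

# eltype(Hf) now matches a Float64 sparse A, so the
# QPData(::T, ::S, ::M1, ::M2) candidate applies:
# QuadraticModel(c, Hf; A = A, lcon = lcon, ucon = ucon, lvar = lvar, uvar = uvar)
```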

Precompilation on CI fails since today: `DiagonalQN` not defined

Hi,
thanks for this nice package. We use the package as a (weak) dependency for one of our sub-solvers in Manopt.jl, which we are currently finalizing.

Today it started failing on CI with

Failed to precompile QuadraticModels [f468eda6-eac5-11e8-05a5-ff9e497bcd19] to "/home/runner/.julia/compiled/v1.10/QuadraticModels/jl_kb6naG".
ERROR: LoadError: UndefVarError: `DiagonalQN` not defined
Stacktrace:
 [1] top-level scope
   @ ~/.julia/packages/NLPModelsModifiers/IMwDI/src/quasi-newton.jl:29
[...]

cf. https://github.com/JuliaManifolds/Manopt.jl/actions/runs/8064457138/job/22028375264 for the full trace. Maybe the failure is actually in one of your dependencies?

Using a structured `Q` matrix?

Hello there,

I'm currently trying to solve a QP where the Q matrix is block-structured and the blocks can be inverted in linear complexity, meaning that I can invert Q easily (although it's a little cumbersome). Is there a way to utilize this when solving the QP using this package? If not, do you know of any other package where this can be achieved?

Sorry for opening an issue, but given the missing documentation it is hard to tell whether or not this is possible.

Cheers,
Mikkel
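One possible avenue: the `QPData` constructor quoted elsewhere in these issues accepts `M1 <: Union{AbstractMatrix, LinearOperators.AbstractLinearOperator}`, so a structured Q can be represented through its matrix-vector product rather than as an explicit matrix. Below is a pure-Julia sketch of a block-diagonal product; wrapping `qprod!` in a `LinearOperators.LinearOperator` to serve as the model's Hessian is an assumption on my part, not something the package documents:

```julia
# Sketch: apply a block-diagonal Q block-by-block, in time linear in the
# total block cost. The blocks below are illustrative.
blocks = [[2.0 1.0; 1.0 2.0], [3.0 0.0; 0.0 3.0]]

function qprod!(res, blocks, v)
  i = 1
  for B in blocks
    n = size(B, 1)
    res[i:i+n-1] .= B * v[i:i+n-1]   # only touch this block's slice
    i += n
  end
  return res
end

v   = ones(4)
res = similar(v)
qprod!(res, blocks, v)
# Wrapping this as a LinearOperator (assumed usage, check the
# LinearOperators.jl docs) would let a solver apply Q without forming it.
```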

Error with nonsensical dimensions

I have an NLPModelMeta of this form:

  Problem name: Generic
   All variables: ████████████████████ 2      All constraints: ████████████████████ 2
            free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           fixed: ████████████████████ 2                fixed: ████████████████████ 2
          infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
            nnzh: ( 33.33% sparsity)   2               linear: ████████████████████ 2
                                                    nonlinear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
                                                         nnzj: ( 50.00% sparsity)   2

but RipQP calls this package's presolve function, which returns

  Nonsensical dimensions
  Stacktrace:
    [1] error(s::String)
      @ Base .\error.jl:33
    [2] NLPModelMeta{Float64, Vector{Float64}}(nvar::Int64; x0::Vector{Float64}, lvar::Vector{Float64}, uvar::Vector{Float64}, nlvb::Int64, nlvo::Int64, nlvc::Int64, ncon::Int64, y0::Vector{Float64}, lcon::Vector{Float64}, ucon::Vector{Float64}, nnzo::Int64, nnzj::Int64, lin_nnzj::Int64, nln_nnzj::Int64, nnzh::Int64, lin::UnitRange{Int64}, minimize::Bool, islp::Bool, name::String)
      @ NLPModels .julia\packages\NLPModels\xTGGo\src\nlp\meta.jl:121
    [3] presolve(qm::QuadraticModels.QuadraticModel{Float64, Vector{Float64}, SparseMatricesCOO.SparseMatrixCOO{Float64, Int64}, SparseMatricesCOO.SparseMatrixCOO{Float64, Int64}}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
      @ QuadraticModels .julia\packages\QuadraticModels\etxWq\src\presolve\presolve.jl:61
    [4] presolve
      @ .julia\packages\QuadraticModels\etxWq\src\presolve\presolve.jl:23 [inlined]

jac_coord! not implemented

julia> using LinearAlgebra, SparseArrays, QuadraticModels, NLPModels
julia> Q = [6 2 1
            2 5 2
            1 2 4]
julia> c = [-8; -3; -3]
julia> c0 = 0.
julia> A = [1 0 1
            0 1 1]
julia> b = [0; 3];
julia> lvar = [0;0;0]
julia> uvar = [Inf; Inf; Inf]
julia> x0 = [1; 2; 3];
julia> QM = QuadraticModel(c, Q, A=A, lcon=b, ucon=b, lvar=lvar, uvar=uvar, x0=x0, c0=c0, name="QM1")
julia> jac(QM, QM.meta.x0)
ERROR: jac_coord! not implemented
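Until `jac_coord!` is implemented for this model type, note that the constraints of a `QuadraticModel` are linear, `c(x) = A x`, so the Jacobian is the constant matrix A at every point. A standalone check of that fact (plain Julia, not the package API):

```julia
A = [1 0 1; 0 1 1]
cons(x) = A * x          # linear constraints, as in the model above

x1 = [1.0, 2.0, 3.0]
J  = A                   # Jacobian of A*x is A, independent of x
@assert cons(x1) == J * x1
```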

QuadraticModels with unit range

The idea would be to define the rows and cols using a UnitRange. I am wondering whether this would be a desirable feature, since it looks so natural.

julia> QuadraticModel(zeros(3), 1:3, 1:3, ones(3))
ERROR: MethodError: no method matching SparseMatricesCOO.SparseMatrixCOO(::Int64, ::Int64, ::UnitRange{Int64}, ::UnitRange{Int64}, ::Vector{Float64})
Closest candidates are:
  SparseMatricesCOO.SparseMatrixCOO(::Integer, ::Integer, ::Vector, ::Vector, ::Vector) at .julia\packages\SparseMatricesCOO\z5uST\src\coo_types.jl:51
Stacktrace:
 [1] QuadraticModel(c::Vector{Float64}, Hrows::UnitRange{Int64}, Hcols::UnitRange{Int64}, Hvals::Vector{Float64}; Arows::Vector{Int64}, Acols::Vector{Int64}, Avals::Vector{Float64}, lcon::Vector{Float64}, ucon::Vector{Float64}, lvar::Vector{Float64}, uvar::Vector{Float64}, c0::Float64, sortcols::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ QuadraticModels .julia\packages\QuadraticModels\etxWq\src\qpmodel.jl:123
 [2] QuadraticModel(c::Vector{Float64}, Hrows::UnitRange{Int64}, Hcols::UnitRange{Int64}, Hvals::Vector{Float64})
   @ QuadraticModels .julia\packages\QuadraticModels\etxWq\src\qpmodel.jl:97
 [3] top-level scope
   @ REPL[5]:1
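Until such a constructor exists, the ranges can be materialized with `collect`, producing the `Vector` arguments that the `SparseMatrixCOO(::Integer, ::Integer, ::Vector, ::Vector, ::Vector)` candidate expects:

```julia
rows = collect(1:3)   # Vector{Int} from a UnitRange
cols = collect(1:3)
# QuadraticModel(zeros(3), rows, cols, ones(3)) then dispatches to the
# existing SparseMatrixCOO candidate instead of raising a MethodError.
```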

Type for Jacobian/Hessian callbacks

Currently, Jacobian/Hessian-related functions only accept qp::QuadraticModel, whereas other functions such as obj and cons! take qp::AbstractQuadraticModel. Is there a reason for this choice? Otherwise, making them all accept qp::AbstractQuadraticModel will make QuadraticModels much easier to extend.
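The suggestion amounts to widening the method signatures from the concrete type to the abstract one. A minimal standalone sketch of why that helps extension (the types and function here are illustrative, not the package's):

```julia
abstract type AbstractToyModel end
struct ToyModel <: AbstractToyModel end
struct ExtendedToyModel <: AbstractToyModel end

# A method accepting the abstract type works for every subtype for free,
# the way obj and cons! already do with AbstractQuadraticModel:
describe(m::AbstractToyModel) = "works for any subtype"

describe(ToyModel())          # OK
describe(ExtendedToyModel())  # also OK, no extra definition needed
```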

tril! modifies input matrix H

tril! modifies the input matrix H given by the user. Maybe we should document that the user should pass a lower triangular H, and verify it (e.g. with a boolean check).
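The mutation is easy to demonstrate with Base's LinearAlgebra; a defensive `copy`, or an `istril` check before calling `tril!`, avoids surprising the caller:

```julia
using LinearAlgebra

H = [6.0 2.0; 2.0 5.0]
tril!(copy(H))            # operates on a copy; caller's H is intact
@assert H == [6.0 2.0; 2.0 5.0]

tril!(H)                  # in place: zeroes the strict upper triangle
@assert H == [6.0 0.0; 2.0 5.0]

istril(H)                 # true; could serve as the suggested boolean check
```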

Track allocs

There seem to be two API functions that allocate:

  • obj
  • hess_coord

I used the following script to track allocations in QuadraticModels

using Pkg
Pkg.activate(".")

# stdlib
using LinearAlgebra, Printf, SparseArrays, Test

# our packages
using ADNLPModels,
  LinearOperators,
  NLPModels,
  NLPModelsModifiers,
  NLPModelsTest, # main version of NLPModelsTest
  QPSReader,
  QuadraticModels,
  SparseMatricesCOO

function only_nonzeros(table)
  for k in keys(table)
    if table[k] == 0
      pop!(table, k)
    end
  end
  return table
end

# Definition of quadratic problems
qp_problems_Matrix = ["bndqp", "eqconqp"]
qp_problems_COO = ["uncqp", "ineqconqp"]
for qp in [qp_problems_Matrix; qp_problems_COO]
  include(joinpath("problems", "$qp.jl"))
end

for problem in qp_problems_Matrix
  @info "Checking allocs of dense problem $(problem)_QPSData"
  nlp_qps = eval(Symbol(problem * "_QPSData"))()
  print_nlp_allocations(nlp_qps, only_nonzeros(test_allocs_nlpmodels(nlp_qps)))
end

for problem in qp_problems_Matrix
  @info "Checking allocs of dense problem $(problem)_QP_dense"
  nlp_qm_dense = eval(Symbol(problem * "_QP_dense"))()
  print_nlp_allocations(nlp_qm_dense, only_nonzeros(test_allocs_nlpmodels(nlp_qm_dense)))
end

for problem in qp_problems_Matrix
  @info "Checking allocs of dense problem $(problem)_QP_sparse"
  nlp_qm_sparse = eval(Symbol(problem * "_QP_sparse"))()
  print_nlp_allocations(nlp_qm_sparse, only_nonzeros(test_allocs_nlpmodels(nlp_qm_sparse)))
end

for problem in qp_problems_Matrix
  @info "Checking allocs of dense problem $(problem)_QP_symmetric"
  nlp_qm_symmetric = eval(Symbol(problem * "_QP_symmetric"))()
  print_nlp_allocations(nlp_qm_symmetric, only_nonzeros(test_allocs_nlpmodels(nlp_qm_symmetric)))
end

for problem in qp_problems_COO
  @info "Checking allocs of COO problem $(problem)_QPSData"
  nlp_qps = eval(Symbol(problem * "_QPSData"))()
  print_nlp_allocations(nlp_qps, only_nonzeros(test_allocs_nlpmodels(nlp_qps)))
end

for problem in qp_problems_COO
  @info "Checking allocs of COO problem $(problem)_QP"
  nlp_qm_dense = eval(Symbol(problem * "_QP"))()
  print_nlp_allocations(nlp_qm_dense, only_nonzeros(test_allocs_nlpmodels(nlp_qm_dense)))
end

for problem in NLPModelsTest.nlp_problems
  @info "Testing allocs of quadratic approximation of problem $problem"
  nlp = eval(Symbol(problem))()
  x = nlp.meta.x0
  nlp_qm = QuadraticModel(nlp, x)
  print_nlp_allocations(nlp_qm, only_nonzeros(test_allocs_nlpmodels(nlp_qm)))
end

and the results

[ Info: Checking allocs of dense problem bndqp_QPSData
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0

[ Info: Checking allocs of dense problem eqconqp_QPSData
  Problem name: Generic
                        obj: ████████████████████ 496.0
                hess_coord!: ████████████████████ 496.0
            hess_lag_coord!: ████████████████████ 496.0

[ Info: Checking allocs of dense problem bndqp_QP_dense
  Problem name: bndqp_QP
                        obj: ████████████████████ 80.0

[ Info: Checking allocs of dense problem eqconqp_QP_dense
  Problem name: eqconqp_QP
                        obj: ████████████████████ 496.0

[ Info: Checking allocs of dense problem bndqp_QP_sparse
  Problem name: bndqp_QP
                        obj: ████████████████████ 80.0

[ Info: Checking allocs of dense problem eqconqp_QP_sparse
  Problem name: eqconqp_QP
                        obj: ████████████████████ 496.0

[ Info: Checking allocs of dense problem bndqp_QP_symmetric
  Problem name: bndqp_QP
                        obj: ████████████████████ 80.0

[ Info: Checking allocs of dense problem eqconqp_QP_symmetric
  Problem name: eqconqp_QP
                        obj: ████████████████████ 496.0

[ Info: Checking allocs of COO problem uncqp_QPSData
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0

[ Info: Checking allocs of COO problem ineqconqp_QPSData
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0
            hess_lag_coord!: ████████████████████ 80.0

[ Info: Checking allocs of COO problem uncqp_QP
  Problem name: uncqp_QP
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0

[ Info: Checking allocs of COO problem ineqconqp_QP
  Problem name: ineqconqp_QP
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0
            hess_lag_coord!: ████████████████████ 80.0

[ Info: Testing allocs of quadratic approximation of problem BROWNDEN
  Problem name: Generic
                        obj: ██████████████⋅⋅⋅⋅⋅⋅ 96.0
                hess_coord!: ████████████████████ 144.0

[ Info: Testing allocs of quadratic approximation of problem HS5
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0

[ Info: Testing allocs of quadratic approximation of problem HS6
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████⋅⋅⋅⋅ 64.0
            hess_lag_coord!: ████████████████⋅⋅⋅⋅ 64.0

[ Info: Testing allocs of quadratic approximation of problem HS10
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0
            hess_lag_coord!: ████████████████████ 80.0

[ Info: Testing allocs of quadratic approximation of problem HS11
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0
            hess_lag_coord!: ████████████████████ 80.0

[ Info: Testing allocs of quadratic approximation of problem HS13
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0
            hess_lag_coord!: ████████████████████ 80.0

[ Info: Testing allocs of quadratic approximation of problem HS14
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████████ 80.0
            hess_lag_coord!: ████████████████████ 80.0

[ Info: Testing allocs of quadratic approximation of problem LINCON
  Problem name: Generic
                        obj: ████████████████████ 176.0
                hess_coord!: ████████████████████ 176.0
            hess_lag_coord!: ████████████████████ 176.0

[ Info: Testing allocs of quadratic approximation of problem LINSV
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████⋅⋅⋅⋅ 64.0
            hess_lag_coord!: ████████████████⋅⋅⋅⋅ 64.0

[ Info: Testing allocs of quadratic approximation of problem MGH01Feas
  Problem name: Generic
                        obj: ████████████████████ 80.0
                hess_coord!: ████████████████⋅⋅⋅⋅ 64.0
            hess_lag_coord!: ████████████████⋅⋅⋅⋅ 64.0
