oxfordcontrol / Clarabel.jl
Clarabel.jl: Interior-point solver for convex conic optimisation problems in Julia.
License: Apache License 2.0
as discussed at https://discourse.julialang.org/t/ann-clarabel-jl-new-convex-optimization-solver/83915
To filter out the b[i]==Inf cases, I currently need to build the vector cones in a loop, so it has as many elements as length(b). Then, I can index into it with cones[.!isinf.(b)].
It would be nice if I could create this vector with collect([NonnegativeConeT(2)]) or similar, to allow the indexing.
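As a rough illustration of the masking workaround described above (the cone-construction line is commented out so the sketch does not require Clarabel to be loaded):

```julia
# Rows of b with b[i] == Inf impose no constraint, so the workaround is to
# build one small cone per row and then keep only the rows with finite b.
b = [1.0, Inf, 2.0, Inf]
mask = .!isinf.(b)               # Bool mask of the finite rows

# cones = [Clarabel.NonnegativeConeT(1) for _ in eachindex(b)]  # one cone per row
# cones = cones[mask]            # keep only cones matching finite rows

b_finite = b[mask]               # [1.0, 2.0]
```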
@JuliaRegistrator register
Trying multiple types on the example in the documentation seems to give an error when changing to Float32 (Float64 and BigFloat work fine).
```julia
using Clarabel, SparseArrays, LinearAlgebra

P = sparse([3.0 1 -1; 0 4 2; 0 0 5])
q = [1, 2, -3.0]
Aeq = [1.0 1.0 -1.0]
beq = [1.0]
Aineq = [0.0 1.0 0.0
         0.0 0.0 1.0]
bineq = [2.0, 2.0]
Asoc = -I(3) * 1.0
bsoc = [0.0, 0.0, 0.0]
A_ = sparse([Aeq; Aineq; Asoc])
b_ = [beq; bineq; bsoc]
A = sparse([Aeq; Aineq])
b = [beq; bineq]

#cones = [Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(2), Clarabel.SecondOrderConeT(3)]
cones = [Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(2)]

settings64 = Clarabel.Settings{Float64}(
    verbose=true,
    direct_solve_method=:qdldl)
solver64 = Clarabel.Solver{Float64}()
Clarabel.setup!(solver64, P, q, A, b, cones, settings64)
solution64 = Clarabel.solve!(solver64)

settings32 = Clarabel.Settings{Float32}(
    verbose=true,
    direct_solve_method=:qdldl)
solver32 = Clarabel.Solver{Float32}()
Clarabel.setup!(solver32, Float32.(P), Float32.(q), Float32.(A), Float32.(b), cones, settings32)
solution32 = Clarabel.solve!(solver32)
```
I hope I have simply made some elementary error.
cc @odow because I don't know whether the underlying issue is in JuMP or in Clarabel.
This was initially reported by @klamike.
In certain circumstances (which I have not been able to fully isolate beyond the example below), setting the RHS of some constraints and then re-optimizing yields an infeasible model.
I have not been able to produce a smaller example than the code below, which calls set_normalized_rhs to reset the right-hand side of all equality constraints to the same values as in the initial model (thus presumably changing nothing):
```julia
using PowerModelsAnnex, PGLib, JuMP, Clarabel, HiGHS

const PMA = PowerModelsAnnex
const PM = PMA.PowerModels

data = PM.make_basic_network(pglib("14_ieee"))
model = PMA.build_dc_opf(data)
set_silent(model)
set_optimizer(model, Clarabel.Optimizer)  # Changing this to `HiGHS.Optimizer` removes any issue
optimize!(model)  # commenting this line will remove the INFEASIBLE issue below
@show termination_status(model)

# The code below resets the RHS of all equality constraints to the same value as before.
# --> this should not change anything in the model
cons = all_constraints(model, AffExpr, MOI.EqualTo{Float64})
b = normalized_rhs.(cons)
set_normalized_rhs.(cons, b)

# set_optimizer(model, Clarabel.Optimizer)  # re-setting the optimizer also gets rid of the infeasibility

# Re-optimize the model --> this should return `OPTIMAL` with the same objective value
optimize!(model)
@show termination_status(model)  # ⚠⚠⚠ this is INFEASIBLE unless either
# 1) we don't optimize in the first run, or
# 2) we call `set_optimizer(model, Clarabel.Optimizer)` before the second `optimize!`
```
As far as I can tell, the following sequences yield the following results:
| sequence | termination status (1) | termination status (2) |
|---|---|---|
| build / `optimize!` / `set_normalized_rhs` / `optimize!` | OPTIMAL | INFEASIBLE |
| build / (do not optimize) / `set_normalized_rhs` / `optimize!` | OPTIMIZE_NOT_CALLED | OPTIMAL |
| build / `optimize!` / `set_optimizer` / `set_normalized_rhs` / `optimize!` | OPTIMAL | OPTIMAL |
Also note that the issue goes away when using HiGHS instead of Clarabel.
Further comments, hopefully helpful when debugging:
I was using Clarabel and JuMP to test my code that analytically calculates the minimum distance between line segments and triangles in 3D. In random testing I found a line segment and triangle for which Clarabel does not converge but Ipopt does.
In this example the line segment and triangle are almost, but not quite parallel.
```julia
using JuMP
using LinearAlgebra
using Clarabel
using Ipopt

a = [
    [0.796, 0.768, 0.334],
    [0.879, 0.227, 0.297],
]
b = [
    [0.322, 0.079, 0.305],
    [0.486, 0.145, 0.792],
    [0.194, 0.999, 0.333],
]

function refDist2(optimizer, points1, points2)
    model = Model(optimizer)
    set_optimizer_attribute(model, "max_iter", 2000)
    n1 = length(points1)
    n2 = length(points2)
    dims = length(points1[1])
    @assert length(points1[1]) == length(points2[1])
    @variable(model, s[i = 1:n1] ≥ 0)
    @variable(model, t[i = 1:n2] ≥ 0)
    @constraint(model, sum(s[i] for i in 1:n1) == 1)
    @constraint(model, sum(t[i] for i in 1:n2) == 1)
    @expression(model, pt1[i = 1:dims], sum(s[j] * points1[j][i] for j in 1:n1))
    @expression(model, pt2[i = 1:dims], sum(t[j] * points2[j][i] for j in 1:n2))
    @expression(model, d[i = 1:dims], pt1[i] - pt2[i])
    @objective(model, Min, sum(d[i] * d[i] for i in 1:dims))
    optimize!(model)
    objective_value(model)
end

d2min_clarabel = refDist2(Clarabel.Optimizer, a, b)
d2min_ipopt = refDist2(Ipopt.Optimizer, a, b)
@show d2min_clarabel
@show d2min_ipopt
```
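As a solver-independent sanity check, the same objective can be brute-forced over a grid of convex combinations (a rough sketch; the grid minimum is an upper bound on the true minimum, so it should land near Ipopt's reported optimum):

```julia
using LinearAlgebra

a = [[0.796, 0.768, 0.334], [0.879, 0.227, 0.297]]
b = [[0.322, 0.079, 0.305], [0.486, 0.145, 0.792], [0.194, 0.999, 0.333]]

# Brute-force the squared distance over convex combinations of the
# segment (parameter s) and the triangle (barycentric t1, t2).
function grid_min_dist2(a, b; n = 100)
    best = Inf
    for s in range(0, 1, length = n + 1)
        p1 = s .* a[1] .+ (1 - s) .* a[2]
        for t1 in range(0, 1, length = n + 1), t2 in range(0, 1 - t1, length = n + 1)
            p2 = t1 .* b[1] .+ t2 .* b[2] .+ (1 - t1 - t2) .* b[3]
            best = min(best, sum(abs2, p1 .- p2))
        end
    end
    return best
end

d2 = grid_min_dist2(a, b)   # grid minimum; close to the ≈ 0.2797 optimum below
```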
Clarabel finishes with:
```text
1998  2.8110e-01  2.7930e-01  1.79e-03  8.19e-10  3.35e-09  6.50e-05  6.83e-63  4.80e-01
1999  2.8015e-01  2.7941e-01  7.46e-04  2.88e-10  7.15e-09  3.75e-05  2.87e-63  6.22e-01
2000  2.8114e-01  2.7931e-01  1.83e-03  8.09e-10  4.35e-09  7.41e-05  6.02e-63  5.26e-01
---------------------------------------------------------------------------------------------
Terminated with status = iteration limit
solve time = 333ms
```
Ipopt finishes with:
```text
6  2.7969534e-01  0.00e+00  1.84e-11  -5.7  2.62e-03  -  1.00e+00  1.00e+00f  1
7  2.7968859e-01  1.11e-16  9.22e-07  -8.6  4.20e-04  -  1.00e+00  9.92e-01f  1
8  2.7968851e-01  0.00e+00  2.54e-14  -8.6  7.70e-06  -  1.00e+00  1.00e+00f  1
Number of Iterations....: 8
...
EXIT: Optimal Solution Found.
```
This is in Julia 1.8.1 with the following packages installed
```text
(clarabelissue) pkg> st
Status `~/Desktop/clarabelissue/Project.toml`
  [61c947e1] Clarabel v0.3.0
  [b6b21f68] Ipopt v1.1.0
  [4076af6c] JuMP v1.3.0
```
Hi,
I am a newbie to Julia and tried to install Clarabel. When precompiling Clarabel, I got the following error:
```text
(julia01) pkg> precompile
Precompiling project...
  ✗ Clarabel
  0 dependencies successfully precompiled in 5 seconds. 188 already precompiled.

ERROR: The following 1 direct dependency failed to precompile:

Clarabel [61c947e1-3e6d-4ee4-985a-eec8c727bd6e]

Failed to precompile Clarabel [61c947e1-3e6d-4ee4-985a-eec8c727bd6e] to "/Users/uwe/.julia/compiled/v1.10/Clarabel/jl_UnP2Al".
ERROR: LoadError: MethodError: no method matching Clarabel.GenPowerConeT(::Vector{Float64}, ::Int64)

Closest candidates are:
  Clarabel.GenPowerConeT(::Vector{Float64}, ::Int32)
   @ Clarabel ~/.julia/packages/Clarabel/dltFS/src/cones/cone_api.jl:40

Stacktrace:
  [1] __precompile_native()
    @ Clarabel ~/.julia/packages/Clarabel/dltFS/src/precompile.jl:46
  [2] macro expansion
    @ ~/.julia/packages/Clarabel/dltFS/src/Clarabel.jl:108 [inlined]
  [3] macro expansion
    @ ~/.julia/packages/SnoopPrecompile/1XXT1/src/SnoopPrecompile.jl:62 [inlined]
  [4] (::Clarabel.var"#141#142")()
    @ Clarabel ~/.julia/packages/Clarabel/dltFS/src/Clarabel.jl:107
  [5] (::Base.RedirectStdStream)(thunk::Clarabel.var"#141#142", stream::Base.DevNull)
    @ Base ./stream.jl:1429
  [6] top-level scope
    @ ~/.julia/packages/Clarabel/dltFS/src/Clarabel.jl:106
  [7] top-level scope
    @ stdin:3
in expression starting at /Users/uwe/.julia/packages/Clarabel/dltFS/src/Clarabel.jl:2
in expression starting at stdin:

(julia01) pkg>
```
Can you help me, please?
Thank you,
Stoffel
Hi, for an application I am developing I need to solve an LP and QP that are very similar. My current way of doing this is to just change the objective of the JuMP model, but I cannot do this using Clarabel (with other solvers this works fine). Is there a workaround?
```julia
using JuMP      # v1.2.1
using Clarabel  # v0.2.0

m = Model(Clarabel.Optimizer)
x = @variable(m, x[1:2])
@objective(m, Max, x[2])
@constraint(m, x[1] + x[2] == 2)
@constraint(m, x[2] <= 3)
@constraint(m, x[1] <= 3)
optimize!(m)
@objective(m, Min, sum(x .^ 2))
```
This throws:
```text
ERROR: MathOptInterface.UnsupportedAttribute{MathOptInterface.ObjectiveSense}: Attribute MathOptInterface.ObjectiveSense() is not supported by the model.
Stacktrace:
  [1] throw_set_error_fallback(model::Optimizer{Float64}, attr::MathOptInterface.ObjectiveSense, value::MathOptInterface.OptimizationSense; error_if_supported::MathOptInterface.SetAttributeNotAllowed{MathOptInterface.ObjectiveSense})
    @ MathOptInterface C:\Users\stelmo\.julia\packages\MathOptInterface\MMUy6\src\attributes.jl:528
  [2] throw_set_error_fallback(model::Optimizer{Float64}, attr::MathOptInterface.ObjectiveSense, value::MathOptInterface.OptimizationSense)
    @ MathOptInterface C:\Users\stelmo\.julia\packages\MathOptInterface\MMUy6\src\attributes.jl:519
  [3] set(model::Optimizer{Float64}, attr::MathOptInterface.ObjectiveSense, args::MathOptInterface.OptimizationSense)
    @ MathOptInterface C:\Users\stelmo\.julia\packages\MathOptInterface\MMUy6\src\attributes.jl:513
  [4] set(m::MathOptInterface.Utilities.CachingOptimizer{Optimizer{Float64}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, attr::MathOptInterface.ObjectiveSense, value::MathOptInterface.OptimizationSense)
    @ MathOptInterface.Utilities C:\Users\stelmo\.julia\packages\MathOptInterface\MMUy6\src\Utilities\cachingoptimizer.jl:764
  [5] set(b::MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{Optimizer{Float64}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, attr::MathOptInterface.ObjectiveSense, value::MathOptInterface.OptimizationSense)
    @ MathOptInterface.Bridges C:\Users\stelmo\.julia\packages\MathOptInterface\MMUy6\src\Bridges\bridge_optimizer.jl:1013
  [6] set(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{Optimizer{Float64}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, attr::MathOptInterface.ObjectiveSense, value::MathOptInterface.OptimizationSense)
    @ MathOptInterface.Utilities C:\Users\stelmo\.julia\packages\MathOptInterface\MMUy6\src\Utilities\cachingoptimizer.jl:764
  [7] set(m::Model, attr::MathOptInterface.ObjectiveSense, value::MathOptInterface.OptimizationSense)
    @ JuMP C:\Users\stelmo\.julia\packages\JuMP\60Bnj\src\JuMP.jl:1236
  [8] set_objective_sense
    @ C:\Users\stelmo\.julia\packages\JuMP\60Bnj\src\objective.jl:78 [inlined]
  [9] set_objective(model::Model, sense::MathOptInterface.OptimizationSense, func::QuadExpr)
    @ JuMP C:\Users\stelmo\.julia\packages\JuMP\60Bnj\src\objective.jl:151
 [10] macro expansion
    @ C:\Users\stelmo\.julia\packages\JuMP\60Bnj\src\macros.jl:1288 [inlined]
 [11] top-level scope
    @ REPL[14]:1
```
Is there any plan to add support for the Hermitian PSD cone? It would help, for instance, for the OPF tutorial; see e.g. jump-dev/JuMP.jl#3432.
Hi Paul (and the rest of the Clarabel team),
While adding Panua-Pardiso to the possible solvers I encountered an error where the final KKT system turned out to be diagonal-only. Initially I was confused, but it turns out this line assumes that the KKT data is given as upper-triangular.
As this is not the case for Panua-Pardiso, it simply removed the lower-triangular part and left the KKTSym matrix as a diagonal matrix. I've fixed this in my fork by simply adding this.
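A sketch of the kind of fix involved (my own illustration, not the actual patch): reconstructing the full symmetric matrix from upper-triangular KKT data before handing it to a solver that expects both triangles.

```julia
using SparseArrays, LinearAlgebra

# Upper-triangular storage of a symmetric KKT matrix.
Ktriu = sparse([1.0 2.0;
                0.0 3.0])

# Mirror the strict upper triangle into the lower triangle;
# subtract the diagonal once so it is not counted twice.
Kfull = Ktriu + Ktriu' - spdiagm(0 => diag(Ktriu))

Kfull == [1.0 2.0; 2.0 3.0]   # true
```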
If you want I can make a PR with the fix + Panua-Pardiso solver.
Cheers,
Mikkel
The issue arises for the problem:
```julia
using Clarabel, LinearAlgebra, SparseArrays

settings = Clarabel.Settings(verbose = true)
settings.direct_solve_method = :cholmod
settings.tol_feas = 10^-4
settings.tol_gap_abs = 10^-4
settings.max_iter = 1

solver = Clarabel.Solver()

P = sparse(Matrix(I, 1, 1))
q = [0.]
A = sparse([1.;
])
b = [0 * ones(1)]
cones = [Clarabel.NonnegativeConeT(1)]

Clarabel.setup!(solver, P, q, A, b, cones, settings)
result = Clarabel.solve!(solver)
```
The stacktrace:

```text
 [1] Dict{Symbol, Union{}}()
   @ Base .\dict.jl:88
 [2] Dict
   @ .\dict.jl:101 [inlined]
 [3] dict_with_eltype
   @ .\abstractdict.jl:537 [inlined]
 [4] Dict(kv::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ Base .\dict.jl:129
 [5] setup!(s::Clarabel.Solver{Float64}, P::SparseMatrixCSC{Bool, Int64}, c::Vector{Float64}, A::SparseVector{Float64, Int64}, b::Vector{Vector{Float64}}, cone_types::Vector{Clarabel.NonnegativeConeT}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
```
Hi, in my branch-and-bound wrapper around the Clarabel solver, I have come across the following issue:
I try to minimize x'x + c'x where c is a random vector between 0 and 1, and x is a n-dimensional vector of which m<=n variables are specified as integers.
The constraints are sum(x) == k, where k is any natural number.
In this minimal problem with n = m = 2 and k = 3, I get an insufficient progress error when solving the relaxed problem (to compute the lower bound on the objective value) after the first branching constraint x[1] <= 1 is set, with the following constraint data:

```text
A     : sparse([1, 2, 4, 6, 1, 3, 5, 7], [1, 1, 1, 1, 2, 2, 2, 2], [-1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, 0.0], 7, 2)
b     : [-3.0, 0.0, 0.0, 1000.0, 1000.0, 1.0, 0.0]
cones : Clarabel.SupportedCone[Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(4), Clarabel.NonnegativeConeT(2)]
```

I suspect that the error is caused by a constraint of the form 0*x <= 0.
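One way to check that suspicion (a diagnostic sketch of mine, not part of any wrapper): look for rows of A with no structural nonzeros after dropping stored zeros, since a 0*x <= 0 row becomes empty.

```julia
using SparseArrays

# Constraint matrix from the failing subproblem; the last row is 0*x.
A = sparse([1, 2, 4, 6, 1, 3, 5, 7], [1, 1, 1, 1, 2, 2, 2, 2],
           [-1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, 0.0], 7, 2)

dropzeros!(A)                                   # remove explicitly stored zeros
empty_rows = setdiff(1:size(A, 1), rowvals(A))  # rows with no nonzero entry
# empty_rows == [7], i.e. row 7 is the degenerate 0*x constraint
```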
Here is the output (along with many debugging print lines, as well as the initial JuMP model):
```text
Min x[1]² + x[2]² + 0.32597672886359486 x[1] + 0.5490511363155669 x[2]
Subject to
 sum_constraint : x[1] + x[2] == 3.0
 lb : x[1] >= 0.0
 lb : x[2] >= 0.0
 ub : x[1] <= 1000.0
 ub : x[2] <= 1000.0

IN SOLVER DEFAULT START!:
A : sparse([1, 2, 4, 1, 3, 5], [1, 1, 1, 2, 2, 2], [-1.0, -1.0, 1.0, -1.0, -1.0, 1.0], 5, 2)
b : [-3.0, 0.0, 0.0, 1000.0, 1000.0]
cones : Clarabel.SupportedCone[Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(4)]
Solver.variables.x : [0.0, 0.0]
Solver.variables.z : [0.0, 1.0, 1.0, 1.0, 1.0]
Solver.variables.s : [0.0, 1.0, 1.0, 1.0, 1.0]
Solve_base_model: * Solver : Clarabel
* Status
  Termination status : OPTIMAL
  Primal status      : FEASIBLE_POINT
  Dual status        : FEASIBLE_POINT
  Message from the solver:
  "SOLVED"
* Candidate solution
  Objective value      : 5.80632e+00
  Dual objective value : 5.80632e+00
* Work counters
  Solve time (sec)   : 6.56080e-03
  Barrier iterations : 8

A : sparse([1, 2, 4, 6, 1, 3, 5, 7], [1, 1, 1, 1, 2, 2, 2, 2], [-1.0, -1.0, 1.0, -1.0, -1.0, -1.0, 1.0, -1.0], 7, 2)
b : [-3.0, 0.0, 0.0, 1000.0, 1000.0, 0.0, 0.0]
-------------------------------------------------------------
           Clarabel.jl v0.3.0  -  Clever Acronym
                   (c) Paul Goulart
                University of Oxford, 2022
-------------------------------------------------------------
problem:
  variables     = 2
  constraints   = 7
  nnz(P)        = 2
  nnz(A)        = 8
  cones (total) = 3
    : Clarabel.Zero        = 1,  numel = 1
    : Clarabel.Nonnegative = 2,  numel = (4,2)
settings:
  linear algebra: direct / qdldl, precision: Float64
  max iter = 100, time limit = Inf, max step = 0.990
  tol_feas = 1.0e-08, tol_abs = 1.0e-08, tol_rel = 1.0e-08,
  static reg : on, ϵ1 = 1.0e-08, ϵ2 = 4.9e-32
  dynamic reg: on, ϵ = 1.0e-13, δ = 2.0e-07
  iter refine: on, reltol = 1.0e-13, abstol = 1.0e-12,
               max iter = 10, stop ratio = 5.0
  equilibrate: off, min_scale = 1.0e-04, max_scale = 1.0e+04
               max iter = 10

iter    pcost        dcost       gap       pres      dres      k/t       μ       step
---------------------------------------------------------------------------------------------
IN SOLVER DEFAULT START!:
A : sparse([1, 2, 4, 6, 1, 3, 5, 7], [1, 1, 1, 1, 2, 2, 2, 2], [-1.0, -1.0, 1.0, -1.0, -1.0, -1.0, 1.0, -1.0], 7, 2)
b : [-3.0, 0.0, 0.0, 1000.0, 1000.0, 0.0, 0.0]
cones : Clarabel.SupportedCone[Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(4), Clarabel.NonnegativeConeT(2)]
Solver.variables.x : [0.0, 0.0]
Solver.variables.z : [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
Solver.variables.s : [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
  0   0.0000e+00  -2.0000e+03  2.00e+03  1.41e+00  2.70e-01  1.00e+00  1.00e+00   ------
  1   1.1150e-01  -1.7939e+03  1.79e+03  1.20e+00  3.88e-03  6.92e+01  1.35e-02  9.87e-01
  2   1.0060e+00  -1.0958e+03  1.10e+03  6.15e-01  1.06e-03  2.06e+02  3.51e-03  7.98e-01
  3   4.5931e+00  -2.3603e+02  2.41e+02  7.89e-02  1.28e-02  1.22e+01  2.17e-01  9.90e-01
  4   5.7889e+00   2.3254e+00  1.49e+00  1.06e-03  2.46e-04  3.50e-01  5.47e-01  9.87e-01
  5   5.8061e+00   5.7691e+00  6.42e-03  1.07e-05  2.42e-06  3.73e-03  5.85e-03  9.90e-01
  6   5.8063e+00   5.8059e+00  6.38e-05  1.07e-07  2.42e-08  3.73e-05  5.86e-05  9.90e-01
  7   5.8063e+00   5.8063e+00  6.38e-07  1.07e-09  2.42e-10  3.73e-07  5.86e-07  9.90e-01
  8   5.8063e+00   5.8063e+00  6.38e-09  1.07e-11  2.42e-12  3.73e-09  5.86e-09  9.90e-01
---------------------------------------------------------------------------------------------
Terminated with status = solved
solve time = 15.8ms

STARTING CLARABEL BNB LOOP
Solution x of unbounded base model: [1.5557686015271, 1.444231398418256]
rounded variables: [2.0, 1.0]
Residual z : [0.0, -0.4442313984546875, 0.444231398436467, 0.4442313802777562, -0.4442314166134338, -0.4442313984546875, 0.444231398436467]
Infeasible or unbounded problem for ub computation
root has ub: Inf
current node at depth 0 has data.solution.x as [1.5557686015271, 1.444231398418256]
IS IT same object as node.data.solver.solution.x ???? [1.5557686015271, 1.444231398418256]
checking x...1 0.05576860152709995
idx1 0.05576860152709995
checking x...2 0.055768601581744015
idx1 0.05576860152709995
GOT BRANCHING VARIABLE: 1 SET SMALLER THAN FLOOR (left): 1.0 OR GREATER THAN CEIL (right)2.0
fixed_x_left: [1.0]
set upper bound for index: 1 to 1.0
A : sparse([1, 2, 4, 6, 1, 3, 5, 7], [1, 1, 1, 1, 2, 2, 2, 2], [-1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, 0.0], 7, 2)
b : [-3.0, 0.0, 0.0, 1000.0, 1000.0, 1.0, 0.0]
cones : Clarabel.SupportedCone[Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(4), Clarabel.NonnegativeConeT(2)]
Solver.variables.x : [0.0, 9.737999577e-315]
Solver.variables.z : [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Solver.variables.s : [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
-------------------------------------------------------------
           Clarabel.jl v0.3.0  -  Clever Acronym
                   (c) Paul Goulart
                University of Oxford, 2022
-------------------------------------------------------------
problem:
  variables     = 2
  constraints   = 7
  nnz(P)        = 2
  nnz(A)        = 8
  cones (total) = 3
    : Clarabel.Zero        = 1,  numel = 1
    : Clarabel.Nonnegative = 2,  numel = (4,2)
settings:
  linear algebra: direct / qdldl, precision: Float64
  max iter = 100, time limit = Inf, max step = 0.990
  tol_feas = 1.0e-08, tol_abs = 1.0e-08, tol_rel = 1.0e-08,
  static reg : on, ϵ1 = 1.0e-08, ϵ2 = 4.9e-32
  dynamic reg: on, ϵ = 1.0e-13, δ = 2.0e-07
  iter refine: on, reltol = 1.0e-13, abstol = 1.0e-12,
               max iter = 10, stop ratio = 5.0
  equilibrate: off, min_scale = 1.0e-04, max_scale = 1.0e+04
               max iter = 10

iter    pcost        dcost       gap       pres      dres      k/t       μ       step
---------------------------------------------------------------------------------------------
IN SOLVER DEFAULT START!:
A : sparse([1, 2, 4, 6, 1, 3, 5, 7], [1, 1, 1, 1, 2, 2, 2, 2], [-1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, 0.0], 7, 2)
b : [-3.0, 0.0, 0.0, 1000.0, 1000.0, 1.0, 0.0]
cones : Clarabel.SupportedCone[Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(4), Clarabel.NonnegativeConeT(2)]
Solver.variables.x : [0.0, 0.0]
Solver.variables.z : [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
Solver.variables.s : [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
  0   0.0000e+00  -2.0010e+03  2.00e+03  1.41e+00  4.79e-01  1.00e+00  1.00e+00   ------
  1   1.7810e+00  -1.9690e+03  1.97e+03  1.37e+00  2.06e-01  1.20e+01  7.71e-02  9.18e-01
  2   1.3254e-01  -1.9736e+03  1.97e+03  1.37e+00  4.34e-01  1.26e+01  1.08e-01  1.90e-01
  3   3.6639e-01  -1.8708e+03  1.87e+03  1.26e+00  1.04e-01  4.29e+01  3.25e-02  7.56e-01
  4   2.0131e+00  -1.8202e+03  1.82e+03  1.19e+00  9.85e-01  4.40e+01  3.02e-02  1.62e-01
  5   2.5192e+00  -1.5722e+03  1.57e+03  8.82e-01  1.12e+00  3.77e+01  5.21e-02  2.06e-01
  6   5.2530e+00  -7.3207e+02  7.37e+02  2.83e-01  9.33e-01  2.40e+01  1.45e-01  8.25e-01
  7   3.4161e+00  -5.7753e+02  5.81e+02  2.08e-01  9.71e-01  1.60e+01  2.19e-01  4.59e-02
  8   8.2790e+00  -1.9153e+02  2.00e+02  5.82e-02  9.04e-01  1.93e+00  2.88e+00  4.28e-01
  9   6.4085e+00  -5.7937e+02  5.86e+02  1.86e-01  8.58e-02  1.50e+01  9.24e-01  5.99e-02
 10   6.3599e+00  -3.5726e+02  3.64e+02  1.11e-01  1.81e-01  2.00e+01  3.20e-01  3.30e-01
 11   4.7310e+00  -2.3396e+02  2.39e+02  6.91e-02  1.47e-01  1.43e+01  2.94e-01  1.18e-01
 12   1.0228e+01  -6.2276e+01  7.25e+01  1.73e-02  3.96e-01  2.83e+00  9.18e-01  4.85e-01
 13   1.0334e+01  -3.2999e+01  4.33e+01  1.06e-02  9.39e-01  3.29e+00  5.27e-01  6.68e-01
 14   1.0540e+01   5.9858e+00  7.61e-01  1.14e-03  8.08e-01  5.07e-01  9.73e-01  9.89e-01
 15   1.0586e+01   1.0540e+01  4.41e-03  1.18e-05  6.11e-01  5.34e-03  1.51e-02  9.90e-01
 16   1.0350e+01   1.0315e+01  3.45e-03  3.22e-05  4.26e-01  5.03e-03  7.83e-03  4.81e-01
 17   6.1226e+00   1.1757e+00  4.21e+00  9.53e-04  3.70e-01  2.48e-03  7.07e-01  3.81e-01
 18   5.9097e+00   5.3144e+00  1.12e-01  3.23e-04  6.61e-02  1.71e-03  8.61e-03  8.95e-01
 19   5.8082e+00   5.6416e+00  2.95e-02  2.99e-04  1.19e-02  1.30e-04  8.05e-04  9.90e-01
 20   5.8071e+00   5.6927e+00  2.01e-02  2.23e-04  7.90e-03  1.92e-04  5.35e-04  4.64e-01
 21   6.1658e+00   5.2992e+00  1.64e-01  9.07e-04  2.16e-01  1.33e-04  1.13e-01  2.40e-01
 22   6.7532e+00   6.3939e+00  5.62e-02  7.44e-04  3.82e-01  1.10e-02  1.04e-01  9.90e-01
 23   5.8490e+00   2.9229e+00  1.00e+00  6.39e-04  2.34e-01  5.11e-02  4.79e-01  9.90e-01
 24   5.8115e+00   5.6918e+00  2.10e-02  2.16e-04  1.09e-01  4.62e-03  2.88e-02  9.79e-01
 25   5.8248e+00   5.7326e+00  1.61e-02  2.72e-04  2.15e-01  1.65e-03  4.58e-03  9.90e-01
 26   5.8589e+00   5.7989e+00  1.03e-02  3.00e-04  2.69e-01  1.04e-03  1.62e-03  5.80e-01
 27   6.2606e+00   4.5922e+00  3.63e-01  4.22e-04  1.35e-01  1.24e-03  2.28e-01  9.90e-01
 28   6.1492e+00   6.0337e+00  1.91e-02  5.93e-05  9.09e-02  6.85e-04  3.38e-03  9.42e-01
 29   6.0536e+00   6.0362e+00  2.88e-03  8.46e-05  9.48e-02  2.68e-05  2.22e-04  9.90e-01
 30   5.9473e+00   5.9175e+00  5.05e-03  1.20e-04  8.44e-02  1.10e-06  1.13e-05  9.90e-01
 31   5.8521e+00   5.7995e+00  9.07e-03  1.68e-04  5.27e-02  4.73e-08  4.45e-07  9.90e-01
 32   5.8068e+00   5.7290e+00  1.36e-02  2.24e-04  1.15e-02  2.91e-08  1.78e-07  9.90e-01
 33   5.8246e+00   5.7473e+00  1.35e-02  2.70e-04  7.24e-02  6.50e-08  2.83e-06  9.90e-01
 34   5.9166e+00   5.8187e+00  1.68e-02  3.28e-04  1.68e-01  7.23e-07  2.32e-05  9.90e-01
 35   6.0658e+00   5.9665e+00  1.66e-02  3.80e-04  2.56e-01  3.26e-06  6.71e-05  7.31e-01
 36   5.8734e+00   5.6562e+00  3.84e-02  3.23e-04  1.76e-01  6.22e-05  1.80e-02  9.90e-01
 37   5.8121e+00   5.7289e+00  1.45e-02  3.13e-04  1.41e-01  1.10e-03  1.32e-03  9.75e-01
 38   5.8422e+00   5.6971e+00  2.55e-02  2.84e-04  6.42e-02  1.76e-04  2.83e-04  9.90e-01
 39   5.8645e+00   5.8572e+00  1.25e-03  2.01e-04  4.15e-02  8.46e-06  1.53e-05  9.90e-01
 40   5.8658e+00   5.8657e+00  1.63e-05  1.85e-04  4.03e-02  3.03e-07  3.92e-07  9.90e-01
 41   5.8659e+00   5.8659e+00  1.83e-07  1.84e-04  4.02e-02  1.03e-08  1.25e-08  9.90e-01
 42   5.8659e+00   5.8659e+00  1.96e-09  1.84e-04  4.02e-02  3.49e-10  4.15e-10  9.90e-01
 43   5.8659e+00   5.8659e+00  2.00e-11  1.84e-04  4.02e-02  1.18e-11  1.39e-11  9.90e-01
 44   5.8659e+00   5.8659e+00  1.73e-13  1.84e-04  4.02e-02  3.98e-13  4.62e-13  9.90e-01
 45   5.8659e+00   5.8659e+00  1.51e-16  1.84e-04  4.02e-02  1.34e-14  1.54e-14  9.90e-01
 46   5.8659e+00   5.8659e+00  3.03e-16  1.84e-04  4.02e-02  4.52e-16  5.14e-16  9.90e-01
 47   5.8659e+00   5.8659e+00  4.54e-16  1.84e-04  4.02e-02  1.52e-17  1.71e-17  9.90e-01
 48   5.8659e+00   5.8659e+00  3.03e-16  1.84e-04  4.02e-02  5.12e-19  5.72e-19  9.90e-01
 49   5.8659e+00   5.8659e+00  0.00e+00  1.84e-04  4.02e-02  1.72e-20  1.91e-20  9.90e-01
---------------------------------------------------------------------------------------------
Terminated with status = insufficient progress
solve time = 58.8ms
```

solution.x looks like: [1.3831719715285904, 1.6168280284714096]
Infeasible or unbounded problem for lb computation
I am trying Julia 1.9 rc1:

```text
Clarabel [61c947e1-3e6d-4ee4-985a-eec8c727bd6e]
│  WARNING: method definition for Solver at ~/.julia/packages/Clarabel/CuZFo/src/types.jl:346 declares type variable T but does not use it.
```
@JuliaRegistrator register
Reported here: https://discourse.julialang.org/t/ann-clarabel-jl-new-convex-optimization-solver/83915/14.
The solver fails without giving a clear explanation if any of the entries of the b vector are infinite.
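Until the solver reports this itself, a caller-side guard is easy. This is a sketch only; `check_b` is my own hypothetical helper, not a Clarabel API:

```julia
# Reject infinite entries in b up front, before Clarabel.setup! sees them.
# (check_b is a hypothetical helper for illustration.)
function check_b(b::AbstractVector)
    any(isinf, b) && throw(ArgumentError("b contains Inf entries; filter or clip them before setup!"))
    return b
end

check_b([1.0, 2.0])     # passes the vector through unchanged
# check_b([1.0, Inf])   # would throw ArgumentError
```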
What paper(s) should I cite if I use Clarabel.jl in experiments for an academic paper? I've made good use of it and want to make sure I properly assign credit where it's due. Apologies if this is in plain sight, but I couldn't find it. Thanks!
This is an excellent package; for my applications it is 1000 times faster than COSMO.
However, because of Pardiso it takes up more than 1 GB of disk space. The Pardiso dependency also takes too much time on installation, and precompilation takes 900 seconds during Pkg> add (sometimes it is fast; I remove ~/.julia for each test).
Is Pardiso a must for Clarabel? Is it possible to make it optional?
```julia
using Clarabel, LinearAlgebra, SparseArrays

E0 = [7.2 0.7 0.8 2.3 2.2 1.9 5.6 5.6 7.2 1.3 0.7 -0.1 4.1 7.2]
E = vec(E0)
EB = convert(Vector{BigFloat}, E)

function QPB(E::Vector{BigFloat})
    settings = Clarabel.Settings{BigFloat}(
        verbose=true,
        direct_kkt_solver=true,
        direct_solve_method=:qdldl)
    settings.verbose = false
    solver = Clarabel.Solver{BigFloat}()
    N = length(E)
    P = sparse(zeros(BigFloat, N, N))
    q = -E
    A = sparse([ones(BigFloat, 1, N); -Matrix{BigFloat}(I, N, N)])
    b = [one(BigFloat); zeros(BigFloat, N)]
    cones = [Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(N)]
    #setprecision(BigFloat, 128)
    Clarabel.setup!(solver, P, q, A, b, cones, settings)
    return solver
end

function QP4(E::Vector{Float64})
    N = length(E)
    P = sparse(zeros(Float64, N, N))
    q = -E
    A = sparse([ones(1, N); -Matrix(I, N, N)])
    b = [1.0; zeros(N)]
    cones = [Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(N)]
    settings = Clarabel.Settings()
    settings.verbose = false
    solver = Clarabel.Solver()
    Clarabel.setup!(solver, P, q, A, b, cones, settings)
    return solver
end

t0 = time()
xb = Clarabel.solve!(QPB(EB))
t1 = time()
println(Float64[xb.solve_time t1 - t0])

t0 = time()
x4 = Clarabel.solve!(QP4(E))
t1 = time()
show([x4.solve_time t1 - t0])
```
Typical outputs are

```text
[0.0028578740000000003 0.03750300407409668]
[0.00021125400000000002 0.023264169692993164]
```

or

```text
[0.003459025 0.02099299430847168]
[0.00022872500000000002 0.023643016815185547]
```

For Float64, a shorter solve_time is expected, but an overall computing time that is always longer is unexpected.
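Part of the extra wall-clock time for the Float64 path may be first-call compilation rather than the solve itself; a generic way to separate the two is to time a fresh function twice (an illustrative sketch, unrelated to Clarabel internals):

```julia
f(x) = sum(abs2, x)       # stand-in for any freshly defined function

t_first  = @elapsed f(rand(1000))   # first call: includes JIT compilation of f
t_second = @elapsed f(rand(1000))   # second call: compiled code only, typically much smaller
```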
The ALMOST_PRIMAL_INFEASIBLE and ALMOST_DUAL_INFEASIBLE termination statuses are missing their MOI counterparts in ClarabeltoMOITerminationStatus (MOI_Wrapper.jl), which sometimes causes errors like KeyError: key Clarabel.ALMOST_PRIMAL_INFEASIBLE not found. The fix is trivial: add

```julia
Clarabel.ALMOST_PRIMAL_INFEASIBLE => MOI.ALMOST_INFEASIBLE,
Clarabel.ALMOST_DUAL_INFEASIBLE   => MOI.ALMOST_DUAL_INFEASIBLE,
```

to the ClarabeltoMOITerminationStatus Dict.
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers. Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix on this issue. I'll open a PR within a few hours; please be patient!
On Julia 1.9 rc1, the first run still needs 13 seconds and the second run 0.3 seconds, for one of my datasets.
I find that the package EfficientFrontier cut down from 7.89 seconds (Julia 1.8.5) to 0.13 seconds (1.9 rc1). Lines like Base.precompile(Tuple{...}) may be helpful.
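For reference, a precompile directive looks like this (a generic sketch; the actual signatures worth precompiling would come from a tool like SnoopCompile, and `solve_stub` is a made-up stand-in):

```julia
solve_stub(x::Vector{Float64}) = sum(x)   # stand-in for a solver entry point

# Ask Julia to compile this specialization ahead of its first use;
# Base.precompile returns true on success.
ok = Base.precompile(Tuple{typeof(solve_stub), Vector{Float64}})
```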
EfficientFrontier.jl calculates the entire efficient frontier using closed-form formulas. Corner portfolios are the endpoints of the parabola segments of the efficient frontier in the mean-variance plane.
There is a Quick Example to get a glimpse.
To run the following code snippet, download ClarabelQP to a local directory.
```julia
using LinearAlgebra
using EfficientFrontier

#= V = [133.0 39.0 36.0 -17.0 -28.0 31.0 -5.0 -6.0 -34.0 -10.0 -46.0 -68.0 -52.0 -63.0
       39.0 16.0 17.0 10.0 0.0 22.0 13.0 7.0 1.0 19.0 3.0 0.0 5.0 9.0
       36.0 17.0 38.0 19.0 3.0 34.0 45.0 29.0 37.0 48.0 20.0 21.0 42.0 50.0
       -17.0 10.0 19.0 98.0 69.0 65.0 73.0 43.0 68.0 151.0 99.0 130.0 121.0 165.0
       -28.0 0.0 3.0 69.0 84.0 34.0 42.0 25.0 55.0 99.0 64.0 93.0 87.0 119.0
       31.0 22.0 34.0 65.0 34.0 93.0 83.0 48.0 57.0 115.0 76.0 85.0 99.0 145.0
       -5.0 13.0 45.0 73.0 42.0 83.0 154.0 87.0 111.0 168.0 113.0 143.0 165.0 225.0
       -6.0 7.0 29.0 43.0 25.0 48.0 87.0 57.0 75.0 95.0 71.0 89.0 104.0 142.0
       -34.0 1.0 37.0 68.0 55.0 57.0 111.0 75.0 286.0 117.0 101.0 125.0 164.0 239.0
       -10.0 19.0 48.0 151.0 99.0 115.0 168.0 95.0 117.0 491.0 231.0 327.0 259.0 322.0
       -46.0 3.0 20.0 99.0 64.0 76.0 113.0 71.0 101.0 231.0 214.0 256.0 217.0 269.0
       -68.0 0.0 21.0 130.0 93.0 85.0 143.0 89.0 125.0 327.0 256.0 380.0 265.0 342.0
       -52.0 5.0 42.0 121.0 87.0 99.0 165.0 104.0 164.0 259.0 217.0 265.0 297.0 359.0
       -63.0 9.0 50.0 165.0 119.0 145.0 225.0 142.0 239.0 322.0 269.0 342.0 359.0 556.0]
E = [0.1 0.7 0.8 2.3 2.2 1.9 5.6 5.6 2.2 1.3 0.7 -0.1 4.1 7.2]
E = vec(E) =#
E, V = EfficientFrontier.EVdata(:Ungil, false)
A = [1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
     1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0]
b = [1.0; 0.25]
G = [0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
     0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0]
G[1, :] = -G[1, :]
g = [-0.3; 0.6]
d = vec([-0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.1 -0.1 -0.1 -0.1])
u = vec([0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.1 0.1 0.1 0.3 0.3 0.3 0.3])
P = Problem(E, V, u, d, G, g, A, b)
aCL = EfficientFrontier.ECL(P)
aEF = eFrontier(aCL, P)

# https://github.com/PharosAbad/EfficientFrontier.jl/blob/main/examples/uClarabel.jl
if length(filter((x) -> x == :uClarabel, names(Main, imported=true))) == 0
    include("./uClarabel.jl")
    using .uClarabel
end

N = length(aEF.mu) - 1
M = 16
D = zeros(N, M)
F = zeros(N, M)
S = zeros(N, M)
for k in 1:N
    for m = 1:M
        mu = ((M + 1 - m) * aEF.mu[k] + (m - 1) * aEF.mu[k+1]) / M
        #y = ClarabelQP(E, V, mu, u, d, G, g, A, b)
        y = ClarabelQP(P, mu)
        z = ePortfolio(aEF, mu)
        D[k, m] = y.x' * E - mu
        F[k, m] = norm(y.x - z, Inf)
        S[k, m] = sqrt(y.x' * V * y.x) - sqrt(z' * V * z)
    end
end
#display(D)
display(F)
#display(S)
```
It is very interesting that Clarabel is relatively inaccurate at corner portfolios (except for the first 3 corner portfolios), and it seems a bit better in between corner portfolios.
```text
18×16 Matrix{Float64}:
 3.18849e-10  3.19008e-10  4.45465e-10  4.66152e-10  3.30139e-10  3.5543e-10   3.49423e-10  3.30721e-10  2.74177e-8   2.3783e-8    2.00713e-8   1.98678e-8   2.21365e-8   3.62659e-10  1.25773e-9   1.14174e-9
 3.23935e-11  6.02425e-11  4.39114e-11  3.9909e-11   3.57194e-11  3.19812e-11  2.87831e-11  2.56941e-9   2.20505e-9   1.79289e-9   1.35926e-9   9.36248e-10  6.78427e-10  1.6364e-10   5.26255e-11  4.91228e-10
 1.64502e-10  1.33627e-9   4.84164e-10  2.46606e-10  2.87599e-10  4.21606e-10  4.90347e-10  6.07846e-10  9.58121e-10  1.66706e-9   3.46329e-9   8.15022e-9   2.19701e-8   8.02907e-8   5.09215e-9   1.26649e-7
 6.72157e-5   5.10289e-9   6.50983e-10  2.44242e-10  1.51662e-8   1.09261e-8   8.92366e-9   7.82586e-9   6.76068e-9   5.46922e-9   4.21745e-9   3.67409e-9   3.99321e-9   2.84697e-11  1.97773e-10  1.86628e-9
 4.22805e-5   9.84327e-8   2.88725e-7   3.20136e-8   5.4494e-7    1.19294e-7   2.6235e-8    5.32558e-9   4.54418e-10  5.7994e-9    1.90225e-8   6.5954e-8    2.3539e-7    9.45569e-9   5.61901e-8   9.26414e-7
 5.41409e-5   1.35702e-9   6.20739e-8   3.52116e-8   5.79467e-8   8.29902e-8   1.13455e-7   1.56422e-7   2.24538e-7   3.448e-9     5.82692e-9   1.12859e-8   2.70055e-8   8.58269e-8   4.63327e-7   1.11453e-7
 2.54868e-5   3.3339e-7    1.02682e-8   1.18808e-9   2.21187e-8   2.7844e-8    3.55413e-10  4.73281e-10  7.41238e-10  1.19479e-9   1.23088e-9   2.01452e-9   3.68378e-9   9.20721e-9   4.99684e-8   9.85152e-9
 2.67653e-5   6.3477e-7    3.47676e-8   2.41868e-7   5.41674e-8   1.56753e-8   5.47785e-9   2.191e-9     1.10954e-9   8.1288e-10   7.08941e-10  7.27622e-10  1.10131e-9   2.44134e-9   1.60113e-8   4.59363e-7
 2.24447e-5   3.91684e-8   2.01582e-7   4.59754e-8   2.91806e-8   3.35608e-8   4.5077e-8    6.73894e-8   1.10197e-7   1.9656e-7    3.88494e-7   8.9426e-9    2.53455e-8   9.62254e-8   6.32408e-7   1.67155e-7
 3.84521e-5   1.65032e-8   6.95928e-8   1.1328e-8    3.37393e-9   1.407e-9     7.69311e-10  5.23167e-10  4.69155e-10  6.67748e-10  9.85157e-10  1.63559e-9   2.91212e-9   5.89317e-9   1.46668e-8   5.67367e-8
 2.83886e-5   6.07142e-8   2.32408e-7   3.86582e-8   1.25363e-8   1.09506e-8   1.50301e-8   2.37551e-8   4.13291e-8   7.83363e-8   1.64627e-7   3.98575e-7   1.14953e-6   4.78442e-8   3.3082e-7    4.78871e-6
 3.06065e-5   1.20129e-7   5.27567e-9   8.40941e-8   2.31253e-8   7.18234e-9   2.39417e-9   1.39597e-9   9.17832e-10  5.8463e-10   3.8057e-10   2.58178e-10  1.49507e-10  5.96917e-11  5.33247e-9   3.52487e-10
 2.81204e-5   4.16145e-9   2.33296e-8   5.7679e-9    2.43298e-9   1.35713e-9   9.34784e-10  1.00321e-9   1.21287e-9   1.53015e-9   2.40955e-9   4.47092e-9   1.06462e-8   3.43203e-8   7.29692e-8   8.464e-8
 2.87727e-5   4.59344e-6   6.20909e-7   1.09511e-7   2.91795e-8   9.63486e-7   4.43793e-7   2.19281e-7   1.16043e-7   6.45772e-8   3.6364e-8    2.23048e-8   2.03823e-8   5.22717e-8   3.28179e-7   5.26505e-6
 2.80846e-5   1.43465e-7   8.14972e-7   1.69849e-7   5.25732e-8   1.93747e-8   7.62444e-7   3.04216e-7   1.10523e-7   2.91276e-8   3.82857e-9   3.51186e-8   2.68104e-7   2.08999e-8   2.09988e-7   4.11675e-6
 2.75587e-5   2.87013e-8   1.55595e-8   1.09817e-8   8.23124e-9   6.69369e-9   5.98572e-9   5.95886e-9   1.23745e-8   1.40904e-8   1.9061e-8    3.20466e-8   7.05844e-8   2.19093e-9   1.24271e-8   2.77669e-7
 2.41899e-5   3.23609e-9   1.16639e-8   1.67665e-9   3.96964e-10  1.33526e-10  5.32078e-11  3.49767e-9   2.94218e-9   4.41103e-9   7.49515e-9   1.50824e-10  3.92278e-10  1.61352e-9   1.12016e-8   2.89123e-9
 2.32557e-5   9.47135e-7   4.011e-8     5.85978e-7   1.52584e-7   5.38992e-8   2.28213e-8   1.09714e-8   5.79936e-9   3.31643e-9   2.02047e-9   1.29206e-7   8.57067e-8   5.85171e-8   4.09324e-8   2.91957e-8
```
Would need to refactor the step length calcs a bit, but should be easy to do.
Hi there,
In the solve! function for PardisoDirectLDLSolver, a Kfake is used in place of the actual KKT system.
Clarabel.jl/src/kktsolvers/direct-ldl/directldl_mklpardiso.jl
Lines 93 to 100 in 76a9a11
While it is true that there is already a factorization, I think this causes issues in the refinement stage, since that stage uses matrix-vector products to reduce the residual. To show why, I set Pardiso.set_msglvl!(ps, Pardiso.MESSAGE_LEVEL_ON), which results in the following for one of my matrices.
In the highlighted box a relative error of 1 can be seen in the first refinement iteration (ref_it), which corresponds to the relative error one would expect when Kfake is set to zero.
Cheers,
Mikkel
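The relative error of 1 is consistent with refinement being run against a zeroed placeholder matrix. A minimal plain-Julia sketch of the effect (not Clarabel internals):

```julia
using LinearAlgebra, SparseArrays

# Iterative refinement forms the residual r = b - K*x with whatever matrix
# it is handed. If a zeroed placeholder stands in for the true KKT matrix,
# the residual equals b, so the relative error norm(r)/norm(b) is exactly 1.
K = sparse([4.0 1.0; 1.0 3.0])
b = [1.0, 2.0]
x = K \ b                        # exact solve against the true K

r_true = b - K * x               # ≈ 0: nothing left to refine
Kfake = spzeros(2, 2)            # zeroed placeholder, like Kfake above
r_fake = b - Kfake * x           # = b

norm(r_true) / norm(b)           # ≈ 0
norm(r_fake) / norm(b)           # exactly 1.0
```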
using Clarabel
CSf = Clarabel.Settings()
CSb = Clarabel.Settings{BigFloat}()
println(CSf.tol_feas,"\n" ,CSb.tol_feas)
It seems that the Settings for BigFloat (tolerances etc.) are the same as those for Float64.
What settings for BigFloat do you recommend? We need more stringent ones, since more digits are involved.
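Absent an official recommendation, one workaround (my assumption, not documented guidance) is to derive the tolerances from the precision of the scalar type via eps(T) rather than keeping the Float64 defaults:

```julia
using Clarabel

# Sketch: scale tolerances to the working precision. sqrt(eps(T)) is a
# common heuristic, not an official Clarabel recommendation.
T = BigFloat
tol = sqrt(eps(T))               # ≈ 4e-39 at the default 256-bit precision
settings = Clarabel.Settings{T}(
    tol_feas    = tol,
    tol_gap_abs = tol,
    tol_gap_rel = tol,
)
println(settings.tol_feas)
```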
Hi there,
I have a few problems that contain second-order cones of dimension greater than 4, meaning the sparse representation is used. For these problems the solver reports either "insufficient progress" or "reduced accuracy" (Clarabel does get fairly close to the solution). However, by digging around (changing the threshold at which the switch to the sparse representation happens) I found that the solver returns "solved" when the full dense representation is used. Is this to be expected for some problems?
Cheers,
Mikkel
Please find this SDP below. There seems to be a convergence issue (using v0.2.0) with this error:
LoadError: PosDefException: matrix is not positive definite; Cholesky factorization failed.
This can be circumvented by reducing the number of decimal digits in the elements of A, so it is most probably a numerical issue. Here is the SDP instance:
using JuMP
using Clarabel
A = [0.0 11.08638213 4.23716998 3.42755487 15.4293342
11.08638213 0.0 6.26171389 21.78641091 23.78527429
4.23716998 6.26171389 0.0 25.51211523 30.34857242
3.42755487 21.78641091 25.51211523 0.0 2.24851143
15.4293342 23.78527429 30.34857242 2.24851143 0.0]
n = size(A)[1]
m = JuMP.Model(Clarabel.Optimizer)
@variable(m, γ >= 0)
@variable(m, W[1:n, 1:n], PSD)
@variable(m, 0 <= x[1:n, 1:n] <= 1)
@objective(m, Max, γ)
@constraint(m, [i=1:(n-1), j=(i+1):n], W[i,j] == -A[i,j]*x[i,j] + γ/n)
@constraint(m, [i=1:n], W[i,i] == sum(A[i,:] .* x[i,:]) - γ*(n-1)/n)
@constraint(m, [i=1:n], sum(x[i,:]) >= 1)
@constraint(m, [i=1:(n-1), j=(i+1):n], x[i,j] == x[j,i])
@constraint(m, sum(x) == 2*(n-1))
JuMP.optimize!(m)
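For reference, the digit-reduction workaround mentioned above can be written as follows (4 digits is an arbitrary choice on my part):

```julia
# Round the data matrix before building the model, as described above;
# the cutoff of 4 decimal digits is arbitrary.
A = [0.0         11.08638213  4.23716998
     11.08638213 0.0          6.26171389
     4.23716998  6.26171389   0.0]
A_rounded = round.(A; digits = 4)
A_rounded[1, 2]                  # ≈ 11.0864
```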
The license links in the README (and in other places it's referred to, probably - I think the only other place is at the bottom of https://oxfordcontrol.github.io/Clarabel.jl/stable/) do not work. Currently they lead to
https://github.com/oxfordcontrol/Clarabel.jl/blob/main/LICENSE.md
when it should be
https://github.com/oxfordcontrol/Clarabel.jl/blob/main/LICENSE.
I was going to edit it, but I wasn't sure whether you'd want the link changed or the file extension corrected to .md in the repository.
That's with version 0.1.1, JuMP 1.1.1.
julia> let N = 10, M = 50, r = 3, eps=1e-8
using JuMP, Clarabel, LinearAlgebra, Random
Random.seed!(1)
m = JuMP.Model()
@variable m P[1:N, 1:N] Symmetric
@constraint m P in PSDCone()
c = rand(N^2)
QQ = (Q = rand(N, r); Q*Q')
@objective m Max dot(P, c)
A_i = zeros(N^2)
for i in 1:M
A_i = rand!(A_i)
@constraint m dot(A_i, P) == dot(A_i, QQ)
end
m
set_optimizer(
m,
# optimizer_with_attributes(SCS.Optimizer, "eps_abs"=>eps, "eps_rel"=>eps),
optimizer_with_attributes(Clarabel.Optimizer, "tol_gap_abs"=>eps, "tol_gap_rel"=>eps),
)
optimize!(m)
@info "" objective_value(m) dot(QQ, c)
m
end
ERROR: MethodError: no method matching qdldl(::SparseMatrixCSC{Float64, Int64}; Dsigns=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1 … -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], regularize_eps=1.0e-13, regularize_delta=2.0e-7, logical=true)
Closest candidates are:
qdldl(::SparseMatrixCSC{Tv, Ti}; perm, logical) where {Tv<:AbstractFloat, Ti<:Integer} at ~/.julia/packages/QDLDL/Des1p/src/QDLDL.jl:100 got unsupported keyword arguments "Dsigns", "regularize_eps", "regularize_delta"
Stacktrace:
[1] kwerr(::NamedTuple{(:Dsigns, :regularize_eps, :regularize_delta, :logical), Tuple{Vector{Int64}, Float64, Float64, Bool}}, ::Function, ::SparseMatrixCSC{Float64, Int64})
@ Base ./error.jl:163
[2] Clarabel.QDLDLDirectLDLSolver{Float64}(KKT::SparseMatrixCSC{Float64, Int64}, Dsigns::Vector{Int64}, settings::Clarabel.Settings{Float64})
@ Clarabel ~/.julia/packages/Clarabel/ZrqWV/src/kktsolvers/direct-ldl/directldl_qdldl.jl:12
[3] Clarabel.DirectLDLKKTSolver{Float64}(P::SparseMatrixCSC{Float64, Int64}, A::SparseMatrixCSC{Float64, Int64}, cones::Clarabel.ConeSet{Float64}, m::Int64, n::Int64, settings::Clarabel.Settings{Float64})
@ Clarabel ~/.julia/packages/Clarabel/ZrqWV/src/kktsolvers/kktsolver_directldl.jl:78
[4] Clarabel.DefaultKKTSystem{Float64}(data::Clarabel.DefaultProblemData{Float64}, cones::Clarabel.ConeSet{Float64}, settings::Clarabel.Settings{Float64})
@ Clarabel ~/.julia/packages/Clarabel/ZrqWV/src/kktsystem.jl:33
[5] macro expansion
@ ~/.julia/packages/Clarabel/ZrqWV/src/solver.jl:105 [inlined]
[6] macro expansion
@ ~/.julia/packages/TimerOutputs/jgSVI/src/TimerOutput.jl:252 [inlined]
[7] macro expansion
@ ~/.julia/packages/Clarabel/ZrqWV/src/solver.jl:104 [inlined]
[8] macro expansion
@ ~/.julia/packages/TimerOutputs/jgSVI/src/TimerOutput.jl:236 [inlined]
[9] setup!(s::Clarabel.Solver{Float64}, P::SparseMatrixCSC{Float64, Int64}, q::Vector{Float64}, A::SparseMatrixCSC{Float64, Int64}, b::Vector{Float64}, cone_types::Vector{Clarabel.SupportedCone})
@ Clarabel ~/.julia/packages/Clarabel/ZrqWV/src/solver.jl:88
[10] copy_to(dest::Optimizer{Float64}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}})
@ Clarabel ~/.julia/packages/Clarabel/ZrqWV/src/MOI_wrapper/MOI_wrapper.jl:305
[11] optimize!
@ ~/.julia/packages/MathOptInterface/FHFUH/src/MathOptInterface.jl:80 [inlined]
[12] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{Optimizer{Float64}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/FHFUH/src/Utilities/cachingoptimizer.jl:313
[13] optimize!
@ ~/.julia/packages/MathOptInterface/FHFUH/src/Bridges/bridge_optimizer.jl:348 [inlined]
[14] optimize!
@ ~/.julia/packages/MathOptInterface/FHFUH/src/MathOptInterface.jl:81 [inlined]
[15] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{Optimizer{Float64}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/FHFUH/src/Utilities/cachingoptimizer.jl:313
[16] optimize!(model::Model; ignore_optimize_hook::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ JuMP ~/.julia/packages/JuMP/0STkJ/src/optimizer_interface.jl:161
[17] optimize!(model::Model)
@ JuMP ~/.julia/packages/JuMP/0STkJ/src/optimizer_interface.jl:143
[18] top-level scope
@ REPL[83]:26
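My reading of the trace (an assumption, not a confirmed diagnosis): Clarabel passes keyword arguments (Dsigns, regularize_eps, regularize_delta) that the installed QDLDL version does not accept, i.e. a stale QDLDL in the environment. Updating it is one thing to try:

```julia
# Bring QDLDL up to a version whose qdldl() accepts the keyword arguments
# Clarabel passes, then check which versions actually resolved.
using Pkg
Pkg.update("QDLDL")
Pkg.status(["Clarabel", "QDLDL"])
```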
using Clarabel, SparseArrays, LinearAlgebra
function gMVP(E, V)
N = length(E)
P = sparse(V)
q = zeros(N)
A = sparse([ones(1, N); -Matrix{Float64}(I, N, N)])
b = [1.0; zeros(N)]
cones = [Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(N)]
settings = Clarabel.Settings()
settings.verbose = false
solver = Clarabel.Solver()
Clarabel.setup!(solver, P, q, A, b, cones, settings)
result = Clarabel.solve!(solver)
return result.x
end
E3 = [7.2, 7.2, 7.2]
V3 = [286.0 101.0 239.0
101.0 214.0 269.0
239.0 269.0 556.0]
gMVP(E3, V3)
V1 = convert(Matrix{BigFloat}, V3)
E1 = convert(Vector{BigFloat}, E3)
gMVP(E1, V1)
For Float64, it finds the optimal solution. When using BigFloat:
@ Clarabel ~/.julia/packages/Clarabel/BTj1J/src/solver.jl:72
--- the last 2 lines are repeated 37005 more times ---
[74022] setup!(s::Clarabel.Solver{Float64}, P::SparseMatrixCSC{BigFloat, Int64}, c::Vector{Float64}, A::SparseMatrixCSC{Float64, Int64}, b::Vector{Float64}, cone_types::Vector{Clarabel.SupportedCone})
@ Clarabel ~/.julia/packages/Clarabel/BTj1J/src/solver.jl:69
[74023] setup!(s::Clarabel.Solver{Float64}, P::SparseMatrixCSC{BigFloat, Int64}, c::Vector{Float64}, A::SparseMatrixCSC{Float64, Int64}, b::Vector{Float64}, cone_types::Vector{Clarabel.SupportedCone}, settings::Clarabel.Settings{Float64})
@ Clarabel ~/.julia/packages/Clarabel/BTj1J/src/solver.jl:66
[74024] gMVP(E::Vector{BigFloat}, V::Matrix{BigFloat})
@ Main ~/Documents/test.jl:55
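A workaround for the recursion above (my sketch, not a confirmed fix) is to make the function fully type-parametric, so the Settings, Solver, and every array passed to setup! share one element type T and no Float64/BigFloat mixing occurs:

```julia
using Clarabel, SparseArrays, LinearAlgebra

# Every object handed to setup! — including Settings and Solver — is
# parameterized by the same T, avoiding the mixed-type dispatch that
# produced the stack overflow above.
function gMVP_generic(E::Vector{T}, V::Matrix{T}) where {T<:AbstractFloat}
    N = length(E)
    P = sparse(V)
    q = zeros(T, N)
    A = sparse([ones(T, 1, N); -Matrix{T}(I, N, N)])
    b = [one(T); zeros(T, N)]
    cones = [Clarabel.ZeroConeT(1), Clarabel.NonnegativeConeT(N)]
    settings = Clarabel.Settings{T}(verbose = false)
    solver = Clarabel.Solver{T}()
    Clarabel.setup!(solver, P, q, A, b, cones, settings)
    return Clarabel.solve!(solver).x
end
```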
I just installed Julia v1.8.3 with JuMP v1.5.0, and I'm eager to try out Clarabel. Unfortunately, I can't seem to install it; the package manager complains about compatibility with other solvers. I just ran pkg> update, so all my installed packages are current. Here is the error message from the package manager.
Resolving package versions...
ERROR: Unsatisfiable requirements detected for package COSMO [1e616198]:
COSMO [1e616198] log:
├─possible versions are: 0.3.0-0.8.6 or uninstalled
├─restricted to versions * by an explicit requirement, leaving only versions 0.3.0-0.8.6
├─restricted by compatibility requirements with QDLDL [bfc457fd] to versions: 0.3.0-0.4.1 or uninstalled, leaving only versions: 0.3.0-0.4.1
│ └─QDLDL [bfc457fd] log:
│ ├─possible versions are: 0.1.3-0.3.0 or uninstalled
│ └─restricted by compatibility requirements with Clarabel [61c947e1] to versions: 0.3.0
│ └─Clarabel [61c947e1] log:
│ ├─possible versions are: 0.1.0-0.3.0 or uninstalled
│ ├─restricted to versions * by an explicit requirement, leaving only versions 0.1.0-0.3.0
│ └─restricted by compatibility requirements with JuMP [4076af6c] to versions: 0.1.1-0.3.0 or uninstalled, leaving only versions: 0.1.1-0.3.0
│ └─JuMP [4076af6c] log:
│ ├─possible versions are: 0.18.3-1.5.0 or uninstalled
│ ├─restricted to versions * by an explicit requirement, leaving only versions 0.18.3-1.5.0
│ └─restricted by compatibility requirements with Pajarito [2f354839] to versions: [0.18.3-0.18.6, 1.0.0-1.5.0]
│ └─Pajarito [2f354839] log:
│ ├─possible versions are: 0.6.1-0.8.0 or uninstalled
│ └─restricted to versions * by an explicit requirement, leaving only versions 0.6.1-0.8.0
└─restricted by compatibility requirements with MathOptInterface [b8f27783] to versions: 0.8.4-0.8.6 or uninstalled — no versions left
└─MathOptInterface [b8f27783] log:
├─possible versions are: 0.5.0-1.11.0 or uninstalled
├─restricted to versions * by an explicit requirement, leaving only versions 0.5.0-1.11.0
├─restricted by compatibility requirements with Cbc [9961bab8] to versions: [0.6.0-0.6.4, 0.8.0-0.8.4, 0.9.1-1.11.0]
│ └─Cbc [9961bab8] log:
│ ├─possible versions are: 0.4.0-1.0.3 or uninstalled
│ ├─restricted to versions * by an explicit requirement, leaving only versions 0.4.0-1.0.3
│ └─restricted by compatibility requirements with MathOptInterface [b8f27783] to versions: 0.4.4-1.0.3 or uninstalled, leaving only versions: 0.4.4-1.0.3
│ └─MathOptInterface [b8f27783] log: see above
├─restricted by compatibility requirements with OSQP [ab2f91bb] to versions: [0.5.0-0.6.4, 0.8.0-0.8.4, 0.9.1-0.9.22, 0.10.3-1.11.0], leaving only versions: [0.6.0-0.6.4, 0.8.0-0.8.4, 0.9.1-0.9.22, 0.10.3-1.11.0]
│ └─OSQP [ab2f91bb] log:
│ ├─possible versions are: 0.1.8-0.8.0 or uninstalled
│ ├─restricted to versions * by an explicit requirement, leaving only versions 0.1.8-0.8.0
│ └─restricted by compatibility requirements with MathOptInterface [b8f27783] to versions: 0.5.1-0.8.0 or uninstalled, leaving only versions: 0.5.1-0.8.0
│ └─MathOptInterface [b8f27783] log: see above
├─restricted by compatibility requirements with COSMO [1e616198] to versions: [0.8.0-0.8.4, 0.9.1-0.9.22, 0.10.7-1.11.0]
│ └─COSMO [1e616198] log: see above
├─restricted by compatibility requirements with MadNLP [2621e9c9] to versions: [0.9.0-0.9.22, 0.10.5-0.10.8, 1.0.0-1.11.0], leaving only versions: [0.9.1-0.9.22, 0.10.7-0.10.8, 1.0.0-1.11.0]
│ └─MadNLP [2621e9c9] log:
│ ├─possible versions are: 0.1.0-0.5.1 or uninstalled
│ ├─restricted to versions * by an explicit requirement, leaving only versions 0.1.0-0.5.1
│ └─restricted by compatibility requirements with JuMP [4076af6c] to versions: 0.2.0-0.5.1 or uninstalled, leaving only versions: 0.2.0-0.5.1
│ └─JuMP [4076af6c] log: see above
└─restricted by compatibility requirements with Clarabel [61c947e1] to versions: 1.1.0-1.11.0
└─Clarabel [61c947e1] log: see above
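The resolver log shows Clarabel's MathOptInterface requirement (1.1.0 and up) colliding with bounds imposed by other installed solvers (COSMO via QDLDL, etc.). A standard way out (general Pkg advice, not specific to this report) is a fresh environment:

```julia
# Install Clarabel into its own environment so version pins from unrelated
# solvers (COSMO, Cbc, OSQP, MadNLP, ...) cannot constrain MathOptInterface.
using Pkg
Pkg.activate("clarabel-env"; shared = true)
Pkg.add(["Clarabel", "JuMP"])
Pkg.status()
```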
The family of functions mul_WtWinv! and mul_WtW! need to:
Right now, solve!() in directldl_cholmod.jl and directldl_mklpardiso.jl allocates additional memory every time we call it.
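A generic way to quantify per-call allocations like this (a general Julia pattern, not Clarabel's code) is to warm a call up once and then measure it with @allocated:

```julia
using LinearAlgebra

# `kernel!` stands in for a hot in-place path like solve!(): once compiled,
# it should allocate nothing per call.
kernel!(y, A, x) = mul!(y, A, x)

function measure()
    A = rand(100, 100); x = rand(100); y = similar(x)
    kernel!(y, A, x)              # warm-up: trigger compilation
    @allocated kernel!(y, A, x)   # bytes allocated by one hot call
end
measure()          # warm the measurement itself
bytes = measure()  # 0 for an allocation-free call
```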
Thanks for making this cool solver available! I've added it to ConvexTests.jl, which uses test problems from Convex.jl and SumOfSquares.jl in order to test the reliability of Julia-accessible solvers. So these are somewhat weird problems that tend to test edge cases, rather than standard benchmarking problems. I started it when I couldn't find a single solver that could solve all of the problems from Convex.jl's tests (in order to run in CI).
You can see the results for Clarabel here: https://ericphanson.github.io/ConvexTests.jl/dev/Clarabel/
If you would prefer ConvexTests to run the solver with different settings, you can make a PR to adjust them here: https://github.com/ericphanson/ConvexTests.jl/blob/9653106903f05143a0785e5560fd9d5a8f52690d/Clarabel/test.jl#L6-L11
If you are interested, it is also possible to run Convex's tests as part of your own test suite to prevent regressions on these problems. See e.g. https://jump.dev/Convex.jl/stable/problem_depot/#Problem-Depot.
I am trying Julia 1.9 rc1
Clarabel [61c947e1-3e6d-4ee4-985a-eec8c727bd6e]
│ WARNING: method definition for Solver at ~/.julia/packages/Clarabel/CuZFo/src/types.jl:346 declares type variable T but does not use it.
I have a small quadratic program with linear and semidefinite constraints. The optimal objective value is -130.
using LinearAlgebra, SparseArrays, Clarabel
P = sparse([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 3, 4, 5, 6, 7, 8, 9, 10, 11, 4, 5, 6, 7, 8, 9, 10, 11, 5, 6, 7, 8, 9, 10, 11, 6, 7, 8, 9, 10, 11, 7, 8, 9, 10, 11, 8, 9, 10, 11, 9, 10, 11, 10, 11, 11], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 10, 10, 11], [2.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4444444444444444, 0.0, 0.0, 0.0, 0.0, -0.8888888888888887, 0.0, 0.0, 0.0, 0.44444444444444464, 0.0, 0.0, 0.0, 0.0, 0.0, -0.4444444444444443, 0.0, 1.0564784053156147, -1.3343410355945482, 0.0, 0.0, 0.28190303568898906, 0.0, -0.6013931428031767, 1.5282392026578073, 0.0, 0.0, 0.19933554817275745, 0.0, -0.42524916943521596, 1.1111111111111112, 0.0, 0.0, 0.0, 0.0, 0.4444444444444444, 0.0, 0.0, 0.0, 1.5282392026578073, 0.0, 0.14617940199335547, 1.1111111111111112, 0.0, 0.7774086378737541], 11, 11);
q = [-1.0, -1.0, -16.81342790821346, -0.0, 27.86611507785657, 20.704318936877076, 0.6285393610547088, 15.399214345840367, 6.132890365448506, -0.0, 15.049833887043189];
A = sparse([1, 12, 2, 12, 3, 5, 8, 4, 12, 6, 9, 7, 12, 10, 11, 12], [1, 1, 2, 2, 3, 4, 5, 6, 6, 7, 8, 9, 9, 10, 11, 11], [-1.0, 1.0, -1.0, 1.0, -1.0, -1.0, -1.0, -1.0, 1.0, -1.0, -1.0, -1.0, 1.0, -1.0, -1.0, 1.0], 12, 11);
b = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 10.0];
cones = [Clarabel.NonnegativeConeT(1), Clarabel.PSDTriangleConeT(4), Clarabel.NonnegativeConeT(1)];
settings = Clarabel.Settings();
solver = Clarabel.Solver();
Clarabel.setup!(solver, P, q, A, b, cones, settings);
result = Clarabel.solve!(solver)
Now, the result status is SOLVED; however, Clarabel reports a better optimal objective value than is possible:
julia> result.obj_val
-146.0668053087124
julia> dot(result.x, P, result.x)/2 + dot(result.x, q)
-129.2779437022457
So the actual returned solution has a different objective value than the one reported in the result object. Additionally, the duality gap is reported as 9.79e-9. For such a small gap, even the "actual" result still seems a bit far from the optimal value: Mosek reports -130.00045750377342, Mathematica -130.00045729259804...
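One thing worth checking here (an assumption on my part, not a confirmed diagnosis): if the P handed to setup! holds triangular data for a symmetric matrix, recomputing the objective directly from that triangular P gives a different number than the symmetrized matrix does. A small illustration with hypothetical data:

```julia
using LinearAlgebra, SparseArrays

# Evaluating x'Px/2 on triangular data vs. its symmetrized counterpart
# gives different values; a mismatch like -146 vs -129 could stem from
# the two sides using different conventions about P.
P = sparse([3.0 1.0 -1.0; 0.0 4.0 2.0; 0.0 0.0 5.0])  # upper-triangular data
x = [1.0, 2.0, 3.0]
obj_tri = dot(x, P, x) / 2                 # uses the triangle as-is
obj_sym = dot(x, Symmetric(P, :U), x) / 2  # uses the full symmetric matrix
```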