
oxfordcontrol / clarabeldocs

Documentation for the Clarabel interior point conic solver

License: Apache License 2.0

Julia 100.00%
optimization conic-optimization conic-programs convex-optimization interior-point-method julia-language linear-programming optimization-algorithms quadratic-programming semidefinite-programming


clarabeldocs's Issues

Objective value is wrong

I have a small quadratic program with linear and semidefinite constraints. The optimal objective value is approximately -130.

using LinearAlgebra, SparseArrays, Clarabel
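# quadratic objective data: minimize (1/2)xᵀPx + qᵀx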
P = sparse([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 3, 4, 5, 6, 7, 8, 9, 10, 11, 4, 5, 6, 7, 8, 9, 10, 11, 5, 6, 7, 8, 9, 10, 11, 6, 7, 8, 9, 10, 11, 7, 8, 9, 10, 11, 8, 9, 10, 11, 9, 10, 11, 10, 11, 11], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 10, 10, 11], [2.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4444444444444444, 0.0, 0.0, 0.0, 0.0, -0.8888888888888887, 0.0, 0.0, 0.0, 0.44444444444444464, 0.0, 0.0, 0.0, 0.0, 0.0, -0.4444444444444443, 0.0, 1.0564784053156147, -1.3343410355945482, 0.0, 0.0, 0.28190303568898906, 0.0, -0.6013931428031767, 1.5282392026578073, 0.0, 0.0, 0.19933554817275745, 0.0, -0.42524916943521596, 1.1111111111111112, 0.0, 0.0, 0.0, 0.0, 0.4444444444444444, 0.0, 0.0, 0.0, 1.5282392026578073, 0.0, 0.14617940199335547, 1.1111111111111112, 0.0, 0.7774086378737541], 11, 11);
q = [-1.0, -1.0, -16.81342790821346, -0.0, 27.86611507785657, 20.704318936877076, 0.6285393610547088, 15.399214345840367, 6.132890365448506, -0.0, 15.049833887043189];
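# conic constraint data: Ax + s = b, with s in the cones listed below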
A = sparse([1, 12, 2, 12, 3, 5, 8, 4, 12, 6, 9, 7, 12, 10, 11, 12], [1, 1, 2, 2, 3, 4, 5, 6, 6, 7, 8, 9, 9, 10, 11, 11], [-1.0, 1.0, -1.0, 1.0, -1.0, -1.0, -1.0, -1.0, 1.0, -1.0, -1.0, -1.0, 1.0, -1.0, -1.0, 1.0], 12, 11);
b = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 10.0];
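# cones match the rows of A: 1 nonnegative row, a 4×4 PSD triangle block (10 rows), 1 nonnegative row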
cones = [Clarabel.NonnegativeConeT(1), Clarabel.PSDTriangleConeT(4), Clarabel.NonnegativeConeT(1)];
settings = Clarabel.Settings();
solver = Clarabel.Solver();
Clarabel.setup!(solver, P, q, A, b, cones, settings);
result = Clarabel.solve!(solver)

The result status is SOLVED; however, Clarabel reports a better objective value than is possible:

julia> result.obj_val
-146.0668053087124
julia> dot(result.x, P, result.x)/2 + dot(result.x, q)
-129.2779437022457

So the returned solution has a different objective value than the one reported in the result object. Additionally, the duality gap is reported as 9.79e-9. For such a small gap, even the "actual" value still seems somewhat far from the optimum: Mosek reports -130.00045750377342, Mathematica -130.00045729259804.

Tips on collecting multiple cones

I was wondering whether it is more efficient to have many small cones or a few big ones. We are code-generating MPC problems for control, so we have the opportunity to preprocess the problem at no additional runtime cost.

I think I found the answer in the Clarabel.jl issue tracker (oxfordcontrol/Clarabel.jl#86). My understanding is that preprocessing the problem into as few cones as possible reduces function-call overhead in the solver, so it should be faster. Is that correct? Would it affect the numerical properties in any way, such as significantly changing the factorization? Is the answer the same for Rust and Julia?
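For illustration, a minimal sketch of that kind of preprocessing, written against the Julia API shown in the first issue above: consecutive NonnegativeConeT blocks are merged into one larger block before calling Clarabel.setup!. The helper name merge_nonnegative_cones is hypothetical, and it assumes NonnegativeConeT exposes its dimension as a dim field and that the rows of A and b are already ordered so the merged cones are adjacent.

using Clarabel

# Hypothetical helper: collapse runs of consecutive NonnegativeConeT blocks
# into a single block. No problem data changes, only the cone list.
function merge_nonnegative_cones(cones)
    merged = empty(cones)
    for cone in cones
        if cone isa Clarabel.NonnegativeConeT && !isempty(merged) &&
                last(merged) isa Clarabel.NonnegativeConeT
            # combine with the previous nonnegative block
            merged[end] = Clarabel.NonnegativeConeT(last(merged).dim + cone.dim)
        else
            push!(merged, cone)
        end
    end
    return merged
end

# e.g. [NonnegativeConeT(1), NonnegativeConeT(2)] becomes [NonnegativeConeT(3)];
# blocks of other cone types are left untouched.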

I'm asking on the docs issue tracker because I looked at the docs first, before finding the answer in the issue tracker. I suggest adding a "tips and tricks" or "frequently asked questions" section to the docs for information like this.

Reduced performance with scaled-down data

I'm a very happy user of Clarabel and have now moved away from all my previous choices (ECOS, SCS, quadprog).

I am using it from Python, via the CVXPY API and qpsolvers, to solve large-scale problems, e.g., 256 variables and 100K linear equality and inequality constraints.

I have now run into an issue where scaling the problem data by a factor of 100 changes the results considerably. Clarabel works well when the data are of order 1, but performance degrades considerably when the data are 100 times smaller, even though the two problems are equivalent.

Is it a tolerance issue?
Should I scale the data myself?
What's your recommendation on this issue?

Well done, and best wishes.
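For what it's worth, a minimal sketch of manual rescaling, written in Julia to match the snippet in the first issue (the same idea applies through the Python interface). It assumes Clarabel's standard form, minimize ½xᵀPx + qᵀx subject to Ax + s = b with s in a product of cones: multiplying P and q by a positive constant leaves the minimizer unchanged, and each zero- or nonnegative-cone row of (A, b) can be scaled by a positive constant without changing the feasible set. The constants are illustrative; second-order and PSD blocks would need a single uniform factor per block rather than per-row scaling.

using LinearAlgebra, SparseArrays

# Rescale the objective data back toward order 1. The reported objective
# is multiplied by γ, but the minimizer x* is unchanged.
γ = 100.0
P̂ = γ .* P
q̂ = γ .* q

# Normalize the zero / nonnegative cone rows of A and b. Positive row
# scaling leaves the feasible set of those rows unchanged.
row_scale = [1.0 / max(norm(A[i, :]), 1e-8) for i in 1:size(A, 1)]
Â = Diagonal(row_scale) * A
b̂ = row_scale .* b

# Solve with (P̂, q̂, Â, b̂) and recover the original objective as obj_val / γ.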

Documentation of PSD cone

In the overview of the cone types, the description of the PSD cone says only that the cone is built from the column-wise stacking of the upper triangle. However, the page on the semidefinite cone explains that we are actually dealing with the more common scaled vectorization. This information should also be present in the API reference.
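For concreteness, a sketch of that scaled vectorization (often written svec), as described on the semidefinite-cone page: column-wise stacking of the upper triangle with off-diagonal entries multiplied by √2, so that the Euclidean inner product of two vectorizations equals the trace inner product ⟨X, Y⟩. This is an illustrative reimplementation, not the library's internal routine.

using LinearAlgebra

# Scaled column-wise stacking of the upper triangle of a symmetric matrix.
# Off-diagonals are multiplied by √2 so that
# dot(svec(X), svec(Y)) == dot(X, Y) for symmetric X, Y.
function svec(X::AbstractMatrix)
    n = size(X, 1)
    v = zeros(n * (n + 1) ÷ 2)
    k = 1
    for j in 1:n, i in 1:j        # upper triangle, column by column
        v[k] = (i == j) ? X[i, j] : sqrt(2) * X[i, j]
        k += 1
    end
    return v
end

# Under this convention a PSDTriangleConeT(4) block occupies 4·5/2 = 10 rows
# of A, consistent with the 12-row A in the first issue (1 + 10 + 1).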
