Documentation for the Clarabel interior point conic solver
License: Apache License 2.0
Was the name of this repository recently changed? The documentation now seems to be deployed at
https://oxfordcontrol.github.io/clarabel
whereas it was previously deployed at
https://oxfordcontrol.github.io/ClarabelDocs/
As a result of the change, many links to the documentation now point to nowhere.
I have a small quadratic program with linear and semidefinite constraints. The optimal objective value is -130.
using LinearAlgebra, SparseArrays, Clarabel
P = sparse([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 3, 4, 5, 6, 7, 8, 9, 10, 11, 4, 5, 6, 7, 8, 9, 10, 11, 5, 6, 7, 8, 9, 10, 11, 6, 7, 8, 9, 10, 11, 7, 8, 9, 10, 11, 8, 9, 10, 11, 9, 10, 11, 10, 11, 11], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 10, 10, 11], [2.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4444444444444444, 0.0, 0.0, 0.0, 0.0, -0.8888888888888887, 0.0, 0.0, 0.0, 0.44444444444444464, 0.0, 0.0, 0.0, 0.0, 0.0, -0.4444444444444443, 0.0, 1.0564784053156147, -1.3343410355945482, 0.0, 0.0, 0.28190303568898906, 0.0, -0.6013931428031767, 1.5282392026578073, 0.0, 0.0, 0.19933554817275745, 0.0, -0.42524916943521596, 1.1111111111111112, 0.0, 0.0, 0.0, 0.0, 0.4444444444444444, 0.0, 0.0, 0.0, 1.5282392026578073, 0.0, 0.14617940199335547, 1.1111111111111112, 0.0, 0.7774086378737541], 11, 11);
q = [-1.0, -1.0, -16.81342790821346, -0.0, 27.86611507785657, 20.704318936877076, 0.6285393610547088, 15.399214345840367, 6.132890365448506, -0.0, 15.049833887043189];
A = sparse([1, 12, 2, 12, 3, 5, 8, 4, 12, 6, 9, 7, 12, 10, 11, 12], [1, 1, 2, 2, 3, 4, 5, 6, 6, 7, 8, 9, 9, 10, 11, 11], [-1.0, 1.0, -1.0, 1.0, -1.0, -1.0, -1.0, -1.0, 1.0, -1.0, -1.0, -1.0, 1.0, -1.0, -1.0, 1.0], 12, 11);
b = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 10.0];
cones = [Clarabel.NonnegativeConeT(1), Clarabel.PSDTriangleConeT(4), Clarabel.NonnegativeConeT(1)];
settings = Clarabel.Settings();
solver = Clarabel.Solver();
Clarabel.setup!(solver, P, q, A, b, cones, settings);
result = Clarabel.solve!(solver)
Now, the result status is SOLVED; however, Clarabel reports a better optimal objective value than is possible:
julia> result.obj_val
-146.0668053087124
julia> dot(result.x, P, result.x)/2 + dot(result.x, q)
-129.2779437022457
So the solution that is returned has a different objective value than the objective value in the result object. Additionally, the duality gap is reported to be 9.79e-9. For such a small gap, even the "actual" result still seems to be a bit far from the optimal value: Mosek reports -130.00045750377342 and Mathematica -130.00045729259804...
I was wondering if it would be more efficient to have many small cones or few big cones. We are code-generating MPC problems for control, so we have the opportunity to pre-process the problem at no additional runtime cost.
I think I found the answer in the Clarabel.jl issue tracker oxfordcontrol/Clarabel.jl#86. My understanding is that preprocessing the problem into as few cones as possible would reduce function call overhead from the solver so it should be faster - is that correct? Would it affect the numerical properties in any way, such as significantly changing the factorization? Is the answer the same for Rust and Julia?
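For illustration, merging adjacent cones of the same type can be sketched as below. The cones are represented here as hypothetical `(kind, dim)` tuples purely for the sketch; the real Julia, Rust, and Python APIs use typed cone objects such as `Clarabel.NonnegativeConeT(n)`. Note that only cones that concatenate naturally (zero and nonnegative cones) can be merged this way; second-order and PSD cones cannot.

```python
# Sketch: collapse runs of adjacent mergeable cones into one larger cone,
# reducing the number of cone objects the solver iterates over.
# (kind, dim) tuples are an illustration, not Clarabel's actual API.

MERGEABLE = {"zero", "nonnegative"}  # SOC/PSD cones cannot be concatenated

def merge_cones(cones):
    merged = []
    for kind, dim in cones:
        if merged and kind in MERGEABLE and merged[-1][0] == kind:
            # extend the previous cone instead of appending a new one
            merged[-1] = (kind, merged[-1][1] + dim)
        else:
            merged.append((kind, dim))
    return merged

cones = [("nonnegative", 1), ("nonnegative", 1),
         ("psd_triangle", 4), ("nonnegative", 1)]
print(merge_cones(cones))
# [('nonnegative', 2), ('psd_triangle', 4), ('nonnegative', 1)]
```

The merged list describes exactly the same constraint set, since the product of two nonnegative cones of dimensions m and n is the nonnegative cone of dimension m + n.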
I'm asking on the docs issue tracker because I first looked at the docs before finding the answer in the issue trackers. I suggest adding a "tips and tricks" or "frequently asked questions" section in the docs for information such as this.
I'm a very happy user of Clarabel and have now moved away from all my previous choices (ECOS, SCS, quadprog).
I am using it from Python, using the CVXPY API and qpsolvers, to solve large scale problems, e.g., 256 variables and 100K linear equality and inequality constraints.
I now run into an issue where scaling the problem by a factor of 100 changes the results considerably. Clarabel seems to work well when the data is of order 1, and performance degrades considerably when the data is 100 times smaller, even though the problems are equivalent.
Is it a tolerance issue?
Should I scale the data myself?
What's your recommendation on this issue?
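Mathematically, uniformly rescaling the objective data cannot change the minimizer, only the optimal value; so any sensitivity to scale comes from the solver's (absolute) tolerances and internal scaling heuristics rather than from the problem itself. A small NumPy sketch of why pre-normalizing the data to roughly unit magnitude is safe (this is illustrative only, not Clarabel's internal equilibration):

```python
import numpy as np

# Scaling (P, q) by a constant c > 0 scales the objective 0.5 x'Px + q'x
# by c but leaves the minimizer unchanged, so normalizing data magnitude
# before calling a solver does not alter the solution you recover.

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
P = M @ M.T + np.eye(5)            # symmetric positive definite
q = rng.standard_normal(5)

x_star = np.linalg.solve(P, -q)    # unconstrained minimizer

c = 0.01                           # data 100x smaller
x_scaled = np.linalg.solve(c * P, -(c * q))

assert np.allclose(x_star, x_scaled)   # same minimizer

f = lambda P, q, x: 0.5 * x @ P @ x + q @ x
assert np.isclose(f(c * P, c * q, x_star), c * f(P, q, x_star))
```

In finite precision, however, a solver checking e.g. an absolute duality-gap tolerance will effectively demand 100x more relative accuracy on the scaled-down problem, which is one plausible explanation for the behavior described above.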
Well done, and best wishes.
In the overview of the cone types, the description of the PSD cone only says that the cone is built from the column-wise stacking of the upper triangle. However, the page on the semidefinite cone explains that we are actually dealing with the more common scaled vectorization. This information should also be present in the API reference.
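For reference, the scaled vectorization (often written svec) stacks the upper triangle column-wise and multiplies the off-diagonal entries by √2, so that the Euclidean inner product of two vectorized matrices equals the trace inner product of the matrices themselves. A minimal sketch (not Clarabel's implementation):

```python
import numpy as np

def svec(X):
    """Scaled column-wise upper-triangle vectorization of a symmetric matrix:
    diagonal entries kept as-is, off-diagonal entries scaled by sqrt(2)."""
    n = X.shape[0]
    out = []
    for col in range(n):
        for row in range(col + 1):
            scale = 1.0 if row == col else np.sqrt(2.0)
            out.append(scale * X[row, col])
    return np.array(out)

# The sqrt(2) scaling makes svec an isometry for the trace inner product:
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2
assert np.isclose(svec(A) @ svec(B), np.trace(A @ B))
```

Without the √2 factor, off-diagonal terms would be counted once instead of twice and the inner products would not match, which is why the distinction matters for anyone assembling PSD constraint data by hand.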
Data updating functionality new to v0.7.0 requires documentation
Hello there,
It seems the documentation on possible solver outputs is missing a couple of fields; cf. https://github.com/oxfordcontrol/Clarabel.rs/blob/abff1b457871c2e8a92a6504749321117e263e35/src/python/impl_default_py.rs#L96-L102
AlmostSolved,
AlmostPrimalInfeasible,
AlmostDualInfeasible,
NumericalError,
InsufficientProgress,
I am not sure what they mean exactly, so would it be possible to document them for your users?
Cheers,
Roman