
julianonconvex / Nonconvex.jl

110 stars · 4 watchers · 10 forks · 2.37 MB

Toolbox for gradient-based and derivative-free non-convex constrained optimization with continuous and/or discrete variables.

Home Page: https://julianonconvex.github.io/Nonconvex.jl/

License: MIT License

Language: Julia (100.0%)

Topics: augmented-lagrangian-method, automatic-differentiation, bayesian-optimization, black-box-optimization, derivative-free-optimization, discrete-optimization, evolutionary-algorithms, global-optimization, implicit-differentiation, interior-point-optimizer

Nonconvex.jl's Introduction

Nonconvex


Nonconvex.jl is an umbrella package over implementations and wrappers of a number of nonconvex constrained optimization algorithms and packages, making use of automatic differentiation. Zero-, first- and second-order methods are available. Nonlinear equality and inequality constraints, as well as integer and nonlinear semidefinite constraints, are supported. A detailed description of all the algorithms and features available in Nonconvex can be found in the documentation.

The JuliaNonconvex organization

The JuliaNonconvex organization hosts a number of packages which are available for use in Nonconvex.jl. The correct package is loaded using the Nonconvex.@load macro with the algorithm or package name; see the documentation for more details. For example, loading the Ipopt wrapper looks like this (assuming NonconvexIpopt.jl is installed):
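using Nonconvex
Nonconvex.@load Ipopt   # loads the NonconvexIpopt wrapper and brings IpoptAlg and IpoptOptions into scope

alg = IpoptAlg()
options = IpoptOptions()

The following is a summary of all the packages in the JuliaNonconvex organization.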

| Package | Description |
| --- | --- |
| Nonconvex.jl | Umbrella package for nonconvex optimization |
| NonconvexCore.jl | All the interface functions and structs |
| NonconvexMMA.jl | Method of moving asymptotes implementation in pure Julia |
| NonconvexIpopt.jl | Ipopt.jl wrapper |
| NonconvexNLopt.jl | NLopt.jl wrapper |
| NonconvexPercival.jl | Percival.jl wrapper (an augmented Lagrangian algorithm implementation) |
| NonconvexJuniper.jl | Juniper.jl wrapper |
| NonconvexPavito.jl | Pavito.jl wrapper |
| NonconvexSemidefinite.jl | Nonlinear semidefinite programming algorithm |
| NonconvexMultistart.jl | Multi-start optimization algorithms |
| NonconvexBayesian.jl | Constrained Bayesian optimization implementation |
| NonconvexSearch.jl | Multi-trajectory and local search methods |
| NonconvexAugLagLab.jl | Experimental augmented Lagrangian package |
| NonconvexUtils.jl | Utility functions for automatic differentiation, history tracing, implicit functions and more |
| NonconvexTOBS.jl | Binary optimization algorithm called "topology optimization of binary structures" (TOBS), originally developed in the context of optimal distribution of material in mechanical components |
| NonconvexMetaheuristics.jl | Metaheuristic gradient-free optimization algorithms as implemented in Metaheuristics.jl |
| NonconvexNOMAD.jl | NOMAD algorithm as wrapped in NOMAD.jl |

Design philosophy

Nonconvex.jl is a Julia package that implements and wraps a number of constrained nonlinear and mixed integer nonlinear programming solvers. There are three focus points of Nonconvex.jl compared to similar packages such as JuMP.jl and NLPModels.jl:

  1. Emphasis on a function-based API. Objectives and constraints are normal Julia functions.
  2. The ability to nest algorithms to create more complicated algorithms.
  3. The ability to handle structs and different container types in the decision variables by automatically vectorizing and un-vectorizing them in an AD-compatible way (see the sketch after this list).
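As a sketch of the third point, decision variables can be named and non-vector, with Nonconvex vectorizing and un-vectorizing them behind the scenes. The DictModel API below (symbol-keyed addvar! and an OrderedDict initial point) is assumed from the documentation and may differ across versions:

using Nonconvex, OrderedCollections
Nonconvex.@load Ipopt

model = DictModel()                       # decision variables stored by name
addvar!(model, :a, 0.0, 10.0)             # scalar variable with bounds
addvar!(model, :b, zeros(3), ones(3))     # vector variable with bounds
set_objective!(model, x -> x[:a]^2 + sum(abs2, x[:b]))

x0 = OrderedDict(:a => 1.0, :b => fill(0.5, 3))
r = optimize(model, IpoptAlg(), x0, options = IpoptOptions())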

Installing Nonconvex

To install Nonconvex.jl, open a Julia REPL and type ] to enter the package mode. Then run:

add Nonconvex

Alternatively, copy and paste the following code into a Julia REPL:

using Pkg; Pkg.add("Nonconvex")

Loading Nonconvex

To load and start using Nonconvex.jl, run:

using Nonconvex

Quick example

using Nonconvex
Nonconvex.@load NLopt

f(x) = sqrt(x[2])
g(x, a, b) = (a*x[1] + b)^3 - x[2]

model = Model(f)
addvar!(model, [0.0, 0.0], [10.0, 10.0])
add_ineq_constraint!(model, x -> g(x, 2, 0))
add_ineq_constraint!(model, x -> g(x, -1, 1))

alg = NLoptAlg(:LD_MMA)
options = NLoptOptions()
r = optimize(model, alg, [1.0, 1.0], options = options)
r.minimum # objective value
r.minimizer # decision variables

Algorithms

A summary of all the algorithms available in Nonconvex through its different packages is shown in the table below. Scroll right to see more columns; a description of each column is given below the table. Blank cells indicate yes/no entries not reproduced here; see the documentation for the full support matrix.

| Algorithm name | Is meta-algorithm? | Algorithm package | Order | Finite bounds | Infinite bounds | Inequality constraints | Equality constraints | Semidefinite constraints | Integer variables |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method of moving asymptotes (MMA) | | NonconvexMMA.jl (pure Julia) or NLopt.jl | 1 | | | | | | |
| Primal dual interior point method | | Ipopt.jl | 1 or 2 | | | | | | |
| DIviding RECTangles algorithm (DIRECT) | | NLopt.jl | 0 | | | | | | |
| Controlled random search (CRS) | | NLopt.jl | 0 | | | | | | |
| Multi-Level Single-Linkage (MLSL) | Limited | NLopt.jl | Depends on sub-solver | | | | | | |
| StoGo | | NLopt.jl | 1 | | | | | | |
| AGS | | NLopt.jl | 0 | | | | | | |
| Improved Stochastic Ranking Evolution Strategy (ISRES) | | NLopt.jl | 0 | | | | | | |
| ESCH | | NLopt.jl | 0 | | | | | | |
| COBYLA | | NLopt.jl | 0 | | | | | | |
| BOBYQA | | NLopt.jl | 0 | | | | | | |
| NEWUOA | | NLopt.jl | 0 | | | | | | |
| Principal AXIS (PRAXIS) | | NLopt.jl | 0 | | | | | | |
| Nelder Mead | | NLopt.jl | 0 | | | | | | |
| Subplex | | NLopt.jl | 0 | | | | | | |
| CCSAQ | | NLopt.jl | 1 | | | | | | |
| SLSQP | | NLopt.jl | 1 | | | | | | |
| TNewton | | NLopt.jl | 1 | | | | | | |
| Shifted limited-memory variable-metric | | NLopt.jl | 1 | | | | | | |
| Augmented Lagrangian in NLopt | Limited | NLopt.jl | Depends on sub-solver | | | | | | |
| Augmented Lagrangian in Percival | | Percival.jl | 1 or 2 | | | | | | |
| Multiple trajectory search | | NonconvexSearch.jl | 0 | | | | | | |
| Branch and bound for mixed integer nonlinear programming | | Juniper.jl | 1 or 2 | | | | | | |
| Sequential polyhedral outer-approximations for mixed integer nonlinear programming | | Pavito.jl | 1 or 2 | | | | | | |
| Evolutionary centers algorithm (ECA) | | Metaheuristics.jl | 0 | | | | | | |
| Differential evolution (DE) | | Metaheuristics.jl | 0 | | | | | | |
| Particle swarm optimization (PSO) | | Metaheuristics.jl | 0 | | | | | | |
| Artificial bee colony (ABC) | | Metaheuristics.jl | 0 | | | | | | |
| Gravitational search algorithm (GSA) | | Metaheuristics.jl | 0 | | | | | | |
| Simulated annealing (SA) | | Metaheuristics.jl | 0 | | | | | | |
| Whale optimization algorithm (WOA) | | Metaheuristics.jl | 0 | | | | | | |
| Machine-coded compact genetic algorithm (MCCGA) | | Metaheuristics.jl | 0 | | | | | | |
| Genetic algorithm (GA) | | Metaheuristics.jl | 0 | | | | | | |
| Nonlinear optimization with the MADS algorithm (NOMAD) | | NOMAD.jl | 0 | | | | Limited | | |
| Topology optimization of binary structures (TOBS) | | NonconvexTOBS.jl | 1 | Binary | | | | | Binary |
| Hyperband | | Hyperopt.jl | Depends on sub-solver | | | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver |
| Random search | | Hyperopt.jl | Depends on sub-solver | | | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver |
| Latin hypercube search | | Hyperopt.jl | Depends on sub-solver | | | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver |
| Surrogate assisted optimization | | NonconvexBayesian.jl | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver |
| Log barrier method for nonlinear semidefinite constraint handling | | NonconvexSemidefinite.jl | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | Depends on sub-solver | | Depends on sub-solver |

The following is an explanation of all the columns in the table:

  • Algorithm name. This is the name of the algorithm and/or its acronym. Some algorithms have multiple variants implemented in their respective packages. When that's the case, the whole family of algorithms is mentioned only once.
  • Is meta-algorithm? Some algorithms are meta-algorithms that call a sub-algorithm to do the optimization after transforming the problem. In this case, a lot of the properties of the meta-algorithm are inherited from the sub-algorithm. So if the sub-algorithm requires gradients or Hessians of functions in the model, the meta-algorithm will also require gradients and Hessians of functions in the model. Fields where the property of the meta-algorithm is inherited from the sub-solver are indicated using the "Depends on sub-solver" entry. Some algorithms in NLopt have a "Limited" meta-algorithm status because they can only be used to wrap algorithms from NLopt.
  • Algorithm package. This is the Julia package that either implements the algorithm or calls it from another programming language. Nonconvex wraps all these packages using a consistent API while allowing each algorithm to be customized where possible and have its own set of options.
  • Order. This is the order of the algorithm. Zero-order algorithms only require the evaluation of the objective and constraint functions; they don't require any gradients or Hessians of the objective and constraint functions. First-order algorithms require both the value and gradients of objective and/or constraint functions. Second-order algorithms require the value, gradients and Hessians of objective and/or constraint functions. See the example after this list.
  • Finite bounds. This is true if the algorithm supports finite lower and upper bound constraints on the decision variables. One special case is the TOBS algorithm which only supports binary decision variables so an entry of "Binary" is used instead of true/false.
  • Infinite bounds. This is true if the algorithm supports unbounded decision variables either from below, above or both.
  • Inequality constraints. This is true if the algorithm supports nonlinear inequality constraints.
  • Equality constraints. This is true if the algorithm supports nonlinear equality constraints. Algorithms that only support linear equality constraints are given an entry of "Limited".
  • Semidefinite constraints. This is true if the algorithm supports nonlinear semidefinite constraints.
  • Integer variables. This is true if the algorithm supports integer/discrete/binary decision variables, not just continuous. One special case is the TOBS algorithm which only supports binary decision variables so an entry of "Binary" is used instead of true/false.
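For example, the Order column determines which NLopt algorithms need gradients: NLopt names prefixed with LN_ are zero-order (derivative-free) and names prefixed with LD_ are first-order, a convention of NLopt itself. The two algorithms below are only illustrative:

using Nonconvex
Nonconvex.@load NLopt

alg0 = NLoptAlg(:LN_COBYLA)   # zero-order: objective/constraint values only
alg1 = NLoptAlg(:LD_SLSQP)    # first-order: values and gradients required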

How to contribute?

A beginner? The easiest way to contribute is to read the documentation, test the package and report issues.

An impulsive tester? Improving the test coverage of any package is another great way to contribute to the JuliaNonconvex org. Check the coverage report of any of the packages above by clicking the coverage badge. Find the red lines in the report and figure out tests that would cover these lines of code.

An algorithm head? There are plenty of optimization algorithms that can be implemented and interfaced in Nonconvex.jl. You could be developing the next big nonconvex semidefinite programming algorithm right now! Or the next constraint handling method for evolutionary algorithms!

A hacker? Let's figure out how to wrap some optimization package in Julia in the unique, simple and nimble Nonconvex.jl style.

A software designer? Let's talk about design decisions and how to improve the modularity of the ecosystem.

You can always reach out by opening an issue.

How to cite?

If you use Nonconvex.jl for your own research, please consider citing the following publication: Mohamed Tarek. Nonconvex.jl: A Comprehensive Julia Package for Non-Convex Optimization. 2023. doi: 10.13140/RG.2.2.36120.37121.

@article{MohamedTarekNonconvexjl,
  doi = {10.13140/RG.2.2.36120.37121},
  url = {https://rgdoi.net/10.13140/RG.2.2.36120.37121},
  author = {Tarek, Mohamed},
  language = {en},
  title = {Nonconvex.jl: A Comprehensive Julia Package for Non-Convex Optimization},
  year = {2023}
}

Nonconvex.jl's People

Contributors

carlolucibello, github-actions[bot], lrnv, matbesancon, mohamed82008, oxinabox, pizhn, tmigot


Nonconvex.jl's Issues

Conflict with NLopt

optimize in Nonconvex conflicts with optimize in NLopt. This name clash should be resolved when both packages are imported.

Proximal algorithms and generalized convex constraints

Some proximal algorithms can be used to solve non-convex optimization problems. https://github.com/kul-forbes/ProximalAlgorithms.jl should be able to support this. However, for ideal performance we need to be able to communicate any convex structure in the problem to the solver, so that it can use efficient proximal operators. This requires a method to represent linear, conic and other special constraints in Nonconvex. Efficient proximal operators for these constraints exist and can significantly speed up convergence.

Convert JuMP model to Nonconvex model

Now that we have DictModel, it's trivial to convert a JuMP model to a DictModel. This allows the use of JuMP syntax for linear constraints and variable definitions, followed by the use of Nonconvex for defining nonlinear functions.
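A minimal sketch of what this would enable, assuming a DictModel constructor that accepts a JuMP.Model as described above (the constructor and the key naming for the JuMP variables are assumptions, not confirmed API):

using JuMP, Nonconvex
Nonconvex.@load Ipopt

jm = JuMP.Model()
@variable(jm, 0 <= x[1:2] <= 10)     # variables and linear constraints in JuMP syntax
@constraint(jm, x[1] + x[2] <= 10)

model = DictModel(jm)                        # assumed conversion entry point
set_objective!(model, v -> sqrt(v[:x][2]))   # hypothetical key naming; nonlinear part defined in Nonconvex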

flag based augmented Lagrangian

It would be nice to re-think the augmented Lagrangian algorithm by only relaxing constraints that have a :relax flag on them. Then users can choose the sub-algorithm that matches the relaxed problem.

Constraint handling for evolutionary optimization

Constraint handling methods can be implemented to generalize evolutionary algorithms. This can either be embedded in Evolutionary.jl, BlackBoxOptim.jl, Hyperopt.jl, etc. or it can be done in a non-invasive way.

SCIP integration

In #26, it was suggested that we can support SCIP for limited MINLP. This would require using MTK to get the expression of the functions because SCIP only supports a subset of nonlinear functions.

Usage with ComponentArrays

Thanks for the nice package!

I was wondering if it is possible to build a Model using ComponentArrays, and if not, what would be needed to do so. This would be especially useful for larger models with some kind of underlying structure. I also tried a NamedTuple, but maybe I am getting the concept of the constraint wrong here.

An MWE that does not work for me:

using LinearAlgebra
using Nonconvex
using ComponentArrays

# Lower bound
p_lb = ComponentArray(
    qs = zeros(30),
    a = 1.0
)

# Upper Bound
p_ub = ComponentArray(
    qs = ones(30),
    a = 1.0
)

m = Model()
set_objective!(m, p->sum(abs2, p))

# Works fine
addvar!(m, p_lb, p_ub)

# Define inequality constraint
upper_bounds(p) = vec(p.qs) .- 1

upper_bounds(p_ub)

add_ineq_constraint!(m, upper_bounds)

This returns type Array has no field qs. I tried tracing the error but got stuck in the add_ineq_constraint! method. If there is a quick PR to fix this, I would be happy to help 😃

Handle infeasibility gracefully in NLopt + Juniper

NLopt sometimes throws when the problem is infeasible. This causes Juniper + NLopt to throw sometimes because a node is infeasible. This can in theory be handled gracefully but would require a try-catch somewhere, either in Juniper or NLopt.
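Until then, a user-level workaround is to guard the call. A minimal sketch (the broad catch is deliberate, since the exception type thrown by NLopt is not standardized):

using Nonconvex

function try_optimize(model, alg, x0; options)
    try
        return optimize(model, alg, x0, options = options)
    catch err
        @warn "Solver threw; treating the (sub)problem as infeasible" err
        return nothing
    end
end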

Percival: types other than Float64 and initialisation of the `inity` variable

Hi,

It seems that the way the inity parameter of Percival is set is not OK: I cannot use the code to solve anything that is not Float64.

I tried overriding it with:

ST = BigFloat # ST for 'solve type'.
options = Nonconvex.AugLagOptions(
    ctol=ST(1e-15),
    atol=ST(1e-15),
    rtol=ST(1e-15),
    inity=x -> ones(ST,x)
)
...

But I still got the same error:

julia> result = Nonconvex.optimize(model, alg, x0, options=options)
[ Info:   iter        fx    normgp    normcx         μ     normy    sumc     inner_status        iter_type  
[ Info:      0   2.5e+00   4.6e+01   4.6e+02   1.0e+01   2.2e+01       5
[ Info:      1   2.5e+00   4.6e+01   4.6e+02   1.0e+02   2.2e+01      10      first_order         update_μ
ERROR: TypeError: in typeassert, expected Vector{Double64}, got a value of type Vector{Float64}
Stacktrace:
  [1] *(op::LinearOperators.LBFGSOperator{Float64}, v::Vector{Double64})
    @ LinearOperators ~\.julia\packages\LinearOperators\YEf3E\src\operations.jl:7
  [2] hprod!(nlp::NLPModelsModifiers.LBFGSModel, x::Vector{Double64}, v::Vector{Double64}, Hv::Vector{Double64}; kwargs::Base.Iterators.Pairs{Symbol, Double64, Tuple{Symbol}, NamedTuple{(:obj_weight,), Tuple{Double64}}})
    @ NLPModelsModifiers ~\.julia\packages\NLPModelsModifiers\QDvlk\src\quasi-newton.jl:65
  [3] (::NLPModels.var"#30#31"{Double64, NLPModelsModifiers.LBFGSModel, Vector{Double64}, Vector{Double64}})(v::Vector{Double64})
    @ NLPModels ~\.julia\packages\NLPModels\FNZ3q\src\nlp\api.jl:616
  [4] *(op::LinearOperators.LinearOperator{Double64}, v::Vector{Double64})
    @ LinearOperators ~\.julia\packages\LinearOperators\YEf3E\src\operations.jl:7
  [5] compute_Hs_slope_qs!(Hs::Vector{Double64}, H::LinearOperators.LinearOperator{Double64}, s::Vector{Double64}, g::Vector{Double64})
    @ SolverTools ~\.julia\packages\SolverTools\TmThx\src\auxiliary\bounds.jl:70
  [6] cauchy(x::Vector{Double64}, H::LinearOperators.LinearOperator{Double64}, g::Vector{Double64}, Δ::Double64, α::Double64, ℓ::Vector{Double64}, u::Vector{Double64}; μ₀::Double64, μ₁::Double64, σ::Double64)
    @ JSOSolvers ~\.julia\packages\JSOSolvers\w21mV\src\tron.jl:263
  [7] tron(::Val{:Newton}, nlp::NLPModelsModifiers.LBFGSModel; subsolver_logger::Base.CoreLogging.NullLogger, x::Vector{Double64}, μ₀::Double64, μ₁::Double64, σ::Double64, max_eval::Int64, max_time::Float64, max_cgiter::Int64, use_only_objgrad::Bool, cgtol::Double64, atol::Double64, rtol::Double64, fatol::Double64, frtol::Double64)
    @ JSOSolvers ~\.julia\packages\JSOSolvers\w21mV\src\tron.jl:91
  [8] tron(nlp::NLPModelsModifiers.LBFGSModel; variant::Symbol, kwargs::Base.Iterators.Pairs{Symbol, Any, NTuple{7, Symbol}, NamedTuple{(:x, :cgtol, :rtol, :atol, :max_time, :max_eval, :max_cgiter), Tuple{Vector{Double64}, Double64, Double64, Double64, Float64, Int64, Int64}}})
    @ JSOSolvers ~\.julia\packages\JSOSolvers\w21mV\src\tron.jl:6
  [9] (::Percival.var"#7#10"{Float64, Nonconvex.var"#395#396"{Int64}, Int64, Dict{Symbol, Int64}, AugLagModel{NLPModelsModifiers.SlackModel, Double64, Vector{Double64}}})()
    @ Percival ~\.julia\packages\Percival\k19Y2\src\method.jl:141
 [10] with_logstate(f::Function, logstate::Any)
    @ Base.CoreLogging .\logging.jl:491
 [11] with_logger
    @ .\logging.jl:603 [inlined]
 [12] percival(::Val{:equ}, nlp::NLPModelsModifiers.SlackModel; μ::Double64, max_iter::Int64, max_time::Float64, max_eval::Int64, atol::Double64, rtol::Double64, ctol::Float64, subsolver_logger::Base.CoreLogging.NullLogger, inity::Vector{Double64}, subproblem_modifier::Nonconvex.var"#395#396"{Int64}, subsolver_max_eval::Int64, subsolver_kwargs::Dict{Symbol, Int64})
    @ Percival ~\.julia\packages\Percival\k19Y2\src\method.jl:140
 [13] percival(::Val{:ineq}, nlp::ADNLPModels.ADNLPModel; kwargs::Base.Iterators.Pairs{Symbol, Any, NTuple{10, Symbol}, NamedTuple{(:inity, :max_iter, :max_time, :max_eval, :atol, :rtol, :subsolver_logger, :subproblem_modifier, :subsolver_max_eval, :subsolver_kwargs), Tuple{Vector{Double64}, Int64, Float64, Int64, Double64, Double64, Base.CoreLogging.NullLogger, Nonconvex.var"#395#396"{Int64}, Int64, Dict{Symbol, Int64}}}})
    @ Percival ~\.julia\packages\Percival\k19Y2\src\method.jl:49
 [14] _percival(nlp::ADNLPModels.ADNLPModel; μ::Double64, max_iter::Int64, max_time::Float64, max_eval::Int64, atol::Double64, rtol::Double64, ctol::Double64, first_order::Bool, memory::Int64, subsolver_logger::Base.CoreLogging.NullLogger, inity::Vector{Double64}, max_cgiter::Int64, subsolver_max_eval::Int64, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Nonconvex ~\.julia\dev\Nonconvex\src\wrappers\percival.jl:79
 [15] optimize!(workspace::Nonconvex.PercivalWorkspace{Nonconvex.VecModel{Vector{Double64}}, ADNLPModels.ADNLPModel, Vector{Double64}, PercivalOptions{NamedTuple{(:first_order, :memory, :inity, :ctol, :atol, :rtol), Tuple{Bool, Int64, var"#37#38", Double64, Double64, Double64}}}, Base.RefValue{Int64}})
    @ Nonconvex ~\.julia\dev\Nonconvex\src\wrappers\percival.jl:46
 [16] #optimize#112
    @ ~\.julia\dev\Nonconvex\src\models\vec_model.jl:59 [inlined]
 [17] optimize(::Model{Vector{Any}}, ::PercivalAlg, ::Vector{Double64}; kwargs::Base.Iterators.Pairs{Symbol, PercivalOptions{NamedTuple{(:first_order, :memory, :inity, :ctol, :atol, :rtol), Tuple{Bool, Int64, var"#37#38", Double64, Double64, Double64}}}, Tuple{Symbol}, NamedTuple{(:options,), Tuple{PercivalOptions{NamedTuple{(:first_order, :memory, :inity, :ctol, :atol, :rtol), Tuple{Bool, Int64, var"#37#38", Double64, Double64, Double64}}}}}})
    @ Nonconvex ~\.julia\dev\Nonconvex\src\algorithms\common.jl:239
 [18] top-level scope
    @ REPL[60]:1

julia>

Where should I look next? I tried digging into the stack frames, but this is as far as I was able to go.

Wrap FrankWolfe.jl

FrankWolfe is a nice package that can handle structured constraints and unstructured objectives. We can start by supporting it when the constraints are all linear.

custom adjoints

What is the recommended way to use a custom adjoint, defined using ChainRulesCore.jl, for an objective and/or constraints, for use in the value_jacobian function? Let's say I have defined my adjoint at the driver level and I want that to be picked up rather than relying on the default autodiff. How should I tell Nonconvex that I have the derivative information? If something like what is below worked, that would be great:

using Nonconvex, LinearAlgebra, Test

function f(x::AbstractVector)
    val = sqrt(x[2])
    jac = [0.0, 0.5 / sqrt(x[2])]   # gradient of sqrt(x[2]) w.r.t. x
    return val, jac
end

function g(x::AbstractVector, a, b)
    val = (a * x[1] + b)^3 - x[2]
    jac = [3 * a * (a * x[1] + b)^2, -1.0]   # gradient w.r.t. x
    return val, jac
end

options = MMAOptions(
    tol = Tolerance(kkt = 1e-6, f = 0.0),
    s_init = 0.1,
)

m = Model(f)
addvar!(m, [0.0, 0.0], [10.0, 10.0])
add_ineq_constraint!(m, x -> g(x,2,0))
add_ineq_constraint!(m, x -> g(x,-1,1))
alg = MMA87()
convcriteria = KKTCriteria()
r = Nonconvex.optimize(m, alg, [1.234, 2.345], options = options, convcriteria = convcriteria)
@test abs(r.minimum - sqrt(8/27)) < 1e-6
@test norm(r.minimizer - [1/3, 8/27]) < 1e-6

Thanks for your help
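One pattern that works with the default Zygote-based AD is to register the hand-coded derivative as a ChainRulesCore.rrule, as the Percival issue further down this page also does. A sketch for the objective above, using a renamed function to avoid clashing with the tuple-returning f (NO_FIELDS is the older ChainRulesCore spelling used elsewhere on this page; newer versions use NoTangent()):

using Nonconvex, ChainRulesCore

f2(x) = sqrt(x[2])

function ChainRulesCore.rrule(::typeof(f2), x::AbstractVector)
    val = sqrt(x[2])
    grad = [0.0, 0.5 / sqrt(x[2])]            # hand-coded gradient
    return val, Δ -> (NO_FIELDS, Δ * grad)    # picked up instead of tracing f2
end

m2 = Model(f2)
addvar!(m2, [0.0, 0.0], [10.0, 10.0])
r2 = Nonconvex.optimize(m2, MMA87(), [1.234, 2.345], options = MMAOptions())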

Optim and NLSolvers

Would be nice to wrap Optim and NLSolvers algorithms using the same API here.

Error using IpoptAlg with TopOpt.jl

MWE script using TopOpt.jl can be found: https://github.com/mohamed82008/TopOpt.jl/blob/yh/doc_improvement/examples/benchmark/compare_top3d.jl#L68-L69

Detailed error messages:

ERROR: LoadError: TypeError: in typeassert, expected Float64, got a value of type ForwardDiff.Dual{Nothing, Float64, 12}
Stacktrace:
  [1] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{Nothing, Float64, 12}, i1::Int64)
    @ Base .\array.jl:839
  [2] _unsafe_copyto!(dest::Vector{Float64}, doffs::Int64, src::Vector{ForwardDiff.Dual{Nothing, Float64, 12}}, soffs::Int64, n::Int64)
    @ Base .\array.jl:235
  [3] unsafe_copyto!
    @ .\array.jl:289 [inlined]
  [4] _copyto_impl!
    @ .\array.jl:313 [inlined]
  [5] copyto!
    @ .\array.jl:299 [inlined]
  [6] copyto!
    @ .\array.jl:325 [inlined]
  [7] copyto!
    @ .\broadcast.jl:977 [inlined]
  [8] copyto!
    @ .\broadcast.jl:936 [inlined]
  [9] materialize!
    @ .\broadcast.jl:894 [inlined]
 [10] materialize!
    @ .\broadcast.jl:891 [inlined]
 [11] macro expansion
    @ ~\Dropbox (MIT)\code_ws_dropbox\TO_ws\TopOpt.jl\src\Functions\compliance.jl:58 [inlined]
 [12] macro expansion
    @ ~\.julia\packages\TimerOutputs\4QAIk\src\TimerOutput.jl:190 [inlined]
 [13] (::Compliance{Float64, NewPointLoadCantilever{3, Float64, 8, 6, RectilinearGrid{3, Float64, 8, 6, Grid{3, Hexahedron, Float64}, Grid{3, Hexahedron, Float64}, Tuple{Int64, Int64, Int64}, Tuple{Float64, Float64, Float64}, Tuple{Vec{3, Float64}, Vec{3, Float64}}, BitVector, BitVector, BitVector}, Float64, Float64, ConstraintHandler{DofHandler{3, Hexahedron, Float64}, Float64}, Dict{Int64, Vector{Float64}}, BitVector, BitVector, Vector{Int64}, Metadata{Matrix{Int64}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, Matrix{Int64}}}, DirectDisplacementSolver{Float64, 3, PowerPenalty{Float64}, NewPointLoadCantilever{3, Float64, 8, 6, RectilinearGrid{3, Float64, 8, 6, Grid{3, Hexahedron, Float64}, Grid{3, Hexahedron, Float64}, Tuple{Int64, Int64, Int64}, Tuple{Float64, Float64, Float64}, Tuple{Vec{3, Float64}, Vec{3, Float64}}, BitVector, BitVector, BitVector}, Float64, Float64, ConstraintHandler{DofHandler{3, Hexahedron, Float64}, Float64}, Dict{Int64, Vector{Float64}}, BitVector, BitVector, Vector{Int64}, Metadata{Matrix{Int64}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, Matrix{Int64}}}, GlobalFEAInfo{Float64, Symmetric{Float64, SparseArrays.SparseMatrixCSC{Float64, Int64}}, Vector{Float64}, SuiteSparse.CHOLMOD.Factor{Float64}, SuiteSparse.SPQR.QRSparse{Float64, Int64}}, ElementFEAInfo{3, Float64, Vector{Symmetric{Float64, ElementMatrix{Float64, StaticArrays.SMatrix{24, 24, Float64, 576}, StaticArrays.SMatrix{24, 24, Float64, 576}, StaticArrays.SVector{24, Bool}, Float64}}}, Vector{StaticArrays.SVector{24, Float64}}, Vector{Float64}, Vector{Float64}, CellScalarValues{3, 3, Float64, RefCube}, FaceScalarValues{3, 3, Float64, RefCube}, Metadata{Matrix{Int64}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, Matrix{Int64}}, BitVector, BitVector, Vector{Int64}, Vector{Hexahedron}}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Vector{Float64}, PowerPenalty{Float64}, PowerPenalty{Float64}, Float64, Bool}, Float64, Vector{Float64}, Vector{Float64}, Bool, TopOptTrace{Float64, Int64, Vector{Float64}, Vector{Float64}, Vector{Vector{Float64}}, Vector{Int64}, Vector{Int64}}, Bool, Int64, Bool, Int64})(x::Vector{ForwardDiff.Dual{Nothing, Float64, 12}}, grad::Vector{Float64})
    @ TopOpt.Functions ~\Dropbox (MIT)\code_ws_dropbox\TO_ws\TopOpt.jl\src\Functions\compliance.jl:45
 [14] rrule
    @ ~\Dropbox (MIT)\code_ws_dropbox\TO_ws\TopOpt.jl\src\Functions\compliance.jl:96 [inlined]
 [15] chain_rrule
    @ ~\.julia\packages\Zygote\CgsVi\src\compiler\chainrules.jl:89 [inlined]
 [16] macro expansion
    @ ~\.julia\packages\Zygote\CgsVi\src\compiler\interface2.jl:0 [inlined]
 [17] _pullback(ctx::Zygote.Context, f::Compliance{Float64, NewPointLoadCantilever{3, Float64, 8, 6, RectilinearGrid{3, Float64, 8, 6, Grid{3, Hexahedron, Float64}, Grid{3, Hexahedron, Float64}, Tuple{Int64, Int64, Int64}, Tuple{Float64, Float64, Float64}, Tuple{Vec{3, Float64}, Vec{3, Float64}}, BitVector, BitVector, BitVector}, Float64, Float64, ConstraintHandler{DofHandler{3, Hexahedron, Float64}, Float64}, Dict{Int64, Vector{Float64}}, BitVector, BitVector, Vector{Int64}, Metadata{Matrix{Int64}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, Matrix{Int64}}}, DirectDisplacementSolver{Float64, 3, PowerPenalty{Float64}, NewPointLoadCantilever{3, Float64, 8, 6, RectilinearGrid{3, Float64, 8, 6, Grid{3, Hexahedron, Float64}, Grid{3, Hexahedron, Float64}, Tuple{Int64, Int64, Int64}, Tuple{Float64, Float64, Float64}, Tuple{Vec{3, Float64}, Vec{3, Float64}}, BitVector, BitVector, BitVector}, Float64, Float64, ConstraintHandler{DofHandler{3, Hexahedron, Float64}, Float64}, Dict{Int64, Vector{Float64}}, BitVector, BitVector, Vector{Int64}, Metadata{Matrix{Int64}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, Matrix{Int64}}}, GlobalFEAInfo{Float64, Symmetric{Float64, SparseArrays.SparseMatrixCSC{Float64, Int64}}, Vector{Float64}, SuiteSparse.CHOLMOD.Factor{Float64}, SuiteSparse.SPQR.QRSparse{Float64, Int64}}, ElementFEAInfo{3, Float64, Vector{Symmetric{Float64, ElementMatrix{Float64, StaticArrays.SMatrix{24, 24, Float64, 576}, StaticArrays.SMatrix{24, 24, Float64, 576}, StaticArrays.SVector{24, Bool}, Float64}}}, Vector{StaticArrays.SVector{24, Float64}}, Vector{Float64}, Vector{Float64}, CellScalarValues{3, 3, Float64, RefCube}, FaceScalarValues{3, 3, Float64, RefCube}, Metadata{Matrix{Int64}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, RaggedArray{Vector{Int64}, Vector{Tuple{Int64, Int64}}}, Matrix{Int64}}, BitVector, BitVector, Vector{Int64}, Vector{Hexahedron}}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Vector{Float64}, PowerPenalty{Float64}, PowerPenalty{Float64}, Float64, Bool}, Float64, Vector{Float64}, Vector{Float64}, Bool, TopOptTrace{Float64, Int64, Vector{Float64}, Vector{Float64}, Vector{Vector{Float64}}, Vector{Int64}, Vector{Int64}}, Bool, Int64, Bool, Int64}, args::Vector{ForwardDiff.Dual{Nothing, Float64, 12}})
    @ Zygote ~\.julia\packages\Zygote\CgsVi\src\compiler\interface2.jl:9
 [18] _pullback
    @ ~\Dropbox (MIT)\code_ws_dropbox\TO_ws\TopOpt.jl\examples\benchmark\compare_top3d.jl:40 [inlined]
 [19] _pullback(ctx::Zygote.Context, f::var"#13#14", args::Vector{ForwardDiff.Dual{Nothing, Float64, 12}})
    @ Zygote ~\.julia\packages\Zygote\CgsVi\src\compiler\interface2.jl:0
 [20] adjoint
    @ ~\.julia\packages\Zygote\CgsVi\src\lib\lib.jl:188 [inlined]
 [21] adjoint(__context__::Zygote.Context, 450::typeof(Core._apply_iterate),
 451::typeof(iterate), f::Function, args::Tuple{Vector{ForwardDiff.Dual{Nothing, Float64, 12}}})
    @ Zygote .\none:0
 [22] _pullback
    @ ~\.julia\packages\ZygoteRules\OjfTt\src\adjoint.jl:57 [inlined]
 [23] _pullback
    @ ~\Dropbox (MIT)\code_ws_dropbox\TO_ws\TopOpt.jl\src\Functions\Functions.jl:84 [inlined]
 [24] _pullback(ctx::Zygote.Context, f::Objective{Float64, var"#13#14"}, args::Vector{ForwardDiff.Dual{Nothing, Float64, 12}})
    @ Zygote ~\.julia\packages\Zygote\CgsVi\src\compiler\interface2.jl:0
 [25] adjoint
    @ ~\.julia\packages\Zygote\CgsVi\src\lib\lib.jl:188 [inlined]
 [26] _pullback
    @ ~\.julia\packages\ZygoteRules\OjfTt\src\adjoint.jl:57 [inlined]
 [27] _pullback
    @ ~\.julia\packages\Nonconvex\FgWVe\src\functions\functions.jl:156 [inlined]
 [28] _pullback(::Zygote.Context, ::Nonconvex.var"##_#7", ::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, ::Nonconvex.Objective{Objective{Float64, var"#13#14"}, Base.RefValue{Float64}}, ::Vector{ForwardDiff.Dual{Nothing, Float64, 12}})
    @ Zygote ~\.julia\packages\Zygote\CgsVi\src\compiler\interface2.jl:0
in expression starting at C:\Users\harry\Dropbox (MIT)\code_ws_dropbox\TO_ws\TopOpt.jl\examples\benchmark\compare_top3d.jl:64

Random coordinate descent as a meta-algorithm

It would be great to be able to have a random coordinate descent version of any solver such that it optimises only some of the parameters at any one time. This can be helpful to counter non-convexity.
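A rough user-level sketch of the idea (a proper meta-algorithm would live behind the Nonconvex API; a derivative-free NLopt sub-solver is used here so the sub-problems need no AD):

using Nonconvex, Random
Nonconvex.@load NLopt

function coordinate_descent(f, lb, ub, x0; blocksize = 2, iters = 10)
    x = copy(x0)
    for _ in 1:iters
        S = randperm(length(x))[1:blocksize]            # random block of coordinates
        sub = Model(z -> f(setindex!(copy(x), z, S)))   # optimize only over x[S]
        addvar!(sub, lb[S], ub[S])
        r = optimize(sub, NLoptAlg(:LN_COBYLA), x[S], options = NLoptOptions())
        x[S] = r.minimizer
    end
    return x
end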

Percival fails with ERROR: outside of the trust region: ‖x‖²= NaN

The MWE uses this repository for SectorModelMWE, at commit ca7e0e00.

Note that from the output it seems that the values are finite, could this be a conditioning problem?

MWE

The following script reproduces the problem:

using SectorModelMWE
using Nonconvex, ChainRulesCore, ForwardDiff, DiffResults, Logging

Logging.global_logger(SimpleLogger(stdout)) # full precision printing

const DEBUG = Ref(true)         # true: always print, false: print non-finite

function nonfinite_warn(x; kwargs...)
    isnonfinite = any(x -> any(!isfinite, last(x)), kwargs)
    if DEBUG[] || isnonfinite
        @warn (isnonfinite ? "non-finite values" : "debugging") x kwargs...
    end
end

objective(x) = objective_and_constraint(PROBLEM, x)[1]

constraint(x) = objective_and_constraint(PROBLEM, x)[2]

function ChainRulesCore.rrule(::typeof(objective), x::AbstractVector)
    result = DiffResults.GradientResult(x)
    result = ForwardDiff.gradient!(result, objective, x)
    val = DiffResults.value(result)
    grad = DiffResults.gradient(result)
    nonfinite_warn(x, objective = val, ∇ = grad)
    val, Δ -> (NO_FIELDS, Δ * grad)
end

function ChainRulesCore.rrule(::typeof(constraint), x::AbstractVector)
    result = DiffResults.JacobianResult(zeros(5), x)
    result = ForwardDiff.jacobian!(result, constraint, x)
    val = DiffResults.value(result)
    jac = DiffResults.jacobian(result)
    nonfinite_warn(x, constraint = val, ∂ = jac)
    val, Δ -> (NO_FIELDS, jac' * Δ)
end

m = Model(objective)
addvar!(m, LOWER_BOUNDS_X, UPPER_BOUNDS_X)
add_eq_constraint!(m, FunctionWrapper(constraint, 5))

alg = AugLag()
options = Nonconvex.AugLagOptions()
x0 = [0.4683960639229081, 0.8400753712868766, 0.8194473520749728, 1.7190740064948666,
      0.49831460023812674, 2681.10696006373, 2881.4771575869295, 2994.7180619903943,
      457.97811450658014, 457.97811450657997]
sol = Nonconvex.optimize(m, alg, x0, options = options)

Output and backtrace

julia> sol = Nonconvex.optimize(m, alg, x0, options = options)
┌ Warning: debugging
│   x = [0.4683960639229081, 0.8400753712868766, 0.8194473520749728, 1.7190740064948666, 0.49831460023812674, 2681.10696006373, 2881.4771575869295, 2994.7180619903943, 457.97811450658014, 457.97811450657997]
│   objective = 562753.0708490529
│   ∇ = [0.008539714056223478, 0.002001338855565355, 3920.2751620731724, -442.1630506615913, -0.004737052357618956, -0.044891831971265866, -0.08206262647627585, -0.11671199733245007, 0.12183315396074128, 0.12183330181925056]
└ @ Main REPL[10]:4
┌ Warning: debugging
│   x = [0.4683960639229081, 0.8400753712868766, 0.8194473520749728, 1.7190740064948666, 0.49831460023812674, 2681.10696006373, 2881.4771575869295, 2994.7180619903943, 457.97811450658014, 457.97811450657997]
│   objective = 562753.0708490529
│   ∇ = [0.008539714056223478, 0.002001338855565355, 3920.2751620731724, -442.1630506615913, -0.004737052357618956, -0.044891831971265866, -0.08206262647627585, -0.11671199733245007, 0.12183315396074128, 0.12183330181925056]
└ @ Main REPL[10]:4
┌ Warning: debugging
│   x = [0.4683960639229081, 0.8400753712868766, 0.8194473520749728, 1.7190740064948666, 0.49831460023812674, 2681.10696006373, 2881.4771575869295, 2994.7180619903943, 457.97811450658014, 457.97811450657997]
│   constraint = [-7.2062078526613504e-12, -1.2265133353395186e-11, 1.2269019133981374e-11, 1.1719579300073502e-12, 1.1719574963264812e-12]
│   ∂ = [-0.7630419681616327 -0.2784502755416528 0.11735966895154426 0.08701055894920123 0.23894566568271322 0.0005589769512999752 -0.00023496642357380388 -0.00031055881069954943 -2.410770638450196e-5 -2.410770638450198e-5; -0.518080796729811 -0.224923523544702 1.0935488866927146 0.14823992865381563 0.047940065294448185 -0.0001935167002294156 0.0007701811115232878 -0.0005537467149087483 -4.107231027595321e-5 -4.107231027595325e-5; -0.07360966940883162 -0.08732715078168107 2.217373334206754 0.21313243716568314 -0.22508608761678872 -0.00027277884950435656 -0.0005861791295426577 0.0008958873541771344 -5.903652183877688e-5 -5.903652183877689e-5; 1.8605352062881548 0.8461575891768994 -1.285850809792808 -0.7052551923129947 0.03321209851756579 1.4723408584320038e-5 2.69172065888774e-5 -0.0004383814498827035 0.00020436346099006513 0.00019474037990484683; 1.8605352062881553 0.8461575891768996 -1.285850809792808 -0.7052551923129948 0.03321209851756579 1.472340858432004e-5 2.6917206588877407e-5 -0.00043838144988270363 0.00019474037990484683 0.0002043634609900652]
└ @ Main REPL[10]:4
┌ Info:   iter        fx    normgp    normcx         μ     normy    sumc     inner_status        iter_type  
└ @ Percival /home/tamas/.julia/packages/Percival/k19Y2/src/method.jl:130
┌ Info:      0   5.6e+05   4.4e+02   1.9e-11   1.0e+01   2.2e+00       5
└ @ Percival /home/tamas/.julia/packages/Percival/k19Y2/src/method.jl:132
┌ Warning: debugging
│   x = [0.4683960639229081, 0.8400753712868766, 0.8194473520749728, 1.7190740064948666, 0.49831460023812674, 2681.10696006373, 2881.4771575869295, 2994.7180619903943, 457.97811450658014, 457.97811450657997]
│   objective = 562753.0708490529
│   ∇ = [0.008539714056223478, 0.002001338855565355, 3920.2751620731724, -442.1630506615913, -0.004737052357618956, -0.044891831971265866, -0.08206262647627585, -0.11671199733245007, 0.12183315396074128, 0.12183330181925056]
└ @ Main REPL[10]:4
┌ Warning: debugging
│   x = [0.4683960639229081, 0.8400753712868766, 0.8194473520749728, 1.7190740064948666, 0.49831460023812674, 2681.10696006373, 2881.4771575869295, 2994.7180619903943, 457.97811450658014, 457.97811450657997]
│   constraint = [-7.2062078526613504e-12, -1.2265133353395186e-11, 1.2269019133981374e-11, 1.1719579300073502e-12, 1.1719574963264812e-12]
│   ∂ = [-0.7630419681616327 -0.2784502755416528 0.11735966895154426 0.08701055894920123 0.23894566568271322 0.0005589769512999752 -0.00023496642357380388 -0.00031055881069954943 -2.410770638450196e-5 -2.410770638450198e-5; -0.518080796729811 -0.224923523544702 1.0935488866927146 0.14823992865381563 0.047940065294448185 -0.0001935167002294156 0.0007701811115232878 -0.0005537467149087483 -4.107231027595321e-5 -4.107231027595325e-5; -0.07360966940883162 -0.08732715078168107 2.217373334206754 0.21313243716568314 -0.22508608761678872 -0.00027277884950435656 -0.0005861791295426577 0.0008958873541771344 -5.903652183877688e-5 -5.903652183877689e-5; 1.8605352062881548 0.8461575891768994 -1.285850809792808 -0.7052551923129947 0.03321209851756579 1.4723408584320038e-5 2.69172065888774e-5 -0.0004383814498827035 0.00020436346099006513 0.00019474037990484683; 1.8605352062881553 0.8461575891768996 -1.285850809792808 -0.7052551923129948 0.03321209851756579 1.472340858432004e-5 2.6917206588877407e-5 -0.00043838144988270363 0.00019474037990484683 0.0002043634609900652]
└ @ Main REPL[10]:4
┌ Info:      1   5.6e+05   4.4e+02   1.9e-11   1.0e+01   2.2e+00      10      first_order         update_y
└ @ Percival /home/tamas/.julia/packages/Percival/k19Y2/src/method.jl:181
┌ Warning: debugging
│   x = [0.010000000000000009, 1.606224146826592, 0.01, 51.03948944477472, 0.5796373532789317, 2681.146672053711, 2881.5286636105243, 2994.778570288716, 457.90191714570767, 457.9019171297725]
│   objective = 553855.8007479684
│   ∇ = [-0.02215531149216976, 0.0001634457038592099, 5180.659352482244, -3.333453274387851, -0.03453233410733292, -2.477035911808962, -3.0274552620835298, -3.3913465015315207, 4.447918797302337, 4.447918878121673]
└ @ Main REPL[10]:4
┌ Warning: debugging
│   x = [0.010000000000000009, 1.606224146826592, 0.01, 51.03948944477472, 0.5796373532789317, 2681.146672053711, 2881.5286636105243, 2994.778570288716, 457.90191714570767, 457.9019171297725]
│   constraint = [0.1613835436318019, 0.3036593877668866, 0.4614854164212876, 0.012550477553153819, 0.012550477553088395]
│   ∂ = [-0.6909136306433854 0.01368402062623054 0.01816695985477588 -3.7606854825204524e-5 0.19085315590665766 0.00040505667418997825 -0.00018902118734475494 -0.0003228427337963879 4.9498053373934286e-5 4.9498053413023305e-5; -1.6171722253941776 0.029070670444224005 0.039829347858303096 3.3689155258822995e-6 0.0099295978272783 -0.0001533463758404125 0.0007590363491046999 -0.0006097210707321685 -5.7803211026089575e-6 -5.780321106082496e-6; -2.6475821563028816 0.045214040753104284 0.06273766504281464 7.902157526120232e-5 -0.3350465410067529 -0.00023311068964983176 -0.0005422772921189344 0.0009668119486967581 -0.00010739286249439469 -0.00010739286257646042; 0.045174868709954574 -0.000333267183514278 -7.792027425304856e-5 -1.488480624138309e-5 0.07041172285197775 -6.064828348088269e-6 -1.3958360827905089e-5 -2.370331014808855e-5 2.3916607660948753e-5 1.980991159959415e-5; 0.04517486870979145 -0.00033326718351309127 -7.792027425276729e-5 -1.488480624132922e-5 0.07041172285172355 -6.064828348056595e-6 -1.3958360827832254e-5 -2.370331014796491e-5 1.980991158405562e-5 2.391660767625915e-5]
└ @ Main REPL[10]:4
┌ Info:      2   5.5e+05   8.9e+00   5.8e-01   1.0e+02   2.2e+00      21      first_order         update_μ
└ @ Percival /home/tamas/.julia/packages/Percival/k19Y2/src/method.jl:181
ERROR: outside of the trust region: ‖x‖²=    NaN, Δ²=7.6e+02
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] to_boundary(x::Vector{Float64}, d::Vector{Float64}, radius::Float64; flip::Bool, xNorm2::Float64, dNorm2::Float64)
    @ Krylov ~/.julia/packages/Krylov/XqTOU/src/krylov_utils.jl:164
  [3] cg(A::LinearOperators.LinearOperator{Float64}, b::Vector{Float64}; M::LinearOperators.opEye, atol::Float64, rtol::Float64, itmax::Int64, radius::Float64, linesearch::Bool, verbose::Int64, history::Bool)
    @ Krylov ~/.julia/packages/Krylov/XqTOU/src/cg.jl:103
  [4] projected_newton!(x::Vector{Float64}, H::LinearOperators.LinearOperator{Float64}, g::Vector{Float64}, Δ::Float64, cgtol::Float64, s::Vector{Float64}, ℓ::Vector{Float64}, u::Vector{Float64}; max_cgiter::Int64)
    @ JSOSolvers ~/.julia/packages/JSOSolvers/w21mV/src/tron.jl:333
  [5] (::JSOSolvers.var"#12#13"{Vector{Float64}, Int64, Float64, Vector{Float64}, Vector{Float64}, Vector{Float64}, LinearOperators.LinearOperator{Float64}, Float64})()
    @ JSOSolvers ~/.julia/packages/JSOSolvers/w21mV/src/tron.jl:100
  [6] with_logstate(f::Function, logstate::Any)
    @ Base.CoreLogging ./logging.jl:491
  [7] with_logger
    @ ./logging.jl:603 [inlined]
  [8] tron(::Val{:Newton}, nlp::NLPModelsModifiers.LBFGSModel; subsolver_logger::Base.CoreLogging.NullLogger, x::Vector{Float64}, μ₀::Float64, μ₁::Float64, σ::Float64, max_eval::Int64, max_time::Float64, max_cgiter::Int64, use_only_objgrad::Bool, cgtol::Float64, atol::Float64, rtol::Float64, fatol::Float64, frtol::Float64)
    @ JSOSolvers ~/.julia/packages/JSOSolvers/w21mV/src/tron.jl:99
  [9] tron(nlp::NLPModelsModifiers.LBFGSModel; variant::Symbol, kwargs::Base.Iterators.Pairs{Symbol, Any, NTuple{7, Symbol}, NamedTuple{(:x, :cgtol, :rtol, :atol, :max_time, :max_eval, :max_cgiter), Tuple{Vector{Float64}, Float64, Float64, Float64, Float64, Int64, Int64}}})
    @ JSOSolvers ~/.julia/packages/JSOSolvers/w21mV/src/tron.jl:6
 [10] (::Percival.var"#7#10"{Float64, Nonconvex.var"#183#184"{Int64}, Int64, Dict{Symbol, Int64}, Percival.AugLagModel{ADNLPModels.ADNLPModel, Float64, Vector{Float64}}})()
    @ Percival ~/.julia/packages/Percival/k19Y2/src/method.jl:141
 [11] with_logstate(f::Function, logstate::Any)
    @ Base.CoreLogging ./logging.jl:491
 [12] with_logger
    @ ./logging.jl:603 [inlined]
 [13] percival(::Val{:equ}, nlp::ADNLPModels.ADNLPModel; μ::Float64, max_iter::Int64, max_time::Float64, max_eval::Int64, atol::Float64, rtol::Float64, ctol::Float64, subsolver_logger::Base.CoreLogging.NullLogger, inity::Vector{Float64}, subproblem_modifier::Nonconvex.var"#183#184"{Int64}, subsolver_max_eval::Int64, subsolver_kwargs::Dict{Symbol, Int64})
    @ Percival ~/.julia/packages/Percival/k19Y2/src/method.jl:140
 [14] _percival(nlp::ADNLPModels.ADNLPModel; μ::Float64, max_iter::Int64, max_time::Float64, max_eval::Int64, atol::Float64, rtol::Float64, ctol::Float64, first_order::Bool, memory::Int64, subsolver_logger::Base.CoreLogging.NullLogger, inity::Vector{Float64}, max_cgiter::Int64, subsolver_max_eval::Int64, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Nonconvex ~/.julia/packages/Nonconvex/prdTV/src/wrappers/percival.jl:71
 [15] optimize!(workspace::Nonconvex.PercivalWorkspace{Model{Vector{Float64}}, ADNLPModels.ADNLPModel, Vector{Float64}, PercivalOptions{NamedTuple{(:first_order, :memory, :inity), Tuple{Bool, Int64, typeof(ones)}}}, Base.RefValue{Int64}})
    @ Nonconvex ~/.julia/packages/Nonconvex/prdTV/src/wrappers/percival.jl:42
 [16] optimize(::Model{Vector{Float64}}, ::Vararg{Any, N} where N; kwargs::Base.Iterators.Pairs{Symbol, PercivalOptions{NamedTuple{(:first_order, :memory, :inity), Tuple{Bool, Int64, typeof(ones)}}}, Tuple{Symbol}, NamedTuple{(:options,), Tuple{PercivalOptions{NamedTuple{(:first_order, :memory, :inity), Tuple{Bool, Int64, typeof(ones)}}}}}})
    @ Nonconvex ~/.julia/packages/Nonconvex/prdTV/src/algorithms/mma_algorithm.jl:183
 [17] top-level scope
    @ REPL[19]:1

Transformations to and from vectors

Currently, constraints are assumed to return a vector. We need functions to transform to and from vectors, for example to support constraints on sparse-matrix-valued functions or functions of structs.
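A sketch of the kind of helper pair this would need, here for sparse matrices (flatten and unflatten are hypothetical names, not part of Nonconvex):

using SparseArrays

# Flatten a sparse matrix into a vector of its structural nonzeros, and
# rebuild a matrix with the same sparsity pattern from such a vector.
flatten(A::SparseMatrixCSC) = copy(nonzeros(A))

function unflatten(v::AbstractVector, pattern::SparseMatrixCSC)
    B = copy(pattern)
    nonzeros(B) .= v
    return B
end

A = sprand(4, 4, 0.3)
v = flatten(A)
B = unflatten(v, A)   # same sparsity pattern as A, values taken from v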

Bi-level optimisation

Would be cool to solve bi-level optimisation using custom adjoints for KKT-based optimisers in ChainRulesCore. Then optimisation algorithms can be nested seamlessly.

Sequential (mixed integer) convex optimization

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Integrate with MadNLP

MadNLP is a Julia implementation of the Ipopt algorithm. This means that it can generalize to semidefinite constraints with enough trickery. Would be nice to support it here.
