jump-dev / amplnlwriter.jl

A Julia interface to AMPL-enabled solvers

Home Page: http://ampl.com/products/solvers/all-solvers-for-ampl/

License: MIT License


amplnlwriter.jl's Introduction

AmplNLWriter.jl


AmplNLWriter.jl is an interface between MathOptInterface.jl and AMPL-enabled solvers.

Affiliation

This wrapper is maintained by the JuMP community and has no official connection with the AMPL modeling language or AMPL Optimization Inc.

Getting help

If you need help, please ask a question on the JuMP community forum.

If you have a reproducible example of a bug, please open a GitHub issue.

Installation

Install AmplNLWriter using Pkg.add:

import Pkg
Pkg.add("AmplNLWriter")

Use with JuMP

AmplNLWriter requires an AMPL-compatible solver binary to function.

Pass a string pointing to any AMPL-compatible solver binary as the first positional argument to AmplNLWriter.

For example, if the bonmin executable is on the system path, use:

using JuMP, AmplNLWriter
model = Model(() -> AmplNLWriter.Optimizer("bonmin"))

If the solver is not on the system path, pass the full path to the solver:

using JuMP, AmplNLWriter
model = Model(() -> AmplNLWriter.Optimizer("/Users/Oscar/ampl.macos64/bonmin"))

Precompiled binaries

To simplify the process of installing solver binaries, a number of Julia packages provide precompiled binaries that are compatible with AmplNLWriter. These are generally named after the solver, followed by _jll. For example, bonmin is provided by the Bonmin_jll package.

To call Bonmin via AmplNLWriter.jl, install the Bonmin_jll package, then run:

using JuMP, AmplNLWriter, Bonmin_jll
model = Model(() -> AmplNLWriter.Optimizer(Bonmin_jll.amplexe))

Supported packages include:

Solver Julia Package Executable
Bonmin Bonmin_jll.jl Bonmin_jll.amplexe
Couenne Couenne_jll.jl Couenne_jll.amplexe
Ipopt Ipopt_jll.jl Ipopt_jll.amplexe
SHOT SHOT_jll.jl SHOT_jll.amplexe
KNITRO KNITRO.jl KNITRO.amplexe

MathOptInterface API

The AmplNLWriter optimizer supports the following constraints and attributes.

List of supported objective functions:

List of supported variable types:

List of supported constraint types:

List of supported model attributes:

Note that some solver executables may not support the full list of constraint types. For example, Ipopt_jll does not support MOI.Integer or MOI.ZeroOne constraints.

Options

A list of available options for each solver can be found here:

Set an option using set_attribute. For example, to set the "bonmin.nlp_log_level" option to 0 in Bonmin, use:

using JuMP
import AmplNLWriter
import Bonmin_jll
model = Model(() -> AmplNLWriter.Optimizer(Bonmin_jll.amplexe))
set_attribute(model, "bonmin.nlp_log_level", 0)

opt files

Some options need to be specified via an .opt file.

This file must be located in the current working directory whenever the model is solved.

The .opt file must be named after the solver, for example, bonmin.opt, and each line must contain an option name and the desired value, separated by a space.

For example, to set the absolute and relative tolerances in Couenne to 1 and 0.05 respectively, the couenne.opt file should contain:

allowable_gap 1
allowable_fraction_gap 0.05
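
A minimal sketch (assuming Couenne_jll is installed): because the .opt file only needs to exist in the current working directory, it can be written from Julia just before solving.

using JuMP, AmplNLWriter, Couenne_jll
# Write couenne.opt into the current working directory; each line is
# "option_name value", matching the example above.
write("couenne.opt", "allowable_gap 1\nallowable_fraction_gap 0.05\n")
model = Model(() -> AmplNLWriter.Optimizer(Couenne_jll.amplexe))
@variable(model, -1 <= x <= 1)
@NLobjective(model, Min, (x - 0.5)^2)
optimize!(model)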

amplnlwriter.jl's People

Contributors

blegat, ccoffrin, deltova, dourouc05, femtocleaner[bot], gregplowman, jackdunnnz, jcheyns, jgoldfar, joaquimg, joehuchette, juliatagbot, mlubin, odow, staticfloat, tkoolen


amplnlwriter.jl's Issues

no solver message (or white spaces) in solution file (.sol)

There might be no solver message in the solution file. For example, the MINLP solver Minotaur generates a solution file without any solver message. To handle such cases, please incorporate the following modification at lines 574-579 of AmplNLWriter.jl:

    # Skip the following lines when there is no solver message or just white spaces.
    if strip(chomp(line)) != "Options"
        # Keep building solver message by reading until empty line
        while true
            m.solve_message *= line
            line = readline(f)
            strip(chomp(line)) == "" && break
        end
        @assert chomp(readline(f)) == "Options"
    end

How to write a .nl file

Sorry for asking a naive question.

I've been looking for a function that can write a model to a .nl file that I can then test with different solvers (e.g., on NEOS).

How should I write the model to a .nl file, e.g., in the example code here?

Regards,

Sam
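
Not an answer from this thread, but for recent versions of JuMP and MathOptInterface a model can be written to a .nl file directly; a minimal sketch (assuming the format is inferred from the .nl extension):

using JuMP
model = Model()
@variable(model, x >= 0)
@NLobjective(model, Min, (x - 1)^2)
# write_to_file infers the NL format from the file extension.
write_to_file(model, "model.nl")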

Solver Parameter Consistency

In most of the other JuMP solver wrappers, parameters are passed as keyword arguments, like so:

solverName(param1 = val, param2 = val, ...)

In the NL writer, we have to pass parameters as an array of strings, like so:

solverName(["solver.param1 = val", "solver.param2 = val", ...])

It seems to me that it would be fairly easy to rewrite keyword arguments into this string-array format (see the sketch below). If so, why not make these solvers' parameter API similar to the other solvers in JuMP?
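
As a rough sketch of that translation (the helper name and string format are illustrative, not an AmplNLWriter API):

# Hypothetical helper: turn keyword arguments into the "prefix.name=value"
# strings expected by the legacy AmplNLSolver constructor.
function options_to_strings(prefix::AbstractString; kwargs...)
    return ["$(prefix).$(name)=$(value)" for (name, value) in kwargs]
end

options_to_strings("bonmin"; nlp_log_level = 0, bb_log_level = 1)
# returns ["bonmin.nlp_log_level=0", "bonmin.bb_log_level=1"]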

Shouldn't user-defined functions work?

From the JuMP docs:

mysquare(x) = x^2
myf(x,y) = (x-1)^2+(y-2)^2

m = Model()

JuMP.register(m, :myf, 2, myf, autodiff=true)
JuMP.register(m, :mysquare, 1, mysquare, autodiff=true)

@variable(m, x[1:2] >= 0.5)
@NLobjective(m, Min, myf(x[1],mysquare(x[2])))

solve(m)

This works fine when solver = IpoptSolver(). With solver = BonminNLSolver() or solver = CouenneNLSolver(), though, I get the following error:

KeyError: key :rsd not found

Stacktrace:
 [1] getindex at ./dict.jl:474 [inlined]
 [2] nl_operator(::Symbol) at /home/hessammehr/.julia/v0.6/AmplNLWriter/src/nl_write.jl:257
 [3] write_nl_expr(::IOStream, ::AmplNLWriter.AmplNLMathProgModel, ::Expr) at /home/hessammehr/.julia/v0.6/AmplNLWriter/src/nl_write.jl:231
 [4] write_nl_o_block(::IOStream, ::AmplNLWriter.AmplNLMathProgModel) at /home/hessammehr/.julia/v0.6/AmplNLWriter/src/nl_write.jl:95
 [5] write_nl_file(::IOStream, ::AmplNLWriter.AmplNLMathProgModel) at /home/hessammehr/.julia/v0.6/AmplNLWriter/src/nl_write.jl:9
 [6] optimize!(::AmplNLWriter.AmplNLMathProgModel) at /home/hessammehr/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:371
 [7] optimize!(::AmplNLWriter.AmplNLNonlinearModel) at /home/hessammehr/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:722
 [8] #solvenlp#165(::Bool, ::Function, ::JuMP.Model, ::JuMP.ProblemTraits) at /home/hessammehr/.julia/v0.6/JuMP/src/nlp.jl:1271
 [9] (::JuMP.#kw##solvenlp)(::Array{Any,1}, ::JuMP.#solvenlp, ::JuMP.Model, ::JuMP.ProblemTraits) at ./<missing>:0
 [10] #solve#116(::Bool, ::Bool, ::Bool, ::Array{Any,1}, ::Function, ::JuMP.Model) at /home/hessammehr/.julia/v0.6/JuMP/src/solvers.jl:172
 [11] solve(::JuMP.Model) at /home/hessammehr/.julia/v0.6/JuMP/src/solvers.jl:150
 [12] include_string(::String, ::String) at ./loading.jl:515

IpoptSolver does the trick in this case, but I was really hoping to mix in some integer variables.

odow: formatting

support ifelse

Would it be difficult to support ifelse, i.e.,

ifelse(condition, expression_if_true, expression_if_false)

Just ran into a model that I wanted to test out the .nl writer with but it has some ifelse statements.

Update for parsing changes on julia 0.5

Ref jump-dev/JuMP.jl#711
One-sided comparisons parse as :call expressions in Julia 0.5. JuMP follows the new syntax on 0.5 for expressions returned through the MPB interface.

Before:

julia> dump(:(x <= 1))
Expr 
  head: Symbol comparison
  args: Array(Any,(3,))
    1: Symbol x
    2: Symbol <=
    3: Int64 1
  typ: Any

After:

julia> dump(:(x <= 1))
Expr
  head: Symbol call
  args: Array(Any,(3,))
    1: Symbol <=
    2: Symbol x
    3: Int64 1
  typ: Any

Support quadratic constraints in @constraint

The following will fail

m1 = Model(solver =  BonminNLSolver())
@variable(m1, 0 <= yp <= 1, Int)
@variable(m1, 0 <= l <= 1000.0)
@variable(m1, 0 <= f <= 1000.0)
@constraint(m1, .087*l >= f^2)
@constraint(m1,  l <= yp*1000.0)
try
   solve(m1)
catch e
   println(e)
end

because there is a quadratic expression in one of the constraints. However, this syntax for quadratic constraints is fine for use with the OSiL solvers, Gurobi, CPLEX, etc.

The current fix is to replace @constraint with @NLconstraint. However, @NLconstraint is incompatible with CPLEX and Gurobi. So, while the constraint expression is the same, the functions for expressing it are currently solver dependent.

Wrong value of a binary variable at solution

Hello, I'm having some problems solving the following problem:

workspace()
using JuMP, AmplNLWriter, CoinOptServices

## Solve test problem
 #
 #  min   100 * (x5 - (0.5 + x4) ^ 2) ^ 2 + (1 - x4) ^ 2
 #  s.t.  x4*x3 + x5*x1 <= 20
 #        5x1 - 2x5 + 3x3 <= 17
 #        x4, x5 binary
 #        0 <= x1 <= 3
 #        0 <= x2 <= 5
 #        0 <= x3 <= 10
 ##

m = Model(solver=AmplNLSolver(CoinOptServices.bonmin))

X_U = [3;5;10]

@variable(m, 0 <= X[i=1:3] <= X_U[i])
@variable(m, x[i=4:5], Bin)

@NLobjective(m, Min, 100*(x[5] - (0.5 + x[4])^2)^2 + (1 - x[4])^2)

@NLconstraint(m, x[4]*X[3]+x[5]*X[1] <= 20)
@constraint(m, 5*X[1]-2*x[5]+3*X[3] <= 17)

print(m)

status = solve(m)

println("Objective value: ", getobjectivevalue(m))
println("x = ", getvalue(x))
println("X = ", getvalue(X))

Using both Bonmin and Couenne to solve this problem, I get the following solution:

Objective value: 0.24937733210128057
x = x: 1 dimensions:
[4] = 0.501244569130593
[5] = 1.0
X = [1.0, 2.5, 2.0]

The constraint to make x4 binary isn't being enforced. Is this some sort of bug? Can it be fixed?

Error when setValue is given

When using JuMP (0.15.0) and AmplNLWriter (0.3.0), an error occurs when setValue() is used with the Bonmin solver (with the intention of warm-starting the value; I'm not sure if Bonmin supports this, but there shouldn't be any error given that the same code works with other solvers). I attached the solving log below the error log.

Thanks!

ERROR: LoadError: SystemError: opening file /Users/sitew/.julia/v0.5/AmplNLWriter/.solverdata/tmpZNwvZa.sol: No such file or directory
 in #systemerror#51 at ./error.jl:34 [inlined]
 in systemerror(::String, ::Bool) at ./error.jl:34
 in open(::String, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool) at ./iostream.jl:89
 in open(::String, ::String) at ./iostream.jl:101
 in read_sol(::AmplNLWriter.AmplNLMathProgModel) at /Users/sitew/.julia/v0.5/AmplNLWriter/src/AmplNLWriter.jl:596
 in read_results(::AmplNLWriter.AmplNLMathProgModel) at /Users/sitew/.julia/v0.5/AmplNLWriter/src/AmplNLWriter.jl:532
 in optimize!(::AmplNLWriter.AmplNLMathProgModel) at /Users/sitew/.julia/v0.5/AmplNLWriter/src/AmplNLWriter.jl:387
 in optimize!(::AmplNLWriter.AmplNLNonlinearModel) at /Users/sitew/.julia/v0.5/AmplNLWriter/src/AmplNLWriter.jl:722
 in #solvenlp#150(::Bool, ::Function, ::JuMP.Model, ::JuMP.ProblemTraits) at /Users/sitew/.julia/v0.5/JuMP/src/nlp.jl:1208
 in (::JuMP.#kw##solvenlp)(::Array{Any,1}, ::JuMP.#solvenlp, ::JuMP.Model, ::JuMP.ProblemTraits) at ./<missing>:0
 in #solve#97(::Bool, ::Bool, ::Bool, ::Array{Any,1}, ::Function, ::JuMP.Model) at /Users/sitew/.julia/v0.5/JuMP/src/solvers.jl:139
 in solve(::JuMP.Model) at /Users/sitew/.julia/v0.5/JuMP/src/solvers.jl:117
 in reoptimize_ub(::Array{JuMP.Variable,1}, ::Array{Array{Float64,1},1}, ::Array{Array{Float64,1},1}, ::Array{Any,1}, ::Array{Float64,1}, ::Float64) at /Users/sitew/Dropbox/SourceCodes/DTMC_suite_R/ubworker.jl:45
 in dynamicmccormick(::Int64, ::Int64, ::Int64, ::Int64, ::Int64) at /Users/sitew/Dropbox/SourceCodes/DTMC_suite_R/dynamicmcormick.jl:251
 in DTMC_simulation(::Int64) at /Users/sitew/Dropbox/SourceCodes/DTMC_suite_R/DTMC_simulation.jl:137
 in include_from_node1(::String) at ./loading.jl:488
 in include_from_node1(::String) at /Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 in process_options(::Base.JLOptions) at ./client.jl:262
 in _start() at ./client.jl:318
 in _start() at /Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib:?
while loading /Users/sitew/Dropbox/SourceCodes/DTMC_suite_R/DTMC_simulation.jl, in expression starting on line 226
Bonmin 1.8.4 using Cbc 2.9.7 and Ipopt 3.12.4
bonmin: bonmin.fp_log_level=1
bonmin.num_resolve_at_root=10
bonmin.num_resolve_at_infeasibles=100
bonmin.oa_log_level=1
bonmin.nlp_log_level=1
bonmin.bb_log_level=1
bonmin.milp_log_level=1
bonmin.milp_solver=Cplex
bonmin.algorithm=B-OA


******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

NLP0012I 
              Num      Status      Obj             It       time                 Location
NLP0014I             1         OPT -48.835201       86 0.681959     build initial OA
NLP0014I *           1         OPT -48.835201       63 0.455417         resolve cost
NLP0014I             2         OPT -48.835201       44 0.258605         resolve cost
NLP0014I *           3         OPT -48.835201      259 1.607007         resolve cost
NLP0014I             4         OPT -48.835201       95 0.566131         resolve cost
NLP0014I             5         OPT -48.835201       78 0.485658         resolve cost
NLP0014I             6         OPT -48.835201      184 1.322304         resolve cost
NLP0014I             7         OPT -48.835201       47 0.277185         resolve cost
NLP0014I             8         OPT -48.835201       45 0.287969         resolve cost
NLP0014I             9         OPT -48.835201       51 0.319179         resolve cost
NLP0014I *          10         OPT -48.835201      128 0.770185         resolve cost
NLP3017W OA on non-convex constraint is very experimental.
OCbc0031I 18 added rows had average density of 77.166667
OCbc0013I At root node, 18 cuts changed objective from -48.314801 to -47.788586 in 40 passes
OCbc0014I Cut generator 0 (Probing) - 68 row cuts average 2.0 elements, 1 column cuts (1 active)  in 0.045 seconds - new frequency is -100
OCbc0014I Cut generator 1 (Gomory) - 424 row cuts average 150.9 elements, 0 column cuts (0 active)  in 0.028 seconds - new frequency is -100
OCbc0014I Cut generator 2 (Knapsack) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.045 seconds - new frequency is -100
OCbc0014I Cut generator 3 (Clique) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.001 seconds - new frequency is -100
OCbc0014I Cut generator 4 (FlowCover) - 2 row cuts average 7.0 elements, 0 column cuts (0 active)  in 0.035 seconds - new frequency is -100
OCbc0014I Cut generator 5 (MixedIntegerRounding2) - 72 row cuts average 11.9 elements, 0 column cuts (0 active)  in 0.021 seconds - new frequency is -100
OCbc0010I After 0 nodes, 1 on tree, 1e+50 best solution, best possible -47.788586 (0.44 seconds)
OCbc0012I Integer solution of -44.841665 found by rounding after 1883 iterations and 20 nodes (0.62 seconds)
OCbc0004I Integer solution of -44.888401 found after 1898 iterations and 23 nodes (0.63 seconds)
OCbc0012I Integer solution of -45.045873 found by rounding after 2334 iterations and 47 nodes (0.77 seconds)
OCbc0012I Integer solution of -45.163126 found by rounding after 2379 iterations and 52 nodes (0.78 seconds)
OCbc0012I Integer solution of -45.631461 found by rounding after 2539 iterations and 63 nodes (0.85 seconds)
OCbc0012I Integer solution of -45.704891 found by rounding after 2632 iterations and 73 nodes (0.90 seconds)
OCbc0012I Integer solution of -45.811665 found by rounding after 7401 iterations and 341 nodes (1.64 seconds)
OCbc0004I Integer solution of -45.858401 found after 7469 iterations and 346 nodes (1.65 seconds)
OCbc0010I After 1000 nodes, 16 on tree, -45.858401 best solution, best possible -47.788586 (3.76 seconds)
OCbc0010I After 2000 nodes, 7 on tree, -45.858401 best solution, best possible -47.782111 (7.26 seconds)
OCbc0012I Integer solution of -46.125879 found by rounding after 37736 iterations and 2045 nodes (7.43 seconds)
OCbc0012I Integer solution of -46.285879 found by rounding after 39177 iterations and 2109 nodes (7.64 seconds)
OCbc0012I Integer solution of -46.393241 found by rounding after 40800 iterations and 2170 nodes (7.85 seconds)
OCbc0004I Integer solution of -46.492433 found after 40819 iterations and 2172 nodes (7.86 seconds)
OCbc0012I Integer solution of -46.62449 found by rounding after 41344 iterations and 2204 nodes (7.97 seconds)
OCbc0012I Integer solution of -46.958306 found by rounding after 41414 iterations and 2210 nodes (7.99 seconds)
OCbc0004I Integer solution of -46.96903 found after 41927 iterations and 2232 nodes (8.04 seconds)
OCbc0001I Search completed - best objective -46.96902997737194, took 43100 iterations and 2274 nodes (8.19 seconds)
OCbc0032I Strong branching done 12084 times (125210 iterations), fathomed 115 nodes and fixed 115 variables
OCbc0035I Maximum depth 21, 1650 variables fixed on reduced cost
NLP0014I            12      INFEAS 0.049980845       74 0.369824     OA decomposition
OCbc0031I 17 added rows had average density of 7.4117647
OCbc0013I At root node, 17 cuts changed objective from -48.290147 to -48.039859 in 13 passes
OCbc0014I Cut generator 0 (Probing) - 28 row cuts average 2.0 elements, 1 column cuts (1 active)  in 0.017 seconds - new frequency is -100
OCbc0014I Cut generator 1 (Gomory) - 6 row cuts average 24.7 elements, 0 column cuts (0 active)  in 0.007 seconds - new frequency is -100
OCbc0014I Cut generator 2 (Knapsack) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.030 seconds - new frequency is -100
OCbc0014I Cut generator 3 (Clique) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.001 seconds - new frequency is -100
OCbc0014I Cut generator 4 (FlowCover) - 1 row cuts average 12.0 elements, 0 column cuts (0 active)  in 0.011 seconds - new frequency is -100
OCbc0014I Cut generator 5 (MixedIntegerRounding2) - 18 row cuts average 11.9 elements, 0 column cuts (0 active)  in 0.011 seconds - new frequency is -100
OCbc0010I After 0 nodes, 1 on tree, 1e+50 best solution, best possible -48.039859 (0.14 seconds)
OCbc0012I Integer solution of -44.897039 found by rounding after 506 iterations and 19 nodes (0.31 seconds)
OCbc0012I Integer solution of -45.612438 found by rounding after 1419 iterations and 64 nodes (0.49 seconds)
OCbc0001I Search completed - best objective -45.61243822610445, took 15047 iterations and 498 nodes (2.72 seconds)
OCbc0032I Strong branching done 2682 times (54682 iterations), fathomed 20 nodes and fixed 21 variables
OCbc0035I Maximum depth 17, 489 variables fixed on reduced cost
NLP0014I            13      INFEAS 0.033333315       90 0.634502     OA decomposition
OCbc0031I 11 added rows had average density of 5.8181818
OCbc0013I At root node, 11 cuts changed objective from -48.100568 to -47.722262 in 5 passes
OCbc0014I Cut generator 0 (Probing) - 8 row cuts average 2.0 elements, 1 column cuts (1 active)  in 0.008 seconds - new frequency is -100
OCbc0014I Cut generator 1 (Gomory) - 3 row cuts average 18.7 elements, 0 column cuts (0 active)  in 0.003 seconds - new frequency is -100
OCbc0014I Cut generator 2 (Knapsack) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.014 seconds - new frequency is -100
OCbc0014I Cut generator 3 (Clique) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.000 seconds - new frequency is -100
OCbc0014I Cut generator 4 (FlowCover) - 1 row cuts average 10.0 elements, 0 column cuts (0 active)  in 0.005 seconds - new frequency is -100
OCbc0014I Cut generator 5 (MixedIntegerRounding2) - 7 row cuts average 9.0 elements, 0 column cuts (0 active)  in 0.004 seconds - new frequency is -100
OCbc0010I After 0 nodes, 1 on tree, 1e+50 best solution, best possible -47.722262 (0.08 seconds)
OCbc0012I Integer solution of -33.961504 found by rounding after 23395 iterations and 156 nodes (4.23 seconds)
OCbc0004I Integer solution of -34.526247 found after 23403 iterations and 157 nodes (4.24 seconds)
OCbc0012I Integer solution of -34.556601 found by rounding after 23907 iterations and 179 nodes (4.38 seconds)
OCbc0012I Integer solution of -34.706601 found by rounding after 23948 iterations and 182 nodes (4.40 seconds)
OCbc0012I Integer solution of -35.0166 found by rounding after 24052 iterations and 187 nodes (4.43 seconds)
OCbc0012I Integer solution of -35.0966 found by rounding after 24190 iterations and 194 nodes (4.47 seconds)
OCbc0004I Integer solution of -35.133175 found after 24331 iterations and 202 nodes (4.52 seconds)
OCbc0004I Integer solution of -35.443174 found after 24378 iterations and 204 nodes (4.55 seconds)

odow: edited formatting

BONMIN options not working

using JSON
using JuMP,AmplNLWriter

data={"P":1.0,"I":4,"J":2,"Bandwidth":[1.25,1.25,1.25,1.25],"Noise":[5.1E-8,9.65714E-8,7.6164E-8,2.33161E-8],"ell":[45,55]}
data=JSON.parse(data)

P=data["P"]
I=data["I"]
J=data["J"]
bw=data["Bandwidth"]
ns=data["Noise"]
el=data["ell"]

solver = BonminNLSolver([
		"halt_on_ampl_error=yes";
		"bonmin.bb_log_level=0"
		])

m = Model(solver=solver)


@variable(m,x[1:I,1:J],Bin)
@variable(m,p[1:I,1:J]>=0)
@variable(m,r[1:I,1:J]>=0)

@constraint(m,sum(p)<=P)

@constraint(m,[i=1:I],sum(x[i,j] for j=1:J)<=1)

@constraint(m,[i=1:I,j=1:J],p[i,j]<=P*x[i,j])

@constraint(m,[j=1:J],sum(r[i,j] for i=1:I)>=el[j])

@NLconstraint(m,[i=1:I,j=1:J],r[i,j]<=bw[i]*log2(1+p[i,j]/ns[i]))

@objective(m,Max,sum(r))

status=solve(m)

print(getvalue(x))
print(getvalue(p))
getvalue(r)

In the solution given by the solver above, some variables are set to 1.0e-8 instead of 0. Because of the nature of this problem, a precision of 1.0e-8 is far from sufficient for us, so I tried to set BONMIN options including:

bonmin.integer_tolerance                     1e-18
bonmin.cutoff_decr                           1e-18
bonmin.oa_rhs_relax                          1e-18     	#Value by which to relax OA cut
bonmin.tiny_element                          1e-18     	#Value for tiny element in OA cut
bonmin.very_tiny_element                     1e-27     	#Value for very tiny element in OA cut

However, they didn't affect the result at all.

I also tried to set:
bonmin.milp_solver Cplex

However, the result&output were exactly the same with Cbc_D.

So I think some options are not taking effect.

odow: formatting

Couenne options while using AmplNLWriter

Hi,
I am using Couenne solver via AmplNLWriter.jl in Julia.
I created an option file couenne.opt:
{
allowable_fraction_gap 0.05
time_limit 3600
}
For one instance of my problem I am getting solutions; however, for another instance I am getting an AssertionError. I couldn't work out the cause of the problem or how to solve it. Below you can find the error log for the issue. I would be glad if you could help me with this or point me to a relevant resource.

Regards,
Berkay GULCAN

Error log:
{
ERROR: LoadError: AssertionError: line[1:7] == "Options"
Stacktrace:
[1] read_sol(::IOStream, ::AmplNLWriter.AmplNLMathProgModel) at /home/bgulcan/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:666
[2] read_results(::IOStream, ::AmplNLWriter.AmplNLMathProgModel) at /home/bgulcan/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:575
[3] open(::AmplNLWriter.##117#118{AmplNLWriter.AmplNLMathProgModel}, ::String, ::String) at ./iostream.jl:152
[4] read_results(::AmplNLWriter.AmplNLMathProgModel) at /home/bgulcan/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:569
[5] optimize!(::AmplNLWriter.AmplNLMathProgModel) at /home/bgulcan/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:419
[6] optimize!(::AmplNLWriter.AmplNLNonlinearModel) at /home/bgulcan/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:772
[7] #solvenlp#165(::Bool, ::Function, ::JuMP.Model, ::JuMP.ProblemTraits) at /home/bgulcan/.julia/v0.6/JuMP/src/nlp.jl:1271
[8] (::JuMP.#kw##solvenlp)(::Array{Any,1}, ::JuMP.#solvenlp, ::JuMP.Model, ::JuMP.ProblemTraits) at ./:0
[9] #solve#116(::Bool, ::Bool, ::Bool, ::Array{Any,1}, ::Function, ::JuMP.Model) at /home/bgulcan/.julia/v0.6/JuMP/src/solvers.jl:172
[10] createandsolvemodel_DCT(::Biomass, ::Int64, ::Int64, ::Int64, ::Int64, ::Float64, ::Float64, ::Float64, ::Array{Float64,3}, ::Array{Float64,3}, ::Array{Float64,1}, ::Array{Float64,2}, ::Int64, ::Float64, ::Array{Float64,3}, ::Array{Float64,3}, ::Array{Float64,1}) at /home/bgulcan/Model-SmallestFull-Couenne3.jl:139
[11] macro expansion at /home/bgulcan/Run-SmallestFull-Couenne3.jl:115 [inlined]
[12] anonymous at ./:?
[13] include_from_node1(::String) at ./loading.jl:576
[14] include(::String) at ./sysimg.jl:14
[15] process_options(::Base.JLOptions) at ./client.jl:305
[16] _start() at ./client.jl:371
while loading /home/bgulcan/Run-SmallestFull-Couenne3.jl, in expression starting on line 113
}

Passing options

There seem to be multiple ways that solvers take options for solving:

  • Bonmin, Couenne and Ipopt all accept a limited subset of options at the command line, and require the writing of an options file to set most options. This file must be in the current working directory when the solver is invoked (note this isn't necessarily the directory containing the solver executable, nor the directory with the .nl file)
  • I had a look at SNOPT after it was requested, and it takes all options on the command line.

I haven't yet looked at any other solvers; are there other major ones that might take options in a different way?

This suggests we need some logic that figures out what to do with the options we get passed. An additional option that triggers writing an options file? A lookup table for the solvers? Something else entirely?

Another consideration is that the options files must be named based on the solver, e.g. ipopt.opt. Currently the solver is run using the command passed in the first argument to NLSolver, but that's a pretty fragile way to detect which solver we are using. Thoughts?

cc @mlubin @tkelman @IainNZ @joehuchette

no method matching supports_default_copy_to

I get the following error when following the installation instructions:

julia> using JuMP, AmplNLWriter

julia> m = Model(with_optimizer(AmplNLSolver, "bonmin"))
ERROR: MethodError: no method matching supports_default_copy_to(::AmplNLSolver, ::Bool)
Closest candidates are:
  supports_default_copy_to(::MathOptInterface.Utilities.AbstractModel, ::Bool) at C:\Users\U1251476\.julia\packages\MathOptInterface\C1XBe\src\Utilities\model.jl:693
  supports_default_copy_to(::MathOptInterface.Utilities.MockOptimizer, ::Bool) at C:\Users\U1251476\.julia\packages\MathOptInterface\C1XBe\src\Utilities\mockoptimizer.jl:521
  supports_default_copy_to(::MathOptInterface.Utilities.CachingOptimizer, ::Bool) at C:\Users\U1251476\.julia\packages\MathOptInterface\C1XBe\src\Utilities\cachingoptimizer.jl:165
  ...
Stacktrace:
 [1] #set_optimizer#77(::Bool, ::typeof(set_optimizer), ::Model, ::OptimizerFactory) at C:\Users\U1251476\.julia\packages\JuMP\MsUSY\src\optimizer_interface.jl:43
 [2] #Model#7 at .\none:0 [inlined]
 [3] Model(::OptimizerFactory) at C:\Users\U1251476\.julia\packages\JuMP\MsUSY\src\JuMP.jl:193
 [4] top-level scope at none:0

I am using JuMP v0.20.1 and AmplNLWriter v0.5.0 on Julia 1.2.0. Do you have any suggestions on how to solve this issue?
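
Not from the issue thread, but a guess at the likely fix: with_optimizer expects the MathOptInterface optimizer type, AmplNLWriter.Optimizer, rather than the legacy AmplNLSolver:

using JuMP, AmplNLWriter
# AmplNLWriter.Optimizer is the MOI optimizer; AmplNLSolver is the old
# MathProgBase solver and cannot be passed to with_optimizer.
m = Model(with_optimizer(AmplNLWriter.Optimizer, "bonmin"))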

MathProgBase.getobjbound()

When Bonmin terminates because of a timeout, for example, it would be helpful to be able to access the best lower bound it was able to prove. Is this information available in some form? If so, could we expose it via the standard MathProgBase.getobjbound method?

Bonmin: MOI issues

I was trying to test the library using bonmin (I will also use minos, but later) with this code:

using JuMP, AmplNLWriter

model = Model(with_optimizer(AmplNLWriter.Optimizer, "/home/nevevini/bonmin"))
@variable(model, x, start = 0.0)
@variable(model, y, start = 0.0)
@NLobjective(model, Min, (1 - x)^2 + 100 * (y)^2)
optimize!(model)

However, when I execute it I get this error:

ERROR: LoadError: MethodError: supports_constraint(::AmplNLWriter.InnerModel{Float64}, ::Type{MathOptInterface.SingleVariable}, ::Type{MathOptInterface.EqualTo{Float64}}) is ambiguous. Candidates:
  supports_constraint(model::AmplNLWriter.InnerModel{#76#T}, ::Type{#s21} where #s21<:Union{MathOptInterface.SingleVariable, MathOptInterface.ScalarAffineFunction{#76#T}, MathOptInterface.ScalarQuadraticFunction{#76#T}}, ::Type{#s20} where #s20<:Union{MathOptInterface.Integer, MathOptInterface.ZeroOne, MathOptInterface.EqualTo{#76#T}, MathOptInterface.GreaterThan{#76#T}, MathOptInterface.Interval{#76#T}, MathOptInterface.LessThan{#76#T}}) where #76#T in AmplNLWriter at /home/nevevini/.julia/packages/MathOptInterface/XE04a/src/Utilities/model.jl:1014
  supports_constraint(::MathOptInterface.Utilities.AbstractModel{T}, ::Type{MathOptInterface.SingleVariable}, ::Type{#s118} where #s118<:Union{MathOptInterface.Integer, MathOptInterface.ZeroOne, MathOptInterface.EqualTo{T}, MathOptInterface.GreaterThan{T}, MathOptInterface.Interval{T}, MathOptInterface.LessThan{T}, MathOptInterface.Semicontinuous{T}, MathOptInterface.Semiinteger{T}}) where T in MathOptInterface.Utilities at /home/nevevini/.julia/packages/MathOptInterface/XE04a/src/Utilities/model.jl:483
Possible fix, define
  supports_constraint(::AmplNLWriter.InnerModel{T}, ::Type{MathOptInterface.SingleVariable}, ::Type{#s118} where #s118<:Union{MathOptInterface.Integer, MathOptInterface.ZeroOne, MathOptInterface.EqualTo{T}, MathOptInterface.GreaterThan{T}, MathOptInterface.Interval{T}, MathOptInterface.LessThan{T}, MathOptInterface.Semicontinuous{T}, MathOptInterface.Semiinteger{T}})
Stacktrace:
 [1] get(::MathOptInterface.Utilities.UniversalFallback{AmplNLWriter.InnerModel{Float64}}, ::MathOptInterface.ListOfConstraintIndices{MathOptInterface.SingleVariable,MathOptInterface.EqualTo{Float64}}) at /home/nevevini/.julia/packages/MathOptInterface/XE04a/src/Utilities/universalfallback.jl:227
 [2] optimize!(::MathOptInterface.Utilities.UniversalFallback{AmplNLWriter.InnerModel{Float64}}) at /home/nevevini/.julia/packages/AmplNLWriter/V1gW5/src/MOI_wrapper.jl:295
 [3] optimize!(::MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.UniversalFallback{AmplNLWriter.InnerModel{Float64}}}) at /home/nevevini/.julia/packages/MathOptInterface/XE04a/src/Bridges/bridge_optimizer.jl:199
 [4] optimize!(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}) at /home/nevevini/.julia/packages/MathOptInterface/XE04a/src/Utilities/cachingoptimizer.jl:189
 [5] #optimize!#78(::Bool, ::Bool, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(optimize!), ::Model, ::Nothing) at /home/nevevini/.julia/packages/JuMP/iGamg/src/optimizer_interface.jl:141
 [6] optimize! at /home/nevevini/.julia/packages/JuMP/iGamg/src/optimizer_interface.jl:111 [inlined] (repeats 2 times)
 [7] top-level scope at /home/nevevini/Bureau/Recherche/teste.jl:7
 [8] include at ./boot.jl:328 [inlined]
 [9] include_relative(::Module, ::String) at ./loading.jl:1094
 [10] include(::Module, ::String) at ./Base.jl:31
 [11] exec_options(::Base.JLOptions) at ./client.jl:295
 [12] _start() at ./client.jl:464

I am using Julia 1.2.0 and JuMP 0.2.0, and my OS is Oracle Linux Server 7.6.

Also, I have another question: is it possible to generate a .mod or a .nl file with this library?

Error with LazyList when using AmplNLSolver in JuMP

I'm using Knitro through AMPL:

model = Model(solver=AmplNLSolver("/Applications/ampl/knitro"))

The error comes from the following (simplified) constraint

@NLconstraint(model, constraint[j=1:n], a <= (sum{A[i]^a , i=find(Ω[:,j].==1)})

I get the error:

ERROR: LoadError: MethodError: `^` has no method matching ^(::Lazy.LazyList, ::Float64)

This does not happen when, instead of using AMPL, I use Ipopt directly through IpoptSolver().

Couenne gives error for output

Hi there!
I’m trying to use Couenne optimizer with AmplNLWriter, but can’t make it work.
Here’s my hello-world level model:

using JuMP, AmplNLWriter
model = Model(with_optimizer(AmplNLWriter.Optimizer, "/usr/local/bin/couenne"))
@variable(model, 0 <= x <= 2 )
@variable(model, 0 <= y <= 30 )
@objective(model, Max, 5x + 3*y )
@constraint(model, 1x + 5y <= 3.0 )
optimize!(model)

Couenne produces an error for the result when I try to print out the results of the optimization with

println(JuMP.termination_status(model))
println("Objective value: ", JuMP.objective_value(model))
println("x = ", JuMP.value(x))
println("y = ", JuMP.value(y))

I get

OTHER_ERROR
Objective value: NaN
x = NaN
y = NaN

Juniper gives the expected (no-error) result when solving this problem, and Couenne does too if I run it from AMPL or as a standalone solver.

IpoptNLSolver returns infeasible solution with status :Optimal

I attempted to solve a non-linear model using Ipopt.jl, IpoptNLSolver and BonminNLSolver (I have CoinOptServices installed).

Ipopt.jl and BonminNLSolver return nearly identical solutions that appear to be local optima.

However, IpoptNLSolver returns an infeasible solution, but with the status Ipopt 3.12.2: Optimal Solution Found.

I used AmplNLWriter to create a nl file, and then passed it to the ipopt and bonmin executables. The offending .nl and .sol files can be found in the following gist.
https://gist.github.com/odow/18bff3fb0a4601d1353f

I am hesitant to post the Julia model publicly. I'll try tomorrow to trim down a minimal example, but a lot of things interconnect.

I guess the first thing is working out if

  • The .nl file is not being created correctly @JackDunnNZ
  • There is a bug in the Ipopt executable @tkelman

BonminNLSolver failing on a simple problem

The following (simple) problem fails on windows with "WARNING: Not solved to optimality, status: Error"

m2 = Model(solver =  BonminNLSolver(["bonmin.nlp_log_level=1"; "bonmin.bb_log_level=1"]))
@variable(m2, 0 <= yp <= 1, Int)
@variable(m2, 0 <= l <= 1000.0)
@variable(m2, 0 <= f <= 1000.0)
@NLconstraint(m2, .087*l >= f^2)
@constraint(m2,  l <= yp*1000.0)
v = solve(m2)
println(v)

However, it succeeds with OSIL

m4 = Model(solver =  OsilBonminSolver())
@variable(m4, 0 <= yp <= 1, Int)
@variable(m4, 0 <= l <= 1000.0)
@variable(m4, 0 <= f <= 1000.0)
@NLconstraint(m4, .087*l >= f^2)
@constraint(m4,  l <= yp*1000.0)
v = solve(m4)
println(v)

Packages were updated as of Sept. 20, 2016, using the installations of Bonmin built by CoinOptServices and AmplNLWriter.

tests fail on 1.2 (dependency issue?)

It looks like this package brings in an older version of JuMP during testing, which fails to load on 1.2 due to the invalid :import expression thing that was fixed a while ago in JuMP. Can the test dependencies just be updated or is there a deeper problem?

Using Couenne options from AmplNLWriter

I'm not sure if this is a problem with this interface or with my understanding of Couenne's options, but it doesn't seem like the options I specify affect the solver's behavior.

Here's a simple example:

using JuMP, AmplNLWriter

solver=AmplNLSolver("couenne", Dict("bonmin.allowable_gap"=>1, 
                         "outlev"=>0,
                         "output_file"=>"results.out"))
m = Model(solver=solver)

srand(4)
nvar = 10
@defVar(m, -10 <= x[i=1:nvar] <= 10)
@setNLObjective(m, Min, sum{1/(1+exp(-x[i])), i=1:nvar})
@addConstraint(m, sum{x[i], i=1:nvar} <= .4*nvar)
@assert(solve(m) == :Optimal)

The output is exactly the same regardless of how I specify the allowable gap and outlev, and the file results.out never shows up. However, it does seem like Couenne knows something about these options, since the output starts with

Couenne 0.5 -- an Open-Source solver for Mixed Integer Nonlinear Optimization
Mailing list: [email protected]
Instructions: http://www.coin-or.org/Couenne
couenne: output_file=results.out
bonmin.allowable_gap=100
outlev=0

Assertion failure while solving MINLPs with fixed variables in JuMP

This simple piece of code gives me an assertion failure:

using JuMP, AmplNLWriter

x0 = 1.0   # continuous fixed variable value

m1 = Model(solver=BonminNLSolver())
@defVar(m1, x == x0)
@defVar(m1, y, Bin)
@setNLObjective(m1, Min, x*y)
solve(m1)

Error message:

ERROR: assertion failed: all(x->begin  # /Users/Ashwin/.julia/v0.3/AmplNLWriter/src/AmplNLWriter.jl, line 195:
            $(Expr(:in, :x, :([:Cont,:Bin,:Int])))
        end,cat)
 in setvartype! at /Users/Ashwin/.julia/v0.3/AmplNLWriter/src/AmplNLWriter.jl:195
 in _buildInternalModel_nlp at /Users/Ashwin/.julia/v0.3/JuMP/src/nlp.jl:543
 in buildInternalModel at /Users/Ashwin/.julia/v0.3/JuMP/src/solvers.jl:623
 in solve at /Users/Ashwin/.julia/v0.3/JuMP/src/solvers.jl:39
 in include at /Applications/Juno.app/Contents/Resources/app/julia/lib/julia/sys.dylib
 in include_from_node1 at /Applications/Juno.app/Contents/Resources/app/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in reload at loading.jl:85
while loading /Users/Ashwin/Documents/GitHub/Ashwin/Research/Research/MIPLaneChange/testBonminAmplSolve2.jl, in expression starting on line 10

I was able to resolve that by using the Variable() method to specify the variable category:

m2 = Model(solver=BonminNLSolver())
x = Variable(m2, x0, x0, :Cont)
@defVar(m2, y, Bin)
@setNLObjective(m2, Min, x*y)
solve(m2)

If no binary variables are involved, fixed continuous variables defined by the @defVar macro work fine, e.g.:

m3 = Model(solver=BonminNLSolver())
@defVar(m3, x == x0)
y = 1.0
@setNLObjective(m3, Min, x*y)
solve(m3)

Not an issue if I call Bonmin via CoinOptServices.

Failure with JuMP

I get an UndefVarError when running JuMP master + AmplNLWriter.
In the following problem, this happens with IpoptNLSolver, whilst IpoptSolver works:

using JuMP, Ipopt, AmplNLWriter

m = Model(solver=IpoptNLSolver()) # Fails
# m = Model(solver=IpoptSolver()) # Works
n = 30

l = -ones(n); l[1] = 0
u = ones(n)
@defVar(m, l[i] <= x[i=1:n] <= u[i])
@defNLExpr(m, f1, x[1])
@defNLExpr(m, g, 1 + 9*sum{x[j]^2, j=2:n}/(n-1))
@defNLExpr(m, h, 1 - (f1/g)^2)
@defNLExpr(m, f2, g*h)

setValue(x[1], 1)
setValue(x[2:n], zeros(n-1))
@setNLObjective(m, :Min, f2)
solve(m)

The error is:

ERROR: UndefVarError: x not defined
 in eval at /home/riseth/.julia/v0.4/AmplNLWriter/src/AmplNLWriter.jl:1
 in pull_up_constants at /home/riseth/.julia/v0.4/AmplNLWriter/src/nl_linearity.jl:123
 in map! at abstractarray.jl:1277
 in map! at abstractarray.jl:1274
 in pull_up_constants at /home/riseth/.julia/v0.4/AmplNLWriter/src/nl_linearity.jl:125
 in map! at abstractarray.jl:1277
 in map! at abstractarray.jl:1274
 in pull_up_constants at /home/riseth/.julia/v0.4/AmplNLWriter/src/nl_linearity.jl:125
 in map! at abstractarray.jl:1277
 in map! at abstractarray.jl:1274
 in pull_up_constants at /home/riseth/.julia/v0.4/AmplNLWriter/src/nl_linearity.jl:125
 in map! at abstractarray.jl:1277
 in map! at abstractarray.jl:1274
 in pull_up_constants at /home/riseth/.julia/v0.4/AmplNLWriter/src/nl_linearity.jl:125
 in process_expression! at /home/riseth/.julia/v0.4/AmplNLWriter/src/AmplNLWriter.jl:367
 in loadproblem! at /home/riseth/.julia/v0.4/AmplNLWriter/src/AmplNLWriter.jl:210
 in _buildInternalModel_nlp at /home/riseth/.julia/v0.4/JuMP/src/nlp.jl:924
 in buildInternalModel at /home/riseth/.julia/v0.4/JuMP/src/solvers.jl:307
 in solve at /home/riseth/.julia/v0.4/JuMP/src/solvers.jl:131

[PkgEval] AmplNLWriter may have a testing issue on Julia 0.3 (2015-05-17)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.3

  • On 2015-05-17 the testing status was N/A - new package.
  • On 2015-05-17 the testing status changed to Package doesn't load.

Package doesn't load. means that PackageEvaluator did not find tests for your package. Additionally, trying to load your package with using failed.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Test log:

>>> 'Pkg.add("AmplNLWriter")' log
INFO: Cloning cache of AmplNLWriter from git://github.com/JackDunnNZ/AmplNLWriter.jl.git
INFO: Cloning cache of MathProgBase from git://github.com/JuliaOpt/MathProgBase.jl.git
INFO: Installing AmplNLWriter v0.0.1
INFO: Installing Compat v0.4.4
INFO: Installing MathProgBase v0.3.11
INFO: Package database updated

>>> 'using AmplNLWriter' log
Julia Version 0.3.6
Commit 0c24dca (2015-02-17 22:12 UTC)
Platform Info:
  System: Linux (x86_64-unknown-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3
/home/vagrant/testpkg/v0.3/AmplNLWriter

ERROR: AmplNLWriter not found
 in require at loading.jl:47
 in include at ./boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:285
 in _start at ./client.jl:354
while loading /vagrant/releaseAL/PKGEVAL_AmplNLWriter_using.jl, in expression starting on line 4

>>> test log
no tests to run
>>> end of log

Overwrite temporary files

I am running a series of optimization models consecutively. The loop sometimes fails because the temporary .nl files do not seem to be able to overwrite each other.
I guess the culprit is at line 375 in AmplNLWriter.jl:

# Rename file to have .nl extension (this is required by solvers)
mv(file_basepath, m.probfile)

This should use the option remove_destination=true (see the sketch below).
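
A sketch of the suggested change (remove_destination was the keyword on Julia 0.x; on Julia 0.7 and later the equivalent keyword is force):

# Rename file to have .nl extension, overwriting any stale file from a previous solve.
mv(file_basepath, m.probfile, remove_destination=true)  # Julia 0.x
# mv(file_basepath, m.probfile, force=true)             # Julia >= 0.7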

CodeCov

Any reason not to set this up?

The lower case `l` really bugs me

in the name; it's impossible to read in sans-serif fonts. What do you think about a rename?

(complaining about names is such a productive use of time :D)

Julia v0.6 Support?

Is this currently working in Julia v0.6? I notice that v0.6 is not in CI. I am currently getting this message,

ERROR: LoadError: MethodError: no method matching setsolver(::JuMP.Model, ::AmplNLWriter.AmplNLSolver)
The applicable method may be too new: running in world age 21832, while current world is 21843.

@jac0320, @kaarthiksundar do you two have similar issues?

Support for JuMP v0.19?

I saw that the master branch supports MOI now. Thanks for working on it. Any plan to support JuMP v0.19?

Slow invocation?

I am trying AmplNLWriter with Knitro. The code works reasonably fast with JuMP+Knitro (Knitro is called within seconds and the problem is solved in around 300 seconds). However, when AmplNLWriter is used, I have been waiting for 25 minutes and I do not see any sign of knitroampl being invoked, while the physical memory usage has almost tripled compared to JuMP+Knitro. Any suggestions? The code is attached. Thank you!
TestOptProblem9_2.txt

How to call Knitro

I cannot figure out from the examples how I can call Knitro (I have a full license, including for knitroampl) using AmplNLWriter. What line should I call? Many thanks!
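
Not an answer from the thread, but a minimal sketch following the README pattern above: pass the path to the knitroampl executable (the path below is a placeholder):

using JuMP, AmplNLWriter
# Replace the path with the location of knitroampl from your Knitro installation.
model = Model(() -> AmplNLWriter.Optimizer("/path/to/knitro/bin/knitroampl"))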

Nonlinear objective evaluation after solve is very slow

I have the following simple regularized logistic regression problem:

function logistic_regression(X::Matrix{Float64}, y::Vector{Int}, λ::Float64)
  n, p = size(X)

  # Convert y to (-1, +1)
  Y = y * 2 - 3

  m = Model(solver=IpoptNLSolver())

  @variables m begin
    β[1:p]
    β0
    z[1:p] >= 0
  end
  @NLexpression(m, fx[i=1:n], sum{X[i, j] * β[j], j= 1:p} + β0)

  @constraints m begin
    pos_abs[i = 1:p], z[i] >=  β[i]
    neg_abs[i = 1:p], z[i] >= -β[i]
  end

  @NLobjective(m, Max, -sum{log(1 + exp(-Y[i] * fx[i])), i = 1:n} -
                          n * λ * sum{z[j], j=1:p})

  @time status = solve(m)

  getvalue(β), getvalue(β0)
end

X is a 23000x128 matrix. Ipopt finishes in about 10 seconds, but then evaluating the objective value takes around 80 seconds at this step; in particular, the substitute_vars! call is the slow one, which traverses the expression tree and replaces variable nodes with their numeric solutions.

@mlubin Do you have any ideas about how we could make this step faster?

Interestingly, solving the same problem via Ipopt.jl takes 250 seconds (all in function evaluations), so there's about a 25x difference in just the raw solve time, ignoring this objective value time.

clean_solverdata and tmp files

At some point this package generates tmp* files in the .solverdata directory. I did not see the proper way to clean these. clean_solverdata focuses on .nl and .sol files.
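
A workaround sketch (not an AmplNLWriter API; assumes Julia 0.7+ and that the temporary files live in the package's .solverdata directory as described):

using AmplNLWriter
# Delete leftover tmp* files from the package's .solverdata directory.
solverdata = joinpath(dirname(dirname(pathof(AmplNLWriter))), ".solverdata")
for file in readdir(solverdata)
    startswith(file, "tmp") && rm(joinpath(solverdata, file))
end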

Assertion error: numcons == m.ncon

Something seems to have broken now (after trying to use the SCIP solver, but without having made any changes in the AmplNLWriter repo compared to master), and I'm not sure what to look for in debugging it. With a simple example that used to work (pasted below), I now get the following error:

~/.julia/v0.4/SigmoidalProgramming (a6a8a5f ✘)✭ ᐅ julia examples/simple.jl
ERROR: LoadError: AssertionError: num_cons == m.ncon
 in read_sol at /Users/madeleine/.julia/v0.4/AmplNLWriter/src/AmplNLWriter.jl:574
 in read_results at /Users/madeleine/.julia/v0.4/AmplNLWriter/src/AmplNLWriter.jl:484
 in optimize! at /Users/madeleine/.julia/v0.4/AmplNLWriter/src/AmplNLWriter.jl:353
 in optimize! at /Users/madeleine/.julia/v0.4/AmplNLWriter/src/AmplNLWriter.jl:663
 in solvenlp at /Users/madeleine/.julia/v0.4/JuMP/src/nlp.jl:547
 in solve at /Users/madeleine/.julia/v0.4/JuMP/src/solvers.jl:135
 in include at /Applications/Julia-0.4.0.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at /Applications/Julia-0.4.0.app/Contents/Resources/julia/lib/julia/sys.dylib
 in process_options at /Applications/Julia-0.4.0.app/Contents/Resources/julia/lib/julia/sys.dylib
 in _start at /Applications/Julia-0.4.0.app/Contents/Resources/julia/lib/julia/sys.dylib
while loading /Users/madeleine/.julia/v0.4/SigmoidalProgramming/examples/simple.jl, in expression starting on line 14

and here's examples/simple.jl:

using JuMP, AmplNLWriter, MathProgBase

srand(4)
nvar = 10

solver=AmplNLSolver("couenne")
m = Model(solver=solver)
@defVar(m, -10 <= x[i=1:nvar] <= 10)
@setNLObjective(m, Min, sum{1/(1+exp(-x[i])), i=1:nvar})
@addConstraint(m, sum{x[i], i=1:nvar} <= .4*nvar)
@assert(solve(m) == :Optimal)

Bonmin options with AmplNLWriter

I have some problems running the following code, which is given in the book Julia Programming for Operations Research (2nd ed.) by Changhyun Kwon.
Error log is also attached.

How can I run this code? Please help me.
My environment: Ubuntu 16.04, Julia 1.1.1.
Bonmin file path: home/sychoi/workspace/bonmin

===================code===================
using JuMP, AmplNLWriter
m = Model(with_optimizer(AmplNLWriter.Optimizer, "home/sychoi/workspace/bonmin"))

@variable(m, x>=0)
@variable(m, y[1:2])
@variable(m, s[1:5]>=0)
@variable(m, l[1:5]>=0)

@objective(m, Min, -x -3y[1] + 2y[2])

@constraint(m, -2x +  y[1] + 4y[2] + s[1] ==  16)
@constraint(m,  8x + 3y[1] - 2y[2] + s[2] ==  48)
@constraint(m, -2x +  y[1] - 3y[2] + s[3] == -12)
@constraint(m,       -y[1]         + s[4] ==   0)
@constraint(m,        y[1]         + s[5] ==   4)
@constraint(m, -1 + l[1] + 3l[2] +  l[3] - l[4] + l[5] == 0)
@constraint(m,     4l[2] - 2l[2] - 3l[3]               == 0)
for i in 1:5
  @NLconstraint(m, l[i] * s[i] == 0)
end

optimize!(m)

println("** Optimal objective function value = ", JuMP.objective_value(m))
println("** Optimal x = ", JuMP.value(x))
println("** Optimal y = ", JuMP.value.(y))
println("** Optimal s = ", JuMP.value.(s))
println("** Optimal l = ", JuMP.value.(l))

===================error log===================
ERROR: LoadError: IOError: could not spawn `home/sychoi/workspace/bonmin /home/sychoi/.julia/packages/AmplNLWriter/V1gW5/.solverdata/tmp6U6ZEx.nl -AMPL`: no such file or directory (ENOENT)
Stacktrace:
 [1] _spawn_primitive(::String, ::Cmd, ::Array{Any,1}) at ./process.jl:400
 [2] #505 at ./process.jl:413 [inlined]
 [3] setup_stdios(::getfield(Base, Symbol("##505#506")){Cmd}, ::Array{Any,1}) at ./process.jl:497
 [4] _spawn(::Base.CmdRedirect, ::Array{Any,1}) at ./process.jl:412
 [5] #run#515(::Bool, ::Function, ::Base.CmdRedirect) at ./process.jl:729
 [6] #run at /home/sychoi/.julia/packages/AmplNLWriter/V1gW5/src/AmplNLWriter.jl:0 [inlined]
 [7] optimize!(::AmplNLWriter.AmplNLMathProgModel) at /home/sychoi/.julia/packages/AmplNLWriter/V1gW5/src/AmplNLWriter.jl:417
 [8] optimize!(::AmplNLWriter.AmplNLNonlinearModel) at /home/sychoi/.julia/packages/AmplNLWriter/V1gW5/src/AmplNLWriter.jl:777
 [9] optimize!(::MathOptInterface.Utilities.UniversalFallback{AmplNLWriter.InnerModel{Float64}}) at /home/sychoi/.julia/packages/AmplNLWriter/V1gW5/src/MOI_wrapper.jl:375
 [10] optimize!(::MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.UniversalFallback{AmplNLWriter.InnerModel{Float64}},MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Bridges.AllBridgedConstraints{Float64}}}) at /home/sychoi/.julia/packages/MathOptInterface/C3lip/src/Bridges/bridgeoptimizer.jl:73
 [11] optimize!(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{JuMP._MOIModel{Float64}}}) at /home/sychoi/.julia/packages/MathOptInterface/C3lip/src/Utilities/cachingoptimizer.jl:170
 [12] #optimize!#79(::Bool, ::Bool, ::Function, ::Model, ::Nothing) at /home/sychoi/.julia/packages/JuMP/ibcEh/src/optimizer_interface.jl:132
 [13] optimize! at /home/sychoi/.julia/packages/JuMP/ibcEh/src/optimizer_interface.jl:105 [inlined] (repeats 2 times)
 [14] top-level scope at none:0
 [15] include at ./boot.jl:326 [inlined]
 [16] include_relative(::Module, ::String) at ./loading.jl:1038
 [17] include(::Module, ::String) at ./sysimg.jl:29
 [18] exec_options(::Base.JLOptions) at ./client.jl:267
 [19] _start() at ./client.jl:436

in expression starting at /home/sychoi/workspace/julia_practice/my/myExample01.jl:24
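
Not from the issue thread, but the likely cause: the path home/sychoi/workspace/bonmin is relative (it is missing the leading /), so the executable cannot be spawned. Following the README, pass the full absolute path:

using JuMP, AmplNLWriter
# Use an absolute path to the bonmin executable (note the leading "/").
m = Model(with_optimizer(AmplNLWriter.Optimizer, "/home/sychoi/workspace/bonmin"))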

Using SCIP fron AmplNLWriter

It looks like the invocation of SCIP from AmplNLWriter is different from what my installed version of SCIP expects. I get the error:

invalid parameter </Users/madeleine/.julia/v0.4/AmplNLWriter/.solverdata/model.nl>
invalid parameter <-AMPL>

syntax: /Applications/scipoptsuite-3.2.0/scip-3.2.0/bin/scip [-l <logfile>] [-q] [-s <settings>] [-f <problem>] [-b <batchfile>] [-c "command"]
  -l <logfile>  : copy output into log file
  -q            : suppress screen messages
  -s <settings> : load parameter settings (.set) file
  -f <problem>  : load and solve problem file
  -b <batchfile>: load and execute dialog command batch file (can be used multiple times)
  -c "command"  : execute single line of dialog commands (can be used multiple times)

The command that seems to be being called inside AmplNLWriter is
scip /Users/madeleine/.julia/v0.4/AmplNLWriter/.solverdata/model.nl -AMPL
Being unfamiliar with SCIP in its non-Julian incarnations, I'm not sure what the right syntax for calling it with an AMPL file is, or whether I need to compile a different version of it...? I've compiled it according to the instructions here.
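
Not from the issue thread, but one likely direction: the plain scip binary does not understand the "model.nl -AMPL" calling convention; the SCIP Optimization Suite builds a separate AMPL-interface executable (named scipampl in SCIP 3.x), which would be the binary to pass to AmplNLSolver (the path below is illustrative):

using JuMP, AmplNLWriter
# Point AmplNLSolver at the AMPL-interface binary, not the interactive scip shell.
solver = AmplNLSolver("/Applications/scipoptsuite-3.2.0/scip-3.2.0/bin/scipampl")
m = Model(solver = solver)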

Bonmin: Cannot open .nl file

I was trying to solve a nonlinear integer optimization problem, given below, but I am getting an error saying it cannot open the .nl file. I do not understand why this is happening.

Max 4.085381103967961e-6 * (0.02789 * (1.0 - 0.3 ^ zj[1]) * ()(0.3 ^ zj[1]) + 0.011173 * (1.0 - 0.3 ^ zj[3]) * (0.3 ^ zj[1] * 0.3 ^ zj[3]) + 0.00097 * (1.0 - 0.3 ^ zj[2]) * (0.3 ^ zj[1] * 0.3^ zj[3] * 0.3 ^ zj[2])) + 1.08158767640779e-5 * (0.081077 * (1.0 - 0.3 ^ zj[1]) * ()(0.3 ^ zj[1]) + 0.003674 * (1.0 - 0.3 ^ zj[3]) * (0.3 ^ zj[1] * 0.3 ^ zj[3]) + 0.002916 * (1.0 - 0.3 ^ zj[2]) * (0.3 ^ zj[1] * 0.3 ^ zj[3] * 0.3 ^ zj[2])) + 1.714709723075756e-5 * (0.041535 * (1.0 - 0.3 ^ zj[1]) * ()(0.3 ^ zj[1]) + 0.004603 * (1.0 - 0.3 ^ zj[2]) * (0.3 ^ zj[1] * 0.3 ^ zj[2]) + 0.002542 * (1.0 - 0.3 ^ zj[3]) * (0.3 ^ zj[1] * 0.3 ^ zj[2] * 0.3 ^ zj[3])) + 6.390409502891686e-6 * (0.015394 * (1.0 - 0.3 ^ zj[1]) * ()(0.3 ^ zj[1]) + 0.004482 * (1.0 - 0.3 ^ zj[3]) * (0.3 ^ zj[1] * 0.3 ^ zj[3]) + 0.001687 * (1.0 - 0.3 ^ zj[2]) * (0.3 ^ zj[1] * 0.3 ^ zj[3] * 0.3 ^ zj[2])) + 3.2934853036645695e-5 * (0.213073 * (1.0 - 0.3 ^ zj[1]) * ()(0.3 ^ zj[1]) + 0.008925 * (1.0 - 0.3 ^ zj[2]) * (0.3 ^ zj[1] * 0.3 ^ zj[2]) + 0.00121 * (1.0 - 0.3 ^ zj[3]) * (0.3 ^ zj[1] * 0.3 ^ zj[2] * 0.3 ^ zj[3])) + 1.3446277661500508e-6 * (0.458692 * (1.0 - 0.3 ^ zj[1]) * ()(0.3 ^ zj[1]) + 0.026056 * (1.0 - 0.3 ^ zj[2]) * (0.3 ^ zj[1] * 0.3 ^ zj[2]) + 0.000423 * (1.0 - 0.3 ^ zj[3]) * (0.3 ^ zj[1] * 0.3 ^ zj[2] * 0.3 ^ zj[3]))
Subject to
zj[1] ≤ 1
zj[2] ≤ 1
zj[3] ≤ 1
zj[1] + zj[2] + zj[3] ≤ 3
zj[i] ≥ 0, integer, ∀ i ∈ {1,2,3}
Bonmin 1.8.6 using Cbc 2.9.9 and Ipopt 3.12.8
bonmin:
Cannot open .nl file

The code I used is

    mexclp_pr_ssbp = Model(solver=AmplNLSolver(
                                "/path/to/bonmin/bonmin"))

    @variable(mexclp_pr_ssbp, zj[1:n] >=0 , Int)

    for j in 1:n
        @constraint(mexclp_pr_ssbp, zj[j] <= cj[j] )
    end
    @constraint(mexclp_pr_ssbp, sum(zj) <= q )

    @NLobjective(mexclp_pr_ssbp, Max,
                sum(di[i] *
                sum(P[i,ij[i,j]] *
                (1 - p[ij[i,j]]^zj[ij[i,j]]) * prod(p[ij[i,j]] ^ zj[ij[i,u]] for u in 1:(j)) for j in 1:n )
                for i in 1:m))

    print(mexclp_pr_ssbp)

    # setvalue(zj, cj)
    solve(mexclp_pr_ssbp)

It works if I use sum instead of prod in the code. Any help would be appreciated.

Syntax for Knitro with outlev

Hi, I'm currently using the Knitro solver in AmplNLWriter and I want to set outlev for Knitro. Currently I have only m = Model(with_optimizer(AmplNLWriter.Optimizer, "knitro")), and I want to add the outlev option.
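
A minimal sketch following the option-setting pattern documented in the README above; whether the knitro executable accepts "outlev" passed this way is an assumption on my part:

using JuMP, AmplNLWriter
model = Model(() -> AmplNLWriter.Optimizer("knitro"))
# Pass the option through as a raw attribute; the name and value are assumptions.
set_attribute(model, "outlev", 3)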

Failure using Couenne

I get the following output

julia> solve(m)
Couenne 0.5.6 -- an Open-Source solver for Mixed Integer Nonlinear Optimization
Mailing list: [email protected]
Instructions: http://www.coin-or.org/Couenne
couenne: 
ANALYSIS TEST: Couenne: new cutoff value 2.1671658600e+05 (0.00837 seconds)
NLP0012I 
              Num      Status      Obj             It       time                 Location
NLP0014I             1         OPT 23.897379       13 0.003589
Couenne: new cutoff value 2.3897378895e+01 (0.012458 seconds)
Loaded instance "/home/michael/.julia/v0.6/AmplNLWriter/.solverdata/tmpGXI1IL.nl"
Constraints:           12
Variables:             12 (0 integer)
Auxiliaries:           40 (0 integer)

Coin0506I Presolve 61 (-6) rows, 40 (-12) columns and 268 (-12) elements
Clp0006I 0  Obj 23.899869 Primal inf 5.4012592 (1) Dual inf 3405.4599 (1)
Clp0006I 9  Obj -9.99e+12
Clp0006I 15  Obj -9.99e+12
ERROR: SystemError: opening file /home/michael/.julia/v0.6/AmplNLWriter/.solverdata/tmpGXI1IL.sol: No such file or directory
Stacktrace:
 [1] read_sol(::AmplNLWriter.AmplNLMathProgModel) at /home/michael/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:596
 [2] read_results(::AmplNLWriter.AmplNLMathProgModel) at /home/michael/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:532
 [3] optimize!(::AmplNLWriter.AmplNLMathProgModel) at /home/michael/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:387
 [4] optimize!(::AmplNLWriter.AmplNLNonlinearModel) at /home/michael/.julia/v0.6/AmplNLWriter/src/AmplNLWriter.jl:722
 [5] #solvenlp#165(::Bool, ::Function, ::JuMP.Model, ::JuMP.ProblemTraits) at /home/michael/.julia/v0.6/JuMP/src/nlp.jl:1271
 [6] (::JuMP.#kw##solvenlp)(::Array{Any,1}, ::JuMP.#solvenlp, ::JuMP.Model, ::JuMP.ProblemTraits) at ./<missing>:0
 [7] #solve#116(::Bool, ::Bool, ::Bool, ::Array{Any,1}, ::Function, ::JuMP.Model) at /home/michael/.julia/v0.6/JuMP/src/solvers.jl:172
 [8] solve(::JuMP.Model) at /home/michael/.julia/v0.6/JuMP/src/solvers.jl:150
 [9] macro expansion at ./REPL.jl:97 [inlined]
 [10] (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:73

This occurs when I try to run the following model:

using Distributions
using Plots
using JuMP
using AmplNLWriter

Lambda=10
n=rand(Poisson(Lambda))
u=rand(Uniform(0,1),n)
logistic(x)=exp(x)/(1+exp(x))
intensity(x)=Lambda*logistic(4*sin(10*x))
a=filter(x-> rand(Uniform(0,1)) < intensity(x)/Lambda,u )
na=length(a)
r=setdiff(u,a)
s2=0.1
k(x,y)=exp(- (0.5/s2) * norm(y-x,2)^2)
K=[k(a[i],a[j]) for i=1:na, j=1:na]
grid=collect(0:0.001:1)
KU=[k(a[i],grid[j]) for i=1:length(a), j=1:length(grid)]
Kg=KU*(KU')


#m = Model(solver=AmplNLSolver("couenne"))
m = Model(solver=CouenneNLSolver())
@variable(m, alpha[1:na])
setvalue(alpha,ones(na))
scalar=100

@NLobjective(m, Min,
             -sum(log(scalar * (sum(K[i,j]*alpha[j] for j=1:na))^2) for i=1:na)
             + sum(alpha[i]*((scalar/na)*Kg[i,j]+0.001*K[i,j])*alpha[j] for i=1:na, j=1:na)
             )


solve(m)

Can somebody please help me diagnose this problem? If I switch the solver to Bonmin, it works fine and I get a local solution out. I would like to use Couenne, though, for the spatial branch-and-bound component, as this model is nonconvex.
