Wednesday, December 14, 2011

Logs in NLP

In http://www.or-exchange.com/questions/4214/entropy-maximization-log0-ampl-tricks someone seems to claim that:

y = log(x)
x.lo = 0.001

is not good modeling practice. I don’t agree: the better NLP solvers don’t evaluate nonlinear functions outside their bounds. And in my more than 20 years of experience in nonlinear modeling, I have used this construct all the time.
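As a minimal sketch of the construct in GAMS (the model name, the second equation and the toy objective are made up just to have something complete):

variables x, y, z;
equations defy, defz;

defy.. y =e= log(x);
defz.. z =e= sqr(y - 1);

* keep the argument of log() strictly positive
x.lo = 0.001;
* start in the interior of the bounds
x.l = 1;

model m /all/;
solve m using nlp minimizing z;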

A large percentage of NLPs that fail can be fixed by specifying (a small GAMS sketch follows the list):

  • better bounds
  • a better starting point
  • better scaling
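
For instance, in GAMS (the model name m and the particular values are hypothetical; x.scale together with m.scaleopt = 1 is the standard way to pass user scaling to solvers that support it, like CONOPT or MINOS):

* better bounds: keep x away from the singularity of log()
x.lo = 0.001;
x.up = 1000;
* a better starting point: a level value in the interior of the bounds
x.l = 1;
* better scaling: tell the solver the expected order of magnitude of x
x.scale = 10;
m.scaleopt = 1;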

The comment below talks about IPOPT. No, even IPOPT will not evaluate nonlinear functions outside their bounds (come on, it is an interior point code!). See http://drops.dagstuhl.de/volltexte/2009/2089/pdf/09061.WaechterAndreas.Paper.2089.pdf. A slight complication is the bound_relax_factor option, but in practice that is not a problem. Maybe the poster is confusing infeasibility and interior points: a point can be inside the bounds but still be infeasible.
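
For the truly paranoid, here is a sketch of how one could switch that bound relaxation off when running the model through GAMS/IPOPT (this reuses the hypothetical model m from the snippet above; bound_relax_factor is a documented IPOPT option with a tiny default of 1e-8):

$onecho > ipopt.opt
bound_relax_factor 0
$offecho

m.optfile = 1;
option nlp = ipopt;
solve m using nlp minimizing z;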

Apart from this remark: indeed, IPOPT is an excellent solver, and a fantastic complement to the well-known active-set solvers like CONOPT, MINOS and SNOPT. On large models with many superbasic variables those solvers tend to struggle, while an interior point algorithm does not really care.

3 comments:

  1. "the better NLP solvers don’t evaluate nonlinear functions outside their bounds"

    I'm not sure that is true. IPOPT is an excellent infeasible path solver which uses a filter line search based on minimizing constraint violations. Perhaps your experience has primarily been with feasible path solvers like CONOPT? Then yes, that would be true. There is also the other issue of nonlinearity. The gradient of log(x) changes very rapidly for values of x near 0, while the gradient of x = exp(y) is more benign for x near 0 (from a numerical scaling standpoint, the absolute ranges are much smaller). (A sketch of this reformulation appears after the comments.)

  2. You are quite right -- my comment was an utterly ignorant one. The filter method elects to decrease the constraint violation in the equality constraints c(x) only. Interior point methods always stay within the bounds of the inequality constraints because of the log-barrier term. I retract my ignorant statement unreservedly.

  3. @Erwin: I agree with you, provided that (a) we can be confident that the a priori bound does not cut off the optimum, and (b) the bound is not so close to zero that a little rounding error could land you in hot water.

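Regarding the x = exp(y) reformulation in the first comment: the derivative of log(x) is 1/x, which blows up as x approaches 0, while the derivative of exp(y) is exp(y) itself, which stays small there. A sketch of the substituted fragment in GAMS (names made up, as before):

variables x, y;
equation defx;

* x = exp(y) replaces y = log(x); exp() is defined for every y,
* so no lower bound on x is needed to protect the function evaluation
defx.. x =e= exp(y);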