solve a nonlinear program in Matrix Form
NLPSolve(n, p, nc, nlc, lc, bd, opts)
NLPSolve(n, p, lc, bd, opts)
posint; number of variables
procedure; objective function
(optional) nonnegint or list of 2 nonnegints; number of nonlinear constraints
(optional) procedure; nonlinear constraints
(optional) list; linear constraints
(optional) list; bounds
(optional) equation(s) of the form option = value where option is one of assume, constraintjacobian, evaluationlimit, feasibilitytolerance, infinitebound, initialpoint, iterationlimit, maximize, method, nodelimit, objectivegradient, objectivetarget, optimalitytolerance, or output; specify options for the NLPSolve command
The NLPSolve command solves a nonlinear program (NLP), which involves computing the minimum (or maximum) of an objective function, possibly subject to constraints. Generally, a local minimum is returned unless the problem is convex. However, global search is available in limited situations, as described in the following Notes section. An NLP has the following form:
minimize (or maximize) f(x)
subject to
v(x) ≤ 0 (nonlinear inequality constraints)
w(x) = 0 (nonlinear equality constraints)
A·x ≤ b (linear inequality constraints)
Aeq·x = beq (linear equality constraints)
bl ≤ x ≤ bu (bounds)
where x is the vector of problem variables; f(x) is a real-valued function of x; v(x) and w(x) are vector-valued functions of x; b, beq, bl and bu are vectors; and A and Aeq are matrices. The relations involving matrices and vectors are element-wise.
Most of the algorithms used by the NLPSolve command assume that the objective function and the constraints are twice continuously differentiable. NLPSolve will sometimes succeed even if these conditions are not met.
This help page describes how to specify the problem in Matrix form. For details about the exact format of the objective function and the constraints, see the Optimization/MatrixForm help page. The algebraic and operator forms for specifying an NLP are described in the Optimization[NLPSolve] help page. The Matrix form is more complex, but leads to more efficient computation.
It is recommended that you use the Optimization[LPSolve] command for linear programs (problems with linear objective functions and linear constraints). Use the Optimization[QPSolve] command for quadratic programs (problems with quadratic objective functions and linear constraints). The Optimization[LSSolve] command is available for objective functions that can be put into least-squares form.
Consider the first calling sequence. The first parameter n is the number of problem variables. The second parameter p is a procedure that takes one input Vector parameter of size n, representing x, and returns the value of f⁡x.
The third parameter nc is a list of two non-negative integers: the number of nonlinear inequality constraints followed by the number of nonlinear equality constraints. If there are no equality constraints, nc can be a single integer giving the number of inequality constraints.
The fourth parameter nlc is a procedure, proc⁡x,y...end proc, that computes the values of the nonlinear constraints. The current point is passed as the Vector x, and the values of v⁡x followed by the values of w⁡x are returned using the Vector parameter y.
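As an illustration (a hypothetical two-variable problem, not taken from this page), suppose there is one nonlinear inequality constraint x[1]^2 + x[2]^2 - 1 ≤ 0 and one nonlinear equality constraint x[1]*x[2] - 0.5 = 0. A constraint procedure of the required form might be sketched as:

# Hypothetical constraints for a two-variable problem:
#   v(x) = x[1]^2 + x[2]^2 - 1  (inequality, v(x) <= 0)
#   w(x) = x[1]*x[2] - 0.5      (equality, w(x) = 0)
nlc := proc (V, W)
  W[1] := V[1]^2 + V[2]^2 - 1;  # inequality constraint value first
  W[2] := V[1]*V[2] - 0.5       # equality constraint value second
end proc:

For this sketch, the nc parameter would be [1, 1], since there is one constraint of each kind.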
The fifth parameter lc is an optional list of linear constraints. Its most general form is [A, b, Aeq, beq], where A and Aeq are Matrices, and b and beq are Vectors. This parameter can take other forms when either the inequality or the equality constraints are absent. For a full description of how to specify general linear constraints, see the Optimization/MatrixForm help page.
The sixth parameter bd is an optional list [bl, bu] of lower and upper bounds. In general, bl and bu must be n-dimensional Vectors. The Optimization/MatrixForm help page describes alternate forms that can be used when either bound does not exist and provides more convenient ways of specifying the Vectors. Non-negativity of the variables is not assumed by default, but can be specified using the assume = nonnegative option.
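For example, in a hypothetical two-variable problem with bounds 0 ≤ x[i] ≤ 10, the bd parameter could be constructed as:

# Illustrative bounds 0 <= x[i] <= 10 for a two-variable problem
bd := [Vector([0., 0.], datatype = float), Vector([10., 10.], datatype = float)]: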
If there are no nonlinear constraints, the second calling sequence, in which parameters nc and nlc are omitted, can be used.
Maple returns the solution as a list containing the final minimum (or maximum) value and a point (the extremum). If the output = solutionmodule option is provided, then a module is returned. See the Optimization/Solution help page for more information.
The opts argument can contain one or more of the following options. These options are described in more detail in the Optimization/Options help page.
assume = nonnegative -- Assume that all variables are non-negative.
constraintjacobian = procedure -- Use the provided procedure to compute the Jacobian matrix of the constraints. The form required for the procedure is described in the Nonlinear Constraints section of the Optimization/MatrixForm help page.
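As a sketch (the authoritative signature is in the Optimization/MatrixForm help page), a Jacobian procedure fills a Matrix parameter with the partial derivatives of each constraint with respect to each variable. For the hypothetical constraints v(x) = x[1]^2 + x[2]^2 - 1 and w(x) = x[1]*x[2] - 0.5, it might look like:

# Hypothetical constraint Jacobian: W[i, j] holds the partial derivative
# of constraint i with respect to variable j.
nlcjac := proc (V, W)
  W[1, 1] := 2*V[1];  W[1, 2] := 2*V[2];   # d(v)/dx[1], d(v)/dx[2]
  W[2, 1] := V[2];    W[2, 2] := V[1]      # d(w)/dx[1], d(w)/dx[2]
end proc: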
evaluationlimit = posint -- Set the maximum number of objective function evaluations performed by the algorithm. This option is only available when the method option is set to branchandbound, modifiednewton, nonlinearsimplex or quadratic.
feasibilitytolerance = realcons(positive) -- Set the maximum absolute allowable constraint violation.
infinitebound = realcons(positive) -- Set any value of a variable greater than the infinitebound value to be equivalent to infinity during the computation.
initialpoint = Vector -- Use the provided initial point, which is an n-dimensional Vector of numeric values. The initial point is ignored when the quadratic interpolation method is used. For more information, see the Optimization/Methods help page.
iterationlimit = posint -- Set the maximum number of iterations performed by the algorithm. This option is only available when the method option is set to pcg or sqp.
maximize or maximize = truefalse -- Maximize the objective function when equal to true and minimize when equal to false. The option 'maximize' is equivalent to maximize = true. The default is maximize = false.
method = branchandbound, modifiednewton, nonlinearsimplex, pcg, quadratic, or sqp -- Specify the method. See the Optimization/Methods help page for more information.
nodelimit = posint -- Set the maximum number of nodes searched in the branch-and-bound tree. This option is only available with the method = branchandbound option.
objectivegradient = procedure -- Use the provided procedure to compute the gradient of the objective function. The form required for the procedure is described in the Nonlinear Objective section of the Optimization/MatrixForm help page.
objectivetarget = realcons -- Set the target objective function value which, if reached, causes the global search to terminate. This option is only available with the method = branchandbound option.
optimalitytolerance = realcons(positive) -- Set the tolerance that determines whether an optimal point has been found.
output = solutionmodule -- Return a module as described in the Optimization/Solution help page.
The NLPSolve command uses various methods implemented in a built-in library provided by the Numerical Algorithms Group (NAG). See the Optimization/Methods help page for more details. The solvers are iterative in nature and require an initial point. The quality of the solution can depend greatly on the point chosen, so it is recommended that you provide a point using the initialpoint option. Otherwise, a point is automatically generated.
The NLPSolve command also provides a global branch-and-bound search algorithm for univariate problems having finite bounds but no other constraints. This method, specified with the method = branchandbound option, returns a global solution on the given interval.
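For instance, a global search for the minimum of an illustrative univariate function (not an example from this page) over a finite interval might be set up as follows:

# Global search for the minimum of f(x) = x*sin(x) on [0, 10]
f := proc (V) V[1]*sin(V[1]) end proc:
bd := [Vector([0.], datatype = float), Vector([10.], datatype = float)]:
NLPSolve(1, f, bd, method = branchandbound);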
The computation is performed in floating-point. Therefore, all data provided must have type realcons and all returned solutions are floating-point, even if the problem is specified with exact values. For best performance, Vectors and Matrices should be constructed with the datatype = float option and all procedures should work with evalhf. Because the solver fails when a complex value is encountered, it is sometimes necessary to add additional constraints to ensure that the objective function and constraints always evaluate to real values. For more information about numeric computation in the Optimization package and suggestions on how to obtain the best performance using the Matrix form of input, see the Optimization/Computation help page.
For certain methods, it is highly recommended that you provide derivatives of the objective function and constraints through the objectivegradient and constraintjacobian options, because NLPSolve performs more efficiently when this information is available. For information on the methods that use derivatives, see the Optimization/Methods help page.
Although the assume = nonnegative option is accepted, general assumptions are not supported by commands in the Optimization package.
An answer is returned when necessary first-order conditions for optimality have been met and the iterates have converged. If the initial point already satisfies the conditions, then a warning is issued. Generally, the result is a local extremum but it is possible for the solver to return a saddle point. It is recommended that you try different initial points with each problem to verify that the solution is indeed an extremum.
Occasionally the solver will return a solution even if the iterates have not converged but the point satisfies the first-order conditions. Setting infolevel[Optimization] to 1 or higher will produce a message indicating this situation if it occurs.
Unlike the situation for linear programming, it is difficult to detect unboundedness in the nonlinear case and no warning is issued by NLPSolve. If the solution values seem unexpectedly large or small, it is possible that the solution is unbounded.
If NLPSolve returns an error saying that no solution could be found, it is recommended that you try a different initial point or use tolerance parameters that are less restrictive.
The following example demonstrates how to specify a nonlinear program in Matrix form and solve it using the NLPSolve command.
Consider the objective function w^3*(v - w)^2 + (w - x - 1)^2 + (x - y - 2)^2 + (y - z - 3)^2 and the constraints w + x + y + z ≤ 5 and 3*z + 2*v - 3 = 0.
Express the objective function as a procedure with the single parameter V representing the Vector with v, w, x, y, and z as components.
p := proc (V)
  V[2]^3*(V[1]-V[2])^2 + (V[2]-V[3]-1)^2 + (V[3]-V[4]-2)^2 + (V[4]-V[5]-3)^2
end proc:
As recommended previously, provide the gradient of the objective function using the objectivegradient option. Other Maple commands such as VectorCalculus[Gradient] can be helpful in constructing such procedures.
objgrd := proc (V, W)
  W[1] := 2*V[2]^3*(V[1]-V[2]);
  W[2] := 3*V[2]^2*(V[1]-V[2])^2 - 2*V[2]^3*(V[1]-V[2]) + 2*(V[2]-V[3]-1);
  W[3] := -2*(V[2]-V[3]-1) + 2*(V[3]-V[4]-2);
  W[4] := -2*(V[3]-V[4]-2) + 2*(V[4]-V[5]-3);
  W[5] := -2*(V[4]-V[5]-3)
end proc:
Express the linear constraints in Matrix form.
A := Matrix([[0, 1, 1, 1, 1]], datatype = float):
b := Vector([5], datatype = float):
Aeq := Matrix([[2, 0, 0, 0, 3]], datatype = float):
beq := Vector([3], datatype = float):
lc := [A, b, Aeq, beq]:
Solve the problem with NLPSolve, specifying that all variables must be non-negative. The second calling sequence is used because there are no nonlinear constraints.
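Using the objective procedure, gradient procedure, and linear constraints defined above, a call of the following form completes the example (the exact floating-point result depends on the computation):

# Minimize p subject to lc, with all variables non-negative,
# supplying the gradient through the objectivegradient option
NLPSolve(5, p, lc, assume = nonnegative, objectivegradient = objgrd);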