scipy.optimize.NonlinearConstraint
- class scipy.optimize.NonlinearConstraint(fun, lb, ub, jac='2-point', hess=<scipy.optimize._hessian_update_strategy.BFGS object>, keep_feasible=False, finite_diff_rel_step=None, finite_diff_jac_sparsity=None)
Nonlinear constraint on the variables.
The constraint has the general inequality form:
lb <= fun(x) <= ub
Here the vector of independent variables x is passed as an ndarray of shape (n,) and fun returns a vector with m components. It is possible to use equal bounds to represent an equality constraint or infinite bounds to represent a one-sided constraint.
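For example, all three bound patterns can be expressed with the same class (a minimal sketch; the constraint functions here are illustrative):

>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint
>>> # Interval constraint: 0 <= x[0]**2 + x[1] <= 1
>>> interval = NonlinearConstraint(lambda x: x[0]**2 + x[1], 0, 1)
>>> # Equality constraint: x[0] + x[1] == 1 (lb == ub)
>>> equality = NonlinearConstraint(lambda x: x[0] + x[1], 1, 1)
>>> # One-sided constraint: x[0]*x[1] >= -1 (ub is +inf)
>>> one_sided = NonlinearConstraint(lambda x: x[0]*x[1], -1, np.inf)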
- Parameters:
- fun: callable
The function defining the constraint. The signature is fun(x) -> array_like, shape (m,).
- lb, ub: array_like
Lower and upper bounds on the constraint. Each array must have the shape (m,) or be a scalar; in the latter case the bound will be the same for all components of the constraint. Use np.inf with an appropriate sign to specify a one-sided constraint. Set components of lb and ub equal to represent an equality constraint. Note that you can mix constraints of different types (interval, one-sided, or equality) by setting different components of lb and ub as necessary.
- jac: {callable, ‘2-point’, ‘3-point’, ‘cs’}, optional
Method of computing the Jacobian matrix (an m-by-n matrix, where element (i, j) is the partial derivative of f[i] with respect to x[j]). The keywords {‘2-point’, ‘3-point’, ‘cs’} select a finite difference scheme for the numerical estimation. A callable must have the signature jac(x) -> {ndarray, sparse matrix}, shape (m, n). Default is ‘2-point’. A sketch supplying an analytic jac and hess follows this parameter list.
- hess: {callable, ‘2-point’, ‘3-point’, ‘cs’, HessianUpdateStrategy, None}, optional
Method for computing the Hessian matrix. The keywords {‘2-point’, ‘3-point’, ‘cs’} select a finite difference scheme for numerical estimation. Alternatively, objects implementing the HessianUpdateStrategy interface can be used to approximate the Hessian; currently available implementations are BFGS and SR1. A callable must return the Hessian matrix of dot(fun, v) and must have the signature hess(x, v) -> {LinearOperator, sparse matrix, array_like}, shape (n, n), where v is an ndarray with shape (m,) containing the Lagrange multipliers.
- keep_feasible: array_like of bool, optional
Whether to keep the constraint components feasible throughout iterations. A single value sets this property for all components. Default is False. Has no effect for equality constraints.
- finite_diff_rel_step: None or array_like, optional
Relative step size for the finite difference approximation. Default is None, which will select a reasonable value automatically depending on a finite difference scheme.
- finite_diff_jac_sparsity: {None, array_like, sparse matrix}, optional
Defines the sparsity structure of the Jacobian matrix for finite difference estimation; its shape must be (m, n). If the Jacobian has only a few non-zero elements in each row, providing the sparsity structure will greatly speed up the computations. A zero entry means that the corresponding element of the Jacobian is identically zero. If provided, forces the use of the ‘lsmr’ trust-region solver. If None (default), dense differencing will be used.
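As referenced in the jac entry above, here is a minimal sketch of supplying analytic jac and hess callables. The functions fun, jac, and hess below are illustrative, with m = 2 constraint components and n = 2 variables:

>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint
>>> def fun(x):                      # m = 2 constraint components
...     return [x[0]**2 + x[1], x[0] - x[1]**2]
>>> def jac(x):                      # (m, n) Jacobian of fun
...     return [[2*x[0], 1.0],
...             [1.0, -2*x[1]]]
>>> def hess(x, v):                  # Hessian of v[0]*fun[0] + v[1]*fun[1]
...     return np.array([[2*v[0], 0.0],
...                      [0.0, -2*v[1]]])
>>> nlc = NonlinearConstraint(fun, -np.inf, 1.0, jac=jac, hess=hess)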
Notes
Finite difference schemes {‘2-point’, ‘3-point’, ‘cs’} may be used for approximating either the Jacobian or the Hessian, but not both simultaneously. Hence, whenever the Jacobian is estimated via finite differences, the Hessian must be estimated using one of the quasi-Newton strategies, as sketched below.
The scheme ‘cs’ is potentially the most accurate, but it requires the function to correctly handle complex inputs and to be analytically continuable to the complex plane. The scheme ‘3-point’ is more accurate than ‘2-point’ but requires twice as many operations.
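A sketch of the allowed combination (the constraint function here is illustrative): when jac is a finite difference keyword, pass a quasi-Newton object such as BFGS for hess:

>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint, BFGS
>>> # jac is estimated by finite differences, so hess must be a
>>> # quasi-Newton approximation rather than another FD scheme
>>> nlc = NonlinearConstraint(lambda x: x[0]**2 + x[1]**2, -np.inf, 1.0,
...                           jac='2-point', hess=BFGS())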
Examples
Constrain x[0] < sin(x[1]) + 1.9:

>>> from scipy.optimize import NonlinearConstraint
>>> import numpy as np
>>> con = lambda x: x[0] - np.sin(x[1])
>>> nlc = NonlinearConstraint(con, -np.inf, 1.9)
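The constraint object is then passed to a solver that supports it, such as scipy.optimize.minimize with method='trust-constr' (the objective and starting point below are illustrative):

>>> from scipy.optimize import minimize
>>> res = minimize(lambda x: (x[0] - 1)**2 + x[1]**2, [2.0, 0.5],
...                method='trust-constr', constraints=[nlc])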