nelder_mead

Implements the Nelder-Mead algorithm for maximizing a function of one or more variables.

quantecon.optimize.nelder_mead.nelder_mead(fun, x0, bounds=array([], shape=(0, 2), dtype=float64), args=(), tol_f=1e-10, tol_x=1e-10, max_iter=1000)

Maximize a scalar-valued function of one or more variables using the Nelder-Mead method.

This function is JIT-compiled in nopython mode using Numba.

Parameters:
fun : callable

The objective function to be maximized: fun(x, *args) -> float, where x is a 1-D array with shape (n,) and args is a tuple of the fixed parameters needed to completely specify the function. This function must be JIT-compiled in nopython mode using Numba.

x0 : ndarray(float, ndim=1)

Initial guess. Array of real elements of size (n,), where n is the number of independent variables.

bounds : ndarray(float, ndim=2), optional

Bounds for each variable in the proposed solution, encoded as a sequence of (min, max) pairs, one for each element of x. The default value specifies no bounds on x. A construction sketch follows the parameter list.

args : tuple, optional

Extra arguments passed to the objective function (see the second example under Examples below).

tol_f : scalar(float), optional(default=1e-10)

Tolerance to be used for the function value convergence test.

tol_x : scalar(float), optional(default=1e-10)

Tolerance to be used for the function domain convergence test.

max_iter : scalar(float), optional(default=1000)

The maximum number of allowed iterations.
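
The expected layout of the bounds array is easiest to see in code. The following is a minimal sketch, not part of the library's documentation: the objective g, the starting point, and all bound values are hypothetical and chosen only to illustrate the shapes of the arguments.

>>> from numba import njit
>>> import numpy as np
>>> import quantecon as qe
>>> @njit
... def g(x):
...     # Hypothetical concave objective of two variables
...     return -(x[0] ** 2 + x[1] ** 2)
...
>>> x0 = np.array([0.5, -0.5])
>>> bounds = np.array([[-10.0, 10.0],   # (min, max) for x[0]
...                    [-10.0, 10.0]])  # (min, max) for x[1]
>>> res = qe.optimize.nelder_mead(g, x0, bounds=bounds)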

Returns:
results : namedtuple

A namedtuple containing the following items:

"x" : Approximate local maximizer
"fun" : Approximate local maximum value
"success" : 1 if the algorithm successfully terminated, 0 otherwise
"nit" : Number of iterations
"final_simplex" : Vertices of the final simplex

Notes

This algorithm has a long history of successful use in applications, but it will usually be slower than an algorithm that uses first or second derivative information. In practice, it can have poor performance in high-dimensional problems and is not robust when the objective function is complicated. Additionally, there is currently no complete theory describing when the algorithm will successfully converge to the maximum, or how fast it will converge if it does.

References

[1] J. C. Lagarias, J. A. Reeds, M. H. Wright and P. E. Wright, Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions, SIAM J. Optim. 9, 112–147 (1998).

[2] S. Singer and S. Singer, Efficient implementation of the Nelder-Mead search algorithm, Appl. Numer. Anal. Comput. Math., vol. 1, no. 2, pp. 524–534, 2004.

[3] J. A. Nelder and R. Mead, A simplex method for function minimization, Comput. J. 7, 308–313 (1965).

[4] Gao, F. and Han, L., Implementing the Nelder-Mead simplex algorithm with adaptive parameters, Comput. Optim. Appl. (2012) 51: 259.

[5] Chase Coleman's tutorial on Nelder-Mead

[6] SciPy's Nelder-Mead implementation

Examples

>>> import numpy as np
>>> import quantecon as qe
>>> from numba import njit
>>> @njit
... def rosenbrock(x):
...     return -(100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2)
...
>>> x0 = np.array([-2, 1])
>>> qe.optimize.nelder_mead(rosenbrock, x0)
results(x=array([0.99999814, 0.99999756]), fun=-1.6936258239463265e-10,
        success=True, nit=110,
        final_simplex=array([[0.99998652, 0.9999727],
                             [1.00000218, 1.00000301],
                             [0.99999814, 0.99999756]]))
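
A second, purely illustrative example passes a fixed parameter through args. The objective quad below is hypothetical; its exact maximizer is (a, a), so the returned x should be close to [1.5, 1.5] for a = 1.5 (output omitted here).

>>> @njit
... def quad(x, a):
...     # Concave quadratic with exact maximizer (a, a)
...     return -((x[0] - a) ** 2 + (x[1] - a) ** 2)
...
>>> res = qe.optimize.nelder_mead(quad, np.array([0.0, 0.0]), args=(1.5,))
>>> res.x   # should be approximately array([1.5, 1.5])
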
class quantecon.optimize.nelder_mead.results(x, fun, success, nit, final_simplex)

Bases: tuple

Attributes:
final_simplex

Alias for field number 4

fun

Alias for field number 1

nit

Alias for field number 3

success

Alias for field number 2

x

Alias for field number 0

Methods

count(value, /)

Return number of occurrences of value.

index(value[, start, stop])

Return first index of value.
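
Because results subclasses tuple, a returned instance also supports positional indexing and unpacking. A minimal sketch, assuming res is a results instance returned by nelder_mead:

>>> x, fun, success, nit, final_simplex = res   # positional unpacking
>>> res[0] is res.x                             # field 0 is the maximizer x
True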
