
Optimization Toolbox 6

Solve standard and large-scale optimization problems
Optimization Toolbox provides widely used algorithms for standard and large-scale optimization. These algorithms solve constrained and unconstrained continuous and discrete problems. The toolbox includes functions for linear programming, quadratic programming, binary integer programming, nonlinear optimization, nonlinear least squares, systems of nonlinear equations, and multiobjective optimization. You can use them to find optimal solutions, perform tradeoff analyses, balance multiple design alternatives, and incorporate optimization methods into algorithms and models.

Key Features
- Interactive tools for defining and solving optimization problems and monitoring solution progress
- Solvers for nonlinear and multiobjective optimization
- Solvers for nonlinear least-squares, data fitting, and nonlinear equations
- Methods for solving quadratic and linear programming problems
- Methods for solving binary integer programming problems
- Parallel computing support in selected constrained nonlinear solvers

Finding a local minimum of the peaks function using a gradient-based optimization solver from Optimization Toolbox.

A blurred image recovered using the large-scale linear least-squares algorithm.

Defining, Solving, and Assessing Optimization Problems
Optimization Toolbox includes the most widely used methods for performing minimization and maximization. The toolbox implements both standard and large-scale algorithms, enabling you to solve problems by exploiting their sparsity or structure. You can access toolbox functions and solver options with the Optimization Tool or at the command line.

An optimization routine running at the command line (left) that calls MATLAB files defining the objective function (top right) and constraint equations (bottom right).

The Optimization Tool simplifies common optimization tasks. It enables you to:
- Select a solver and define an optimization problem
- Set and inspect optimization options and their default values for the selected solver
- Run problems and visualize intermediate and final results
- View solver-specific documentation in the optional quick reference window
- Import and export problem definitions, algorithm options, and results between the MATLAB workspace and the Optimization Tool
- Automatically generate MATLAB code to capture work and automate tasks
- Access Global Optimization Toolbox solvers

Nonlinear Programming
Optimization Toolbox provides widely used optimization algorithms for solving nonlinear programming problems in MATLAB. The toolbox includes solvers for unconstrained and constrained nonlinear optimization as well as solvers for least-squares optimization.

Unconstrained Nonlinear Optimization
Optimization Toolbox uses three algorithms to solve unconstrained nonlinear minimization problems:
- The quasi-Newton algorithm uses a mixed quadratic and cubic line search procedure and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula for updating the approximation of the Hessian matrix.
- The Nelder-Mead algorithm (or downhill simplex) is a direct-search algorithm that uses only function values (it does not require derivatives) and handles nonsmooth objective functions. Global Optimization Toolbox provides additional derivative-free optimization algorithms for nonlinear optimization.
- The trust-region algorithm is used for unconstrained nonlinear problems and is especially useful for large-scale problems where sparsity or structure can be exploited.
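As a minimal sketch of an unconstrained solve at the command line, the following runs the quasi-Newton (BFGS) algorithm of fminunc on Rosenbrock's function; the starting point and option settings here are illustrative, not recommendations:

```matlab
% Minimize Rosenbrock's function with the quasi-Newton algorithm.
% 'LargeScale','off' selects the medium-scale BFGS line-search method.
fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
x0 = [-1.9, 2];
options = optimset('LargeScale','off', 'Display','iter');
[x, fval] = fminunc(fun, x0, options)
% The iterates converge toward the minimizer [1 1], where fval is 0.
```

Nonsmooth problems can be handled the same way with fminsearch, which implements the derivative-free Nelder-Mead algorithm.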

Unconstrained nonlinear programming used to search an engine performance map for peak efficiency.

Constrained Nonlinear Optimization
Constrained nonlinear optimization problems are composed of nonlinear objective functions and may be subject to linear and nonlinear constraints. Optimization Toolbox uses four algorithms to solve these problems:
- The interior point algorithm is used for general nonlinear optimization. It is especially useful for large-scale problems that have sparsity or structure, and it tolerates user-defined objective and constraint function evaluation failures. It is based on a barrier function and optionally keeps all iterates strictly feasible with respect to bounds during the optimization run.
- The SQP algorithm is used for general nonlinear optimization. It honors bounds at all iterations and tolerates user-defined objective and constraint function evaluation failures.
- The active-set algorithm is used for general nonlinear optimization.
- The trust-region reflective algorithm is used for problems with bound constraints or linear equalities only. It is especially useful for large-scale problems.

The interior point and trust-region reflective algorithms enable you to estimate Hessians using different approaches. For the interior point algorithm, you can estimate Hessians using:
- BFGS (dense)
- Limited-memory BFGS (for large-scale problems)
- Hessian-multiply function
- Actual Hessian (sparse or dense)
- Finite differences of gradients, without requiring knowledge of the sparsity structure

For the trust-region reflective algorithm, you can use:
- Finite differences of gradients, with a Hessian of known sparsity structure
- Actual Hessian (sparse or dense)
- Hessian-multiply function

Additionally, the interior point and trust-region reflective algorithms enable you to calculate Hessian-times-vector products in a function without having to form the Hessian matrix explicitly. Optimization Toolbox also includes an interface to Ziena Optimization's KNITRO libraries for solving constrained nonlinear optimization problems.
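A minimal constrained sketch with fmincon, using the interior point algorithm on a small made-up problem (the objective, constraint, and starting point are illustrative):

```matlab
% Minimize x1^2 + x2^2 subject to x1 + x2 >= 1
% (written as -x1 - x2 <= -1 in A*x <= b form).
fun = @(x) x(1)^2 + x(2)^2;
x0 = [3, 1];
A = [-1 -1];
b = -1;
options = optimset('Algorithm','interior-point');
[x, fval] = fmincon(fun, x0, A, b, [], [], [], [], [], options)
% By symmetry the constrained minimum is at [0.5 0.5] with fval = 0.5.
```

Nonlinear constraints would be supplied through the tenth input slot as a function returning inequality and equality constraint values.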

Constrained nonlinear programming used to design an optimal suspension system.

Multiobjective Optimization
Multiobjective optimization is concerned with the minimization of multiple objective functions subject to a set of constraints. Optimization Toolbox provides functions for solving two formulations of multiobjective optimization problems:
- The goal attainment problem involves reducing the value of a linear or nonlinear vector function to attain the goal values given in a goal vector. The relative importance of the goals is indicated using a weight vector. The goal attainment problem may also be subject to linear and nonlinear constraints.
- The minimax problem involves minimizing the worst-case value of a set of multivariate functions, possibly subject to linear and nonlinear constraints.

Optimization Toolbox transforms both types of multiobjective problems into standard constrained optimization problems and then solves them using an active-set approach. Global Optimization Toolbox provides an additional multiobjective solver for nonsmooth problems.
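As a sketch of the goal attainment formulation, the following uses fgoalattain on two deliberately conflicting objectives; the objective functions, goals, and weights are made up for illustration:

```matlab
% Two conflicting objectives: stay near the origin and near [2 2].
fun = @(x) [x(1)^2 + x(2)^2; (x(1)-2)^2 + (x(2)-2)^2];
goal = [1; 1];        % desired value for each objective
weight = [1; 1];      % equal priority for both goals
x0 = [0; 0];
[x, fval, attainfactor] = fgoalattain(fun, x0, goal, weight)
% An attainment factor greater than zero indicates the goals
% cannot all be met simultaneously; the solver balances the shortfall
% across objectives according to the weights.
```

The minimax formulation is solved analogously with fminimax, which minimizes the largest component of the objective vector.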

Multiobjective optimization used to design a low-pass filter.

Nonlinear Least-Squares, Data Fitting, and Nonlinear Equations
Optimization Toolbox can solve linear and nonlinear least-squares problems, data fitting problems, and nonlinear equations.

Linear and Nonlinear Least-Squares Optimization
The toolbox uses two algorithms for solving constrained linear least-squares problems:
- The medium-scale algorithm implements an active-set algorithm and is used to solve problems with bounds and linear inequalities or equalities.
- The large-scale algorithm implements a trust-region reflective algorithm and is used to solve problems that have only bound constraints.

The toolbox uses two algorithms for solving nonlinear least-squares problems:
- The trust-region reflective algorithm implements the Levenberg-Marquardt algorithm using a trust-region approach. It is used for unconstrained and bound-constrained problems.
- The Levenberg-Marquardt algorithm implements a standard Levenberg-Marquardt method. It is used for unconstrained problems.
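A minimal nonlinear least-squares sketch with lsqnonlin, posing Rosenbrock's function in residual form (the residual vector and starting point are illustrative):

```matlab
% Minimize ||F(x)||^2 where F returns the residual vector.
% Rosenbrock's function equals 100*(x2 - x1^2)^2 + (1 - x1)^2,
% i.e. the sum of squares of the two residuals below.
F = @(x) [10*(x(2) - x(1)^2); 1 - x(1)];
x0 = [-1.2; 1];
[x, resnorm] = lsqnonlin(F, x0)
% The default trust-region reflective algorithm is used; for an
% unconstrained problem you can instead select
% optimset('Algorithm','levenberg-marquardt').
```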

Fitting a transcendental equation using nonlinear least squares.

Data Fitting The toolbox provides a specialized interface for data fitting problems in which you want to find the member of a family of nonlinear functions that best fits a set of data points. The toolbox uses the same algorithms for data fitting problems that it uses for nonlinear least-squares problems.
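As a sketch of the data-fitting interface, the following fits a two-parameter exponential family with lsqcurvefit; the model family, true parameters, and synthetic noisy data are made up for illustration:

```matlab
% Generate synthetic data from y = 2.5*exp(-1.3*x) plus noise.
xdata = (0:0.3:3)';
ydata = 2.5*exp(-1.3*xdata) + 0.05*randn(size(xdata));

% Family of candidate models y = p1*exp(p2*x); find the best member.
model = @(p, x) p(1)*exp(p(2)*x);
p0 = [1; -1];                       % initial parameter guess
p = lsqcurvefit(model, p0, xdata, ydata)
% p should recover values near [2.5; -1.3], up to the noise level.
```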

Fitting a nonlinear exponential equation using least-squares curve fitting.

Nonlinear Equation Solving Optimization Toolbox implements a dogleg trust-region algorithm for solving a system of nonlinear equations where there are as many equations as unknowns. The toolbox can also solve this problem using the trust-region reflective and Levenberg-Marquardt algorithms.
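A minimal sketch of nonlinear equation solving with fsolve, using a standard two-equation example (the system and starting point are illustrative):

```matlab
% Solve the 2x2 system
%   2*x1 - x2 = exp(-x1)
%   -x1 + 2*x2 = exp(-x2)
% written as F(x) = 0.
F = @(x) [2*x(1) - x(2) - exp(-x(1));
          -x(1) + 2*x(2) - exp(-x(2))];
x0 = [-5; -5];
options = optimset('Display','iter');   % default dogleg trust-region
x = fsolve(F, x0, options)
```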

Solving an n-dimensional Rosenbrock function using the nonlinear equation solver.

Linear Programming
Engineers and scientists use mathematical modeling to describe the behavior of systems under study. System requirements, when defined mathematically as constraints on the decision variables input to the mathematical system model, form a mathematical program. This mathematical program, or optimization problem description, can then be solved using optimization techniques. Linear programming is one class of mathematical programs in which the objective and constraints consist of linear relationships: a linear expression for the objective function together with linear equality and inequality constraints. Optimization Toolbox includes three algorithms for solving this type of problem:
- The interior point algorithm is based on a primal-dual predictor-corrector method. It is especially useful for large-scale problems that have structure or can be defined using sparse matrices.
- The active-set algorithm minimizes the objective at each iteration over the active set (a subset of the constraints that are locally active) until it reaches a solution.
- The simplex algorithm is a systematic procedure for generating and testing candidate vertex solutions to a linear program. It is the most widely used algorithm for linear programming.
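A minimal linprog sketch on a small made-up problem (the cost vector and constraint data are illustrative):

```matlab
% Minimize f'*x subject to A*x <= b and x >= 0.
f = [-5; -4; -6];           % linear objective coefficients
A = [ 1 -1  1;
      3  2  4;
      3  2  0];
b = [20; 42; 30];
lb = zeros(3, 1);           % lower bounds x >= 0
[x, fval] = linprog(f, A, b, [], [], lb)
```

Equality constraints and upper bounds would go in the fourth, fifth, and seventh argument slots.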

Linear programming used in the design of a plant for generating steam and electrical power.

Binary Integer Programming
Binary integer programming problems involve minimizing a linear objective function subject to linear equality and inequality constraints, where each variable in the optimal solution must be either 0 or 1. Optimization Toolbox solves these problems using a branch-and-bound algorithm that:
- Searches for a feasible binary integer solution
- Updates the best binary point found as the search tree grows
- Verifies that no better solution is possible by solving a series of linear programming relaxation problems
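A minimal bintprog sketch in the spirit of the investment example below; the returns, costs, and budget are made-up numbers:

```matlab
% Choose which of four investments to make (x_i is 0 or 1) to maximize
% total return subject to a budget constraint on total cost.
f = -[0.2; 0.3; 0.5; 0.1];      % negate returns to maximize via a minimizer
A = [5 7 12 3];                 % cost of each investment
b = 14.9;                       % available budget
x = bintprog(f, A, b)           % x is a binary vector of decisions
```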


Binary integer programming used to solve an investment problem.

Quadratic Programming
Quadratic programming problems involve minimizing a multivariate quadratic function subject to bounds and linear equality and inequality constraints. Optimization Toolbox includes three algorithms for solving quadratic programs:
- The interior-point-convex algorithm solves convex problems with any combination of constraints.
- The trust-region-reflective algorithm solves problems with bound constraints or linear equality constraints only.
- The active-set algorithm solves problems with any combination of constraints.

Both the interior-point-convex and trust-region-reflective algorithms are large-scale, meaning they can handle large, sparse problems. Furthermore, the interior-point-convex algorithm has optimized internal linear algebra routines and a new presolve module that can improve speed, numerical stability, and the detection of infeasibility.
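A minimal quadprog sketch using the interior-point-convex algorithm on a small made-up convex problem (the matrices below are illustrative):

```matlab
% Minimize 0.5*x'*H*x + f'*x subject to A*x <= b.
H = [ 1 -1;
     -1  2];                  % positive definite, so the problem is convex
f = [-2; -6];
A = [ 1 1;
     -1 2;
      2 1];
b = [2; 2; 3];
options = optimset('Algorithm','interior-point-convex');
[x, fval] = quadprog(H, f, A, b, [], [], [], [], [], options)
```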


Quadratic programming used to perform a returns-based style analysis for three mutual funds.

Solving Optimization Problems Using Parallel Computing
Optimization Toolbox can be used with Parallel Computing Toolbox to solve problems that benefit from parallel computation. You can use parallel computing to decrease time to solution by enabling built-in parallel computing support or by defining a custom parallel computing implementation of an optimization problem. Built-in support for parallel computing in Optimization Toolbox enables you to accelerate the gradient estimation step in select solvers for constrained nonlinear optimization problems and for multiobjective goal attainment and minimax problems.


Accelerating time to solution for an electrostatics problem using the built-in support for parallel computing in a nonlinear programming solver. The built-in functionality is enabled by specifying the UseParallel option (left) for the objective (middle right) and constraint (bottom right) functions, with the solution shown in the top right.

You can customize a parallel computing implementation by explicitly defining the optimization problem to use parallel computing functionality. You can define either an objective function or a constraint function to use parallel computing, enabling you to decrease the time required to evaluate the objective or constraint.
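As a sketch of enabling the built-in support, the following turns on parallel gradient estimation in fmincon; myObjective, myConstraints, x0, lb, and ub are placeholder user-defined names, and Parallel Computing Toolbox is assumed to be installed:

```matlab
% Open a local pool of workers (Parallel Computing Toolbox).
matlabpool open 2

% Enable parallel finite-difference gradient estimation in the solver.
options = optimset('Algorithm','interior-point', ...
                   'UseParallel','always');
[x, fval] = fmincon(@myObjective, x0, [], [], [], [], ...
                    lb, ub, @myConstraints, options);

matlabpool close
```

A custom implementation would instead parallelize inside myObjective or myConstraints themselves, for example with a parfor loop over independent evaluations.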


Accelerating time to solution (top right) for a suspension system design (bottom left and bottom right) subject to uncertainty by customizing the objective function with a single line change in code (top left).

Product Details, Demos, and System Requirements: www.mathworks.com/products/optimization
Trial Software: www.mathworks.com/trialrequest
Sales: www.mathworks.com/contactsales
Technical Support: www.mathworks.com/support
Online User Community: www.mathworks.com/matlabcentral
Training Services: www.mathworks.com/training
Third-Party Products and Services: www.mathworks.com/connections
Worldwide Contacts: www.mathworks.com/contact

© 2011 The MathWorks, Inc. MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be trademarks or registered trademarks of their respective holders.