
CHAPTER

13

One-Dimensional Unconstrained Optimization

This section will describe techniques to find the minimum or maximum of a function of a single variable, f(x). A useful image in this regard is the one-dimensional, "roller coaster"-like function depicted in Fig. 13.1. Recall from Part Two that root location was complicated by the fact that several roots can occur for a single function. Similarly, both local and global optima can occur in optimization. Such cases are called multimodal. In almost all instances, we will be interested in finding the absolute highest or lowest value of a function. Thus, we must take care that we do not mistake a local result for the global optimum.

Distinguishing a global from a local extremum can be a very difficult problem in the general case. There are three usual ways to approach it. First, insight into the behavior of low-dimensional functions can sometimes be obtained graphically. Second, optima can be found on the basis of widely varying, and perhaps randomly generated, starting guesses, with the largest of the results selected as global. Third, the starting point associated with a local optimum can be perturbed to see whether the routine returns a better point or always comes back to the same one. Although all these approaches can have utility, the fact is that in some problems (usually the large ones), there may be no practical way to ensure that you have located a global optimum. However, although you should always be sensitive to the issue, it is fortunate that there are numerous engineering problems where you can locate the global optimum in an unambiguous fashion.

FIGURE 13.1 A function that asymptotically approaches zero at plus and minus ∞ and has two maximum and two minimum points in the vicinity of the origin. The two points to the right are local optima, whereas the two to the left are global.

Just as in root location, optimization in one dimension can be divided into bracketing and open methods. As described in the next section, the golden-section search is an example of a bracketing method that depends on initial guesses that bracket a single optimum. This is followed by an alternative approach, parabolic interpolation, which often converges faster than the golden-section search but sometimes diverges. Another method described in this chapter is an open method based on the idea from calculus that the minimum or maximum can be found by solving f′(x) = 0. This reduces the optimization problem to finding the root of f′(x) using techniques of the sort described in Part Two. We will demonstrate one version of this approach, Newton's method. Finally, an advanced hybrid approach, Brent's method, is described. It combines the reliability of the golden-section search with the speed of parabolic interpolation.
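The multistart strategy described above (run a local optimizer from widely varying, randomly generated starting guesses and keep the best result) can be sketched in a few lines of Python. This is a minimal sketch, not anything from this chapter: the name multistart_max, the seeding, and the number of starts are our own illustrative assumptions, and local_opt stands for any single-variable local maximizer.

```python
import random

def multistart_max(f, local_opt, lo, hi, nstarts=20, seed=0):
    """Run a local optimizer from widely varying, randomly generated
    starting guesses on [lo, hi] and keep the best result as the
    candidate global maximum (the second strategy described above)."""
    rng = random.Random(seed)                    # seeded for reproducibility
    best_x, best_f = None, -float("inf")
    for _ in range(nstarts):
        x = local_opt(f, rng.uniform(lo, hi))    # local maximizer from x0
        if f(x) > best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

Any of the single-variable methods developed later in this chapter (golden-section search, parabolic interpolation, Newton's method) could serve as the local_opt routine.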

13.1 GOLDEN-SECTION SEARCH

In solving for the root of a single nonlinear equation, the goal was to find the value of the variable x that yields a zero of the function f(x). Single-variable optimization has the goal of finding the value of x that yields an extremum, either a maximum or a minimum, of f(x). The golden-section search is a simple, general-purpose, single-variable search technique. It is similar in spirit to the bisection approach for locating roots in Chap. 5. Recall that bisection hinged on defining an interval, specified by a lower guess x_l and an upper guess x_u, that bracketed a single root. The presence of a root between these bounds was verified by determining that f(x_l) and f(x_u) had different signs. The root was then estimated as the midpoint of this interval,

x_r = (x_l + x_u) / 2

The final step in a bisection iteration involved determining a new, smaller bracket. This was done by replacing whichever of the bounds x_l or x_u had a function value with the same sign as f(x_r). One advantage of this approach was that the new value x_r replaced one of the old bounds.

Now we can develop a similar approach for locating the optimum of a one-dimensional function. For simplicity, we will focus on the problem of finding a maximum. When we discuss the computer algorithm, we will describe the minor modifications needed to handle a minimum. As with bisection, we can start by defining an interval that contains a single answer. That is, the interval should contain a single maximum, and hence is called unimodal. We can adopt the same nomenclature as for bisection, where x_l and x_u define the lower and upper bounds, respectively, of such an interval. However, in contrast to bisection, we need a new strategy for finding a maximum within the interval. Rather than using only two function values (which are sufficient to detect a sign change, and hence a zero), we would need three function values to detect whether a maximum occurred. Thus, an additional point within the interval has to be chosen. Next, we have to pick a fourth point.
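To make the strategy concrete, here is a minimal Python sketch of a golden-section search for a maximum. The function name golden_max, the tolerance defaults, and the stopping rule are our own assumptions; the interior-point spacing follows the golden-ratio construction developed in this section.

```python
import math

def golden_max(f, xl, xu, es=1e-4, maxit=200):
    """Golden-section search for the maximum of a function that is
    unimodal on [xl, xu].  es is an approximate percent relative error."""
    R = (math.sqrt(5) - 1) / 2          # golden ratio conjugate, ~0.61803
    d = R * (xu - xl)
    x1, x2 = xl + d, xu - d             # the two interior points (x1 > x2)
    f1, f2 = f(x1), f(x2)
    xopt = x1 if f1 > f2 else x2
    for _ in range(maxit):
        d = R * d                       # interval shrinks by R each pass
        if f1 > f2:                     # maximum must lie in [x2, xu]
            xl, x2, f2 = x2, x1, f1
            x1 = xl + d
            f1 = f(x1)
        else:                           # maximum must lie in [xl, x1]
            xu, x1, f1 = x1, x2, f2
            x2 = xu - d
            f2 = f(x2)
        xopt = x1 if f1 > f2 else x2    # best interior point so far
        if xopt != 0:
            ea = (1 - R) * abs((xu - xl) / xopt) * 100
            if ea < es:
                break
    return xopt, max(f1, f2)
```

Because only one new function evaluation is needed per iteration (the other interior point is reused), the search is economical when f is expensive to evaluate.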



FIGURE 13.6 Graphical description of parabolic interpolation.

2. Time-consuming evaluation. For pedagogical reasons, we use simple functions in most of our examples. You should understand that a function can be very complex and time-consuming to evaluate. For example, in a later part of this book, we will describe how optimization can be used to estimate the parameters of a model consisting of a system of differential equations. For such cases, the "function" involves time-consuming model integration. Any method that minimizes such evaluations would be advantageous.

13.2 PARABOLIC INTERPOLATION

Parabolic interpolation takes advantage of the fact that a second-order polynomial often provides a good approximation to the shape of f(x) near an optimum (Fig. 13.6). Just as there is only one straight line connecting two points, there is only one quadratic polynomial, or parabola, connecting three points. Thus, if we have three points that jointly bracket an optimum, we can fit a parabola to the points. Then we can differentiate it, set the result equal to zero, and solve for an estimate of the optimal x. It can be shown through some algebraic manipulations that the result is

x_3 = [f(x_0)(x_1^2 − x_2^2) + f(x_1)(x_2^2 − x_0^2) + f(x_2)(x_0^2 − x_1^2)]
      / [2 f(x_0)(x_1 − x_2) + 2 f(x_1)(x_2 − x_0) + 2 f(x_2)(x_0 − x_1)]        (13.7)

where x_0, x_1, and x_2 are the initial guesses, and x_3 is the value of x that corresponds to the maximum value of the parabolic fit to the guesses. After generating the new point, there are two strategies for selecting the points for the next iteration. The simplest approach, which is similar to the secant method, is to merely assign the new points sequentially. That is, for the new iteration, x_0 = x_1, x_1 = x_2, and x_2 = x_3. Alternatively, as illustrated in the following example, a bracketing approach, similar to bisection or the golden-section search, can be employed.
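The sequential strategy can be sketched in Python using Eq. (13.7) directly. The function name parabolic_max, the iteration cap, and the tolerance are our assumptions:

```python
def parabolic_max(f, x0, x1, x2, maxit=20, tol=1e-8):
    """Successive parabolic interpolation for a maximum.  Each pass fits a
    parabola through three points via Eq. (13.7) and replaces the points
    sequentially, as in the secant method."""
    for _ in range(maxit):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        den = (2 * f0 * (x1 - x2) + 2 * f1 * (x2 - x0)
               + 2 * f2 * (x0 - x1))
        if den == 0:
            break                       # points are collinear; no parabola
        x3 = (f0 * (x1**2 - x2**2) + f1 * (x2**2 - x0**2)
              + f2 * (x0**2 - x1**2)) / den
        if abs(x3 - x2) < tol:          # new point has stopped moving
            return x3
        x0, x1, x2 = x1, x2, x3         # sequential replacement
    return x2
```

For f(x) = 2 sin x − x^2/10 with x_0 = 0, x_1 = 1, x_2 = 4, the first pass yields x_3 ≈ 1.5055 and the iterates then close in on the true maximum near x = 1.4276.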


We should mention that just like the false-position method, parabolic interpolation can get hung up with just one end of the interval converging. Thus, convergence can be slow. For example, notice that in our example, 1.0000 was an endpoint for most of the iterations. This method, as well as others using third-order polynomials, can be formulated into algorithms that contain convergence tests, careful selection strategies for the points to retain on each iteration, and attempts to minimize round-off error accumulation.

13.3 NEWTON’S METHOD

Recall that the Newton-Raphson method of Chap. 6 is an open method that finds the root x of a function such that f(x) = 0. The method is summarized as

x_{i+1} = x_i − f(x_i) / f′(x_i)

A similar open approach can be used to find an optimum of f(x) by defining a new function, g(x) = f′(x). Thus, because the same optimal value x* satisfies both

f′(x*) = g(x*) = 0

we can use the following,

x_{i+1} = x_i − f′(x_i) / f″(x_i)        (13.8)

as a technique to find the minimum or maximum of f(x). It should be noted that this equation can also be derived by writing a second-order Taylor series for f(x) and setting the derivative of the series equal to zero. Newton’s method is an open method similar to Newton-Raphson because it does not require initial guesses that bracket the optimum. In addition, it also shares the disadvantage that it may be divergent. Finally, it is usually a good idea to check that the second derivative has the correct sign to confirm that the technique is converging on the result you desire.

EXAMPLE 13.3

Newton’s Method

Problem Statement. Use Newton's method to find the maximum of

f(x) = 2 sin x − x^2/10

with an initial guess of x_0 = 2.5.

Solution. The first and second derivatives of the function can be evaluated as

f′(x) = 2 cos x − x/5

f″(x) = −2 sin x − 1/5


which can be substituted into Eq. (13.8) to give

x_{i+1} = x_i − (2 cos x_i − x_i/5) / (−2 sin x_i − 1/5)

Substituting the initial guess yields

x_1 = 2.5 − (2 cos 2.5 − 2.5/5) / (−2 sin 2.5 − 1/5) = 0.99508

which has a function value of 1.57859. The second iteration gives

x_2 = 0.99508 − (2 cos 0.99508 − 0.99508/5) / (−2 sin 0.99508 − 1/5) = 1.46901

which has a function value of 1.77385. The process can be repeated, with the results tabulated below:

i      x          f(x)       f′(x)       f″(x)
0      2.5        0.57194    −2.10229    −1.39694
1      0.99508    1.57859     0.88985    −1.87761
2      1.46901    1.77385    −0.09058    −2.18965
3      1.42764    1.77573    −0.00020    −2.17954
4      1.42755    1.77573     0.00000    −2.17952

Thus, within four iterations, the result converges rapidly on the true value.
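The iteration of Example 13.3 can be reproduced with a short Python sketch. The function name newton_opt and the stopping rule are our assumptions; the derivatives are those computed above.

```python
import math

def newton_opt(df, d2f, x0, es=1e-7, maxit=50):
    """Newton's method for an optimum, Eq. (13.8): iterate
    x_{i+1} = x_i - f'(x_i)/f''(x_i) until the step is negligible.
    df and d2f are the first and second derivatives of f."""
    x = x0
    for _ in range(maxit):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < es:
            break
    return x

# Derivatives of f(x) = 2 sin x - x**2/10 from Example 13.3
df  = lambda x: 2 * math.cos(x) - x / 5     # f'(x)
d2f = lambda x: -2 * math.sin(x) - 1 / 5    # f''(x)
```

Starting from x_0 = 2.5, the iterates match the table above, and the negative second derivative at the result confirms that a maximum has been found.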

Although Newton’s method works well in some cases, it is impractical for cases where the derivatives cannot be conveniently evaluated. For these cases, other approaches that do not involve derivative evaluation are available. For example, a secant-like version of Newton’s method can be developed by using finite-difference approximations for the derivative evaluations. A bigger reservation regarding the approach is that it may diverge based on the nature of the function and the quality of the initial guess. Thus, it is usually employed only when we are close to the optimum. As described next, hybrid techniques that use bracketing approaches far from the optimum and open methods near the optimum attempt to exploit the strong points of both approaches.
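A secant-like variant of this kind can be sketched by substituting centered finite differences for the derivatives in Eq. (13.8). The perturbation scheme below (a fraction delta of x, as in Prob. 13.11) and the function name are our assumptions:

```python
def newton_opt_fd(f, x0, delta=0.01, es=1e-6, maxit=50):
    """Newton optimization using centered finite-difference estimates of
    f'(x) and f''(x) in place of analytical derivatives."""
    x = x0
    for _ in range(maxit):
        dx = delta * x if x != 0 else delta              # perturbation size
        d1 = (f(x + dx) - f(x - dx)) / (2 * dx)          # ~ f'(x)
        d2 = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2  # ~ f''(x)
        step = d1 / d2
        x -= step
        if abs(step) < es:
            break
    return x
```

Only function values are required, at the cost of three evaluations per iteration.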

13.4 BRENT’S METHOD

Recall that in Sec. 6.4, we described Brent's method for root location. This hybrid method combined several root-finding methods into a single algorithm that balanced reliability with efficiency. Brent also developed a similar approach for one-dimensional minimization. It combines the slow, dependable golden-section search with the faster, but possibly unreliable, parabolic interpolation. The algorithm first attempts parabolic interpolation and keeps applying it as long as acceptable results are obtained. If not, it uses the golden-section search to get matters in hand.

Figure 13.7 presents pseudocode for the algorithm based on a MATLAB software M-file developed by Cleve Moler (2005). It represents a stripped-down version of the professional minimization function employed in MATLAB. Note that it requires another function that holds the equation for which the minimum is being evaluated.

This concludes our treatment of methods to solve for the optima of functions of a single variable. Some engineering examples are presented in Chap. 16. In addition, the techniques described here are an important element of some procedures to optimize multivariable functions, as discussed in Chap. 14.
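In the same spirit, a simplified Brent-style minimizer can be sketched in Python. This is a sketch after the logic of Moler's routine (a parabolic step is accepted only when it is well behaved and stays inside the bracket; otherwise a golden-section step is taken), not the full safeguarded algorithm; the function name and tolerances are our assumptions.

```python
import math

def brent_min(f, xl, xu, tol=1e-8, maxit=100):
    """Simplified Brent-style minimization on [xl, xu]: attempt a parabolic
    step; when it is unacceptable, fall back on a golden-section step."""
    C = (3 - math.sqrt(5)) / 2            # golden-section fraction, ~0.38197
    x = w = v = xl + C * (xu - xl)        # x: best point; w, v: next best
    fx = fw = fv = f(x)
    d = e = 0.0
    for _ in range(maxit):
        m = 0.5 * (xl + xu)
        tol1 = tol * abs(x) + 1e-12
        if abs(x - m) <= 2 * tol1 - 0.5 * (xu - xl):
            break                         # bracket is tight enough
        golden = True
        if abs(e) > tol1:
            # trial parabola through (x, fx), (w, fw), (v, fv)
            r = (x - w) * (fx - fv)
            q = (x - v) * (fx - fw)
            p = (x - v) * q - (x - w) * r
            q = 2.0 * (q - r)
            if q > 0:
                p = -p
            q = abs(q)
            # accept only a modest step that lands inside the bracket
            if abs(p) < abs(0.5 * q * e) and xl < x + p / q < xu:
                e, d = d, p / q
                golden = False
        if golden:                        # step into the larger subinterval
            e = (xu if x < m else xl) - x
            d = C * e
        u = x + (d if abs(d) >= tol1 else math.copysign(tol1, d))
        fu = f(u)
        if fu <= fx:                      # u is the new best point
            if u < x:
                xu = x
            else:
                xl = x
            v, w, x = w, x, u
            fv, fw, fx = fw, fx, fu
        else:                             # keep x; u tightens the bracket
            if u < x:
                xl = u
            else:
                xu = u
            if fu <= fw or w == x:
                v, w, fv, fw = w, u, fw, fu
            elif fu <= fv or v == x or v == w:
                v, fv = u, fu
    return x, fx
```

To locate a maximum of f with this routine, minimize −f.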

PROBLEMS

13.1 Given the formula

f(x) = −x^2 + 8x − 12

(a) Determine the maximum and the corresponding value of x for this function analytically (i.e., using differentiation).
(b) Verify that Eq. (13.7) yields the same results based on initial guesses of x_0 = 0, x_1 = 2, and x_2 = 6.

13.2 Given

f(x) = −1.5x^6 − 2x^4 + 12x

13.8 Employ the following methods to find the maximum of the function from Prob. 13.7:
(a) Golden-section search (x_l = −2, x_u = 1, ε_s = 1%).
(b) Parabolic interpolation (x_0 = −2, x_1 = −1, x_2 = 1, iterations = 4). Select new points sequentially as in the secant method.
(c) Newton's method (x_0 = −1, ε_s = 1%).

13.9 Consider the following function:

f(x) = 2x + 3/x

(a) Plot the function.
(b) Use analytical methods to prove that the function is concave for all values of x.
(c) Perform 10 iterations of parabolic interpolation to locate the minimum (x_0 = 0.1, x_1 = 0.5, x_2 = 5). Select new points in the same fashion as in Example 13.2. Comment on the convergence of your results.
(d) Differentiate the function and then use a root-location method to solve for the maximum f(x) and the corresponding value of x.

13.3 Solve for the value of x that maximizes f(x) in Prob. 13.2 using the golden-section search. Employ initial guesses of x_l = 0 and x_u = 2 and perform three iterations.

13.4 Repeat Prob. 13.3, except use parabolic interpolation in the same fashion as Example 13.2. Employ initial guesses of x_0 = 0, x_1 = 1, and x_2 = 2 and perform three iterations.

13.5 Repeat Prob. 13.3 but use Newton's method. Employ an initial guess of x_0 = 2 and perform three iterations.

13.6 Employ the following methods to find the maximum of

f(x) = 4x − 1.8x^2 + 1.2x^3 − 0.3x^4

(a) Golden-section search (x_l = −2, x_u = 4, ε_s = 1%).
(b) Parabolic interpolation (x_0 = 1.75, x_1 = 2, x_2 = 2.5, iterations = 4). Select new points sequentially as in the secant method.
(c) Newton's method (x_0 = 3, ε_s = 1%).

13.7 Consider the following function:

f(x) = −x^4 − 2x^3 − 8x^2 − 5x

Use analytical and graphical methods to show the function has a maximum for some value of x in the range −2 ≤ x ≤ 1.

13.10 Consider the following function:

f(x) = 3 + 6x + 5x^2 + 3x^3 + 4x^4

Locate the minimum by finding the root of the derivative of this function. Use bisection with initial guesses of x_l = −2 and x_u = 1.

13.11 Determine the minimum of the function from Prob. 13.10 with the following methods:
(a) Newton's method (x_0 = −1, ε_s = 1%).
(b) Newton's method, but using finite-difference approximations for the derivative estimates:

f′(x_i) = [f(x_i + δx_i) − f(x_i − δx_i)] / (2δx_i)

f″(x_i) = [f(x_i + δx_i) − 2f(x_i) + f(x_i − δx_i)] / (δx_i)^2

where δ = a perturbation fraction (= 0.01). Use an initial guess of x_0 = −1 and iterate to ε_s = 1%.

13.12 Develop a program using a programming or macro language to implement the golden-section search algorithm. Design the program so that it is expressly set up to locate a maximum. The subroutine should have the following features: