f(x + αd) ≤ f(x) + 0.0001 α d'g(x),     (1)

where d is the chosen search direction, g(x) = ∇f(x), the gradient of f at x, and d'g(x), the directional derivative of f(x) at x along d, is always a negative number. In addition, α must satisfy

|d'g(x + αd)| ≤ 0.9 |d'g(x)|.     (2)
The convergence of the BFGS variable metric algorithm under the conditions
(1) and (2) has been studied by Powell [6], while the convergence of the conjugate
gradient method has been studied by Shanno [8].
The major difference between the two methods insofar as the linear search is concerned is that if the first trial α satisfies (1) and (2), it is accepted when a variable metric method is used, but at least two trial α's are required before an α satisfying (1) and (2) is accepted for the conjugate gradient method. Reasons for requiring this are detailed in Shanno [9].
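Conditions (1) and (2) are sufficient-decrease and curvature tests on the trial step length. The following sketch illustrates how such an acceptance test is evaluated; it assumes the constants 0.0001 and 0.9, and all function and variable names are our own choices for illustration:

```python
def dot(u, v):
    """Inner product u'v."""
    return sum(ui * vi for ui, vi in zip(u, v))

def acceptable_step(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Test a trial step length alpha against conditions (1) and (2).

    (1) sufficient decrease: f(x + alpha d) <= f(x) + c1 alpha d'g(x)
    (2) curvature:           |d'g(x + alpha d)| <= c2 |d'g(x)|
    d'g(x) must be negative, i.e. d is a descent direction.
    """
    dg0 = dot(d, grad(x))                 # directional derivative at x (< 0)
    x_new = [xi + alpha * di for xi, di in zip(x, d)]
    sufficient = f(x_new) <= f(x) + c1 * alpha * dg0
    curvature = abs(dot(d, grad(x_new))) <= c2 * abs(dg0)
    return sufficient and curvature

# Example: f(x) = x'x from x = (1, 1) along the steepest-descent direction.
f = lambda x: dot(x, x)
grad = lambda x: [2.0 * xi for xi in x]
x = [1.0, 1.0]
d = [-gi for gi in grad(x)]
print(acceptable_step(f, grad, x, d, alpha=0.5))   # True: alpha = 0.5 reaches the minimizer
```

With alpha = 1.0 the example overshoots to (-1, -1), where neither condition holds and the trial step is rejected.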
ACM Transactions on Mathematical Software, Vol. 6, No. 4, December 1980, Pages 618-622.

The second difference between the two methods is that for a variable metric method, α = 1 is always the initial α tried, while for the conjugate gradient method, α = 1 is tried only for restart iterations, whereas for nonrestart iterations the initial α at step k + 1, denoted by α_{k+1}, is chosen to be

α_{k+1} = α_k (d_k'g_k) / (d_{k+1}'g_{k+1}),

where d_k and g_k and d_{k+1} and g_{k+1} are the search vectors and gradients at the kth and (k + 1)st points, respectively.
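For nonrestart iterations the initial trial step is thus α_{k+1} = α_k (d_k'g_k)/(d_{k+1}'g_{k+1}); a minimal sketch of this rule (the helper and variable names are ours):

```python
def dot(u, v):
    """Inner product u'v."""
    return sum(ui * vi for ui, vi in zip(u, v))

def initial_alpha(alpha_k, d_k, g_k, d_k1, g_k1):
    """Initial trial step for a nonrestart conjugate gradient iteration:
    alpha_{k+1} = alpha_k * (d_k' g_k) / (d_{k+1}' g_{k+1}).
    Both inner products are negative for descent directions, so the
    result is a positive step.
    """
    return alpha_k * dot(d_k, g_k) / dot(d_k1, g_k1)

# The previous step was 0.5; the new directional derivative is four times
# smaller in magnitude, so the initial trial step grows accordingly.
print(initial_alpha(0.5, [-2.0, 0.0], [2.0, 0.0], [-1.0, 0.0], [1.0, 0.0]))  # 2.0
```

The effect is to scale the first trial step so that the predicted initial decrease along the new direction matches the decrease achieved along the previous one.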
The linear search contains safeguards to ensure that the search procedure
cannot become stuck or attempt to move past a local maximum to a more distant
local minimum.
Convergence is determined to have occurred when ||g|| < ε max(1, ||x||), where || · || is the Euclidean norm and ε is user supplied.
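This stopping test scales the gradient tolerance by the size of the current iterate; a minimal sketch (the function name is our own):

```python
import math

def converged(g, x, eps):
    """Stopping test: ||g|| < eps * max(1, ||x||), Euclidean norms."""
    norm = lambda v: math.sqrt(sum(vi * vi for vi in v))
    return norm(g) < eps * max(1.0, norm(x))

# ||x|| = 5 relaxes the gradient tolerance to 5e-5; near the origin the
# max(1, .) term keeps the threshold from collapsing to zero.
print(converged([1e-7, 0.0], [3.0, 4.0], eps=1e-5))   # True
```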
3. Test Results
Table I contains test results for a variety of test problems for both the BFGS and conjugate gradient methods. The test functions are Wood's function for the listed initial estimates, the extended Rosenbrock function documented in [7], Brodlie's [1] variable-dimensioned Watson function, Oren's [5] power function, Powell's function, the trigonometric function, Mancino's function, and the boundary value and Broyden-Toint problems.
Table I
(ITER = number of iterations; IFUN = number of function evaluations)

                                     BFGS            Conjugate Gradient
                                  ITER   IFUN          ITER   IFUN
WOOD (n = 4)
  -3, -1, -3, -1                    36     43            48    106
  -3, 1, -3, 1                      91    114            90    210
  -1.2, 1, -1.2, 1                  87    107            77    181
  -1.2, 1, 1.2, 1                   48     57            46    100
EROSEN
  -1.2, 1, 1, 1, 1 (n = 5)         117    150           132    278
  -1, ..., -1 (n = 10)             674    893           946   1940
WATSON
  0, ..., 0 (n = 5)                 37     39            34     69
  0, ..., 0 (n = 10)                92     95           179    360
POWER
  1, ..., 1 (n = 20)               280    281            16     33
  1, ..., 1 (n = 50)               539    540            30     61
POWELL (n = 4)
  -3, -1, 0, 1                      48     49            28     57
TRIG
  (n = 5)                           20     22            20     41
  (n = 10)                          35     37            44     89
  (n = 15)                          57     59           100    201
MANCINO
  (n = 10)                           9     10            12     28
  (n = 20)                          10     14            14     33
  (n = 30)                          11     17            18     49
BOUNDARY VALUE
  (n = 10)                          28     30            25     51
  (n = 20)                          56     58            48     97
  (n = 30)                          86     88           121    243
BROYDEN*TOINT
  -1, ..., -1 (n = 10)              27     28            23     47
  -1, ..., -1 (n = 20)              36     37            36     73
  -1, ..., -1 (n = 30)              47     48            46     93
The authors are deeply indebted to Professor M.J.D. Powell for many useful suggestions concerning the linear search method adopted and for illustrating several methods for coding optimization routines.
REFERENCES
1. BRODLIE, K.W. An assessment of two approaches to variable metric methods. Math. Program. 12 (1977), 334-355.
2. FLETCHER, R., AND POWELL, M.J.D. A rapidly convergent descent method for minimization. Comput. J. 6 (1963), 163-168.
3. HIMMELBLAU, D.M. Applied Nonlinear Programming. McGraw-Hill, New York, 1972.
4. MORÉ, J.J., GARBOW, B.S., AND HILLSTROM, K.E. Testing unconstrained minimization software. TM-324, Appl. Math. Div., Argonne National Lab., Argonne, Ill., 1978.
5. OREN, S.S. Self-scaling variable metric algorithm. Part II. Manage. Sci. 20 (1974), 863-874.
6. POWELL, M.J.D. Some global convergence properties of a variable metric algorithm for minimization without exact line searches. In Nonlinear Programming, R.W. Cottle and C.E. Lemke, Eds., American Mathematical Society, Providence, R.I., 1976.
7. SHANNO, D.F. Conjugate gradient methods with inexact searches. Math. Oper. Res. 3 (1978), 244-256.
8. SHANNO, D.F. On the convergence of a new conjugate gradient method. SIAM J. Numer. Anal. 15 (1978), 1247-1257.
9. SHANNO, D.F. Quadratic termination of conjugate gradient methods. In Extremal Methods and Systems Analysis, A.V. Fiacco and K.O. Kortanek, Eds., Springer-Verlag, New York, 1980, pp. 433-441.
10. SHANNO, D.F., AND PHUA, K.H. Matrix conditioning and nonlinear optimization. Math. Program. 14 (1978), 149-160.
11. SHANNO, D.F., AND PHUA, K.H. Algorithm 500. Minimization of unconstrained multivariate functions. ACM Trans. Math. Softw. 2, 1 (1976), 87-94.
12. TOINT, PH.L. Some numerical results using a sparse matrix updating formula in unconstrained optimization. Math. Comput. 32 (1978), 839-852.
ALGORITHM
[A part of the listing is printed here. The complete listing is available from the ACM Algorithms Distribution Service (see page 627 for order form) or may be found in "Collected Algorithms from ACM."]
C REMARKS:
C
C    (The subroutine's comment header is truncated in this excerpt;
C     among the parameters it documents are MXFUN, IOUT, MDIM,
C     IDEV, ACC, and NMETH.)