
Maximization of a function of two variables.

Jery Stedinger
October 2006
Consider the maximization of L[a,b]. The general solution recommended by multivariate
calculus is to solve for a stationary point defined by the two equations:
\[
\frac{\partial L[a,b]}{\partial a} = 0 , \qquad \frac{\partial L[a,b]}{\partial b} = 0 \tag{1}
\]
One needs to check the second derivatives to confirm that a maximum has been found.
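For a concrete illustration (an added example, not part of the original note), take $L[a,b] = -(a-1)^2 - (b-2)^2$. Then eqn. (1) becomes
\[
\frac{\partial L}{\partial a} = -2(a-1) = 0 , \qquad \frac{\partial L}{\partial b} = -2(b-2) = 0 ,
\]
so the stationary point is (a, b) = (1, 2); the second derivatives are $\partial^2 L/\partial a^2 = \partial^2 L/\partial b^2 = -2$ with $\partial^2 L/\partial a \partial b = 0$, a negative definite Hessian, confirming a maximum.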
Alternatively, one could decide to first solve for the best value b*(a) for every a. This best value would be obtained by solving, for each a, the equation:
\[
\frac{\partial L[a,b]}{\partial b} = 0 \tag{2}
\]
One could then attempt to maximize over a the function L[a, b*(a)]; thus for every a, the second
parameter b takes the best possible value, and L becomes a function of a alone.
Maximizing L as a function of a, using the chain rule, we seek the a value where:
\[
\frac{dL[a, b^*(a)]}{da} = \frac{\partial L[a, b^*(a)]}{\partial a} + \frac{\partial L[a, b^*(a)]}{\partial b}\,\frac{db^*(a)}{da} = 0 \tag{3}
\]
Recall that the function b*(a) solves the equation $\partial L[a,b]/\partial b = 0$, so the second term in the
equation above vanishes; thus we obtain
\[
\frac{dL[a, b^*(a)]}{da} = \frac{\partial L[a, b^*(a)]}{\partial a} \tag{4}
\]
The conclusion is that one can seek the point (a, b) that makes both partial derivatives vanish, as in
eqn. (1), or one can solve eqn. (2) to obtain the best b for every a, denoted b*(a), and then seek the
best a overall. It turns out that the second approach is analytically equivalent to the first, because
seeking the maximum of L[a, b*(a)] at the point satisfying eqn. (3) is equivalent to seeking a
solution to the first and second conditions in eqn. (1).
Numerically, if one seeks to maximize L[a,b], the second approach can be very attractive,
particularly if one can determine b*(a) analytically. Then one has only a univariate optimization
problem, rather than a bivariate problem. When solving the problem analytically, the two
approaches yield exactly the same equations that need to be solved.
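As a numerical sketch of this point (an addition, not from the original note; it assumes SciPy is available and uses an illustrative objective L and a hypothetical helper b_star):

# A minimal sketch comparing the two approaches numerically.  The objective
# L[a,b] = -(a-1)^2 - (a*b-2)^2 is an illustrative choice; for it, eqn. (2)
# gives the analytic inner solution b*(a) = 2/a (for a != 0).

import numpy as np
from scipy.optimize import minimize, minimize_scalar

def L(a, b):
    # Illustrative bivariate objective to be maximized.
    return -(a - 1.0) ** 2 - (a * b - 2.0) ** 2

# Approach 1: search over (a, b) jointly.  SciPy minimizes, so negate L.
res_biv = minimize(lambda x: -L(x[0], x[1]), x0=np.array([0.5, 0.5]))
print("bivariate maximizer:", res_biv.x)              # approximately (1, 2)

# Approach 2: substitute the analytic inner solution b*(a) = 2/a,
# leaving a univariate search over a alone.
def b_star(a):
    return 2.0 / a

res_uni = minimize_scalar(lambda a: -L(a, b_star(a)),
                          bounds=(0.1, 5.0), method="bounded")
a_hat = res_uni.x
print("univariate maximizer:", a_hat, b_star(a_hat))  # also near (1, 2)

Because b*(a) is available in closed form here, the second approach reduces the search to one dimension, which is exactly the attraction described above.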
