


This is the outline of a course I held at school. It is aimed at studying the behavior of minimum and maximum points of multivariable differentiable functions defined on compact regions, such as closed intervals, and which are subject to various constraints. Since by Weierstrass's theorem we already know that such minima and maxima are attained at some points, we are interested in developing a method to find their exact position. The purpose is to see when and how this kind of calculus method applies to olympiad problems.

1. Monovariable Functions

In order to fully cover all the possible phenomena and to observe them easily, we will start off with a seemingly trivial case. Consider a differentiable function f : [a, b] → R. The critical observation is that if x0 is a local extremum point, then f'(x0) = 0. Indeed, if this were not the case, by considering f(x0 + ε) and f(x0 − ε) we could obtain values both greater and smaller than f(x0), for any small enough ε > 0. As trivial as this observation may seem, there are a few things to take care of when using it:

(1) Take care around the borders. For example, consider the function f(x) = x on the interval [0, 1]. It attains its minimum and maximum at 0 and 1 respectively, yet its derivative is 1 everywhere. This happens because at the endpoints of the interval we cannot consider values on both sides of the point, so the above argument fails. In every problem encountered, the border of the region on which the function is defined should be considered separately.

(2) Not all the points thus obtained are necessarily extremum points. For example, consider the real-valued function f(x) = x^3 on the interval [−1, 1]. It is negative on [−1, 0), zero at 0 and positive on (0, 1], so the point 0 is not an extremum point, yet f'(x) = 3x^2, which vanishes at 0. Such points are called inflection points, and in order to rule them out we can usually look at the sign of the derivative around the point in question.
If the sign is the same on both sides, then the function is monotone on a neighborhood of the point, so the point is an inflection one. If the sign changes, then the monotonicity of the function changes at that point, so the point is an extremum. I spent this much time on seemingly trivial observations because they are most easily seen here, in the simplest case, and because they generalize when we raise the number of variables. We will now study the case in which the number of variables is arbitrary, say n.
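The one-variable recipe above can be sketched computationally. The following is a minimal illustration using Python's sympy library (the function f and the interval [−2, 2] are my own illustrative choices, not part of the lecture): solve f'(x) = 0 for interior candidates, then compare against the endpoints separately.

```python
import sympy as sp

# Illustrative example: f(x) = x^3 - 3x on [-2, 2].
x = sp.symbols('x', real=True)
f = x**3 - 3*x
a, b = -2, 2

# Interior candidates: zeros of the derivative inside (a, b).
critical = [c for c in sp.solve(sp.diff(f, x), x) if a < c < b]

# The endpoints must always be checked separately, as observed above.
candidates = critical + [a, b]
values = [f.subs(x, c) for c in candidates]

print(min(values), max(values))  # -2 2
```

Note that the minimum −2 is attained both at the interior critical point x = 1 and at the endpoint x = −2, which shows why neither source of candidates may be skipped.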

2. Multivariable Functions

Consider a vector x = (x1, x2, . . . , xn) ∈ R^n and a function f : [a, b]^n → R. As before, if a point x0 is an extremum point, then we have the relation ∂f/∂xi = 0 for all xi, where ∂f/∂xi is the partial derivative of f with respect to the variable xi, that is,

∂f/∂xi = lim_{h→0} (f(x) − f(x0))/h,

where x0 = (x1, x2, . . . , xn) and x = (x1, x2, . . . , x_{i−1}, xi + h, x_{i+1}, . . . , xn).

There is nothing new here, actually. If a point is an extremum point with respect to all the variables in a region around it, then it is one for each variable at a time; we can thus isolate one variable, treat the others as constants and proceed exactly as in the preceding case. Again, we need to take care of the same things as before: study the border of the region separately (in our case, set one variable equal to a or b and study the variation of the other n − 1) and keep in mind that not all points obtained like this are necessarily extremum points.

We now know how to look for the points where a function attains a minimum or maximum value. This may be useful in competition applications, as you might know, but we can also encounter cases where the variables are subject to a certain constraint, such as a^2 + b^2 + c^2 = 1 in an inequality. We would like to devise a method to deal with these cases.
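The variable-by-variable recipe can be sketched in sympy as well (the two-variable function f and the domain [0, 3] × [0, 3] below are my own illustrative assumptions):

```python
import sympy as sp

# Illustrative example: f(x, y) = x^2 + y^2 - 2x - 4y on [0, 3] x [0, 3].
x, y = sp.symbols('x y', real=True)
f = x**2 + y**2 - 2*x - 4*y

# Interior candidates: all partial derivatives vanish simultaneously.
grad = [sp.diff(f, v) for v in (x, y)]
interior = sp.solve(grad, (x, y), dict=True)
print(interior)  # [{x: 1, y: 2}]

# The border (x or y equal to 0 or 3) must still be studied by hand;
# each side reduces to a one-variable problem in the remaining variable.
```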

3. Multivariable Conditioned Functions

Suppose that we want to find the extremum points of a certain differentiable multivariable function f : [a, b]^n → R, subject to a certain constraint g(x) = 0. After giving this issue some thought, Lagrange came up with the following idea: apply the standard rule for finding maximum and minimum points to the (n + 1)-variable real-valued function L defined as

L(x, λ) = f(x) + λ g(x), λ ∈ R.

Any extremum point (x0, λ) of this function must necessarily have ∂L/∂λ = g(x0) = 0, thus any extremum point of L satisfies the required constraint. Also, on the constraint set g(x) = 0 we have L(x, λ) = f(x), so an extremum of L found there is an extremum of f subject to the constraint; it is one of the points we sought. The function L is called the Lagrange function for the function f and the constraint g, and λ is called the Lagrange multiplier, hence the name of the method.


4. A Few Observations

Observation 1. The observations made in the first section of the lecture are still valid. The method only tells us about the extremum points which exist in the interior of the domain; the border of the domain (i.e. when one of the variables is equal to one of the endpoints of the domain interval, for example) should be studied by hand.

Observation 2. The domain of definition given above is not the most general one; the function may be defined on any compact region. Keep in mind: we have to find the border of the compact in question and study it accordingly.

Observation 3. We can construct a general method to use for multiple constraints. As above, suppose we want to find the extremum points of f(x), subject to the constraints gi(x) = 0, i ∈ {1, 2, . . . , k}. We consider the Lagrange function:

L(x, λ1, λ2, . . . , λk) = f(x) + λ1 g1(x) + λ2 g2(x) + · · · + λk gk(x)

Notice how ∂L/∂λi = gi(x) = 0, and everything works out exactly as in the single-constraint case.
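The multi-constraint recipe can be sketched in sympy. In this hedged example the concrete f, g1 and g2 are my own illustrative assumptions: we look at x + y + z on the unit sphere, additionally constrained to the plane x = y.

```python
import sympy as sp

# Illustrative multi-constraint Lagrange function.
x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)
f = x + y + z
g1 = x**2 + y**2 + z**2 - 1   # constraint g1 = 0: unit sphere
g2 = x - y                    # constraint g2 = 0: plane x = y

L = f + l1*g1 + l2*g2
# Stationarity in every variable and in every multiplier; the
# multiplier equations dL/dlambda_i = g_i = 0 recover the constraints.
eqs = [sp.diff(L, v) for v in (x, y, z, l1, l2)]
sols = sp.solve(eqs, (x, y, z, l1, l2), dict=True)

# Every solution of the system satisfies both constraints automatically.
for s in sols:
    print(s[x], s[y], s[z])
```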

5. An easy example

The theory should be pretty clear by now, but you may wonder how we can apply it. Let's see how it should work.

Problem. Find the maximum value of ax + by + cz for some constants a, b, c > 0 and variables x, y, z satisfying x^2 + y^2 + z^2 = 1.

Solution. The idea is now pretty straightforward. Denote f(x, y, z) = ax + by + cz, and define it on [−1, 1] × [−1, 1] × [−1, 1], since the constraint confines the study of the function to this region (that is, −1 ≤ x, y, z ≤ 1). The Lagrange function and its partial derivatives write down as:

L(x, y, z, λ) = ax + by + cz + λ(x^2 + y^2 + z^2 − 1)

∂L/∂x = a + 2λx = 0 ⟹ x = −a/(2λ) (if λ = 0 then a = b = c = 0, a trivial case)
∂L/∂y = b + 2λy = 0 ⟹ y = −b/(2λ)
∂L/∂z = c + 2λz = 0 ⟹ z = −c/(2λ)
∂L/∂λ = x^2 + y^2 + z^2 − 1 = 0 (the constraint itself)

Now we come to the challenging part of any problem solved in this way: solving the remaining system of equations. In this case, substituting the variables into the constraint relation, we get

(a^2 + b^2 + c^2)/(4λ^2) = 1 ⟹ λ = ±√(a^2 + b^2 + c^2)/2.
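The system above can also be checked mechanically with sympy; in this verification the concrete constants a = 1, b = 2, c = 2 are my own choice, picked only so that the arithmetic is clean (√(a² + b² + c²) = 3).

```python
import sympy as sp

# Verify the worked example with concrete constants a=1, b=2, c=2.
x, y, z, lam = sp.symbols('x y z lambda', real=True)
a, b, c = 1, 2, 2

L = a*x + b*y + c*z + lam*(x**2 + y**2 + z**2 - 1)
eqs = [sp.diff(L, v) for v in (x, y, z, lam)]
sols = sp.solve(eqs, (x, y, z, lam), dict=True)

# Two critical points, one per sign of lambda = +-3/2; the maximum
# value of f equals sqrt(a^2 + b^2 + c^2) = 3, matching the derivation.
values = [a*s[x] + b*s[y] + c*s[z] for s in sols]
print(max(values))  # 3
```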


Thus, the only possible extremum point of this function in the interior of the region is

(x0, y0, z0) = (a/√(a^2 + b^2 + c^2), b/√(a^2 + b^2 + c^2), c/√(a^2 + b^2 + c^2)),

where the function attains the value √(a^2 + b^2 + c^2) (a trivial computation). We now need to make sure that this is a maximum point, and to check whether there is any other point on the border of the domain where the function attains a greater value. The formal argument looks like this: the function is continuous, real-valued and its domain is compact, thus by Weierstrass's theorem there is a point where the maximum is attained. By the above argument, this point is either on the border or it is exactly (x0, y0, z0). Looking at the border points, it is easily seen that if we set a variable to one of the endpoints of the domain segment, that is, x, y or z = ±1, by the constraint the other two variables are 0, and the value of the function is ±a, ±b or ±c. But f(x0, y0, z0) = √(a^2 + b^2 + c^2) is greater than all of these values, and our claim is proved.

6. Proposed Problems

(1) What is the maximum value of the expression 4xy on positive reals for which y^2 = 2 − 2x^2?

(2) What is the minimum value of the expression x^2 + y^2 + z^2 for which (x − y)^2 + (y − z)^2 + (z − x)^2 = 1?
(3) What is the minimum value of the expression 4xy + 1/(xy), on positive reals for which (2x + 1)(3y + 5) = 6?

(4) If a, b, c, d ∈ R+ for which a + b + c + d = 1, prove that:

6(a^3 + b^3 + c^3 + d^3) ≥ a^2 + b^2 + c^2 + d^2 + 1/8

Baraj Hong Kong

(5) Let a, b, c ∈ R+ for which:

a/(1 + a) + b/(1 + b) + c/(1 + c) = 2

Prove that:

(√a + √b + √c)/2 ≥ 1/√a + 1/√b + 1/√c.

Middle European Mathematical Olympiad 2011

(6) Let n be a positive integer greater than or equal to 3. Determine the real numbers x1 ≥ 0, x2 ≥ 0, . . . , xn ≥ 0, with x1 + x2 + · · · + xn = n, for which the expression

(n − 1)(x1^2 + x2^2 + · · · + xn^2) + n x1 x2 . . . xn

takes its minimum value.

Danube Day, Calarasi, 2010, problem 5


(7) Determine the greatest real constant K such that, for any k with 0 ≤ k ≤ K and any positive real numbers a, b, c for which a^2 + b^2 + c^2 + kabc = k + 3, it follows that a + b + c ≤ 3.

Stelele Matematicii (Stars of Mathematics) 2010, problem 3

(8) Prove that

a/√(a^2 + 8bc) + b/√(b^2 + 8ca) + c/√(c^2 + 8ab) ≥ 1,

for any positive real numbers a, b, c.

IMO 2001, problem 2

7. Conclusion

As you might have noticed already, the theory behind the Lagrange multiplier method is not too advanced for students of grades 11 and 12 who already have a strong foothold in calculus. The point of mastering this technique is not necessarily beauty nor complexity, but the power it holds. I selected on purpose problems from recent contests and from the IMO in order to illustrate how such difficult problems can be reduced to mere computations, and how this method can reveal in such a simple way so much information about inequalities and multivariable functions. In my opinion, a student's arsenal for tackling the IMO and its selection tests should be vast enough to cover all types of problems. Among other weapons, such as polynomial irreducibility criteria, graph theory theorems, homothety and inversion, generating functions, and Dirichlet's prime number theorem, one should certainly add the power of Lagrange multipliers.