
Lecture Notes: Optimisation

Kausik Chaudhuri
Most economic and financial problems require the simultaneous choice of several
variables. A consumer chooses what quantities of different commodities to purchase.
A profit-maximising producer selects amounts of different inputs, as well as the level
of output. Therefore, most economic activities involve functions of more than one
independent variable. We, however, concentrate on the two-variable case.

Let us consider a function y = f(x1, x2) defined on a set S in the x1x2-plane, where the
variables xi, i = 1, 2 are independent of one another, i.e. each varies without affecting
the other. To measure the effect of a change in a single independent variable on the
dependent variable in a multivariate function, the partial derivative is needed. Our
next question: what is a partial derivative?

The partial derivative of y with respect to x1 measures the instantaneous rate of change
of y with respect to x1 while x2 is held constant. It is written as:
∂y/∂x1, ∂f/∂x1, f_x1(x1, x2), f′1(x1, x2), f_x1, y_x1.

Similarly, the partial derivative of y with respect to x2 measures the instantaneous rate
of change of y with respect to x2 while x1 is held constant. It is written as:
∂y/∂x2, ∂f/∂x2, f_x2(x1, x2), f′2(x1, x2), f_x2, y_x2.

We define the partial derivative of y with respect to xi (i = 1, 2,…,n) as:

∂y/∂xi = lim_{∆xi→0} ∆y/∆xi = lim_{∆xi→0} [f(x1, x2, …, xi + ∆xi, …, xn) − f(x1, x2, …, xi, …, xn)] / ∆xi
If these limits exist, our function is differentiable with respect to all arguments.

In the two-variable case, we can then define the partial derivatives as:

∂y/∂x1 = lim_{∆x1→0} ∆y/∆x1 = lim_{∆x1→0} [f(x1 + ∆x1, x2) − f(x1, x2)] / ∆x1

∂y/∂x2 = lim_{∆x2→0} ∆y/∆x2 = lim_{∆x2→0} [f(x1, x2 + ∆x2) − f(x1, x2)] / ∆x2

Partial differentiation with respect to one of the independent variables follows the
same rules as ordinary differentiation while the other independent variables are
treated as constant.
Example

Find the partial derivatives of the multivariate function z = 4x³ + xy + y.

Solution

Step 1: When differentiating with respect to x, we treat y as a constant and take
the derivative of each term with respect to x:

∂z/∂x = ∂/∂x(4x³ + xy + y)
      = d/dx(4x³) + y·d/dx(x) + ∂/∂x(y)
      = 3(4x²) + y(1) + 0
      = 12x² + y

Step 2: When differentiating with respect to y, we treat x as a constant and take
the derivative of each term with respect to y:

∂z/∂y = ∂/∂y(4x³ + xy + y)
      = ∂/∂y(4x³) + x·d/dy(y) + d/dy(y)
      = 0 + x(1) + 1
      = x + 1
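As a quick numerical sanity check of the partials just derived (this sketch is an addition to the notes, and the evaluation point (2, 3) is an arbitrary choice), a central-difference approximation reproduces ∂z/∂x = 12x² + y and ∂z/∂y = x + 1:

```python
# Central-difference check of the partials of z = 4x^3 + xy + y.

def z(x, y):
    return 4 * x**3 + x * y + y

def partial(f, var, x, y, h=1e-6):
    """Approximate a first partial derivative by central differences."""
    if var == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 2.0, 3.0
print(round(partial(z, 'x', x0, y0), 4))  # 12x^2 + y = 51.0
print(round(partial(z, 'y', x0, y0), 4))  # x + 1 = 3.0
```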

Rules of Partial Differentiation

Partial derivatives follow the same basic patterns as the rules of differentiation.

a) Product Rule

Given

z = u(x, y)·v(x, y)

∂z/∂x = u(x, y)·∂v/∂x + v(x, y)·∂u/∂x
∂z/∂y = u(x, y)·∂v/∂y + v(x, y)·∂u/∂y
Example

Given z = (3x+6)(4x+7y), by the product rule:

∂z/∂x = (3x + 6)(4) + (4x + 7y)(3) = 12x + 24 + 12x + 21y = 24x + 21y + 24
∂z/∂y = (3x + 6)(7) + (4x + 7y)(0) = 21x + 42

b) Quotient Rule
Given
z = u(x, y)/v(x, y), v(x, y) ≠ 0

∂z/∂x = [v(x, y)·∂u/∂x − u(x, y)·∂v/∂x] / [v(x, y)]²
∂z/∂y = [v(x, y)·∂u/∂y − u(x, y)·∂v/∂y] / [v(x, y)]²

Example
Given z = (3x+5y)/(4x+7y), by the quotient rule:

∂z/∂x = [(4x + 7y)(3) − (3x + 5y)(4)] / [4x + 7y]² = (12x + 21y − 12x − 20y) / [4x + 7y]² = y / [4x + 7y]²
∂z/∂y = [(4x + 7y)(5) − (3x + 5y)(7)] / [4x + 7y]² = (20x + 35y − 21x − 35y) / [4x + 7y]² = −x / [4x + 7y]²

c) Generalised Power Function Rule

z = [u(x, y)]^n

∂z/∂x = n[u(x, y)]^(n−1)·∂u/∂x
∂z/∂y = n[u(x, y)]^(n−1)·∂u/∂y
Example

Given z = (x³ + 5y²)⁴, by the generalised power function rule:

∂z/∂x = 4(x³ + 5y²)³(3x²) = 12x²(x³ + 5y²)³
∂z/∂y = 4(x³ + 5y²)³(10y) = 40y(x³ + 5y²)³
Second Order Partial Derivatives

Given a function y = f(x1, x2), the second–order (direct) partial derivative signifies that
the function has been differentiated partially with respect to one of the independent
variables twice while the other independent variable has been held constant. We can
define it as:

∂²y/∂xi∂xj = f_xixj(x1, …, xn) = ∂/∂xj[f_xi(x1, …, xn)], i, j = 1, 2, …, n

Note in our case, the above translates as:

∂²y/∂x1² = f_x1x1(x1, x2) = ∂/∂x1[f_x1(x1, x2)]
∂²y/∂x2² = f_x2x2(x1, x2) = ∂/∂x2[f_x2(x1, x2)]

In effect, f_x1x1 measures the rate of change of the first-order partial derivative
f_x1 with respect to x1 while x2 is held constant. On the other hand, f_x2x2 measures
the rate of change of the first-order partial derivative f_x2 with respect to x2,
keeping x1 fixed.

The cross partial derivatives indicate that the primitive function has first been
partially differentiated with respect to one independent variable, and that this
partial derivative has in turn been partially differentiated with respect to the other
independent variable. It implies:

∂²y/∂x2∂x1 = f_x1x2(x1, x2) = ∂/∂x2[f_x1(x1, x2)]
∂²y/∂x1∂x2 = f_x2x1(x1, x2) = ∂/∂x1[f_x2(x1, x2)]

So, a cross partial measures the rate of change of a first – order partial derivative with
respect to the other independent variable.

Given the definition of the cross partial derivatives, we have the following
proposition:

Proposition (Schwarz's Theorem or Young's Theorem)


If at least one of the two cross partials is continuous, then f_x1x2 = f_x2x1.
Example

Given z = 2x³ + xy + 3y⁴, calculate (a) the first-order partial derivatives; (b) the
second-order direct partial derivatives; and (c) verify Young's theorem.
(a)

z_x = ∂z/∂x = 6x² + y
z_y = ∂z/∂y = x + 12y³

(b)

z_xx = ∂²z/∂x² = 12x
z_yy = ∂²z/∂y² = 36y²
(c) To verify Young’s theorem, we need to evaluate the cross partial derivatives.

z_xy = ∂²z/∂y∂x = ∂/∂y(∂z/∂x) = ∂/∂y(6x² + y) = 1
z_yx = ∂²z/∂x∂y = ∂/∂x(∂z/∂y) = ∂/∂x(x + 12y³) = 1

Note the above shows z_xy = z_yx = 1. Hence Young's theorem holds.
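Young's theorem can also be illustrated numerically. The sketch below is an addition to the notes; it approximates the cross partial of z = 2x³ + xy + 3y⁴ with a finite-difference stencil that is symmetric in x and y, so it estimates z_xy and z_yx at once:

```python
# Finite-difference estimate of the cross partial of z = 2x^3 + xy + 3y^4.

def z(x, y):
    return 2 * x**3 + x * y + 3 * y**4

def cross_partial(f, x, y, h=1e-3):
    """Symmetric central-difference stencil for d^2 f / dx dy."""
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

# The analytic cross partials are z_xy = z_yx = 1 everywhere.
print(round(cross_partial(z, 1.0, 2.0), 6))  # 1.0
```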

Unconstrained Optimisation in the Case of More than One Variable

Necessary Condition

Given a differentiable function y = f(x1, x2), an interior maximum or minimum can
occur only at a stationary point x* = (x*1, x*2). This implies that all the partials
f′_xi should vanish at x*. Therefore, the first-order necessary conditions (FONCs)
state:

f′_xi(x*) = 0

In the two-variable case it implies:

f′_x1(x1, x2) = 0
f′_x2(x1, x2) = 0
Example: Profit Maximisation

Suppose that Q = 80 − (K − 3)² − 2(L − 6)² − (K − 3)(L − 6) is a production function
with K as the capital input and L as the labour input. The price per unit of output is
p = 1, the cost per unit of capital is r = 0.65 and the wage rate is w = 1.2. Find the
only possible values of K and L that maximise profits.

Solution

The profit π from producing and selling Q units is:

π(K, L) = pQ − rK − wL
        = 1·[80 − (K − 3)² − 2(L − 6)² − (K − 3)(L − 6)] − 0.65K − 1.2L

The first-order necessary conditions (FONCs) are:

π′_K(K, L) = −2(K − 3) − (L − 6) − 0.65 = 0 ⇒ −2K + 6 − L + 6 − 0.65 = 0
π′_L(K, L) = −4(L − 6) − (K − 3) − 1.2 = 0 ⇒ −4L + 24 − K + 3 − 1.2 = 0

From the above two conditions we get:

2 K + L = 11.35 (1)
K + 4 L = 25.8 (2)

Multiplying (2) by 2 and subtracting from (1) we get:

2 K + L − 2 K − 8 L = 11.35 − 51.6
− 7 L = −40.25
L = 5.75

Given, L = 5.75 from (1) we get

2K + 5.75 = 11.35
2K = 5.6
K = 2.8

The solution is: ( K , L ) = ( 2.8,5.75) .
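The elimination above can be reproduced in a few lines. The sketch below is an addition to the notes; it solves the FONC system and confirms that the partials of the profit function vanish at the solution:

```python
# Solve 2K + L = 11.35 and K + 4L = 25.8 by elimination, then check the FONCs.

def profit(K, L, p=1.0, r=0.65, w=1.2):
    Q = 80 - (K - 3)**2 - 2 * (L - 6)**2 - (K - 3) * (L - 6)
    return p * Q - r * K - w * L

# Multiply the second equation by 2 and subtract the first: 7L = 40.25.
L = (2 * 25.8 - 11.35) / 7
K = (11.35 - L) / 2
print(round(K, 4), round(L, 4))  # 2.8 5.75

# Both first-order partials vanish at (K, L) up to rounding.
h = 1e-6
dK = (profit(K + h, L) - profit(K - h, L)) / (2 * h)
dL = (profit(K, L + h) - profit(K, L - h)) / (2 * h)
print(abs(dK) < 1e-4 and abs(dL) < 1e-4)  # True
```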

Sufficient Condition

Suppose f is a function of one variable which is twice differentiable in an interval I.
In this case, a simple sufficient condition for a stationary point in I to be a maximum
point is that f″(x) ≤ 0 for all x in I. The function f is then called "concave".

Similarly, a simple sufficient condition for a stationary point in I to be a minimum
point is that f″(x) ≥ 0 for all x in I. The function f is then called "convex".

For functions of two variables, we can extend the above involving the second-order
partial derivatives.

We state the conditions below:

Suppose that (x01, x02) is a stationary point for a C2 – function y = f(x1, x2) in a convex
set S.

(a) If for all (x1, x2) in S,

f″11(x1, x2) ≤ 0, f″22(x1, x2) ≤ 0, f″11(x1, x2)f″22(x1, x2) − [f″12(x1, x2)]² ≥ 0

Then (x01, x02) is a maximum point for f(x1, x2) in S.

(b) If for all (x1, x2) in S,

′′ ( x1 , x2 ) ≥ 0, f 22
f11 ′′ ( x1 , x2 ) ≥ 0, f11
′′ ( x1 , x2 ) f 22 ′′ ( x1 , x2 )]2 ≥ 0
′′ ( x1 , x2 ) − [( f12

Then (x01, x02) is a minimum point for f(x1, x2) in S.


Note, if a twice differentiable function y = f(x1, x2) satisfies the inequalities in (a)
throughout a convex set S, it is called concave, whereas it is called convex if it
satisfies the inequalities in (b) throughout S.
Example: Utility Maximisation
In this example, suppose the utility function of a consumer is described by U = xyz.
This implies that the consumer derives utility from consuming x, y and z units of three
commodities. The prices per unit of the three commodities are 1, 3 and 4 respectively,
and we assume that the income of the consumer equals 108. We also assume that the
consumer saves nothing. Find the only possible maximum value of the utility of the
consumer.
Solution

Note that given the prices and the income of the consumer, we get x + 3y + 4z = 108,
so x = 108 − 3y − 4z.
Substituting the value of x in the utility function, we obtain:
U = (108 − 3y − 4z)yz = 108yz − 3y²z − 4yz²
As a first-order necessary condition for maximisation (optimisation) we set Uy = 0 and
Uz = 0.
U_y = ∂U/∂y = ∂/∂y(108yz − 3y²z − 4yz²) = 108z − 6yz − 4z² = 0 ⇒ z(108 − 6y − 4z) = 0
⇒ 6y + 4z = 108, as z > 0

Similarly

U_z = ∂U/∂z = ∂/∂z(108yz − 3y²z − 4yz²) = 108y − 3y² − 8yz = 0 ⇒ y(108 − 3y − 8z) = 0
⇒ 3y + 8z = 108, as y > 0

We write down the above as:

6 y + 4 z = 108 (1)
3 y + 8 z = 108 (2)

Multiplying (1) by 2 and subtracting from (2), we get:

12y + 8z − 3y − 8z = 216 − 108
9y = 108
y = 12

Using y = 12, from either (1) or (2), we obtain z = 9.

U_y = 108z − 6yz − 4z², so U_yy = −6z

U_z = 108y − 3y² − 8yz, so U_zz = −8y

U_yz = ∂U_y/∂z = 108 − 6y − 8z

Given y = 12 and z = 9, we get:

U_yy = −6z = −54 < 0, U_zz = −8y = −96 < 0

U_yyU_zz − (U_yz)² = 48yz − (108 − 6y − 8z)² = 48(12)(9) − (108 − 72 − 72)² = 5184 − 1296 = 3888 > 0

Note y = 12 and z = 9 meet both the first-order and second-order conditions.


Given the budget constraint, we get, x = 108 – 3(12) – 4(9) = 36

So, utility attains its maximum value of 3888 at x = 36, y = 12 and z = 9. However,
we note that we could not directly apply the theorem.
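The solution can be checked by brute force on the budget constraint. The sketch below is an addition to the notes; it substitutes the budget into U = xyz and confirms that (y, z) = (12, 9) beats neighbouring feasible bundles:

```python
# Check the utility-maximising bundle x = 36, y = 12, z = 9 with income 108.

def U(y, z, income=108):
    x = income - 3 * y - 4 * z  # budget: x + 3y + 4z = income
    return x * y * z

print(U(12, 9))  # 36 * 12 * 9 = 3888

# Every neighbouring (y, z) on the integer grid gives strictly lower utility.
best = U(12, 9)
neighbours = [(13, 9), (11, 9), (12, 10), (12, 8), (13, 10), (11, 8)]
print(all(U(y, z) < best for y, z in neighbours))  # True
```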

Local Extreme Points

We have so far discussed maximum and minimum (extreme) points. However, we
need to distinguish between local and global extreme points. A global extreme point
is also a local extreme point; however, the converse is not true.

Definition

The point (x01, x02) is a local maximum point of f in S if there exists a positive
number ε such that f(x1, x2) ≤ f(x01, x02) for all (x1, x2) in S that lie inside the
circle with centre (x01, x02) and radius ε. If the inequality is strict for (x1, x2) ≠
(x01, x02), then (x01, x02) is a strict local maximum point.

Similarly, (x01, x02) is a local minimum point if there exists a positive number ε
such that f(x1, x2) ≥ f(x01, x02) for all (x1, x2) in S that lie inside the circle with
centre (x01, x02) and radius ε. If the inequality is strict for (x1, x2) ≠ (x01, x02),
then (x01, x02) is a strict local minimum point.

The first-order conditions are necessary for a differentiable function to have a local
extreme point. However, a stationary point does not have to be a local extreme point.
A stationary point can be neither a local maximum nor a local minimum point, and
this is known as a saddle point of f.

A saddle point (x01, x02) is a stationary point with the property that there exist
points (x1, x2) arbitrarily close to (x01, x02) with f(x1, x2) < f(x01, x02), and there
also exist such points with f(x1, x2) > f(x01, x02).

The stationary points of a function can thus be classified into the following categories:

(i) Local maximum points (ii) Local minimum points (iii) Saddle points

We can use the second-derivative test to examine whether a given stationary point is
of type (i), (ii) or (iii). In this case (functions involving two variables), the cross
second-order partials must be considered along with the own second-order derivatives.
We use the following theorem to classify the stationary points in most cases.

Second Derivative Test for Local Extrema

Suppose f(x1, x2) is a function with continuous second-order partial derivatives in a


domain S, and let (x01, x02) be an interior stationary point of S for f(x1, x2). We define:

A = f″11(x01, x02), B = f″12(x01, x02) and C = f″22(x01, x02)

a) If A < 0 and AC – B2 > 0, then (x01, x02) is a strict local maximum point.
b) If A > 0 and AC – B2 > 0, then (x01, x02) is a strict local minimum point.

c) If AC – B2 < 0, then (x01, x02) is a saddle point.

d) If AC – B2 = 0, then (x01, x02) could be a local maximum, local minimum or a


saddle point.

Example

Find the stationary points and classify them when f(x, y) = x³ − 8y³ + 6xy + 1.

Solution

The stationary points must satisfy the first-order necessary conditions:

f′1(x, y) = 3(x² + 2y) = 0 and f′2(x, y) = 6(−4y² + x) = 0

From the first one, we get:

x² + 2y = 0 ⇒ −2y = x²

From the second one, we get:

−4y² + x = 0 ⇒ x = 4y² = (−2y)(−2y) = (x²)(x²) = x⁴

Hence x − x⁴ = x(1 − x³) = 0, so either x = 0 or 1 − x³ = 0 ⇒ x = 1.

If x = 0 then y = 0. If x = 1 then y = −1/2.

We obtain two stationary points: x = y = 0 and x = 1, y = −1/2.


f″11(x, y) = 6x
f″22(x, y) = −48y
f″12(x, y) = 6

We use the following table to classify the stationary points:

(x, y)       A    B    C     AC − B²    Type of Point

(0, 0)       0    6    0     −36        Saddle Point
(1, −1/2)    6    6    24    108        Local Minimum
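The table can be generated mechanically. The sketch below is an addition to the notes; it encodes the second-derivative test for this particular f and classifies both stationary points:

```python
# Second-derivative test for f(x, y) = x^3 - 8y^3 + 6xy + 1.

def classify(x, y):
    A = 6 * x        # f''_11
    B = 6            # f''_12
    C = -48 * y      # f''_22
    D = A * C - B**2
    if D < 0:
        return "saddle point"
    if D > 0:
        return "local minimum" if A > 0 else "local maximum"
    return "inconclusive"

print(classify(0, 0))      # saddle point  (AC - B^2 = -36)
print(classify(1, -0.5))   # local minimum (A = 6, AC - B^2 = 108)
```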
Linear Models with Quadratic Objectives

We will consider some more economic applications of optimisation theory when there
are two variables.

Example

The profit function of a firm is π(x, y) = px + qy − ax² − by², where p and q are the
prices per unit and ax² + by² is the cost of producing and selling x units of the first
good and y units of the second good.

(a) Find the values of x and y that maximize profits. Denote them by x* and y*.
Verify that the second order conditions are satisfied.

(b) Define π*(p, q) = π(x*, y*). Verify that ∂π*(p, q)/∂p = x* and ∂π*(p, q)/∂q = y*.

Solution

(a) The first-order necessary conditions for profit maximisation problem are:

π′1 = ∂π/∂x = p − 2ax = 0 ⇒ x = p/2a
π′2 = ∂π/∂y = q − 2by = 0 ⇒ y = q/2b

Thus the only stationary point is: x* = p/2a, y* = q/2b

π″11 = −2a < 0
π″22 = −2b < 0
π″12 = 0
π″11π″22 − (π″12)² = 4ab > 0

Therefore the second-order conditions are satisfied.

(b)
π*(p, q) = px* + qy* − a(x*)² − b(y*)²
         = p(p/2a) + q(q/2b) − a(p/2a)² − b(q/2b)²
         = p²/4a + q²/4b

∂π*/∂p = p/2a = x*
∂π*/∂q = q/2b = y*

Example

Each of two firms produces its own brand of a commodity, denoted by x and y, and
these are sold at prices p and q per unit respectively. Each firm determines its own
price and produces exactly as much as is demanded. The demands for the two brands
are given by:

x = 29 - 5p + 4q y = 16 + 4p – 6q

Firm A has total costs 5 + x, whereas firm B has total costs 3 + 2y.

(a) Initially, the two firms cooperate in order to maximise their combined profits.
Find the prices (p, q), the production levels (x, y) and the profits for firm A
and firm B.

(b) Then cooperation breaks down, with each producer maximising its own
profit, taking the other’s price as given. If q is fixed, how will firm A choose
p? If p is fixed, how will firm B choose q?

(c) Under the assumptions in part (b), what constant equilibrium prices are
possible? What are the production levels and profits in this case?

Solution

(a) In this case, firms cooperate to maximise profits. Hence, profit can be defined
as the sum of total revenues earned by two firms minus the total costs faced by
them.

π = px + qy − (5 + x) − (3 + 2y)
  = p(29 − 5p + 4q) + q(16 + 4p − 6q) − (34 − 5p + 4q) − (35 + 8p − 12q)
  = 29p − 5p² + 4pq + 16q + 4pq − 6q² − 34 + 5p − 4q − 35 − 8p + 12q
  = 26p + 24q − 5p² − 6q² + 8pq − 69

The first-order necessary condition for profit maximisation gives:

∂π/∂p = 26 − 10p + 8q = 0 ⇒ −5p + 4q = −13 (1)
∂π/∂q = 24 − 12q + 8p = 0 ⇒ 2p − 3q = −6 (2)

We use equations (1) and (2) to solve for p and q. We get p = 9 and q = 8.

Substituting p = 9 and q = 8 in x = 29 - 5p + 4q, we get x = 16.

Substituting p = 9 and q = 8 in y = 16 + 4p - 6q, we get y = 4.

Firm A’s profit = (16)(9) – 5 – 16 = 123

Firm B’s profit = (4)(8) – 3 – 8 = 21

(b) In this case, cooperation breaks down. Each of two firms maximises its own
profit taking the other’s price as given.

Firm A’s profit function is:

π_A = px − (5 + x)
    = p(29 − 5p + 4q) − (34 − 5p + 4q)
    = 29p − 5p² + 4pq − 34 + 5p − 4q
    = 34p − 5p² + 4pq − 4q − 34

Firm B’s profit function is:

π_B = qy − (3 + 2y)
    = q(16 + 4p − 6q) − (35 + 8p − 12q)
    = 16q + 4pq − 6q² − 35 − 8p + 12q
    = 28q − 6q² + 4pq − 8p − 35

The first-order necessary conditions for profit maximisation (note that each firm takes
the other firm's price as given, i.e. as a constant) give:

∂π_A/∂p = 34 − 10p + 4q = 0 ⇒ 5p = 2q + 17 (1)

∂π_B/∂q = 28 − 12q + 4p = 0 ⇒ 3q = p + 7 (2)
∂q

(c) Using (1) and (2), we get p = 5 and q = 4.

Substituting p = 5 and q = 4 in x = 29 - 5p + 4q, we get x = 20.

Substituting p = 5 and q = 4 in y = 16 + 4p - 6q, we get y = 12.

Firm A’s profit = (20)(5) – 5 – 20 = 75


Firm B’s profit = (12)(4) – 3 – 24 = 21
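Both regimes can be reproduced in a few lines. The sketch below is an addition to the notes; it plugs the cooperative and non-cooperative prices into the demand and cost functions:

```python
# Profits of the two firms at given prices (p, q).

def profits(p, q):
    x = 29 - 5 * p + 4 * q   # demand for firm A's brand
    y = 16 + 4 * p - 6 * q   # demand for firm B's brand
    return p * x - (5 + x), q * y - (3 + 2 * y)

# (a) Cooperation: the joint FONCs give p = 9, q = 8.
print(profits(9, 8))   # (123, 21)

# (c) Non-cooperative equilibrium: 5p = 2q + 17 and 3q = p + 7 give p = 5, q = 4.
print(profits(5, 4))   # (75, 21)
```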

The Extreme-Value Theorem

Let z = f(x, y) be a continuous, finite-valued function for all points (x, y) within a
closed and bounded region R in the xy-plane. The Extreme Value Theorem states that
the function z = f(x,y) is then guaranteed to have both an absolute maximum point and
an absolute minimum point when restricted to the region R.

The terms closed and bounded have distinct mathematical meanings. A point (a, b) is
called an interior point of a set S in the plane if there exists a circle centred at (a, b)
such that all points strictly inside the circle lie in S. A set is an open set if it consists
only of interior points. The point (a, b) is called a boundary point of a set S if every
circle centred at (a, b) contains points of S as well as points in its complement. A
boundary point of S does not necessarily lie in S. If S contains all of its boundary
points, then S is called a closed set.

A typical problem comes with a series of constraints, usually inequalities, and
boundary points occur where one or more of these inequalities is satisfied with
equality.

For example, the set of constraints x ≥ 0, y ≥ 0, x + 2y ≤ 12 forms a closed and
bounded region in the xy-plane. However, the set of constraints x ≥ 0, y ≥ 0,
2x + y ≥ 6, x + y ≥ 4 forms an unbounded region.

In general, if g(x, y) is a continuous function and c is a real number, then the sets

{(x, y) : g(x, y) ≥ c}, {(x, y) : g(x, y) ≤ c}, {(x, y) : g(x, y) = c}


are all closed. However, if we replace ≥ by >, or ≤ by <, or = by ≠, then the
corresponding set becomes open.
We state the Extreme-Value Theorem formally:
Suppose the function f(x, y) is continuous throughout a non-empty, closed and
bounded set S in the plane. Then there exists both a point (a, b) in S where f attains a
minimum and a point (c, d) where f attains a maximum:

f(a, b) ≤ f(x, y) ≤ f(c, d) for all (x, y) in S.

Maxima and Minima

We use the following procedure to find the maximum and minimum values of a
differentiable function f(x, y) defined on a closed, bounded set S in the plane.

a) We find all stationary points of f in the interior of S.

b) We then find the largest and the smallest value of f on the boundary of
S, along with the associated points. We may want to divide the
boundary into several parts, and find the largest and the smallest value
in each part of the boundary.

c) We compute the values of the function at all the points found in a) and
b). The largest function value is the maximum value of f in S. The
smallest function value is the minimum value of f in S.

Example

Find the extreme values for f(x, y) defined over S when

f(x, y) = x² + 2y² − x, S = {(x, y) : x² + y² ≤ 1}

Solution

a) We start by finding all stationary points in the interior of S. These stationary


points satisfy the two equations:

f′1(x, y) = 2x − 1 = 0
f′2(x, y) = 4y = 0

So (x, y) = (1/2, 0) is the stationary point that lies in the interior of S, with
f(1/2, 0) = −1/4.

b) The boundary of S consists of the circle x² + y² = 1. Inserting y² = 1 − x², the
behaviour of f(x, y) on the boundary is determined by:

g(x) = x² + 2(1 − x²) − x = 2 − x − x², with x in [−1, 1]. Now note

g′(x) = −1 − 2x = 0 at x = −1/2

From x2 + y2 = 1, if x = -1/2, then we get y = ±√3/2.

So the points, (-1/2, ±√3/2) are optimality candidates.

We also note g(-1) = 2 and g(1) = 0. So (-1,0) and (1,0) are also optimality
candidates.
c) We compare the values of f at all these points.

f(-1/2, √3/2) = 9/4

f(-1/2, -√3/2) = 9/4

f(-1, 0) = 2

f(1, 0) = 0
Therefore, we conclude that f has maximum 9/4 at (−1/2, √3/2) and at (−1/2,
−√3/2). The minimum is −1/4 at (1/2, 0).
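The candidate comparison in step c) can be scripted. The sketch below is an addition to the notes; it evaluates f at the five candidate points found above:

```python
# Compare f(x, y) = x^2 + 2y^2 - x at the candidate points on the unit disk.

def f(x, y):
    return x**2 + 2 * y**2 - x

s = 3 ** 0.5 / 2  # sqrt(3)/2
candidates = [(0.5, 0.0), (-0.5, s), (-0.5, -s), (-1.0, 0.0), (1.0, 0.0)]
values = [f(x, y) for x, y in candidates]
print(round(max(values), 6), round(min(values), 6))  # 2.25 -0.25
```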

Example

Find the extreme values for f(x, y) defined over S when

f(x, y) = 3 + x³ − x² − y², S = {(x, y) : x² + y² ≤ 1 and x ≥ 0}

Solution

a) We start by finding all stationary points in the interior of S. These stationary


points satisfy the two equations:

f′1(x, y) = 3x² − 2x = 0 ⇒ x(3x − 2) = 0
f′2(x, y) = −2y = 0

So (x, y) = (2/3, 0) is the stationary point that lies in the interior of S; the other
candidate, (0, 0), lies on the boundary.

b) The boundary of S consists of x2 + y2 = 1 and x = 0.

Along the edge, x = 0, y ∈ [-1, 1].

Hence f(0, y) = 3 – y2. This is largest when y = 0 and smallest when y = ±1.

Along the edge, x2 + y2 = 1, x ∈ [0, 1], hence f(x, y) = 2 + x3. This is strictly
increasing, smallest at x = 0 and largest at x = 1.

(c) We compare the values of f at all these points.

f(0, 0) = 3 f(0, ±1) = 2 f(1, 0) = 3 f(2/3, 0) = 2.85

Therefore, we conclude that f has maximum 3 at (0, 0) and at (1, 0). The minimum
is 2 at (0, 1) and at (0, -1).

Example

Find the extreme values for f(x, y) defined over S when

f(x, y) = x² + y² + xy − 5x − 4y + 1, S = {(x, y) : 0 ≤ x ≤ 4, 0 ≤ y ≤ 4}
Solution

a) We start by finding all stationary points in the interior of S. These stationary


points satisfy the two equations:
f′1(x, y) = 2x + y − 5 = 0
f′2(x, y) = 2y + x − 4 = 0

So (x, y) = (2, 1) is the stationary point that lies in the interior of S, and f(2, 1) = −6.

b) The boundary of S consists of the four edges of the square with vertices (0, 0),
(0, 4), (4, 0) and (4, 4). By direct evaluation we have the following functional values
at the corners:

f(0, 0) = 1, f(4, 0) = −3, f(0, 4) = 1, f(4, 4) = 13

For the rightmost edge, defined as x = 4 for 0 ≤ y ≤ 4, we evaluate:

f(4, y) = 16 + y² + 4y − 20 − 4y + 1 = y² − 3.

Setting the derivative of f(4, y) with respect to y, 2y, equal to zero, we get y = 0.

For the top edge, defined as y = 4 for 0 ≤ x ≤ 4, we evaluate:

f(x, 4) = x² + 16 + 4x − 5x − 16 + 1 = x² − x + 1.

Setting the derivative of f(x, 4) with respect to x, 2x − 1, equal to zero, we get
x = 1/2 and f(1/2, 4) = 3/4.

For the left edge, defined as x = 0 for 0 ≤ y ≤ 4, we evaluate:

f(0, y) = y² − 4y + 1.

Setting the derivative of f(0, y) with respect to y, 2y − 4, equal to zero, we get
y = 2 and f(0, 2) = −3.

For the bottom edge, defined as y = 0 for 0 ≤ x ≤ 4, we evaluate:

f(x, 0) = x² − 5x + 1.

Setting the derivative of f(x, 0) with respect to x, 2x − 5, equal to zero, we get
x = 5/2 and f(5/2, 0) = −21/4.
(c) We compare the values of f at all these points; we have a total of eight. We see
that the maximum value of the function is 13 at (4, 4), while the minimum value is
−6 at the point (2, 1).
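As before, the final comparison can be scripted. The sketch below is an addition to the notes; it evaluates f at the interior stationary point, the four corners and the three edge candidates:

```python
# Compare f(x, y) = x^2 + y^2 + xy - 5x - 4y + 1 at all eight candidate points.

def f(x, y):
    return x**2 + y**2 + x * y - 5 * x - 4 * y + 1

candidates = [(2, 1),                          # interior stationary point
              (0, 0), (4, 0), (0, 4), (4, 4),  # corners of the square
              (0.5, 4), (0, 2), (2.5, 0)]      # edge stationary points
values = {pt: f(*pt) for pt in candidates}
print(max(values.values()), min(values.values()))  # 13 -6
```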

Reading

Knut Sydsaeter and Peter Hammond (2006), Sections 13.1–13.5, pp. 463–492

Carl Simon and Lawrence Blume (1994), Chapter 17, pp. 396–410

Alpha C. Chiang (2005), Chapter 11


Constrained Optimisation
Because resources are scarce, all optimisation problems in economics are problems of
constrained optimisation: maximising or minimising some objective function subject
to one or more constraints, for example, maximising profits subject to the state of
technical knowledge and exogenous prices or demand function. Minimising the cost
of producing some exogenous level of output subject to the state of technical
knowledge and input prices (the production manager’s problem) is another example.
Finding the bundle of goods that maximises an individual's utility subject to his or her
budget constraint is also a constrained optimisation problem.

Let us motivate this further in terms of the production manager's problem. You have
been hired to run a factory producing a commodity; that is, you are in charge of
production of that commodity. On your first day, the old guy that you are replacing
describes to you the state of knowledge for producing the commodity. That is, he
explains that you will need labour and/or capital to produce, and explains how many
units can be produced at every possible nonnegative amount of labour and capital.
Every morning when you get to work, you read the daily newspaper to determine the
going rates for labour and capital, w and r. The minute you put down the paper, the
marketing department calls to tell you how many units need to be produced that day.
Each day your goal is to hire the amounts of labour and capital that minimise the cost
of producing whatever number you were told to produce. Or at least this is what we
are going to assume is your goal. Your problem is the production manager's problem.

Your choice variables (the endogenous variables) are the amount of labour, l, and the
amount of capital, k, to hire. However, you are constrained by the number of units you
must produce, q, the input prices, w and r, and the state of technical knowledge for
producing the commodity. So your exogenous variables are q, w, and r. There are many
ways to describe the state of technical knowledge for production, one of which is the
production function:

q = f(k, l), k ≥ 0 and l ≥ 0

Note your objective: you want to minimise cost c, subject to a number of constraints,
namely, input prices and the number of units of output. We can state this as:

min c = wl + rk subject to q = f(k, l)

To find a solution, we make much use of a method known as the Lagrange
multiplier method. Let us introduce the method in a general way.
Consider first the problem of minimising or maximising a function f(x, y) when x and
y are restricted to satisfy an equality constraint g(x, y) = c. In the production
manager’s problem, note that g(x, y) = c is nothing but q = f(k, l). We state the
problem as:

min(max) f(x, y) subject to g(x, y) = c (1)


In the first step of the Lagrange multiplier method, we introduce a Lagrange
multiplier, denoted by λ, which is associated with the constraint g(x, y) = c. In the
second step, we define the Lagrangian function L by:
L(x, y) = f(x, y) - λ [g(x, y) – c] (2)

Note that the expression [g(x, y) – c] equals zero when the constraint is satisfied. In
that case, L(x, y) = f(x, y) for all (x, y) that satisfy the constraint g(x, y) = c.

How does the Lagrange multiplier method work?

To obtain the only possible solutions of the problem

min(max) f(x, y) subject to g(x, y) = c

we follow these steps:

Step 1: We construct the Lagrangian function L:

L(x, y) = f(x, y) - λ [g(x, y) – c]

Step 2: We differentiate the Lagrangian function L with respect to x, y and λ


and equate the partial derivatives to zero as a first-order necessary
condition for optimisation.

Step 3: Step 2 gives us the following three equations:

L′1(x, y) = f′1(x, y) − λg′1(x, y) = 0
L′2(x, y) = f′2(x, y) − λg′2(x, y) = 0
g(x, y) = c

Step 4: We solve the above three equations obtained in Step 3 for three
unknowns x, y and λ.

Example

Max x² + 3xy + y² subject to x + y = 100

Solution

Step 1: We construct the Lagrangian function L:

L(x, y) = x² + 3xy + y² − λ(x + y − 100)

Step 2: We differentiate the Lagrangian function L with respect to x, y and λ


and equate the partial derivatives to zero as a first-order necessary
condition for optimisation.
L′1(x, y) = 2x + 3y − λ = 0
L′2(x, y) = 3x + 2y − λ = 0
x + y = 100

Step 3: From Step 2, we get the following three equations:

2x + 3y = λ
3x + 2 y = λ
x + y = 100

Step 4: We solve the above three equations obtained in Step 3 for three
unknowns x, y and λ.

From the first two equations:

2x + 3y = 3x + 2y
−x + y = 0
x + y = 100

So we get x = 50, y = 50.

From the equation: 2 x + 3 y = λ , we get λ = 250.

Therefore, we obtain the solutions as: x = 50, y = 50, λ = 250.
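The four-step recipe can be checked by restricting f to the constraint. The sketch below is an addition to the notes; it verifies the stationarity conditions at x = y = 50, λ = 250 and confirms that x = 50 is a stationary point of f along the constraint:

```python
# Check the solution of: max x^2 + 3xy + y^2 subject to x + y = 100.

def f(x, y):
    return x**2 + 3 * x * y + y**2

x, y, lam = 50, 50, 250
assert 2 * x + 3 * y == lam    # L'_1 = 0
assert 3 * x + 2 * y == lam    # L'_2 = 0
assert x + y == 100            # the constraint holds

# Substitute y = 100 - x and confirm x = 50 is stationary along the constraint.
g = lambda t: f(t, 100 - t)
h = 1e-6
print(abs((g(50 + h) - g(50 - h)) / (2 * h)) < 1e-4)  # True
```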

Example

Suppose that U(x, y) is a utility function for a society, with x denoting the economic
activity level and y the level of pollution. Let U(x, y) be defined for all x > 0, y > 0
by:

U(x, y) = ln(x^α + y^α) − ln y^α (1)

We also assume that the level of pollution y depends on the activity level by the
equation:

y 3 − ax 4 − b = 0 (2)

Use Lagrange’s method to find the activity level that maximizes U(x, y) subject to the
constraint given by equation (2).

Solution

Step 1: We construct the Lagrangian function L:


L(x, y) = ln(x^α + y^α) − ln y^α − λ(y³ − ax⁴ − b)

Step 2: We differentiate the Lagrangian function L with respect to x, y and λ


and equate the partial derivatives to zero as a first-order necessary
condition for optimisation.
L′1(x, y) = αx^(α−1)/(x^α + y^α) + 4aλx³ = 0
L′2(x, y) = αy^(α−1)/(x^α + y^α) − αy^(α−1)/y^α − 3λy² = 0
L′λ = y³ − ax⁴ − b = 0

Step 3: From Step 2, we get the following three equations:

αx^(α−1)/(x^α + y^α) = −4aλx³   (A)

αy^(α−1)/(x^α + y^α) − αy^(α−1)/y^α = 3λy²
αy^(α−1)[1/(x^α + y^α) − 1/y^α] = 3λy²
αy^(α−1)[(y^α − x^α − y^α)/((x^α + y^α)y^α)] = 3λy²
αy^(α−1)[−x^α/((x^α + y^α)y^α)] = 3λy²
−αx^α/(y(x^α + y^α)) = 3λy²   (B)

y³ − ax⁴ − b = 0   (C)

Step 4: We solve the above three equations obtained in Step 3 for three
unknowns x, y and λ.

We divide (A) by (B), so that the λ terms cancel:

[αx^(α−1)/(x^α + y^α)] / [−αx^α/(y(x^α + y^α))] = −4aλx³/(3λy²)
−y/x = −4ax³/(3y²)
3y³ = 4ax⁴, i.e. x⁴ = 3y³/4a

Substituting x⁴ = 3y³/4a in (C), we get:

y³ − a(3y³/4a) − b = 0
y³/4 = b
y³ = 4b
y = (4b)^(1/3)

On the other hand, note x⁴ = 3y³/4a implies that y³ = 4ax⁴/3. From (C), we get:

4ax⁴/3 − ax⁴ − b = 0
ax⁴/3 = b
x⁴ = 3b/a
x = (3b/a)^(1/4)
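The closed-form solution can be sanity-checked numerically for particular parameter values. In the sketch below, a = 2, b = 6 and α = 0.5 are arbitrary illustrative numbers (assumptions, not from the notes):

```python
# Check x^4 = 3b/a and y^3 = 4b against the first-order conditions (A)-(C).

a, b, alpha = 2.0, 6.0, 0.5  # illustrative parameter values (assumed)

x = (3 * b / a) ** 0.25
y = (4 * b) ** (1 / 3)

# The constraint (C) holds: y^3 - a x^4 - b = 0.
print(abs(y**3 - a * x**4 - b) < 1e-9)  # True

# Back out lambda from FONC (A); then FONC (B) must hold as well.
lam = -alpha * x ** (alpha - 1) / (4 * a * x**3 * (x**alpha + y**alpha))
lhs_B = -alpha * x**alpha / (y * (x**alpha + y**alpha))
print(abs(lhs_B - 3 * lam * y**2) < 1e-9)  # True
```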

Interpretation of the Lagrange Multiplier

Consider again the problem:

min(max) f(x, y) subject to g(x, y) = c

Suppose we get x* and y* as the solutions for x and y when we solve the problem. Note
that x* and y* depend on c: x* = x*(c) and y* = y*(c). We assume that they are
differentiable functions of c.

Given this, the associated value of function f(x, y) also depends on c. Therefore, we
get:

f*(c) = f(x*(c), y*(c)) (3)

We call f*(c) the optimal value function.

We take the differential of (3) and obtain:

df*(c) = f′1(x*, y*)dx* + f′2(x*, y*)dy*
       = λg′1(x*, y*)dx* + λg′2(x*, y*)dy*
       = λ[g′1(x*, y*)dx* + g′2(x*, y*)dy*]

Note that taking the differential of the identity g(x*(c), y*(c)) = c gives:

g′1(x*, y*)dx* + g′2(x*, y*)dy* = dc

Therefore, we get:

df*(c) = λ[g′1(x*, y*)dx* + g′2(x*, y*)dy*] = λdc

λ(c) = df*(c)/dc

The above defines the rate at which the optimal value of the objective function
changes with respect to changes in the constraint constant c. Economists refer to λ as
the shadow price of the resource c.

What does the shadow price mean in utility maximisation? It is essentially the 'utility
value' of relaxing the budget constraint by one unit (e.g., one £).

What does the shadow price mean in profit maximisation? It is essentially the 'profit
value' of relaxing the constraint by one unit (e.g., one £).

What does the shadow price mean in cost minimisation? It is essentially the 'cost
value' of relaxing the constraint by one unit (e.g., one £).

Example

Solve the utility maximisation problem

max U(x, y) = √x + y subject to x + 4y = 100

a) Using the Lagrange method and find the quantities demanded for the two goods.

b) Suppose income increases from 100 to 101. What is the exact increase in the
optimal value of U(x, y)? Compare the value found in a) for the Lagrange multiplier.

a) L(x, y) = √x + y − λ(x + 4y − 100)

L1′(x, y) = (1/2)x^(−1/2) − λ = 0
L2′(x, y) = 1 − 4λ = 0 ⇒ λ = 1/4
x + 4y = 100

Then

(1/2)x^(−1/2) = 1/4 ⇒ x^(−1/2) = 1/2 ⇒ x = 4

From x + 4y = 100, we get y = 24.
b) Note if income has changed from 100 to 101, then the new budget constraint is:

x + 4y = 101

The first-order necessary conditions in terms of L1/ ( x, y ) and L/2 ( x, y ) will be the
same as stated in a). Therefore, the values of λ and x remain unchanged. However,
the equilibrium y increases from 24 to 97/4 (note the new budget constraint).

The exact increase in the optimal value of U(x, y) is ∆U = 105/4 − 26 = 1/4. This is the
same as the value of the Lagrange multiplier λ. But what is the reason behind it?

There is exact equality here because U is linear in one of the variables: which one, I
leave as an exercise for you to decide!
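The whole example can be verified numerically; the helper `solve` below is a hypothetical function introduced here just to replay the first-order conditions for a general income m:

```python
import math

def solve(m):
    """Maximise sqrt(x) + y s.t. x + 4y = m via the FOCs:
    (1/2)x^(-1/2) = lam and 1 = 4*lam, hence lam = 1/4 and x = 4."""
    lam = 1 / 4
    x = (1 / (2 * lam)) ** 2   # invert (1/2)x^(-1/2) = lam
    y = (m - x) / 4            # budget constraint
    return x, y, lam

x0, y0, lam = solve(100)       # x = 4, y = 24
x1, y1, _ = solve(101)         # x = 4, y = 97/4

U0 = math.sqrt(x0) + y0        # 2 + 24   = 26
U1 = math.sqrt(x1) + y1        # 2 + 97/4 = 105/4
assert math.isclose(U1 - U0, lam)   # exact increase equals lambda = 1/4
```

The exact (not just approximate) equality of ∆U and λ shows up directly in the last assertion.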

Lagrange’s Theorem

Suppose that f(x, y) and g(x, y) have continuous partial derivatives in a domain A
of the xy-plane, and that (x0, y0) is both an interior point of A and a local extreme
point for f(x, y) subject to the constraint g(x, y) = c. Suppose further that g/1(x0, y0)
and g/2(x0, y0) are not both 0. Then there exists a unique number λ such that the
Lagrangian function

L(x, y) = f(x, y) – λ [g(x, y) – c]

has a stationary point at (x0, y0).

Sufficient (Second-Order) Conditions

Under the hypotheses of Lagrange’s theorem, the Lagrange multiplier method for the
problem

min(max) f(x, y) subject to g(x, y) = c

gives necessary conditions for the solutions. However, we need to make sure that we
really have obtained the solutions. If the constraint set is closed and bounded, then
the extreme value theorem guarantees that a continuous function will attain both the
maximum and the minimum over this set. There may be situations where the first-order
necessary conditions give more than one point as a candidate solution. We need to
check which of them gives f its highest value (maximisation problem) or its lowest
value (minimisation problem). The sufficient conditions help us achieve this task.
We start with Concave/Convex Lagrangian.

Concave/Convex Lagrangian

Suppose, we get x* and y* as the solutions of x and y when we solve the problem:

min(max) f(x, y) subject to g(x, y) = c

then the Lagrangian function L(x, y) = f(x, y) – λ [g(x, y) - c] is stationary at (x*, y*).
But it does not guarantee that L(x, y) attains a maximum (minimum) at (x*, y*).

Suppose, (x*, y*) happens to maximise L(x, y) among all (x, y). Then we can write:

L(x*, y*) = f(x*, y*) – λ [g(x*, y*) - c] ≥ f(x, y) – λ [g(x, y) - c] for all (x, y)

If (x*, y*) also satisfies the constraint g(x*, y*) = c, then we obtain f(x*, y*) ≥ f(x, y)
for all (x, y) such that g(x, y) = c. In this case, we claim that (x*, y*) really is the
solution of the maximisation problem.

We can follow the same for the minimisation problem.

Suppose, (x*, y*) happens to minimise L(x, y) among all (x, y). Then we can write:

L(x*, y*) = f(x*, y*) – λ [g(x*, y*) - c] ≤ f(x, y) – λ [g(x, y) - c] for all (x, y)

If (x*, y*) also satisfies the constraint g(x*, y*) = c, then we obtain f(x*, y*) ≤ f(x, y)
for all (x, y) such that g(x, y) = c. In this case, we claim that (x*, y*) really is the
solution of the minimisation problem.

Note a stationary point for a concave (convex) function maximises (minimises) the
function. Therefore, we get the following result:

Consider the problem: min(max) f(x, y) subject to g(x, y) = c, and suppose (x*, y*) is a
stationary point for the Lagrangian L(x, y). We assume L(x, y) is a C2-function and that
(x*, y*) lies in a convex set S.

Result 1: If for all (x, y) in S,


L11″(x, y) ≤ 0, L22″(x, y) ≤ 0, and L11″(x, y)L22″(x, y) − [L12″(x, y)]² ≥ 0

then (x*, y*) solves the maximisation problem.

Result 2: If for all (x, y) in S,


L11″(x, y) ≥ 0, L22″(x, y) ≥ 0, and L11″(x, y)L22″(x, y) − [L12″(x, y)]² ≥ 0

then (x*, y*) solves the minimisation problem.
Note under result 1, the Lagrangian is concave and under result 2, the Lagrangian is
convex.
Example
Consider an individual consuming positive amounts x and y of two goods X and Y
respectively. The individual wants to minimise the total expenditure on purchasing
the goods. Price per unit of good X is p and that of Y is q. However, the individual
wants to attain a certain level of utility given by the utility function: U = Axαyβ with
α > 0, β > 0 and α + β ≤ 1. Therefore, the individual wants to minimise expenditure
given by px + qy subject to U = Axαyβ . Explain why the Lagrangian is convex, so
that a stationary point of the Lagrangian must minimize expenditure.

Solution

L = px + qy − λ ( Axα y β − U )
∂L
= p − λαAxα −1 y β = 0
∂x
∂L
= q − λβ Axα y β −1 = 0
∂y
∂L
= U − Axα y β = 0
∂λ

The second-order partial derivatives are:

L11″ = ∂²L/∂x² = −λα(α − 1)Ax^(α−2)y^β
L22″ = ∂²L/∂y² = −λβ(β − 1)Ax^α y^(β−2)
L12″ = ∂²L/∂x∂y = −λαβAx^(α−1)y^(β−1)
Note from the above we can infer that L11″ > 0 and L22″ > 0, provided λ > 0 (since
α − 1 < 0 and β − 1 < 0).

L11″(x, y)L22″(x, y) − [L12″(x, y)]²
= λ²αβ(α − 1)(β − 1)A²x^(2(α−1))y^(2(β−1)) − λ²α²β²A²x^(2(α−1))y^(2(β−1))
= λ²αβA²x^(2(α−1))y^(2(β−1))[(α − 1)(β − 1) − αβ]
= λ²αβA²x^(2(α−1))y^(2(β−1))[1 − (α + β)] ≥ 0

So the sufficient (second-order) conditions for the Lagrangian to be convex are
satisfied.
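A minimal numeric sketch of why the stationary point minimises expenditure, assuming the special case A = 1, α = β = 1/2 and arbitrary sample values p = 2, q = 3, target utility Ū = 6 (on the constraint, y = Ū²/x, so total cost is a convex function of x):

```python
import math

p, q, Ubar = 2.0, 3.0, 6.0

# Lagrange FOCs for this special case give x* = Ubar*sqrt(q/p), y* = (p/q)*x*
x_star = Ubar * math.sqrt(q / p)
y_star = (p / q) * x_star
cost_star = p * x_star + q * y_star

# Every other point on the utility constraint costs at least as much:
for x in [0.5, 1.0, 2.0, x_star, 10.0, 50.0]:
    y = Ubar**2 / x                       # keeps sqrt(x*y) = Ubar
    assert p * x + q * y >= cost_star - 1e-9

# The utility constraint holds at the candidate point:
assert math.isclose(math.sqrt(x_star * y_star), Ubar)
```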

Local Sufficient (Second-Order) Conditions

We sometimes are interested in conditions that are sufficient for local extreme points
of f(x, y) subject to g(x, y) = c. We would start with an example to show that the
Lagrange-multiplier method does not always provide the solution to an optimization
problem. Consider the problem:

min x 2 + y 2 subject to ( x-1)3 − y 2 = 0


Note the restriction:

( x-1)3 − y 2 = 0 ⇒ ( x-1)3 = y 2 ⇒ x ≥ 1

If x = 1, then y = 0 and x2 + y2 = 1

If x > 1, then y² = (x − 1)³ > 0 and x² + y² > 1

Therefore, x² + y² attains its minimum only at (1, 0).

Now we form the Lagrangian and set the first-order partial derivatives to be equal to
zero as a first-order necessary condition for minimisation (optimisation) problem

L( x, y ) = x 2 + y 2 − λ[( x − 1)3 − y 2 ]
∂L
= 2 x − 3λ ( x − 1) 2 = 0
∂x
∂L
= 2 y + 2λy = 0
∂y
∂L
= ( x − 1)3 − y 2 = 0
∂λ

From the first first-order condition, note that at x = 1 we get 2x − 3λ(x − 1)² = 2 ≠ 0
for any λ, so the condition is not satisfied. The Lagrange-multiplier method is
therefore not able to detect the solution (1, 0).
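The argument above can be replayed numerically by parametrising the feasible curve as x = 1 + t², y = t³ (a parametrisation introduced here purely for illustration):

```python
# On the feasible curve (x-1)^3 = y^2, a direct scan confirms that the
# minimum of x^2 + y^2 is attained at t = 0, i.e. at (x, y) = (1, 0).
def f(x, y):
    return x**2 + y**2

values = []
for i in range(-200, 201):
    t = i / 100.0
    x, y = 1 + t**2, t**3
    assert abs((x - 1)**3 - y**2) < 1e-9   # feasibility of the parametrisation
    values.append((f(x, y), x, y))

fmin, xmin, ymin = min(values)
assert (fmin, xmin, ymin) == (1.0, 1.0, 0.0)

# At (1, 0) the first FOC fails: dL/dx = 2x - 3*lam*(x-1)^2 = 2 for every
# lam, because the (x-1)^2 term vanishes at x = 1.
dL_dx = 2 * 1 - 0
assert dL_dx == 2
```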

What do we need here? The answer is Local Sufficient (Second-Order) Conditions,


which we state as follows:

We consider the problem local min(max) f(x, y) subject to g(x, y) = c and suppose that
(x*, y*) satisfies the first-order necessary conditions:

f1/ ( x, y ) = λg1/ ( x, y ), f 2/ ( x, y ) = λg 2/ ( x, y )

We define
D(x, y, λ) = (f11″ − λg11″)(g2′)² − 2(f12″ − λg12″)g1′g2′ + (f22″ − λg22″)(g1′)²
Now

(A) If D(x*, y*, λ) > 0 then (x*, y*) solves the local minimisation problem.

(B) If D(x*, y*, λ) < 0 then (x*, y*) solves the local maximisation problem.

The conditions on the sign of D(x*, y*, λ) are known as the local second-order
conditions.
Our question: Can we express D(x*, y*, λ) in some other way? The answer is yes.

Inspecting this expression, we can write D(x*, y*, λ) as a determinant:

               | 0           g1′(x, y)                   g2′(x, y)                 |
D(x, y, λ) = − | g1′(x, y)   f11″(x, y) − λg11″(x, y)    f12″(x, y) − λg12″(x, y)  |
               | g2′(x, y)   f21″(x, y) − λg21″(x, y)    f22″(x, y) − λg22″(x, y)  |

evaluated at (x*, y*).

We observe that the 2 × 2 submatrix:

| f11″(x, y) − λg11″(x, y)    f12″(x, y) − λg12″(x, y) |
| f21″(x, y) − λg21″(x, y)    f22″(x, y) − λg22″(x, y) |

is the Hessian of the Lagrangian function. Hence, the determinant D(x*, y*, λ) is
called a bordered Hessian, as its border, apart from the 0, consists of the first-order
partial derivatives of g.

In case of optimisation problem with one constraint [g(x, y) = c], with two variables
[(x, y)], we can express the determinant alternatively as:

               | 0           g1′(x, y)     g2′(x, y)  |
D(x, y, λ) = − | g1′(x, y)   L11″(x, y)    L12″(x, y) |
               | g2′(x, y)   L21″(x, y)    L22″(x, y) |

evaluated at (x*, y*).

Note L12″(x, y) = L21″(x, y) (recall Young’s theorem), so

               | 0           g1′(x, y)     g2′(x, y)  |
D(x, y, λ) = − | g1′(x, y)   L11″(x, y)    L12″(x, y) |
               | g2′(x, y)   L12″(x, y)    L22″(x, y) |

If we expand the above determinant along the first row, we get:

[g1′(x, y)]²L22″(x, y) − g1′(x, y)g2′(x, y)L12″(x, y) − g2′(x, y)g1′(x, y)L12″(x, y) + [g2′(x, y)]²L11″(x, y)
= [g2′(x, y)]²L11″(x, y) − 2g1′(x, y)g2′(x, y)L12″(x, y) + [g1′(x, y)]²L22″(x, y)

We note that the above expression is the same as the one we obtained earlier:

D(x, y, λ) = (f11″ − λg11″)(g2′)² − 2(f12″ − λg12″)g1′g2′ + (f22″ − λg22″)(g1′)²
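As a sanity check, the claimed equivalence between −1 times the bordered-Hessian determinant and the quadratic expression for D can be tested with arbitrary numbers standing in for the partial derivatives:

```python
# Compare -det(bordered Hessian) with the quadratic expression for D(x, y, lam),
# using arbitrary stand-in values for g1', g2', L11'', L12'', L22''.
def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

g1, g2, L11, L12, L22 = 1.5, -2.0, 0.7, 1.1, -0.4   # arbitrary values

bordered = [[0,  g1,  g2],
            [g1, L11, L12],
            [g2, L12, L22]]

D_det = -det3(bordered)
D_formula = g2**2 * L11 - 2 * g1 * g2 * L12 + g1**2 * L22

assert abs(D_det - D_formula) < 1e-12
```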
Example

The preferences of a consumer over two goods x and y are given by the utility
function: U(x, y) = (x + 1)(y + 1) = xy + x + y + 1. The prices of goods x and y are 1
and 2, respectively, and the consumer's income is 30. What bundle of goods will the
consumer choose? Does it satisfy second-order conditions?

Solution

The consumer's optimization problem can be stated as follows:

max U(x, y) subject to x + 2y = 30


The Lagrangian function is:

L = U(x, y) - λ (x + 2y - 30) = xy + x + y + 1 - λ (x + 2y - 30)

The first-order necessary conditions are:

∂L
= y +1− λ = 0
∂x
∂L
= x + 1 − 2λ = 0
∂y
∂L
= x + 2 y − 30 = 0
∂λ

From the first equation, we get λ = y +1 and we substitute the value of λ in the second
equation. We obtain:

x – 2y – 1 = 0 (1)

x + 2y – 30 = 0 (2)

We solve equations (1) and (2) for x and y. We obtain x = 31/2 and y = 29/4.

To check whether this solution really maximizes the objective function, let us apply
the second-order sufficient conditions. The second-order conditions involve the
bordered Hessian:
               | 0           g1′(x, y)     g2′(x, y)  |
D(x, y, λ) = − | g1′(x, y)   L11″(x, y)    L12″(x, y) |
               | g2′(x, y)   L12″(x, y)    L22″(x, y) |

      | 0   1   2 |
  = − | 1   0   1 |
      | 2   1   0 |

= −[0(0 − 1) − 1(0 − 2) + 2(1 − 0)]
= −[2 + 2]
= −4 < 0

Therefore, the solution x = 31/2 and y = 29/4 satisfies the second-order conditions and
represents the bundle demanded by the consumer.
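A quick numeric check of this example (the grid step of 0.01 for y is an arbitrary choice):

```python
# On the budget line x + 2y = 30, U = (x+1)(y+1) is a concave quadratic in y,
# so the FOC point x* = 31/2, y* = 29/4 should beat every other feasible bundle.
def U(x, y):
    return (x + 1) * (y + 1)

x_star, y_star = 31 / 2, 29 / 4
assert x_star + 2 * y_star == 30            # budget holds exactly

best = max(U(30 - 2 * y, y) for y in [i / 100 for i in range(0, 1501)])
assert U(x_star, y_star) >= best - 1e-9
```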

Simple Rule

We calculate the determinants of the principal minors of the Hessian. Then we examine
whether the determinants alternate in sign. For a local maximum, even- and
odd-numbered minors have opposite signs (D2 < 0 and D3 > 0, and so on). For a
local minimum, the determinants of the principal minors should have the same sign
(D2 > 0 and D3 > 0, and so on). We will deal with this in detail when we consider
optimisation problems with more than two variables.

Reading

Knut Sydsaeter, Peter Hammond (2006), Sections 14.1 – 14.4, pp 503 - 519

Carl Simon and Lawrence Blume (1994), Section: 18.1 – 18.2 pp 411- 424

Alpha C. Chiang (2005) Chapter 12


Constrained Optimisation
More Variables and More Constraints
Constrained optimisation problems in economics often involve more than just two
variables. Consider the problem involving n variables and m constraints:

max (min) f(x1,….., xn) subject to gj(x1,….., xn) = bj ; j = 1, 2,…., m < n (1)

f is called the objective function, g1, g2,… , gm are the constraint functions, b1, b2,..,bm
are the constraint constants. The difference n - m is the number of degrees of freedom
of the problem. Note that n is strictly greater than m.

If it is possible to express explicitly (from the constraint functions) m of the
variables as functions of the other n − m variables, we can eliminate m variables
from the objective function; the initial problem then reduces to an unconstrained
optimisation problem in n − m variables. However, in many cases it is not technically
feasible to express one variable explicitly as a function of the others.

We assume that f and g1, g2,…, gm are C1 functions and that the Jacobian
J = (∂gj/∂xi), i = 1, 2,…, n; j = 1, 2,…, m, has full rank: rank(J) = m. Associating a
Lagrange multiplier λj with the j-th constraint, we write the Lagrangian as:

m
L( x1 ,..., xn , λ1,... , λm ) = f ( x1 ,..., x n ) − ∑ λ j [ g j ( x1 ,..., x n ) − b j ]
j =1
What are the necessary conditions for the solutions of (1)?

We partially differentiate L with respect to x1,…., xn, λ1,…, λm and set all the partials
equal to zero. This gives us:

∂L(x1,…, xn, λ1,…, λm)/∂xi = ∂f(x1,…, xn)/∂xi − Σj=1..m λj ∂gj(x1,…, xn)/∂xi = 0,  i = 1, 2,…, n

∂L(x1,…, xn, λ1,…, λm)/∂λj = −gj(x1,…, xn) + bj = 0,  j = 1, 2,…, m

Note the above system gives us a total of n + m equations to solve for n + m
unknowns. If we face a problem with only one constraint, then we will have n + 1
equations to solve for n + 1 unknowns.

If x* = (x*1, … , x*n) is a solution of (1), it must be a stationary point of L. It is
important that rank(J) = m and that the functions are continuously differentiable.
It is crucial at this juncture to explain the Jacobian (J). We explain it in terms of the
following example.

Example 1: We consider the problem of maximising the function f(x, y) = ax + y
subject to the constraint x² + ay² = 1, where x > 0, y > 0 and a is a positive parameter.

Solution

Here the Lagrangian function is:

L = ax + y − λ ( x 2 + ay 2 − 1)

Differentiation of the Lagrangian Function with respect to x, y and λ gives us the first-
order necessary conditions:

∂L
= F 1 ( x, λ ; a ) = a − 2 xλ = 0
∂x
∂L
= F 2 ( y , λ ; a ) = 1 − 2 yaλ = 0
∂y
∂L
= F 3 ( x, y; a ) = x 2 + ay 2 − 1 = 0
∂λ

Given the system of three equations (formed by the first-order conditions), the
Jacobian (J) takes the following form:

    | ∂F1/∂x   ∂F1/∂y   ∂F1/∂λ |   | −2λ    0      −2x  |
J = | ∂F2/∂x   ∂F2/∂y   ∂F2/∂λ | = |  0    −2aλ    −2ay |
    | ∂F3/∂x   ∂F3/∂y   ∂F3/∂λ |   |  2x    2ay     0   |

Note the determinant of the Jacobian is | J | = -8aλ(x2+ay2) = -8aλ < 0 at the optimal
point. We also note that the rank of the Jacobian is 3. Here we have three equations
in three unknowns.
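Example 1 can be checked numerically for an arbitrary sample value a = 2. Dividing the first two FOCs gives x = a²y, and the constraint then forces y = 1/√(a⁴ + a); scanning the constraint curve via the parametrisation x = cos t, y = sin t/√a confirms this is the maximum:

```python
import math

a = 2.0                                  # arbitrary positive sample value
y_star = 1 / math.sqrt(a**4 + a)         # from x = a^2*y and x^2 + a*y^2 = 1
x_star = a**2 * y_star
f_star = a * x_star + y_star

assert math.isclose(x_star**2 + a * y_star**2, 1.0)   # constraint holds

# Scan the constraint curve x = cos(t), y = sin(t)/sqrt(a), t in (0, pi/2):
best = max(a * math.cos(t) + math.sin(t) / math.sqrt(a)
           for t in [k * math.pi / 2000 for k in range(1, 1000)])
assert f_star >= best - 1e-9             # no scanned point does better
```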

Next we introduce more examples, first with the case of more variables and then with
both more variables and more constraints.

Example 2: Consumer Optimisation Problem


A consumer demands x, y, z of three goods X, Y and Z to maximise the utility
function U(x, y, z) = x + √y − 1/z subject to the budget constraint px + qy + rz = m.

Solution

The Lagrangian function is:

L = x + y − 1 / z − λ ( px + qy + rz − m)

The first-order conditions give:

∂L/∂x = 1 − λp = 0                 (2)
∂L/∂y = (1/2)y^(−1/2) − λq = 0     (3)
∂L/∂z = z^(−2) − λr = 0            (4)
∂L/∂λ = px + qy + rz − m = 0       (5)

From (2) we get λ = 1/p. Substituting λ = 1/p in equation (3) we get:

(1/2)y^(−1/2) − q/p = 0
y^(−1/2) = 2q/p
y = p²/(4q²)

Substituting λ = 1/p in equation (4) we get:

z^(−2) − r/p = 0
z² = p/r
z = (p/r)^(1/2)
Inserting the values of y and z in equation (5) (the budget constraint), we obtain the
value of x:

x = m/p − p/(4q) − (r/p)^(1/2)

Given p, q and r > 0, we know that y and z > 0. For x > 0, we need that
m/p − p/(4q) − (r/p)^(1/2) > 0 ⇒ m > (pr)^(1/2) + p²/(4q).
Note the optimal solutions for x, y and z are the individual demand functions. We
observe that the demands for goods Y and Z depend not only on their own prices,
but also on the price of good X. An increase in the price of good X (p) leads to an
increase in the demand for good Y as well as in the demand for good Z.

∂y/∂p = 2p/(4q²) = p/(2q²) > 0

∂z/∂p = (1/2)p^(−1/2)r^(−1/2) > 0

We also note that demand for good X depends on prices of all the three goods, namely
X, Y and Z.

If we substitute the optimal values of x, y and z, in the utility function, then we can
express the utility function as a function of prices of different goods and income of
the consumer, known as the Indirect Utility Function.
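The demand functions of Example 2 can be verified by substituting them back into the first-order conditions; the prices and income below are arbitrary sample values:

```python
import math

p, q, r, m = 2.0, 1.0, 8.0, 20.0   # arbitrary sample prices and income

lam = 1 / p
y = p**2 / (4 * q**2)
z = math.sqrt(p / r)
x = m / p - p / (4 * q) - math.sqrt(r / p)

assert math.isclose(0.5 * y**-0.5 - lam * q, 0.0, abs_tol=1e-12)  # FOC (3)
assert math.isclose(z**-2 - lam * r, 0.0, abs_tol=1e-12)          # FOC (4)
assert math.isclose(p * x + q * y + r * z, m)                     # budget (5)
assert x > 0 and m > math.sqrt(p * r) + p**2 / (4 * q)            # interior solution
```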

Example 3 (More Variables, More Constraints)

Consider the problem: minimise x² + y² + z subject to x² + 2xy + y² + z² = a and
x + y + z = 1. We denote this as problem (A), where a is a constant.

(a) Use Lagrange’s method to set up necessary conditions for a minimum.


(b) Find the solution of (A) when a = 5/2. (You can take it as given that the minimum
exists.)

Solution

(a) The Lagrangian is:

L = x 2 + y 2 + z − λ1( x 2 + 2 xy + y 2 + z 2 − a ) − λ2 ( x + y + z − 1)

The first-order (necessary conditions):


∂L/∂x = 2x − 2λ1x − 2λ1y − λ2 = 0 ⇒ 2x − 2λ1(x + y) − λ2 = 0   (6)
∂L/∂y = 2y − 2λ1x − 2λ1y − λ2 = 0 ⇒ 2y − 2λ1(x + y) − λ2 = 0   (7)
∂L/∂z = 1 − 2λ1z − λ2 = 0                                      (8)
∂L/∂λ1 = x² + 2xy + y² + z² − a = 0                            (9)
∂L/∂λ2 = x + y + z − 1 = 0                                     (10)

(b) From (6) and (7), we get x = y. Now it is also given that a = 5/2. We use (9)
and (10) to get:

4x² + z² − 5/2 = 0
2x + z − 1 = 0 ⇒ z = 1 − 2x
4x² + (1 − 2x)² − 5/2 = 0
4x² + 1 − 4x + 4x² − 5/2 = 0
8x² − 4x − 3/2 = 0

x = [4 ± √(16 − 4(8)(−3/2))]/(2·8)

⇒ x = 3/4 or −1/4

Then y = 3 / 4 or − 1 / 4

Given x and y, z = −1 / 2 or 3 / 2

If x = ¾, y = ¾ and z = -1/2 then

f(x, y, z) = 5/8

If x = -1/4, y = -1/4 and z = 3/2 then

f(x, y, z) = 13/8

Note the first set of values solves the minimisation problem.
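A minimal check of Example 3: both stationary points satisfy the two constraints, and the first one gives the smaller objective value:

```python
# Check of Example 3 with a = 5/2.
def f(x, y, z):
    return x**2 + y**2 + z

def feasible(x, y, z, a=2.5):
    return (abs(x**2 + 2*x*y + y**2 + z**2 - a) < 1e-12
            and abs(x + y + z - 1) < 1e-12)

p1 = (0.75, 0.75, -0.5)
p2 = (-0.25, -0.25, 1.5)

assert feasible(*p1) and feasible(*p2)
assert f(*p1) == 0.625      # 5/8
assert f(*p2) == 1.625      # 13/8
assert f(*p1) < f(*p2)      # the first point solves the minimisation problem
```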

Comparative Statics

We generalise the interpretation of the Lagrange multiplier to n variables and m
constraints. Consider the problem:

max (min) f(x1,….., xn) subject to gj(x1,….., xn) = bj ; j = 1, 2,…., m < n (11)

Let x* = (x*1, …., x*n) be a solution of (11) that depends on b1,….,bm. We assume that
each x*i = x*i(b1,….,bm) is a differentiable function of b1,….,bm. The associated value
f* = f*(x*1, …., x*n) is also then a function of b1,….,bm. If we put b = (b1,….,bm), the
resulting value is:

f*(b) = f (x*(b)) = f (x*1(b),…., x*n(b)) (12)

The function f* is called the optimal value function for problem (11). The Lagrange
multipliers associated with x* also depends on b1,….,bm and under certain regularity
conditions, we get the following result:
∂f*(b)/∂bj = λj(b)    (13)
The Lagrange multiplier λj is referred to as the shadow price or marginal value imputed
to a unit of resource j. If we change b = (b1,….,bm) by db = (db1,….,dbm) and if
db1,….,dbm are small in absolute value, then from (13) we get:

f*(b + db) - f*(b) ≈ λ1(b)db1 + ….+ λm(b)dbm (14)

The Implicit Function Theorem

If f(x, y) ∈ C(k) in a set D, (x*, y*) is an interior point of D, f(x*, y*) = b (b is a
constant), and fy′(x*, y*) ≠ 0, then the equation f(x, y) = b defines y as a C(k) function
of x in some neighbourhood of (x*, y*), i.e., y = φ(x), and

dy/dx = −fx′(x, y)/fy′(x, y).

Let us generalise the above in case of the system of m equations with n exogenous
variables (x1,….., xn) and m endogenous variables (y1,….., ym):
f 1 ( x1 ,..., xn , y1 ,..., y m ) = 0

f 2 ( x1 ,..., x n , y1 ,..., y m ) = 0
(15)
.
f m ( x1 ,..., xn , y1 ,..., y m ) = 0
We want to evaluate ∂yj/∂xi. Taking the total differential of each fj, we get:
∂f 1 ∂f 1 ∂f 1 ∂f 1 ∂f 1 ∂f 1
dy1 + dy 2 + ... + dy m = −( dx1 + dx 2 + ... + dx n )
∂y1 ∂y 2 ∂y m ∂x1 ∂x2 ∂xn

∂f 2 ∂f 2 ∂f 2 ∂f 2 ∂f 2 ∂f 2
dy1 + dy 2 + ... + dy m = −( dx1 + dx 2 + ... + dxn )
∂y1 ∂y 2 ∂y m ∂x1 ∂x 2 ∂x n
.
∂f m ∂f m ∂f m ∂f m ∂f m ∂f m
dy1 + dy 2 + ... + dy m = −( dx1 + dx2 + ... + dxn )
∂y1 ∂y 2 ∂y m ∂x1 ∂x 2 ∂xn
We allow only xi to vary, keeping all the other x’s fixed, and divide each remaining
term by dxi. We get:

∂f 1 ∂y1 ∂f 1 ∂y 2 ∂f 1 ∂y m ∂f 1
+ + ... + =−
∂y1 ∂xi ∂y 2 ∂xi ∂y m ∂xi ∂xi

∂f 2 ∂y1 ∂f 2 ∂y 2 ∂f 2 ∂y m ∂f 2
+ + ... + =−
∂y1 ∂xi ∂y 2 ∂xi ∂y m ∂xi ∂xi
.
∂f m ∂y1 ∂f m ∂y 2 ∂f m ∂y m ∂f m
+ + ... + =−
∂y1 ∂xi ∂y 2 ∂xi ∂y m ∂xi ∂xi
We can solve the above system of equations for ∂y1/∂xi, …, ∂ym/∂xi using Cramer’s
rule or matrix inversion. We define the Jacobian matrix of f1,…,fm with respect to
y1,…,ym as:

    | ∂f1/∂y1   .   .   ∂f1/∂ym |
J = |    .      .   .      .    |
    | ∂fm/∂y1   .   .   ∂fm/∂ym |

Suppose f1,…,fm are C(k) functions in a set D ⊂ Rn+m and let (x*, y*) = (x*1, .., x*n, y*1,
.., y*m) be a solution to (15) in the interior of D. We also suppose that | J | ≠ 0 at (x*,
y*). Given this, (15) defines (y1,…,ym) as C(k) functions of (x1,…,xn) in some
neighbourhood of (x*, y*) and in that neighbourhood:

| ∂y1/∂xj |             | ∂f1/∂xj |
|    .    |  =  −J^(−1) |    .    |      (16)
| ∂ym/∂xj |             | ∂fm/∂xj |
The above, given by (16), is the General Implicit Function Theorem.

Next, we raise the question: what is the use of the Implicit Function Theorem? We
show this in terms of the following example with profit maximisation.

Example 4 (The Profit-Maximising Firm)


Consider that a firm has the profit function π(l, k) = pf(l, k) - wl - rk where f is the
firm's production function, p is the price of output, l, k are the amount of labour and
capital employed by the firm, w is the real wage and r is the real rental price of
capital. We assume that p, w and r are exogenously given. Assume that the Hessian
matrix of f is negative definite. Prove that if the wage increases by a small amount,
then the firm decides to employ less labour.
Solution

The firm wants to maximise profit by employing optimal l and k. The first-order
conditions are:

pfl(l, k) − w = 0,  or  F1(l, k; w) = 0
pfk(l, k) − r = 0,  or  F2(l, k; r) = 0
The Jacobian matrix takes the following form:

    | ∂F1/∂l   ∂F1/∂k |       | f11(l, k)   f12(l, k) |
J = |                  | = p · |                       |
    | ∂F2/∂l   ∂F2/∂k |       | f12(l, k)   f22(l, k) |

where fij is the second-order partial derivative of f with respect to the jth and the ith
arguments.

The determinant of the Jacobian:

| J | = p² | H | > 0

where | H | is the determinant of the Hessian of f. Given that H is negative definite,
f11 < 0 and | H | > 0 (which also implies f22 < 0).

Since | J | ≠ 0, we can apply the implicit-function theorem. Note in this context, l and k
are implicit functions of w. The first-order partial derivative of l with respect to w is:

            | −1   pf12(l, k) |
∂l/∂w = −   |  0   pf22(l, k) | / | J |  =  pf22(l, k)/(p²| H |)  =  f22/(p| H |)  <  0

The above implies that if the wage increases by a small amount, then the firm decides
to employ less labour.
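The comparative-statics claim can be illustrated numerically. The Cobb-Douglas technology f(l, k) = l^0.3 k^0.4 below is an assumed example (its Hessian is negative definite for l, k > 0), not part of the original problem; the closed-form labour demand comes from solving the two FOCs:

```python
# Solving  p*a*l^(a-1)*k^b = w  and  p*b*l^a*k^(b-1) = r:
# the FOC ratio gives k = (b*w)/(a*r) * l, which substituted into the
# first FOC yields the labour demand below.
a, b, p, r = 0.3, 0.4, 1.0, 1.0

def labour_demand(w):
    return (w / (p * a * ((b * w) / (a * r))**b)) ** (1 / (a + b - 1))

def foc_residuals(w):
    l = labour_demand(w)
    k = (b * w) / (a * r) * l
    return (p * a * l**(a - 1) * k**b - w, p * b * l**a * k**(b - 1) - r)

for w in (0.5, 1.0, 2.0):
    r1, r2 = foc_residuals(w)
    assert abs(r1) < 1e-9 and abs(r2) < 1e-9    # FOCs hold at the solution

# A rise in the wage lowers the optimal amount of labour employed:
assert labour_demand(1.0) > labour_demand(1.5) > labour_demand(2.0)
```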

The Envelope Theorem

Let us consider the following problem:

max(min)x f(x, r) subject to gj(x, r) = 0, j = 1, ..., m (17)


where x = (x1,…., xn) and r = (r1,…., rk)

Suppose that λj = λj(r), j = 1, ..., m are the Lagrange multipliers obtained from the
first-order conditions for problem (17). In this case, let the Lagrangian function be:

m
L(x, r) = f (x, r) - ∑ λ j g j (x, r)
j =1

Under certain conditions, we get the following result known as Envelope Theorem:
∂f*(r)/∂ri = [∂L(x, r)/∂ri] evaluated at x = x*(r),  i = 1, …, k    (18)

Note, f * (r) changes for two reasons:

a) With a change in ri, the vector r changes and hence f(x, r) changes directly

b) With a change in ri, all the functions x*1(r), …., x*n(r) change, and therefore
f(x*(r), r) changes indirectly.

However, the Envelope Theorem tells us that we can ignore the second effect: at the
optimum, the indirect effect vanishes to first order.

Example 5 (Revisiting the Consumer Optimisation Problem)

Note in Example 2, we said that if we substitute the optimal values of x, y and z, in the
utility function, then we can express the utility function as a function of prices of
different goods and income of the consumer, known as the Indirect Utility Function.
We start with this in a general setup.

Let U * ( p1 , p 2 ,.., p n , m) be the indirect utility function, i.e., the maximum utility
obtainable when prices are ( p1 , p 2 ,.., p n ) and the income is m. Given equation (13),
we get:

∂U *
λ= .
∂m

Hence, λ is the rate of change in maximum utility when income changes. λ is called
the marginal utility of income.

Note, we can write the Lagrangian as:

L( x1 ,..., xn , p1 ,..., p n , m ) = U ( x1 ,..., x n ) − λ ( p1 x1 + ... + p n x n − m )

∂L ∂L
= λ and = −λxi
∂m ∂pi
Therefore, using the Envelope Theorem (equation (18)), we get:

∂U*(p1,…, pn, m)/∂m = ∂L(p1,…, pn, m)/∂m = λ

∂U*(p1,…, pn, m)/∂pi = ∂L(p1,…, pn, m)/∂pi = −λxi*

The last expression is known as Roy’s Identity. Roy’s identity tells us that the
marginal disutility of a price increase is the marginal utility of income multiplied by
the optimal quantity demanded.

Example
Solve the utility maximisation problem: max x + a ln y subject to px + qy = m, where
0 ≤ a < m/p. Find the value function f*(a, p, q,m) and compute its partial derivatives
with respect to all the four variables. Check if the results accord with the envelope
result.
Solution
The Lagrangian function is:
L = x + a ln y − λ ( px + qy − m )

The first-order conditions are:


Lx′ = 1 − λp = 0 ⇒ λ = 1/p
Ly′ = a/y − λq = 0 ⇒ y = a/(λq) = ap/q
Lλ′ = px + qy − m = 0 ⇒ px = m − qy = m − q(ap/q) = m − ap, so x = m/p − a
The value function is:

f*(a, p, q, m) = m/p − a + a ln(ap/q) = m/p − a + a ln a + a ln p − a ln q
The partial derivatives of the value function with respect to all the four variables give:

∂f*/∂a = ln(ap) − ln(q)
∂f*/∂p = −m/p² + a/p = (ap − m)/p²
∂f*/∂q = −a/q
∂f*/∂m = 1/p
Given the Lagrangian L = x + a ln y − λ(px + qy − m):

∂L*/∂a = ln y*
∂L*/∂p = −λx*
∂L*/∂q = −λy*
∂L*/∂m = λ

Note we are able to verify not only the Envelope Theorem but also Roy’s Identity.
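The partial derivatives above can also be checked against central finite differences of the value function; the evaluation point a, p, q, m = 2, 1, 4, 10 is an arbitrary choice satisfying a < m/p:

```python
import math

def f_star(a, p, q, m):
    """Value function of max x + a*ln(y) s.t. px + qy = m (interior case)."""
    return m / p - a + a * math.log(a * p / q)

a, p, q, m = 2.0, 1.0, 4.0, 10.0
h = 1e-6

def d(fun, x):                      # central finite difference
    return (fun(x + h) - fun(x - h)) / (2 * h)

checks = [
    (d(lambda t: f_star(t, p, q, m), a), math.log(a * p / q)),   # = ln y*
    (d(lambda t: f_star(a, t, q, m), p), (a * p - m) / p**2),    # = -lam*x*
    (d(lambda t: f_star(a, p, t, m), q), -a / q),                # = -lam*y* (Roy)
    (d(lambda t: f_star(a, p, q, t), m), 1 / p),                 # = lam
]
for numeric, envelope in checks:
    assert abs(numeric - envelope) < 1e-5
```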
A similar situation arises in the case of the cost minimisation problem for a firm.
Suppose a firm wants to minimise the cost of production subject to a given level of
output. Therefore, the problem can be stated as:

Minimise C = r K + w L subject to the constraint F(K, L) = Q

We want to find the values of K and L that minimise the cost of producing Q
units.

Let C* = C(r, w, Q) be the value function for the problem.

Note the Lagrangian is:

L = rK + wL − λ ( F ( K , L ) − Q )

Therefore,

∂L/∂r = K
∂L/∂w = L
∂L/∂Q = λ

According to the Envelope Theorem,

∂C*/∂r = K*
∂C*/∂w = L*
∂C*/∂Q = λ
The first two equalities are known as Shephard’s Lemma. Note the last one shows the
rate at which the minimum cost changes with respect to changes in output; hence, λ
equals the marginal cost. Note in the case of the consumer optimisation exercise, the
consumer expenditure function can be defined as E(p, U) = min { Σ pi xi : U(x) ≥ U },
where U is the specified utility level. In this case, Shephard’s Lemma gives demand as a
function of prices and utility, known as the compensated demand curve.
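A finite-difference illustration of Shephard's Lemma, assuming the specific technology F(K, L) = √(KL), for which the cost function C*(r, w, Q) = 2Q√(rw) and the conditional demands K* = Q√(w/r), L* = Q√(r/w) are easy to derive:

```python
import math

def C_star(r, w, Q):
    """Minimum cost of producing Q with F(K, L) = sqrt(K*L)."""
    return 2 * Q * math.sqrt(r * w)

r, w, Q = 4.0, 1.0, 3.0            # arbitrary sample prices and output
K_star = Q * math.sqrt(w / r)
L_star = Q * math.sqrt(r / w)
h = 1e-6

dC_dr = (C_star(r + h, w, Q) - C_star(r - h, w, Q)) / (2 * h)
dC_dw = (C_star(r, w + h, Q) - C_star(r, w - h, Q)) / (2 * h)

assert abs(dC_dr - K_star) < 1e-6   # Shephard's Lemma: dC*/dr = K*
assert abs(dC_dw - L_star) < 1e-6   # Shephard's Lemma: dC*/dw = L*
assert math.isclose(r * K_star + w * L_star, C_star(r, w, Q))
```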

Reading

Knut Sydsaeter, Peter Hammond (2006), Sections 14.3 – 14.6, pp 520 - 532

Carl Simon and Lawrence Blume (1994), Section: 19.1 – 19.3 pp. 448 - 469
Nonlinear Programming
In the classical method for constrained optimization seen thus far, the constraints have
always been strict equalities. Some economic problems call for weak inequality
constraints, however, as when individuals want to maximize utility subject to
spending not more than x pounds, or businesses seek to minimize costs subject to
producing no less than x units of output.

Nonclassical optimization, known as mathematical programming, tackles problems
with inequality constraints. Mathematical programming includes linear programming
and nonlinear programming. In linear programming, the objective function and all
inequality constraints are linear. When either the objective function or an inequality
constraint is nonlinear, we face a problem of nonlinear programming. The nonlinear
programming problem is that of choosing nonnegative values of certain variables so
as to maximize or minimize a given (non-linear) function subject to a given set of
(non-linear) inequality constraints.

Let us begin with the simple nonlinear programming problem:

max f(x, y) subject to z(x, y) ≤ c (1)

Note the above problem involves one inequality constraint. Here we want to find the
largest value attained by f(x, y) in the feasible set S of all pairs (x, y) satisfying
z(x, y) ≤ c. We can think of f(x, y) as representing the utility function of a consumer
and z(x, y) ≤ c as the budget constraint. In this case, we seek to determine the largest
utility attained by the consumer, given that the consumption bundle lies in the
feasible set S of all pairs (x, y) satisfying the budget constraint.

Generalisation of the problem given by (1) would imply:

max f ( x1 ,..., x n ) subject to z j ( x1 ,..., xn ) ≤c j , j = 1,2,..m, x1 ≥ 0,...., x n ≥ 0 (2)

The minimisation problem can then be:

min f ( x1 ,..., x n ) subject to z j ( x1 ,..., xn ) ≥c j , j = 1,2,..m, x1 ≥ 0,...., x n ≥ 0 (3)

If we compare (2) and (3), we can say that the direction of inequality at the constraint
is only a convention as the first inequality can be converted to the second one by
simply multiplying it with -1. We can also say that an equality constraint can be
written in terms of two inequality constraints. For example, the constraint z k = ck
can be written as z k ≤ ck and − z k ≤ −c k . A constraint z k ≤ ck is binding at x* if
z k ( x* ) = ck .
Let us examine the problem graphically using (1) with the example of consumer
optimisation problem. Given the price, the consumer can either maximise utility
subject to the budget constraint or minimise expenditure subject to utility constraint.
This implies:

max U(x) subject to Σ pi xi ≤ m    or    min Σ pi xi subject to U(x) ≥ U

where m is the income of the consumer and U is the level of utility that the consumer
wants to attain (pre-specified).

There is no reason to insist that a consumer spend all her wealth, so that her
optimization problem should be formulated with inequality constraints.

We now try to solve this problem using an extension of the Lagrange multiplier
method, due originally to H. W. Kuhn and A. W. Tucker.

Kuhn-Tucker Conditions

Assuming the functions in (2) are differentiable and if x* is the optimal solution to (2),
it should satisfy the following conditions (the Kuhn-Tucker necessary maximum
conditions or KT conditions):
∂L/∂xi ≤ 0,  xi ≥ 0,  and  xi (∂L/∂xi) = 0
∂L/∂λj ≥ 0,  λj ≥ 0,  and  λj (∂L/∂λj) = 0  (complementary slackness condition)
where
m
L( x1 , x 2 ,..., x n , λ1 , λ 2 ,..., λ m ) = f ( x1 , x 2 ,..., x n ) + ∑ λ j (c j − z j (x1 , x 2 ,..., x n ))
j =1

Similarly, the Kuhn-Tucker necessary minimum conditions can be stated as:

∂L/∂xi ≥ 0,  xi ≥ 0,  and  xi (∂L/∂xi) = 0
∂L/∂λj ≤ 0,  λj ≥ 0,  and  λj (∂L/∂λj) = 0  (complementary slackness condition)
where
m
L( x1 , x 2 ,..., x n , λ1 , λ 2 ,..., λ m ) = f ( x1 , x 2 ,..., x n ) + ∑ λ j (c j − z j (x1 , x 2 ,..., x n ))
j =1

Example 1:

Maximise U = xy subject to x + y ≤ 100, x ≤ 40 and x, y ≥ 0

Solution

The Lagrangian is:

L = xy + λ1 (100 − x − y ) + λ 2 ( 40 − x )

The Kuhn-Tucker conditions for maximisation are:

Lx = ∂L/∂x = y − λ1 − λ2 ≤ 0,  x ≥ 0,  xLx = 0
Ly = ∂L/∂y = x − λ1 ≤ 0,  y ≥ 0,  yLy = 0
Lλ1 = ∂L/∂λ1 = 100 − x − y ≥ 0,  λ1 ≥ 0,  λ1Lλ1 = 0
Lλ2 = ∂L/∂λ2 = 40 − x ≥ 0,  λ2 ≥ 0,  λ2Lλ2 = 0

Here x = 0 or y = 0 does not make any sense, as either would imply U = xy = 0.
Therefore, both x and y have to be nonzero, which implies Lx = 0 and Ly = 0.
This gives:

y − λ1 − λ 2 = x − λ1 = 0
y − λ2 = x
Now if x ≤ 40 is not binding, then λ2 = 0 and this would imply x = y. Given the budget
constraint, the trial solution is x = y = 50. But this would violate the
constraint x ≤ 40, and hence the optimal solution for x must be x* = 40. This in turn
implies that the optimal solution for y must be y* = 60. The optimal values of the
Lagrange multipliers are λ1* = 40 and λ2* = 20.
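The Kuhn-Tucker solution can be confirmed by brute force over the integer points of the feasible set:

```python
# Maximise U = x*y subject to x + y <= 100, x <= 40, x >= 0, y >= 0,
# scanning every integer-valued feasible bundle.
best = max(
    (x * y, x, y)
    for x in range(0, 41)            # x <= 40
    for y in range(0, 101 - x)       # x + y <= 100
)
assert best == (2400, 40, 60)        # U* = 40 * 60 = 2400 at (x*, y*) = (40, 60)
```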

Please go through Example 2 (page 542) of the textbook.

Interpretations of Kuhn-Tucker Conditions

Suppose we are dealing with a maximisation problem. The firm wants to maximise
profit (revenue) from producing n goods subject to m resource (factor) constraints.

xi is the amount produced of the i-th product; cj is the amount of the j-th resource
available; f is the profit (revenue) function; zj is a function which shows how the j-th
resource is used in producing the n goods. The optimal solution to the maximisation
problem indicates the optimal quantity of each good the firm should produce. We
define the following:

fi = ∂f/∂xi : marginal profit (revenue) of product i
λj : shadow price of resource j
zij = ∂zj/∂xi : amount of resource j used in producing a marginal unit of product i
λj·zij : imputed cost of resource j in producing a marginal unit of product i
The Kuhn-Tucker condition ∂L/∂xi ≤ 0 is nothing but fi = ∂f/∂xi ≤ Σ_{j=1}^{m} λj·zij. This implies
that the marginal profit of the i-th product cannot exceed the aggregate marginal
imputed cost of the i-th product. The condition xi(∂L/∂xi) = 0 implies that in order for
product i to be produced, its marginal profit must equal its aggregate imputed
marginal cost; otherwise the good would not be produced.

The Kuhn-Tucker condition ∂L/∂λj ≥ 0 states that the total amount of resource j used in
producing all n goods should not exceed the total amount available. The condition
λj(∂L/∂λj) = 0 implies that if a resource is not fully used, then its shadow price equals 0.

Kuhn-Tucker Sufficient Condition


Consider the problem: max f(x, y) subject to z(x, y) ≤ c, and assume that (x*, y*)
satisfies the Kuhn-Tucker necessary conditions for a maximum. If the Lagrangian
L(x, y) is concave, then (x*, y*) solves the problem.

Value Function

Consider the following problem:


max f(x1, …, xn) subject to zj(x1, …, xn) ≤ cj, j = 1, 2, …, m, and x1 ≥ 0, …, xn ≥ 0   (2)

The optimal value of the objective function f(x) depends on c1, c2, …, cm. The
function defined as f*(c) = max{f(x) : zj(x) ≤ cj, j = 1, 2, …, m} assigns to each c =
(c1, c2, …, cm) the optimal value of the objective function. It is known as the value
function for the problem. The value function satisfies the following two properties:

i) f*(c) is non-decreasing in each variable c1, c2, …, cm.

ii) If ∂f*(c)/∂cj exists, then it equals λj(c).

Note that the value function f*(c) need not be differentiable and can have sudden
changes of slope.
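Property (ii) can be illustrated numerically. For the problem max xy subject to x + y ≤ c (Example 2 below, with c = 6), the value function is f*(c) = (c/2)² and the multiplier is λ(c) = c/2, so ∂f*/∂c = λ(c). A rough sketch (the grid size and difference step are our own choices):

```python
def f_star(c, n=2000):
    # Value function of: max xy s.t. x + y <= c, x, y >= 0.
    # The optimum lies on the boundary x + y = c, so search that segment.
    return max(i * (n - i) * (c / n) ** 2 for i in range(n + 1))

c, h = 6.0, 1e-3
slope = (f_star(c + h) - f_star(c - h)) / (2 * h)  # numerical df*/dc
print(round(slope, 6))  # → 3.0, which equals the multiplier lambda = c/2
```

The numerical slope at c = 6 matches λ = 3, the multiplier found in Example 2.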

Please go through Example 3 (page 543-544) of the textbook.

Example 2:

Consider the problem: max xy subject to x + y ≤ 6, x ≥ 0 and y ≥ 0.

Solution

The Lagrangian is:

L = xy + λ(6 − x − y)

The Kuhn-Tucker conditions for maximisation are:

Lx = ∂L/∂x = y − λ ≤ 0,  x ≥ 0,  x·Lx = x(y − λ) = 0
Ly = ∂L/∂y = x − λ ≤ 0,  y ≥ 0,  y·Ly = y(x − λ) = 0
Lλ = ∂L/∂λ = 6 − x − y ≥ 0,  λ ≥ 0,  λ·Lλ = λ(6 − x − y) = 0

If x > 0, then from the first set of conditions we have y = λ. If y = 0 in this case, then
λ = 0, so the second set of conditions implies x ≤ 0, contradicting x > 0. Hence y > 0,
and thus x = λ. Since λ = x > 0, the third set of conditions gives x + y = 6, so x = y = λ = 3.
If x = 0: if y > 0, then λ = x = 0 from the second set of conditions, and the first
condition y − λ = y ≤ 0 then contradicts y > 0. Thus y = 0, and hence λ = 0 from the third set of conditions.

We conclude that there are two solutions of the Kuhn-Tucker conditions, in this case
(x, y, λ) = (3, 3, 3) and (0, 0, 0). Since the value of the objective function at (3, 3) is
greater than the value of the objective function at (0, 0), the solution of the problem is
(3, 3).
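The analytical solution can be cross-checked by a quick grid search along the binding constraint (an illustrative sketch; the grid step is our choice):

```python
# Brute-force cross-check of Example 2: max xy s.t. x + y <= 6, x, y >= 0.
# The objective rises with both variables, so the optimum lies on x + y = 6.
best_val, best_x = max(((i / 100) * (6 - i / 100), i / 100) for i in range(601))
print(best_val, best_x)  # → 9.0 3.0
```

The grid maximum xy = 9 at x = y = 3 agrees with the Kuhn-Tucker solution (3, 3).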

Example 3:

Consider the problem: max x²y² subject to 2x + y ≤ 2, x ≥ 0 and y ≥ 0.

Solution

The Lagrangian is:

L = x²y² + λ(2 − 2x − y)

The Kuhn-Tucker conditions for maximisation are:

Lx = ∂L/∂x = 2xy² − 2λ ≤ 0,  x ≥ 0,  x·Lx = x(2xy² − 2λ) = 0
Ly = ∂L/∂y = 2x²y − λ ≤ 0,  y ≥ 0,  y·Ly = y(2x²y − λ) = 0
Lλ = ∂L/∂λ = 2 − 2x − y ≥ 0,  λ ≥ 0,  λ·Lλ = λ(2 − 2x − y) = 0

If x = 0 and y = 0 then from the third set of conditions we have λ = 0.

If x > 0 and y = 0 then from the first set of conditions we get λ = 0.

If x = 0 and y > 0 then from the second set of conditions we get λ = 0.

If x > 0 and y > 0, then the first two sets of conditions give 2xy² = 2λ ⇒ xy² = λ,
and 2x²y = λ. Together these imply y = 2x and λ > 0. From the third set of
conditions we then get 2x + y = 2, so y = 1 and hence x = ½.

We conclude that the Kuhn-Tucker conditions are satisfied at (x, y, λ) = (1/2, 1, 1/2)
and at the points with x = 0 or y = 0 (where λ = 0), all of which give objective value 0.
Since the value of the objective function at (1/2, 1) is 1/4 > 0, the solution of the
problem is (1/2, 1).
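Here too a grid search along the binding constraint confirms the answer (a sketch; the grid resolution is ours):

```python
# Brute-force cross-check of Example 3: max x^2*y^2 s.t. 2x + y <= 2, x, y >= 0.
# The objective rises with both variables, so the optimum lies on 2x + y = 2;
# search over x in [0, 1] with y = 2 - 2x.
best_val, best_x = max((((i / 400) ** 2) * ((2 - 2 * i / 400) ** 2), i / 400)
                       for i in range(401))
print(best_val, best_x, 2 - 2 * best_x)  # → 0.25 0.5 1.0
```

The grid maximum x²y² = 1/4 at (1/2, 1) matches the Kuhn-Tucker solution.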

Example 4:

Suppose a firm wants to maximise its sales revenue subject to the constraint that its
profit is not less than 10. Let q be the amount of the good to be supplied in the
market. The revenue function is R(q) = 20q − q² and the cost function is
C(q) = q² + 6q + 2. Find the sales-maximising quantity for the firm.

Solution

Firm’s problem:

Max R(q) subject to R(q) − C(q) ≥ 10 and q ≥ 0. This implies:

Max R(q) subject to
20q − q² − q² − 6q − 2 ≥ 10 ⇒ −2q² + 14q − 12 ≥ 0 ⇒ −q² + 7q − 6 ≥ 0 ⇒ q² − 7q + 6 ≤ 0
and q ≥ 0.

The Lagrangian Function is:

L = 20q − q² − λ(q² − 7q + 6)

The Kuhn-Tucker Conditions are:

∂L/∂q = 20 − 2q − 2λq + 7λ ≤ 0,  q ≥ 0,  q(20 − 2q − 2λq + 7λ) = 0
∂L/∂λ = −q² + 7q − 6 ≥ 0,  λ ≥ 0,  λ(−q² + 7q − 6) = 0

If q = 0, the inequality −q² + 7q − 6 ≥ 0 would not be satisfied. Therefore q > 0, and
this implies that 20 − 2q − 2λq + 7λ = 0.

If λ = 0, the condition ∂L/∂q = 0 implies that q = 10, but this is inconsistent
with −q² + 7q − 6 ≥ 0. Therefore λ > 0, and this implies that −q² + 7q − 6 = 0.

The quadratic equation −q² + 7q − 6 = 0 has the roots q1 = 1 and q2 = 6.

If q1 = 1, then 20 − 2(1) − 2λ(1) + 7λ = 0 ⇒ 5λ = −18 ⇒ λ = −18/5, contradicting λ > 0.

If q2 = 6, then 20 − 2(6) − 2λ(6) + 7λ = 0 ⇒ −5λ = −8 ⇒ λ = 8/5.

Therefore, the sales-maximising output for the firm is 6 units.
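The result can be confirmed numerically by scanning feasible output levels (a rough sketch; the grid step is our own choice):

```python
# Cross-check of Example 4: max R(q) = 20q - q^2 subject to R(q) - C(q) >= 10.
def R(q): return 20 * q - q ** 2
def C(q): return q ** 2 + 6 * q + 2

# Scan q on a 0.01 grid over [0, 10]; profit >= 10 holds exactly on [1, 6].
feasible = [i / 100 for i in range(1001) if R(i / 100) - C(i / 100) >= 10]
q_star = max(feasible, key=R)
print(q_star, R(q_star))  # → 6.0 84.0
```

Revenue is increasing up to q = 10, so the constrained maximiser sits at the upper end of the feasible interval, q = 6, with revenue 84.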


Example 5:

Suppose a consumer wants to maximise utility U = √x + y subject to the budget
constraint px + y ≤ M, with x ≥ 0 and y ≥ 0. It is given that p > 0 and M > 0. Solve this
utility maximisation problem.

Solution

The Lagrangian function is:

L(x, y) = √x + y + λ(M − px − y)

The Kuhn-Tucker Conditions are:

∂L/∂x = (1/2)x^(−1/2) − λp ≤ 0,  x ≥ 0,  x[(1/2)x^(−1/2) − λp] = 0
∂L/∂y = 1 − λ ≤ 0,  y ≥ 0,  y(1 − λ) = 0
∂L/∂λ = M − px − y ≥ 0,  λ ≥ 0,  λ(px + y − M) = 0

From the first set of conditions, if x > 0, then

x[(1/2)x^(−1/2) − λp] = 0 ⇒ (1/2)x^(−1/2) = λp ⇒ x^(−1/2) = 2λp ⇒ x = 1/(2λp)²

From the second set of conditions, either λ = 1 or y = 0 .

If λ = 1, then x = 1/(4p²) and y = M − px = M − p·(1/(4p²)) = M − 1/(4p). This gives a
solution if y ≥ 0, i.e. M ≥ 1/(4p) ⇒ p ≥ 1/(4M).

If y = 0, then x = M/p and λ = 1/(2√(pM)). This gives a solution if λ ≥ 1, implying that
p ≤ 1/(4M).
4M

Therefore the solution is:

(x*, y*) = (M/p, 0)               if p ≤ 1/(4M)
(x*, y*) = (1/(4p²), M − 1/(4p))  if p > 1/(4M)
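The two demand regimes can be checked numerically by maximising utility along the budget line, where the optimum lies because U is increasing in y (a sketch; the grid search and the parameter values for p and M are our own choices):

```python
import math

# Numerical check of Example 5's demand for x:
# x* = M/p if p <= 1/(4M), else x* = 1/(4 p^2).
def demand_x(p, M, n=100000):
    xs = [i * (M / p) / n for i in range(n + 1)]       # grid over [0, M/p]
    return max(xs, key=lambda x: math.sqrt(x) + M - p * x)  # y = M - p*x

# Interior case: p = 1, M = 10 (p > 1/(4M) = 0.025) -> x* = 1/(4 p^2) = 0.25
print(round(demand_x(1.0, 10.0), 4))  # → 0.25
# Corner case: p = 0.02, M = 10 (p <= 1/(4M) = 0.025) -> x* = M/p = 500
print(round(demand_x(0.02, 10.0)))    # → 500
```

In the interior case some income is left for y; in the corner case the marginal utility of x is high enough that the whole budget goes to x.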

Reading List

Knut Sydsaeter and Peter Hammond (2006), Sections 14.7–14.8, pp. 532–548.
Carl Simon and Lawrence Blume (1994), Section 18.6.
Alpha C. Chiang (2005), Chapter 13.