Вы находитесь на странице: 1из 139

Part I

THE CLASSICAL THEORY OF


GAMES
1 Static games of complete information
In this chapter we consider games of the following simple form: rst the play-
ers simultaneously choose actions, then the players receive payos that depend
on the combination of actions just chosen. Within the class of such static (or
simultaneous-move) games we restrict attention to games of complete infor-
mation. That is, each players payo function (the function that determines
the players payo from the combination of action by the players) is common
knowledge among all the players.
1
1.1 Zero-sum two-person games
We consider a game with two players, the player 1 and the player 2. What
player 1 wins is just what player 2 loses, and vice versa.
In order to have an intuitive understanding of a such game we introduce
some related basic ideas through a few simple examples.
Example 1.1. (Matching pennies)
Each of two participants (players) puts down a coin on the table without
letting the other player see it. If the coins match, that is, if both coins show
heads or both show tails, player 1 wins the two coins. If they do not match,
player 2 wins the two coins. In other words, in the rst case, player 1 receives a
payment of 1 from player 2, and, in the second case, player 1 receives a payment
of 1.
These outcomes can be listed in the following table:
Player 2
Player 1
1 (heads) 2 (tails)
1 (heads) 1 1
2 (tails) 1 1
Also, they can be written in the payos matrices for the two players:
H
1
=

1 1
1 1

. H
2
=

1 1
1 1

.
We say that each player has two strategies (actions, moves). In the
matrix H
1
the rst row represents the rst strategy of player 1, the second row
represents the second strategy of player 1. If player 1 chooses his strategy 1, it
means that his coin shows heads up. Strategy 2 means tails up. Similarly, the
rst and the second columns of matrix H
1
correspond respectively to the rst
and the second strategies of player 2. In H
2
we have the same situations, but
for player 2.
Remark 1.1. This gambling contest is a zero-sum two-person game.
Briey speaking, a game is a set of rules, in which the regulations of the entire
procedure of competition (or contest, or struggle), including players, strate-
gies, and the outcome after each play of the game is over, etc., are specically
described.
Remark 1.2. The entries in above table form a payo matrix (of player
1, that is H
1
). The matrix H
2
is the payo matrix of player 2, and we have
H
1
H
t
2
= O
2
.
where H
t
2
is the transpose of H
2
.
Remark 1.3. The payo is a function of the strategies of the two players.
If, for instance, players 1s coin shows heads up (strategy 1) and player 2s coin
also shows heads up (strategy 1), then the element /
11
= 1 denotes the amount
which player 1 receives from player 2. Again, if player 1 chooses strategy 2
(tails) and player 2 chooses strategy 1 (heads), then the element /
21
= 1 is
2
the payment that player 1 receives. In this case, the payment that player 1
receives is a negative number. This means that player 1 loses one unit, that is,
player 1 pays one unit to player 2.
Example 1.2. (Stone-paper-scissors)
Scissors defeats paper, paper defeats stone, and stone in turn defeats scissors.
There are two players: 1 and 2. Each player has three strategies. Let strategies
1, 2, 3 represent stone, paper, scissors respectively. If we suppose that the
winner wins one unit from the loser, then the payo matrix is
Player 2
Player 1
1 2 3
1 0 1 1
2 1 0 1
3 1 1 0

Remark 1.4. The payo matrices for the two players are:
H
1
=

0 1 1
1 0 1
1 1 0

, H
2
=

0 1 1
1 0 1
1 1 0

.
We have H
1
= H
2
and H
1
H
t
2
= O
3
.
Example 1.3. We consider zero-sum two-person game for which the payo
matrix is given in the following table:
Player 2
Player 1
j`c 0 1 2
0 0 1 4
1 -1 2 7
2 -4 1 8
3 -9 -2 7
We have the payo matrices:
H
1
=

0 1 4
1 2 7
4 1 8
0 2 7

. H
2
=

0 1 4 0
1 2 1 2
4 7 8 7

Player 1 has four strategies, while player 2 has three strategies.


Remark 1.5. The payo of player 1 (that is, the amount that player 2 pays
to player 1) can be determined by the function
1 : 0. 1. 2. 8 0. 1. 2 Z. 1(j. c) = c
2
j
2
2jc.
In each of the above examples there are two players, namely player 1 and
player 2, and a payo matrix, H
1
(there is H
2
too such that H
1
H
t
2
= 0). Each
3
player has several strategies. The strategies of player 1 are represented by the
rows of the payo matrix H
1
, and those of player 2 by the columns of the payo
matrix H
1
. (The strategies of player 2 are represented by the rows of the payo
matrix H
2
, and those of player 1 by the columns of the payo matrix H
2
.)
The player 1 chooses a strategy from his strategy set, and player 2, indepen-
dently, chooses a strategy from his strategy set. After the two choices have been
made, player 2 pays an amount to player 1 as the outcome of this particular
play of the game. The amount is shown in the payo matrix. This amount may
be with positive, 0, or negative value. If the payo is positive, player 1 receives
a positive amount from player 2, that is, player 1 wins an amount from player
2. If the payo is negative, player 1 receives a negative amount from player 2,
that is, player 1 loses an amount to player 2 (player 2 wins an amount from
player 1). The gain of player 1 equals the loss of player 2. What player 1 wins
is just what player 2 loses, and vice versa, For this, such a game is called a
zero-sum game.
1.2 Matrix games
In what follows we suppose that player 1 has : strategies and player 2 has :
strategies. We denote by a
ij
,i = 1. :, , = 1. :, the payo which player 1 gains
from player 2 if player 1 chooses strategy i and player 2 chooses strategy ,. So,
we obtain the payo matrix H
1
(= ):
= (a
ij
) = (1)
=

a
11
a
12
... a
1n
... ... ... ...
a
m1
a
m2
... a
mn

Denition 1.1. We call matrix game the game which is completely de-
termined by above matrix .
To solve the game, that is, to nd out the solution (what maximum payo
has the player 1 and what strategies are chosen by both players to do this) we
examine the elements of matrix .
In this game, player 1 wishes to gain a payo a
ij
as large as it is possible,
while player 2 will do his best to reach a value a
ij
as small as it is possible. The
interests of the two players are completely conicting.
If player 1 chooses strategy i he can be sure to obtain at least the payo
min
1<j<n
a
ij
. (2)
This is the minimum of the i
th
-row element in the payo matrix .
Since player 1 wishes to maximize his payo he can choose strategy i so as
to make the value in (2) as large as it is possible. That is to say, player 1 can
choose strategy i in order to receive a payo not less than
max
1<i<m
min
1<j<n
a
ij
. (3)
4
In other words, if player 1 makes his best choice, the payo which player 1
receives cannot be less than the value given in (3).
Similarly, if player 2 chooses his strategy ,, he will lose at most
max
1<i<m
a
ij
. (4)
Now, player 2 wishes to minimize his lose so, he will try to choose strategy
, so as to obtain the minimum of the value in (4). Namely, player 2 can choose
, so as to have his loss not greater than
min
1<j<n
max
1<i<m
a
ij
. (5)
So, if player 2 makes his best choice, the payo which player 1 receives cannot
be greater than the value given by (5).
We have seen that player 1 can choose the strategy i to ensure a payo which
is at least

1
= max
1<i<m
min
1<j<n
a
ij
.
while player 2 can choose the strategy , to make player 1 get at most

2
= min
1<j<n
max
1<i<m
a
ij
.
Is there any relationship between these two values,
1
and
2
?
Lemma 1.1. The following inequality holds:
1
_
2
, that is

1
= max
1<i<m
min
1<j<n
a
ij
_ min
1<j<n
max
1<i<m
a
ij
=
2
. (6)
Proof. For every i we have
min
1<j<n
a
ij
_ a
ij
. , = 1. :.
and for every , we have
a
ij
_ max
1<i<m
a
ij
. i = 1. :.
Hence the inequality
min
1<j<n
a
ij
_ max
1<i<m
a
ij
holds, for all i = 1. : and all , = 1. :.
Since the left-hand side of the last inequality is independent of ,, taking the
minimum with respect to , on both sides we have
min
1<j<n
a
ij
_ min
1<j<n
max
1<i<m
a
ij
=
2
. i = 1. :.
that is,
min
1<j<n
a
ij
_
2
.
5
Since the right-hand side of the last inequality is independent of i, taking
the maximum with respect to i on both sides we obtain
max
1<i<m
min
1<j<n
a
ij
_
2
.
that is,
1
_
2
, and the proof is completed.
Let us examine the three examples from the section 1.1.
In Example 1.1 we have : = 2, : = 2, therefore

1
= max
1<i<2
min
1<j<2
a
ij
= max(1. 1) = 1.

2
= min
1<j<2
max
1<i<2
a
ij
= min(1. 1) = 1.
So, in Example 1.1 we have
1
<
2
.
In Example 1.2, we have : = 8, : = 8, therefore

1
= max
1<i<3
min
1<j<3
a
ij
= max(1. 1. 1) = 1.

2
= min
1<j<3
max
1<i<3
a
ij
= min(1. 1. 1) = 1.
So, in Example 1.2 we have
1
<
2
.
In Example 1.3, we have : = 4, : = 8, therefore

1
= max
1<i<4
min
1<j<3
a
ij
= max(0. 1. 4. 0) = 0.

2
= min
1<j<3
max
1<i<4
a
ij
= min(0. 2. 8) = 0.
So, in Example 1.3 we have
1
=
2
.
1.3 Saddle points in pure strategies
There are situations in which
1
=
2
. Consequently we give
Denition 1.2. If the elements of the payo matrix of a matrix game
satisfy the following equality

1
= max
1<i<m
min
1<j<n
a
ij
= min
1<j<n
max
1<i<m
a
ij
=
2
. (7)
then the quantity (=
1
=
2
) is called the value of the game.
Remark 1.6. The value is the common value of those given in (3) and
(5).
The value of the game in Example 1.3 is = 0.
If the equality (7) holds, then there exist an i
+
and a ,
+
such that
min
1<j<n
a
i

j
= max
1<i<m
min
1<j<n
a
ij
= .
and
max
1<i<m
a
ij
= min
1<j<n
max
1<i<m
a
ij
= .
6
Therefore
min
1<j<n
a
i

j
= max
1<i<m
a
ij
.
But, obviously we have
min
1<j<n
a
i

j
_ a
i

j
_ max
1<i<m
a
ij
.
Thus
max
1<i<m
a
ij
= a
i

j
= = min
1<j<n
a
i

j
.
Therefore, for all i and all ,
a
ij
_ a
i

j
= _ a
i

j
. (8)
Consequently, if player 1 chooses the strategy i
+
, then the payo cannot
be less than if player 2 departs from the strategy ,
+
; if player 2 chooses the
strategy ,
+
, then the payo cannot exceed if player 1 departs from the strategy
i
+
.
Denition 1.3. We call i
+
and ,
+
optimal strategies of players 1 and 2
respectively. The pair (i
+
. ,
+
) is a saddle point (in pure strategies) of the
game. We say that i = i
+
, , = ,
+
is a solution (or Nash equilibrium) of the
game.
Remark 1.7. The relationship (8) shows us that the payo at the saddle
point (i
+
. ,
+
) (solution of the game) is the value of the game. When player 1
sticks to his optimal strategy i
+
, he can hope to increase his payo if player 2
departs from his optimal strategy ,
+
. Similarly, if player 2 sticks to his optimal
strategy ,
+
, player 1s payo may decrease if he departs from his optimal strategy
i
+
. Thus if the game has a saddle point (i
+
. ,
+
) then the equality (7) holds and
a
i

j
= .
Remark 1.8. A matrix game may have more than one saddle point. How-
ever, the payos at dierent saddle points are all equal, the common value being
the value of the game.
Example 1.4. Consider the matrix game with the payo matrix
=

4 8 6 2
1 2 0 0
6 7

.
We have for the minimum of its rows
min(4. 8. 6. 2) = 2. min(1. 2. 0. 0) = 0. min(. 6. 7. ) =
and then the maximum of these minimums:
max(2. 0. ) = =
1
.
Now, we have for the maximum of its columns
max(4. 1. ) = . max(8. 2. 6) = 6. max(6. 0. 7) = 7. max(2. 0. ) = .
7
and then the minimum of these maximums:
min(. 6. 7. ) = =
2
.
How
1
=
2
= we have saddle point. It is easy to verify that (8. 1) and
(8. 4) are both saddle points because
a
31
= a
34
= = .
Remark 1.9. If the matrix game has a saddle point (i
+
. ,
+
), then it is very
easy to found it. Really, by the Denition 1.3 of a saddle point (8), the value
a
i

j
is an element in the payo matrix =(a
ij
) which is at the same time the
minimum of its row and the maximum of its column.
In Example 1.3, (1. 1) is a saddle point of the game because a
11
= 0 is the
smallest element in the rst row and at the same time the largest element in
the rst column. In Example 1.4 a
31
= a
34
= are two smallest elements in
the third row, and at the same time the largest element in the rst and fourth
columns, respectively.
A matrix game can have several saddle points. In this case we can prove the
following result:
Lemma 1.2. Let (i
+
. ,
+
) and (i
++
. ,
++
) be saddle points of a matrix game.
Then (i
+
. ,
++
) and (i
++
. ,
+
) are also saddle points, and the values at all saddle
points are equal, that is
a
i

j
= a
i

j
= a
i

j
= a
i

j
. (9)
Proof. We prove that (i
+
. ,
++
) is a saddle point. The fact that (i
++
. ,
+
) is a
saddle point can be proved in a similar way.
Since (i
+
. ,
+
) is a saddle point, we have
a
ij
_ a
i

j
_ a
i

j
for all i = 1. : and all , = 1. :. Since (i
++
. ,
++
) is a saddle point, we have
a
ij
_ a
i

j
_ a
i

j
for all i = 1. : and , = 1. :. From these inequalities we obtain
a
i

j
_ a
i

j
_ a
i

j
_ a
i

j
_ a
i

j
.
which proves (9). By (9) and the above inequalities, we have
a
ij
_ a
i

j
_ a
i

j
for all i = 1. : and all , = 1. :. Hence (i
+
. ,
++
) is a saddle point.
From this lemma we see that a matrix game with saddle points has the
following properties:
the exchangeability or rectangular property of saddle points,
8
the equality of the values at all saddle points.
Example 1.5. The game with the payo matrix
=

8 0
7 1 4
2 1 1

has the saddle point (8. 2) because


1
= 1,
2
= 1 and a
32
= = 1.
Example 1.6. The pair (8. 8) is a saddle point for the game with the payo
matrix
=

0 1 1
1 0 1
1 1 0

.
We have = 0.
Example 1.7. The pair (2. 8) is a saddle point for the game with the payo
matrix
=

2 8 1 4
1 2 0 1
2 8 1 2

.
The value of the game is = 0.
Example 1.8. The game with the payo matrix
=

4 1 1
2 1 1
7 1 4

has four saddle points because we have (see Lemma 1.2)


a
12
= a
13
= a
22
= a
23
= 1 = .
Example 1.9. The game with the payo matrix
=

7 6
0 0 4
14 1 8

hasnt a saddle point in the sense of Denition 1.2 because

1
= max(. 0. 1) =
and

2
= min(14. 0. 8) = 8.
1.4 Mixed strategies
We have seen so far that there exist matrix games which have saddle points and
matrix games that dont.
9
When a matrix game hasnt saddle point, that is, if

1
= max
1<i<m
min
1<j<n
a
ij
< min
1<j<n
max
1<i<m
a
ij
=
2
(10)
we cannot solve the game in the sense given in the previous section. The payo
matrix given in Example 1.2 (Stone-paper-scissors) hasnt saddle point because

1
= 1 < 1 =
2
. The same situation is in Example 1.9, where
1
= < 8 =
2
.
About the game given in Example 1.2, with the payo matrix
=

0 1 1
1 0 1
1 1 0

we can say the following.


Player 1 can be sure to gain at least
1
= 1, player 2 can guarantee that
his loss is at most
2
= 1. In this situation, player 1 will try to gain a payo
greater than 1, player 2 will try to make the payo (to player 1) less than 1.
For these purposes, each player will make eorts to prevent his opponent from
nding out his actual choice of strategy. To accomplish this, player 1 can use
some chance device to determine which strategy he is going to choose; similarly,
player 2 will also decide his choice of strategy by some chance method. This is
the mixed strategy that we introduce in this section.
We consider a matrix game with the payo matrix = (a
ij
) where i = 1. :,
, = 1. :.
Denition 1.4. A mixed strategy of player 1 is a set of : numbers
r
i
_ 0, i = 1. : satisfying the relationship

m
i=1
r
i
= 1. A mixed strategy of
player 2 is a set of : numbers n
j
_ 0, , = 1. :, satisfying

n
j=1
n
j
= 1.
Remark 1.10. The numbers r
i
and n
j
are probabilities. Player 1 chooses
his strategy i with probability r
i
, and player 2 chooses his strategy , with
probability n
j
. Hence r
i
n
j
is the probability that player 1 chooses strategy i
and player 2 chooses strategy , with payo a
ij
for player 1 (and a
ij
for player
2).
In opposite to mixed strategies, the strategies in the saddle points are called
pure strategies. The pure strategy i = i
t
is a special mixed strategy: r
i
0 = 1,
r
i
= 0 for i = i
t
.
Let A = (r
1
. r
2
. . . . . r
m
) and 1 = (n
1
. n
2
. . . . . n
n
) be the mixed strategies of
players 1 and 2, respectively.
Denition 1.5. The expected payo of player 1 is the following real
number
m

i=1
n

j=1
a
ij
r
i
n
j
(11)
which is obtained through multiplying every payo a
ij
by the corresponding
probability r
i
n
j
and summing for all i and all ,.
Player 1 wishes to maximize the expected payo, while player 2 wants to
minimize it.
10
Let o
m
and o
n
be the sets of all A = (r
1
. r
2
. . . . . r
m
) and 1 = (n
1
. n
2
. . . . . n
n
)
respectively, satisfying the following conditions
r
i
_ 0. i = 1. :.
m

i=1
r
i
= 1:
n
j
_ 0. , = 1. :.
n

j=1
n
j
= 1.
If player 1 uses the mixed (or no) strategy A o
m
, then his expected payo
is at least
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
. (12)
Player 1 can choose A o
m
such as to obtain the maximum of the value in
(12), that is he can be sure of an expected payo not less than

1
= max
XSm
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
. (13)
If player 2 chooses the strategy 1 o
n
, then the expected payo of player
1 is at most
max
XSm
m

i=1
n

j=1
a
ij
r
i
n
j
. (14)
Player 2 can choose 1 o
n
such as to obtain the minimum of the value in
(14), that is, he can prevent player 1 from gaining an expected payo greater
than

2
= min
Y Sn
max
XSm
m

i=1
n

j=1
a
ij
r
i
n
j
. (15)
As in the case studied in section 1.2 (Lemma 1.1) we have the following
result:
Lemma 1.3. For all A = (r
1
. r
2
. . . . . r
m
) o
m
and all 1 = (n
1
. n
2
. . . . . n
n
)
o
n
the following inequality holds
1
_
2
, that is

1
= max
XSm
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
_ min
Y Sn
max
XSm
m

i=1
n

j=1
a
ij
r
i
n
j
=
2
(16)
Proof. For all A o
m
and all 1 o
n
, we have
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
_
m

i=1
n

j=1
a
ij
r
i
n
j
.
11
Then, taking the maximum for all A o
m
on both sides of the inequality,
we get

1
= max
XSm
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
_ max
XSm
n

i=1
n

j=1
a
ij
r
i
n
j
.
This inequality holds for all 1 o
n
. Therefore,

1
= max
XSm
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
_ min
Y Sn
max
XSm
m

i=1
n

j=1
a
ij
r
i
n
j
=
2
.
that is,
1
_
2
, and the proof is completed.
The main result of this chapter is the well-known fundamental theorem of
the theory of matrix game, the minimax theorem. This is the aim of the
following section.
1.5 The minimax theorem
J. von Neumann was the rst which proved this theorem. We present here
von Neumanns proof given in [15].
Theorem 1.1. If the matrix game has the payo matrix = (a
ij
), then

1
=
2
, that is,

1
= max
XSm
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
= min
Y Sn
max
XSm
m

i=1
n

j=1
a
ij
r
i
n
j
=
2
. (17)
To prove this theorem we need some auxiliary notions and results.
Let = (a
ij
) be a :: matrix, and
a
(1)
= (a
11
. a
21
. . . . . a
m1
). a
(2)
= (a
12
. a
22
. . . . . a
m2
). . . . .
a
(n)
= (a
1n
. a
2n
. . . . . a
mn
)
obtained by using the columns of matrix , that are : points in the :-dimensional
Euclidean space R
m
.
Denition 1.6. We call the convex hull (CH) of the : points a
(1)
. a
(2)
. . . . . a
(n)
the set CH = CH(a
(1)
. a
(2)
. . . . . a
(n)
) dened by
CH =

a[ a R
m
. a = t
1
a
(1)
t
2
a
(2)
t
n
a
(n)
.
t
k
R. t
k
_ 0. / = 1. :.
n

k=1
t
k
= 1

.
Remark 1.11. The elements of CH are expressed as a convex linear com-
bination of the : points a
(1)
. a
(2)
. . . . . a
(n)
. CH is a convex set, this can be easy
veried by showing that every convex linear combination of two arbitrary points
of CH also belongs to CH.
12
Lemma 1.4. Let CH be the convex hull of a
(1)
. a
(2)
. . . . . a
(n)
. If 0 CH,
then there exist : real numbers c
1
. c
2
. . . . . c
m
such that for every point a CH,
a = (a
1
. a
2
. . . . . a
m
) we have
c
1
a
1
c
2
a
2
c
m
a
m
0.
Proof. Since 0 CH, there exists a point c = (c
1
. c
2
. . . . . c
m
) CH,
c = 0, such that the distance [c[ from c to 0 is the smallest. This is equivalent
to the statement that c
2
1
c
2
2
c
2
m
0 is the smallest.
Now, let a = (a
1
. a
2
. . . . . a
m
) be an arbitrary point in CH. Then
`a (1 `)c CH. 0 _ ` _ 1.
and
[`a (1 `)c[
2
_ [c[
2
.
or
m

i=1
[`a
i
(1 `)c
i
[
2
=
m

i=1
[`(a
i
c
i
) c
i
[
2
=
= `
2
m

i=1
(a
i
c
i
)
2
2`
m

i=1
(a
i
c
i
)c
i

m

i=1
c
2
i
_
m

i=1
c
2
i
.
Thus, if ` = 0, we obtain
`
m

i=1
(a
i
c
i
)
2
2
m

i=1
(a
i
c
i
c
2
i
) _ 0.
Now let ` 0; we get
m

i=1
a
i
c
i
_
m

i=1
c
2
i
0.
and the lemma is proved.
Remark 1.12. This result is usually referred to as the theorem of the
supporting hyperplanes. It states that if the origin 0 doesnt belong to the
convex hull CH of the : points a
(1)
. a
(2)
. . . . . a
(n)
, then there exists a supporting
hyperplane j passing through 0 such that CH lies entirely in one side of j, that
is, in one of the two half-spaces formed by j.
Lemma 1.5. Let = (a
ij
) be an arbitrary :: matrix. Then either
(1) there exist numbers n
1
. n
2
. . . . . n
n
with
n
j
_ 0. , = 1. :.
n

j=1
n
j
= 1.
such that
n

j=1
a
ij
n
j
= a
i1
n
1
a
i2
n
2
a
in
n
n
_ 0. i = 1. ::
13
or
(2) there exist numbers r
1
. r
2
. . . . . r
m
with
r
i
_ 0. i = 1. :.
m

i=1
r
i
= 1
such that
m

i=1
a
ij
r
i
= a
1j
r
1
a
2j
r
2
a
mj
r
m
0. , = 1. :.
Proof. We consider the convex hull of the : : points
a
(1)
= (a
11
. a
21
. . . . . a
m1
). a
(2)
= (a
12
. a
22
. . . . . a
m2
). . . . .
a
(n)
= (a
1n
. a
2n
. . . . . a
mn
)
c
(1)
= (1. 0. . . . . 0). c
(2)
= (0. 1. 0. . . . . 0). . . . . c
(m)
= (0. 0. . . . . 1).
We denote by CH this convex hull. We distinguish two cases:
(1) 0 CH, respectively (2) 0 CH.
Let 0 CH be. Then there exist real numbers
t
1
. t
2
. . . . . t
n+m
_ 0.
n+m

j=1
t
j
= 1
such that
t
1
a
(1)
t
2
a
(2)
t
n
a
(n)
t
n+1
c
(1)

t
n+2
c
(2)
t
n+m
c
(m)
= 0.
that is, 0 was written as a convex linear combination of the above :: points.
Expressed in terms of the components, the i
th
equation (there are : equa-
tions), is
t
1
a
i1
t
2
a
i2
t
n
a
in
t
n+i
1 = 0.
Hence
t
1
a
i1
t
2
a
i2
t
n
a
in
= t
n+i
_ 0. i = 1. :. (18)
It follows that t
1
t
2
t
n
0, for otherwise we have
t
1
= t
2
= = t
n
= 0 = t
n+1
= = t
n+m
.
which contradicts that

n+m
j=1
t
j
= 1.
Dividing each inequality of (18) by t
1
t
2
t
n
0 and putting
n
1
=
t
1
t
1
... t
n
. n
2
=
t
2
t
1
... t
n
. . . . . n
n
=
t
n
t
1
... t
n
14
we obtain
n

j=1
a
ij
n
j
= a
i1
n
1
a
in
n
n
_ 0. i = 1. :.
(2) 0 CH. By Lemma 1.4, there exists c = (c
1
. . . . . c
m
) CH such that
ca
(j)
= c
1
a
1j
c
2
a
2j
c
m
a
mj
0. , = 1. :. cc
(i)
= c
i
0. i = 1. :.
(19)
Dividing each inequality in (19) by c
1
c
m
0 and putting
r
1
=
c
1
c
1
...c
m
. r
2
=
c
2
c
1
...c
m
. . . . . r
m
=
c
m
c
1
...c
m
we obtain
m

i=1
a
ij
r
i
= a
1j
r
1
a
2j
r
2
a
mj
r
m
0. , = 1. :.
This complete the proof of Lemma.
Proof of Theorem 1.1. We have proved that
1
_
2
in Lemma 1.3, so it
is sucient to give the proof for
1
_
2
.
By Lemma 1.5, one of the following two statements holds.
(1) There exist n
1
. n
2
. . . . . n
n
_ 0,

n
j=1
n
j
= 1, such that
n

j=1
a
ij
n
j
_ 0. i = 1. :.
Hence, for any A = (r
1
. r
2
. . . . . r
m
) o
m
we have
m

i=1

j=1
a
ij
n
j

r
i
_ 0.
Therefore
max
XSm
m

i=1
n

j=1
a
ij
r
i
n
j
_ 0.
It follows that

2
= min
Y Sn
max
XSm
m

i=1
n

j=1
a
ij
r
i
n
j
_ 0. (20)
(2) There exist r
1
. r
2
. . . . . r
m
_ 0,

m
i=1
r
i
= 1, such that
m

i=1
a
ij
r
i
0. , = 1. :.
15
Hence, for any 1 = (n
1
. n
2
. . . . . n
n
) o
n
, we have
n

j=1

i=1
a
ij
r
i

n
j
_ 0.
Therefore,
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
_ 0.
It follows that

1
= max
XSm
min
Y Sn
m

i=1
n

j=1
a
ij
r
i
n
j
_ 0. (21)
By (20) and (21) it follows that, either
1
_ 0 or
2
_ 0, that is, never

1
< 0 <
2
. We repeat the above judgement with the new matrix 1 = (a
ij
/),
where / is an arbitrary number. Because

m
i=1
r
i
= 1 and

n
j=1
n
j
= 1 we
obtain never
1
/ < 0 <
2
/, or never
1
< / <
2
. Therefore,
1
<
2
is impossible, for otherwise there would be a number / satisfying
1
< / <
2
,
thus contradicting the statement "never
1
< / <
2
". We have proved
1
_
2
.

Remark 1.13. For another proof of the minimax theorem an inductive


proof see [20]. Here the new statement of minimax theorem is the following:
Let = (a
ij
) be an arbitrary :: matrix, and o
m
and o
n
respectively sets
of points A = (r
1
. r
2
. . . . . r
m
) and 1 = (n
1
. n
2
. . . . . n
n
) satisfying
r
i
_ 0. i = 1. :.
m

i=1
r
i
= 1. n
j
_ 0. , = 1. :.
n

j=1
n
j
= 1.
Then we have
max
XSm
min
1<j<n
m

i=1
a
ij
r
i
= min
Y Sn
max
1<i<n
n

j=1
a
ij
n
j
. (22)
1.6 Saddle points in mixed strategies
In this section, we show, that for any matrix game, a saddle point always
exists.
Let = (a
ij
) be the payo matrix of an : : matrix game. If A =
(r
1
. r
2
. . . . . r
m
) o
m
and 1 = (n
1
. n
2
. . . . . n
n
) o
n
are respectively mixed
strategies of players 1 and 2, then the expected payo

m
i=1

n
j=1
a
ij
r
i
n
j
can
be written in matrix notation
m

i=1
n

j=1
a
ij
r
i
n
j
= A1
t
.
16
Denition 1.7. A pair (A
+
. 1
+
) o
m
o
n
is called a saddle point (in
mixed strategies)(or Nash equilibrium) of the matrix game = (a
ij
) if
A1
+t
_ A
+
1
+t
_ A
+
1
t
. (23)
for all A o
m
and all 1 o
n
.
The following important result establishes the equivalence between the ex-
istence of a saddle point and the minimax theorem.
Theorem 1.2. The :: matrix game = (a
ij
) has a saddle point if and
only if the numbers
max
XSm
min
Y Sn
A1
t
and min
Y Sn
max
XSm
A1
t
(24)
exist and are equal.
Proof. "==" The two numbers in (24) both exist, obviously (there are
optimal values of continuous functions dened on compact sets). Assume that
: : matrix game has a saddle point (A
+
. 1
+
). That is to say that, the
inequalities from relationship (23) hold for all A o
m
and all 1 o
n
. From
the rst inequality in (23), we obtain
max
XSm
A1
+t
_ A
+
1
+t
hence
min
Y Sn
max
XSm
A1
t
_ A
+
1
+t
. (25)
Similarly, from the second inequality in (23), we have
A
+
1
+t
_ min
Y Sn
A
+
1
t
_ max
XSm
min
Y Sn
A1
t
. (26)
From (25) and (26) it follows that

2
= min
Y Sn
max
XSm
A1
t
_ max
XSm
min
Y Sn
A1
t
=
1
.
But it is known (see Lemma 1.3) that the reverse inequality
1
_
2
holds.
Therefore,

1
= max
XSm
min
Y Sn
A1
t
= min
Y Sn
max
XSm
A1
t
=
2
.
and the necessity of the condition is proved.
"==" Assume that the two values in (24) are equal. Let A
+
o
m
and
1
+
o
n
be, such that
max
XSm
min
Y Sn
A1
t
= min
Y Sn
A
+
1
t
. (27)
min
Y Sn
max
XSn
A1
t
= max
XSm
A1
+t
. (28)
17
By the denitions of minimum and maximum, we have
min
Y Sn
A
+
1
t
_ A
+
1
+t
. A
+
1
+t
_ max
XSm
A1
+t
. (29)
Since the left-hand sides of (27) and (28) are equal, all terms in (27) through
(29) are equal to each other. In particular, we have
max
XSm
A1
+t
= A
+
1
+t
.
Therefore, for all A o
m
,
A1
+t
_ A
+
1
+t
. (30)
Similarly, for all 1 o
n
,
A
+
1
+t
_ A
+
1
t
. (31)
By (30) and (31), it results that (A
+
. 1
+
) is a saddle point of A1
t
, and
the suciency of the condition is proved.
Denition 1.8. If (A
+
. 1
+
) is a saddle point (see Denition 1.7), then
we say that A
+
. 1
+
are respectively optimal strategies of players 1 and 2,
and = A
+
1
+t
is the value of the game. We also say that (A
+
. 1
+
) is a
solution(or a Nash equilibrium) of the game.
Remark 1.14. By Theorem 1.2 the value of the game is the common
value of
1
= max
XSm
min
Y Sn
A1
t
and
2
= min
Y Sn
max
XSm
A1
t
.
The denition of a saddle point shows us that, as long as player 1 sticks
to his optimal strategy A
+
, he can be sure to get at least the expected payo
= A
+
1
+t
no matter which strategy player 2 chooses; similarly, as long as
player 2 sticks to his optimal strategy 1
+
, he can hold player 1s expected payo
down to at most no matter how player 1 makes his choice of strategy.
Now, we give some essential properties of optimal strategies. To do this, we
introduce rst, some notations.
For the matrix = (a
ij
) we denote the i
th
row vector of by
i
and the
,
th
column vector of by
j
. Thus
A
j
=
m

i=1
a
ij
r
i
.
i
1
t
=
n

j=1
a
ij
n
j
.
and A
j
is the expected payo when player 1 chooses the mixed strategy A and
player 2 chooses the pure strategy ,, again
i
1
t
is the expected payo when
player 2 chooses the mixed strategy 1 and player 1 chooses the pure strategy i.
We give some essential properties of optimal strategies.
Lemma 1.6. Let = (a
ij
) be the payo matrix of an : : matrix game
whose value is . The following statements are true:
(1) If 1
+
is an optimal strategy of player 2 and
i
1
+t
< , then r
+
i
= 0 in
every optimal strategy A
+
of player 1.
18
(2) If A
+
is an optimal strategy of player 1 and A
+

j
, then n
+
j
= 0 in
every optimal strategy 1
+
of player 2.
Proof. We prove only (1). The proof of (2) is similar. Since 1
+
is an
optimal strategy of player 2, we have
i
1
+t
_ , i = 1. :. We denote by
o
1
= i[
i
1
+t
< , o
2
= i[
i
1
+t
= .
Then we can write
= A
+
1
+t
=
m

i=1
r
+
i

i
1
+t
=
=

iS1
r
+
i

i
1
+t


iS2
r
+
i

i
1
+t
=

iS1
r
+
i

i
1
+t


iS2
r
+
i
.
Hence

1

iS2
r
+
i

=

iS1
r
+
i

i
1
+t
.
that is,


iS1
r
+
i
=

i o
1
r
+
i

i
1
+t
. or

iS1
(
i
1
+t
)r
+
i
= 0.
Since i o
1
implies
i
1
+t
0, we have r
+
i
= 0.
Remark 1.15. This result states that if player 2 has an optimal strategy
1
+
in a matrix game with value , and if player 1, by using the i
th
pure strategy
cannot attain the expected payo , then the pure strategy i is a bad strategy
and cannot appear in any of his optimal mixed strategies.
Lemma 1.7. Let = (a
ij
) be the payo matrix of an : : matrix game
whose value is . The following statements are true:
(1) A
+
o
m
is an optimal strategy of player 1 if and only if _ A
+

j
,
, = 1. :.
(2) 1
+
o
n
is an optimal strategy of player 2 if and only if
i
1
+t
_ ,
i = 1. :.
Proof. We prove only (1), the proof of (2) is similar. Necessity ("==") of
the condition follows directly from the denition of a saddle point.
To prove the suciency ("==") of the condition, assume that _ A
+

j
,
, = 1. :.
Let (A

. 1

) be a saddle point of the game, that is A1


t
_ A

_
A

1
t
, for all A o
m
and all 1 o
n
.
We prove that (A
+
. 1

) is a saddle point of the game. Let 1 = (n


1
. n
2
. . . . . n
n
)
o
n
be any mixed strategy of player 2. Multiplying both sides of inequality
_ A
+

j
, , = 1. :, by n
j
and summing for , = 1. : we obtain
_
n

j=1
A
+

j
n
j
= A
+
1
t
.
In particular, _ A
+
1
t
. But, the denition of saddle point implies
A
+
1
t
_ A

1
t
= . It follows that A1
t
_ A
+
1
t
_ A
+
1
t
, which
19
proves us that (A
+
. 1

) is a saddle point of the game. Hence, A


+
is an optimal
strategy of player 1.
Remark 1.16. If the value of a game is known, the above lemma can be
used to examine whether a given strategy A
+
of player 1 is optimal, or a given
strategy 1
+
of player 2 is optimal.
Example 1.10. The matrix game with the payo matrix
=

2 8 1
1 2 8
8 1 2

has the value = 2, and A


+
= 1
+
=

1
3
.
1
3
.
1
3

are the optimal strategies for the


players 1 and 2. According to Remark 1.16 the pure strategy r
2
= 1, namely
A
2
= (0. 1. 0) is a bad strategy. Really, we have
A
2

1
= 2 (0. 1. 0)

2
1
8

= 2 1 = 1.
so A
2

1
.
Thus the pure strategy A
2
= (0. 1. 0). is a bad strategy. The same for the
others strategies.
Also, according to Lemma 1.7, the strategy A
+
= (18. 18. 18) of player
1 is optimal. Really, we have = 2. and
A
+

:1
= (18. 18. 18)
2
1
8
= 2.
A
+

:2
= (18. 18. 18)
8
2
1
= 2.
A
+

:3
= (18. 18. 18)
1
8
2
= 2.
therefore = 2 = A
+

:j
. , = 1. 8.
Moreover, we have
20

2
1
+t
= 2 (1. 2. 8)

1
3
1
3
1
3

= 2 2 = 0.

1
1
+t
= 2 (2. 8. 1)

1
3
1
3
1
3

= 2 2 = 0.
and

3
1
+t
= 2 (8. 1. 2)

1
3
1
3
1
3

= 2 2 = 0.
so, 1
+
is an optimal strategy.
The game hasnt saddle point in pure strategy because we have

1
= max mina
ij
= max(1. 1. 1) = 1.
while

2
= minmax a
ij
= min(8. 8. 8) = 8.
1.7 Domination of strategies
There are situations in which, an examination of the elements of the payo
matrix shows us that player 1 will never use a pure strategy since each element
of this row (pure strategy) is smaller than the corresponding element in the
other row (pure strategy). For example, we consider the matrix game whose
payo matrix is
=

2 1 1
0 1 1
1 2 0

.
In this matrix the elements of third row are smaller than the corresponding
elements in the rst row. Consequently, the player 1 will never use his third
strategy. Hence, regardless of which strategy player 2 chooses, player 1 will gain
more by choosing strategy 1 than by choosing strategy 3. Strategy 3 of player
1 can only appear in his optimal mixed strategies with probability zero.
Thus, in order to solve the matrix game with the payo matrix , the third
row can be deleted and we need to consider only the resulting matrix

t
=

2 1 1
0 1 1

.
Now, in this matrix
t
each element of the rst column is greater than
the corresponding element of the third column. So, player 2 will lose less by
choosing strategy 3 than by choosing strategy 1. Thus, the rst strategy of
player 2 will never be included in any of his optimal mixed strategies with
positive probability.
21
Therefore, the rst column of the matrix
t
can be deleted to obtain " =
1 1
1 1
.
It is easy to verify that this 22 matrix game has the mixed strategy solution
A
+
= 1
+
=

1
2
.
1
2

and = 0.
Returning to the original 88 matrix game with payo matrix , its solution
is
A
+
=

1
2
.
1
2
. 0

. 1
+
=

0.
1
2
.
1
2

. = 0.
Remark 1.17. We have seen that in matrix game with the payo matrix
, player 1 will never use his strategy 3 since strategy 1 gives him a greater
payo than strategy 3. Similarly, in matrix game with the payo matrix
t
,
player 2 will never use his strategy 1 since it always costs him a greater loss
than strategy 3. Therefore the strict dominated strategies will not play by a
rational player 1, so they can be eliminated, and the strict dominant strategies
will not play by a rational player 2, so they can be eliminated.
Denition 1.9. Let = (a
ij
) be the payo matrix of an : : matrix
game. If
a
kj
_ a
lj
. , = 1. : (32)
we say that player 1s strategy / dominates strategy |.
If
a
ik
_ a
il
. i = 1. : (33)
we say that player 2s strategy / dominates strategy |.
If the inequalities in (32) or (33) are replaced by strict inequalities, we say
that the strategy / of player 1 or 2 strictly dominates his strategy |.
Remark 1.18. It can be proved that in the case in which a pure strategy is
strict dominated by a pure strategy (or by a convex linear combination of several
other pure strategies), then we can delete the row or column in the payo matrix
corresponding to the dominated pure strategy and solve the reduces matrix
game. The optimal strategies of the original matrix game can be obtain from
those of the reduced one by assigning the probability zero to the pure strategy
corresponding to the deleted row or column.
Remark 1.19. If the domination isnt strict, we can still obtain a solution
for the original game from that of the reduced game. But, the deletion of a row
or column may involve loss of some optimal strategies of the original game.
Example 1.11. Let be the payo matrix of a matrix game
=

2 1 4
8 1 2
1 0 8

.
Strategy 3 of player 2 is dominated by his strategy 2, so we can delete the
third column of the payo matrix and we obtain

t
=

2 1
8 1
1 0

.
22
Then, strategy 1 of player 2 is dominated by his strategy 2, so the rst
column can be deleted; one obtain

tt
=

1
1
0

.
Strategy 3 of player 1 is dominated by his strategy 2 (or 1), so we delete the
third row and it result

ttt
=

1
1

.
The reduced game has the pure strategies A
+
1
= (1. 0), A
+
2
= (0. 1), 1
+
= (1),
hence the original game has the pure strategies A
+
1
= (1. 0. 0), A
+
2
= (0. 1. 0),
1
+
= (0. 1. 0). The value of game is = 1.
Remark 1.20. The game in Example 1.11 has the saddle points (1. 2) and
(2. 2). The optimal strategies of this game are A
+
= (t
1
. t
2
. 0), 1
+
= (0. 1. 0)
where t
1
. t
2
_ 0, t
1
t
2
= 1, that is, A
+
is the convex linear combination of
pure strategies A
+
1
and A
+
2
.
Example 1.12.In the matrix game with the payo matrix
=

8 2 4 0
8 4 2 8
4 8 4 2
0 4 0 8

.
we can delete the strategies dominated and so we get the reduce game with
the matrix
4 2
0 8
. It is easy to verify that the optimal strategies of 2 x 2 matrix
game are A
+
= (
4
5
.
1
5
). 1
+
= (
3
5
.
2
5
) and the value of game is =
16
5
. Therefore
A
+
= (0. 0.
4
5
.
1
5
), 1
+
1
= (0.0.
3
5
.
2
5
) are optimal strategies of the original matrix
game, and =
16
5
. There exists the optimal strategy 1
+
2
= (0.
8
15
.
1
3
.
2
15
) too.
Remark 1.21. In the 8 8 matrix game, and in the 8 2 matrix game
obtained above we used domination of a strategy by a convex linear combination
with t
1
= t
2
=
1
2
.
Remark 1.22. The deletion of a certain row or column of a payo matrix
using non-strict domination of strategies may result in a reduced game whose
complete set of solutions does not lead to the complete set of solutions of the
original larger game. That is, the solution procedure may lose some optimal
strategies of the original game. This situation appears, for example, for matrix
game with payo matrix
=

8 8
4 8 2
8 2 8

.
We get the reduced game with the matrix

tt
=

8
8 2

.
23
which has the optimal mixed strategies A
+
1
=

1
3
.
2
3

, 1
+
1
=

1
2
.
1
2

. Thus the
original game has the optimal mixed strategies A
+
1
=

1
3
.
2
3
. 0

, 1
+
1
=

0.
1
2
.
1
2

.
But, we have again the optimal pure strategies A
+
2
= (1. 0. 0), 1
+
2
= (0. 0. 1).
Really, all convex linear combinations of A
+
1
and A
+
2
are optimal (mixed)
strategies of player 1, respectively, all convex linear combinations of 1
+
1
and 1
+
2
are optimal strategies of player 2.
1.8 Solution of 2 2 matrix game
Writing these equations in terms of elements of the payo matrix, we have:
aj c(1 j) = . ac /(1 c) = . /j d(1 j) = . cc d(1 c) = .
The equations in j give us j
+
=
dc
a+dbc
, and the equations in c give us
c
+
=
db
a+dbc
. Then =
adbc
a+dbc
.
Remark 1.23. The above formulae are also valid for the case a d, a c,
d /, d c.
Example 1.13. The 2 2 matrix game with the payo matrix
=

8
8 2

has solution in pure strategies A


+
= (1. 0), 1
+
= (0. 1), = 8. We have

1
= max(8. 8) = 8,
2
= min(. 8) = 8 and a
12
= 8.
Example 1.14. The 2 2 matrix game with the payo matrix
=

8 2
0

hasnt solution in pure strategies. We have


1
= max(2. 0) = 2,
2
= min(8. ) =
8. Thus, we obtain
j
+
=
0
8 2 0
=

6
. c
+
=
2
8 2 0
=
8
6
=
1
2
.
hence A
+
=

5
6
.
1
6

, 1
+
=

1
2
.
1
2

. Then the value of game is =


150
6
=
5
2
.
Indeed we have
= A
+
1
+t
=

6
.
1
6

8 2
0

1
2
1
2

=
=

1
6
.
1
6

1
2
1
2

=
1
6
=

2
.
Remark 1.24. For the 2 2 matrix game with no saddle point, an inter-
esting technique of solution is described by Williams. Let be the payo matrix
=

a /
c d

.
24
First, subtract each element of the second column from the corresponding
element of the rst column: a / and c d. Then take absolute values of the
two dierences and reverse the order of the absolute values: [c d[ and [a /[.
The ratio
|cd|
|ab|
is the ratio of r
1
and r
2
in player 1s optimal strategy, namely
A
+
= (r
1
. r
2
) = (j. 1j). Hence
x1
x2
=
|cd|
|ab|
, and how r
1
r
2
= 1, we get r
1
. r
2
.
The similar technique, but with the rows, lead us to 1
+
= (n
1
. n
2
) = (c. 1 c).

Example 1.15. In the case of Example 1.14, we have


=

8 2
0

.
hence, 8 2 and 0
. .. .
that is 1 and
. .. .
in the rst step.
Then 1 and
. .. .
and 1
. .. .
in the second step.
In the end we have
x1
x2
= , hence r
1
= r
2
. How r
1
r
2
= 1 we obtain
6r
2
= 1, that is r
2
=
1
6
, r
1
=
5
6
. Thus A
+
=

5
6
.
1
6

.
In the rst step we have 8 0 and 2
. .. .
that is 8 and 8
. .. .
with the elements of rows. Then, in the second step, we take absolute values of
the two dierences and reverse the order of the absolute values
8 and 8
. .. .
8 and 8
. .. .
The ratio 88 is the ratio of n
1
to n
2
in player 2s optimal strategy, hence
n
1
= n
2
. So, we obtain n
1
= n
2
=
1
2
, that is 1
+
=

1
2
.
1
2

. These results are the


same as those of Example 1.14.
1.9 Graphical solution of 2 n and m2 matrix games
In the case of 2: and :2 matrix games we can present a graphical method
for nding the solution. We illustrate the method by a 8 2 matrix game.
Suppose that the payo matrix is
=

a /
c d
c 1

.
Denote player 1s pure strategies by T. '. 1 and player 2s pure strategies
by 1. 1. Assume that player 2 uses the mixed strategy 1 = (n
1
. n
2
) = (n. 1n),
where 0 _ n _ 1. Suppose that n = 1 and n = 0 represent the pure strategies 1
25
and 1 respectively. So, we can write
=
n 1 n
1 1
T
'
1

a /
c d
c 1

If player 2 chooses the pure strategy 1, that is, n = 1, and if player 1 chooses
the pure strategy T, the payo is a, as it is shown in Fig. 1.1. If player 2 chooses
the pure strategy 1, that is, n = 0, the payo corresponding to T is /. We join
the line a/ in Fig. 1.1.
Figure 1.1: Mixed strategy Y
Now, we suppose that player 2 chooses a mixed strategy 1 = (n. 1 n),
represented by 1 in the gure. Then it can see that the height 1Q represents
the expected payo when player 2 uses 1 and player 1 uses T. This amount is

1
1
t
= an /(1 n).
Similarly, corresponding to player 1s strategies ' and 1 we have the line
cd and c1 and the amounts are

2
1
t
= cn d(1 n)

3
1
t
= cn 1(1 n).
The heights of the points on these lines represent the expected payo if
player 2 uses 1 while player 1 uses ' and 1, respectively.
For any mixed strategy 1 of player 2, his expected lost is at more the
maximum of the three ordinates on the lines a/. cd. c1 at the point n, that
is,
max
1<i<3

i
1
t
= max
1<i<3
2

j=1
a
ij
n
j
. (34)
The graphic of this function is represented by the heavy black line in the
Fig. 1.1.
Player 2 wishes to choose an 1 so as to minimize the maximum function in
(34). We see from the gure that he should choose the mixed strategy corre-
sponding to the point
t
. At this point the expected payo is

t
1
t
= min
Y S2
max
1<i<3
2

j=1
a
ij
n
j
and
t
1
t
is the value of the game.
26
The graphical solution of a 2 : matrix game is similar. We explain it for
the case : = 8 and let the payo matrix of the game be
=

a / c
d c 1

.
Denote player 1s pure strategies by l. 1 and player 2s pure strategies by
1. '. 1.
Assume that player 1 uses the mixed strategy A = (r
1
. r
2
) = (r. 1 r),
where 0 _ r _ 1. Suppose that r = 1 represents the pure strategy l and r = 0
represents the pure strategy 1.
If player 1 chooses the pure strategy l, that is when r = 1, and if player 2
chooses the pure strategy 1, the payo is a, as it is shown in Fig. 1.2. If player
1 chooses 1, that is, r = 0, the payo corresponding to 1 is d. We join the line
ad in the gure.
Now suppose that player 1 chooses a mixed strategy A = (r. 1 r) repre-
sented by 1 in the gure. Then it can see that the height 1Q represents the
expected payo when player 1 uses A and player 2 uses 1. The amount is
A
1
=
2

i=1
a
i1
r
i
= ar d(1 r).
Similarly, corresponding to player 2s strategies ' and 1 we have the line /c
and c1. The heights of the points on these lines represents the expected payos
if player 1 uses A while player 2 uses ' and 1 respectively.
Figure 1.2: Mixed strategy X
For any mixed strategy A of player 1, his expected payo is at least the
minimum of the three ordinates on the lines ad. /c. c1 at the point r, that is,
min
1<j<3
A
j
= min
1<j<3
2

i=1
a
ij
r
i
. (35)
The graphic of this function is represented by the heavy black line in the
gure.
Player 1 wishes to choose an A so as to maximize the minimum function in
(35). We see from the gure that he should choose the mixed strategy corre-
sponding to the point
t
. At this point the expected payo is

t
1
t
= max
XS2
min
1<j<3
2

i=1
a
ij
r
i
= max
XS2
min
1<j<3
A
j
.
which is the value of the game.
27
We note that the point 1
t
in Fig. 1.2. is the intersection of the lines ad and
c1. The abscissa r = r
+
of the point
t
and the value of
t
1
t
can be evaluated
by solving a system of two linear equations in two unknowns.
Remark 1.25. The graph also shows us that player 2s optimal strategy
doesnt involve his pure strategy M. Therefore, the solution of the 2 x 3 matrix
game can be obtained from the solution of the 2 x 2 matrix game

a c
d 1

.
The graphical method described above can be used to solve all 2 : matrix
games.
Example 1.16. Find out the solution of 2 4 matrix game with the payo
matrix
=

1 8
4 1 8 2

.
The third column is dominated by the fourth column and so it can be elim-
inate. We have the payo matrix
=
1 ' 1
l
1

1 8
4 1 2

Now suppose that player 1 chooses a mixed strategy A = (r. 1 r). In the
Figure 1.3 we have the lines ad, /c and c1 corresponding to player 2s strategies
1. ' and 1.
Figure 1.3: X for Example 1.16.
We see from the gure that player 1 should choose the mixed strategy cor-
responding to the point
t
. The abscissa r = r
+
of the point
t
and the value
of
t
1
t
can be evaluated by solving the system of two linear equations corre-
sponding to strategies 1 and 1. The system of linear equations is

8r n = 4 (1)
r n = 2 (1)
and the solution is r =
1
2
, n =
5
2
. Thus the optimal mixed strategy of player
1 is A =

1
2
.
1
2

, and the value of game is =


5
2
. To nd the optimal mixed
strategy of player 2 we have 1 = (c
1
. c
2
. c
3
) and equality

1
2
.
1
2

1 8
4 1 2

c
1
c
2
c
3

=

2
.
So, we obtain
5
2
c
1
8c
2

5
2
c
3
=
5
2
, and because c
1
c
2
c
3
= 1 we get
c
2
= 0 and c
1
c
3
= 1. Thus we have 1 = (c. 0. 1 c), where c = c
1
[0. 1[.
For the original matrix game the optimal strategies of player 2 are 1 =
(c. 0. 0. 1 c), c [0. 1[. The value of game is =
5
2
.
28
1.10 Solution of 3 3 matrix game
To obtain the solution of 88 matrix game we use the fact that a linear function
on a convex polygon can reach its maximum (minimum) only at a vertex of the
polygon.
Consider the payo matrix of an arbitrary 8 8 matrix game given by
=

a
11
a
12
a
13
a
21
a
22
a
23
a
31
a
32
a
33

.
A mixed (pure) strategy for the player 1 has the form A = (r
1
. r
2
. r
3
) with
r
1
. r
2
. r
3
_ 0 and r
1
r
2
r
3
= 1. The value of the game is
= max
XS3
min
1<j<3
A
j
= max
XS3
minA
1
. A
2
. A
3
. (36)
Consider the equations
A
1
= A
2
. A
2
= A
3
. A
3
= A
1
. (37)
Each equation represents a straight line which divides the whole plane into
two half-planes.
The conditions r
1
. r
2
. r
3
_ 0, r
1
r
2
r
3
= 1 show us that r
1
. r
2
. r
3
are
baricentric coordinates of the point A = (r
1
. r
2
. r
3
). The set of all points
in the closed equilateral triangle 128 with the vertices (1. 0. 0), (0. 1. 0), (0. 0. 1)
is the simplex o
3
.
The numbers r
1
. r
2
. r
3
with the above conditions represents the distances
from A to the sides of triangle o
3
with the vertices 1, 2, 3 respectively. The
equations of the three sides 23, 31, 12 of the triangle are r
1
= 0, r
2
= 0, r
3
= 0,
respectively (see Fig. 1.4.).
Figure 1.4: Baricentric coordinates
Equation A
1
= A
2
, for instance, divides the whole plane into two half-
planes. The points A in one half-plane satisfy the condition A
1
< A
2
,
while those in the other half-plane satisfy the condition A
1
A
2
.
The same situation is for other two equations in (37).
The three lines (37) either intersect at one point or are parallel to each other.
In both cases these lines divide the whole plane into three regions 1
1
. 1
2
. 1
3
,
see Fig. 1.5. (The points outside of triangle can be regarded as points with one
or two of the three coordinates r
1
. r
2
. r
3
assuming negative values.)
Figure 1.5: The three regions
29
In the region 1
1
we have
min
1<j<3
A
j
= A
1
.
in the region 1
2
we have
min
1<j<3
A
j
= A
2
.
and in the region 1
3
we have
min
1<j<3
A
j
= A
3
.
Therefore, the value of game (36) can be written as
= max
XS3
min
1<j<3
A
j
=
= max

min
XS3|R1
A
1
. min
XS3|R2
A
2
. min
XS3|R3
A
3

. (38)
To determine the value , we should rst compute
min
XS3|Rj
A
j
. , = 1. :.
Each of the sets o
3
1
j
, , = 1. :, is a convex polygon. It is sucient
to evaluate the values of A
j
at the relevant vertices of this polygon and to
make a comparison between these values. The maximum value must be . The
optimal strategies of player 1 can be determined by comparison.
The optimal strategies of player 2 can be determined in a similar manner,
after the value of the game is determined. We have
= min
Y S3
max
1<i<3

i
1
t
=
= min
Y S3
max
1
1
t
.
2
1
t
.
3
1
t
=
= min

max
Y S3|T1

1
1
t
. max
Y S3|T2

2
1
t
. max
Y S3|T3

3
1
t

.
where T
i
is the region in which the linear function
i
1
t
satises

i
1
t
= max
1<i<3

i
1
t
. i = 1. :.
It suces to compute the values of
i
1
t
at the vertices of convex polygons
and to make a comparison between them. The minimum value must be , and
the vertices 1 at which the minimum is assumed are points corresponding to
the optimal strategies of player 2.
Remark 1.26. To simplify the computation we can add a convenient con-
stant to each element of the initial matrix.
30
Example 1.17. Let us compute the value of game and nd out the optimal
strategies of the game for which the payo matrix is
=

4 2 8
8 4 2
4 0 8

.
To simplify the computation we add the constant 4 to each element of the
matrix. The result is the matrix
=

0 2 1
1 0 2
0 4 4

.
For this matrix game we have, with a mixed strategy A = (r
1
. r
2
. r
3
),
A
1
= r
2
. A
2
= 2r
1
4r
3
. A
3
= r
1
2r
2
4r
3
.
The equation of the line A
1
= A
2
is
2r
1
r
2
4r
3
= 0. or 8r
1
r
3
= 1.
The equation of the line A
2
= A
3
is
r
1
2r
2
8r
3
= 0. or 8r
1
10r
3
= 2.
The equation of the line A
3
= A
1
is
r
1
r
2
4r
3
= 0. or r
3
= 1.
The regions 1
1
. 1
2
. 1
3
in which min
1<j<3
A
j
are equal with A
1
, A
2
,
A
3
respectively are shown in Fig. 1.6.
Figure 1.6: The three regions for Example 1.17
We evaluate A
1
at the point

0.
4
5
.
1
5

. It results

0.
4

.
1

0
1
0

=
4

.
The values of A
2
at the points

2
3
.
1
3
. 0

. (1. 0. 0). (0. 0. 1) and



0.
4
5
.
1
5

are:

2
8
.
1
8
. 0

2
0
4

=
4
8
. (1. 0. 0)

2
0
4

= 2
(0. 0. 1)

2
0
4

= 4.

0.
4

.
1

2
0
4

=
4

.
31
The value of A
3
at the points

2
3
.
1
3
. 0

. (0. 1. 0) and

0.
4
5
.
1
5

are

2
8
.
1
8
. 0

1
2
4

=
4
8
. (0. 1. 0)

1
2
4

= 2
and

0.
4

.
1

1
2
4

=
4

.
By comparison of the above ve values, (
4
5
. 4 and 2). we see that the maxi-
mum value of the matrix game is =
4
5
, and the vertex at which the maximum
is reached is A
+
=

0.
4
5
.
1
5

. Thus A
+
=

0.
4
5
.
1
5

is the optimal strategy of player


1.
We proceed in a similar way to nd out the optimal strategy of player 2.
We get that the vertices 1
+
1
=

0.
3
5
.
2
5

, 1
+
2
=

8
15
.
1
3
.
2
15

represent optimal
strategies of player 2. Hence 1
+
= `1
+
1
(1 `)1
+
2
, 0 _ ` _ 1. By coming
back to the original matrix game with the payo matrix 1 we obtain the value

B
=
A
4 that is
B
=
16
5
. The optimal strategies are
A
+
=

0.
4

.
1

.
1
+
=

8(1 `)
1
.
8

`
1 `
8
.
8

`
2(1 `)
1

. 0 _ ` _ 1.
Remark 1.27. We have the same result as that obtained in Example 1.12
where we used the elimination of dominated strategies.
1.11 Matrix games and linear programming
Next, we formulate the matrix game problem as a linear programming problem.
Let = (a
ij
) be the payo matrix of a matrix game. It isnt a restriction to
assume that a
ij
0 for all i = 1. : and all , = 1. :. Then the value of the
game must be a positive number.
By choosing a mixed strategy A o
m
player 1 can get at least the expected
payo
min
1<j<n
A
j
= n.
Therefore, we have A
j
_ n, , = 1. :, that is
m

i=1
a
ij
r
i
_ n. , = 1. :
with
m

i=1
r
i
= 1. r
i
_ 0. i = 1. :.
32
We denote
xii
u
= r
t
i
, i = 1. :. Then the above problem becomes
m

i=1
a
ij
r
t
i
_ 1. , = 1. :
m

i=1
r
t
i
=
1
n
r
t
i
_ 0. i = 1. :.
Player 1 wishes to maximize n, (this maximum is the value of the game),
that is, he wishes to minimize
1
u
. Thus the problem reduces to the following
linear programming problem

[min[1 = r
t
1
r
t
2
r
t
m

m
i=1
a
ij
r
t
i
_ 1. , = 1. :
r
t
i
_ 0. i = 1. :
(39)
Similarly, player 2, by choosing a mixed strategy 1 o
n
, can keep player 1
from getting more than
max
1<i<m

i
1
t
= n.
So, we have
i
1
t
_ n, i = 1. :, that is,
n

j=1
a
ij
n
j
_ n. i = 1. :.
where
n

j=1
n
j
= 1. n
j
_ 0. , = 1. :.
We denote
yj
w
= n
t
j
, , = 1. :.
Since player 2 wishes to minimize n (this minimum is also the value of
the game), that is, he wishes to maximize
1
w
, the above problem reduces to the
following linear programming problem, which is the dual of (39), formulated
above:

[max[o = n
t
1
n
t
2
n
t
n

n
j=1
a
ij
n
t
j
_ 1. i = 1. :
n
t
j
_ 0. , = 1. :
(40)
Thus the solution of a matrix game is equivalent to the problem of solving
a pair of dual linear programming problems.
Remark 1.28. Due to the duality theorem, well known in linear program-
ming, it is enough to solve one of those above problems.
33
Example 1.18. We consider the same matrix game as in Example 1.17.
Thus we have
1 =

4 2 8
8 4 2
4 0 8

.
To obtain a
ij
0 we add the constant 1 at each element of matrix 1 and
so we obtain
=

8 4
4 8
1 0

.
The corresponding linear programming problem (40) is

[max[o = n
t
1
n
t
2
n
t
3
n
t
1
8n
t
2
4n
t
3
_ 1
4n
t
1
n
t
2
8n
t
3
_ 1
n
t
1
n
t
2
0n
t
3
_ 1
n
t
1
. n
t
2
. n
t
3
_ 0
In order to solve this problem we use the simplex method. The simplex
matrix can be written successively

8 4 1 0 0 1
4 8 0 1 0 1
1 0 0 0 1 1
1 1 1 0 0 0 0

0 2 1 0 1 0
0 21 21 0 1 4 1
1 1 0 0 0 1 1
0 4 4 0 0 1 1

5;4;1

0 1 2 12 0 12 0
0 0 6810 2110 1 1810 1
1 0 2810 110 0 810 1
0 0 6 2 0 1 1

0 1 0 18 268 168 68
0 0 1 18 1068 1868 268
1 0 0 28 2868 1168 868
0 0 0 0 421 121 21

Thus the solution is o


max
=
5
21
, n
t
1
=
8
63
, n
t
2
=
5
63
, n
t
3
=
2
63
, n
t
4
= 0, n
t
5
= 0,
n
t
6
= 0, r
t
1
= 0, r
t
2
=
4
21
, r
t
3
=
1
21
.
34
We have o
max
=
1
w
, hence n =
21
5
is the value of game with the matrix .
Also, n
1
= n
t
1
n =
8
63

21
8
=
8
15
, n
2
=
1
3
, n
3
=
2
15
, r
1
= 0, r
2
=
4
5
, r
3
=
1
5
.
The problem has still another solution because we have

12 1 0 0 814 18126 821


12 0 1 0 142 87126 221
82 0 0 1 2842 1142 421
0 0 0 0 421 121 21

Therefore n
t
1
= 0, n
t
2
= 821, n
t
3
= 221, n
t
4
= 421, n
t
5
= 0, n
t
6
= 0, thus
n
1
= 0, n
2
= 8, n
3
= 2.
In conclusion, the solution of matrix game with payo matrix 1 is:
= n 1 =
21

1 =
16

. A
+
=

0.
4

.
1

.
1
+
1
=

8
1
.
1
8
.
2
1

. 1
+
2
=

0.
8

.
2

hence
1
+
= `1
+
1
(1 `)1
+
2
. 0 _ ` _ 1.
Remark 1.29. In a next section we will do an another approach for this
kind of problems.
1.12 Denition of the non-cooperative game
For each game there are : players, : N, : _ 2. In our mathematical consider-
ations it is important the existence of the players and the possibility to identify
and to distinguish them between the others players. The set of the players 1
is identied with the set of rst : non zero natural numbers 1 = 1. 2. . . . . :.
Each player i, i 1 can apply many strategies. In the case of an eective
game the player i, in the moments of the decision during the game, may choose
from a set of variants o
i
. We consider that o
i
is a nite set, for every i. Be-
cause from mathematical point of view the concrete nature of the variants isnt
essential but the possibility to identify them is important, we denote generally
o
i
= 1. . . . . :
i
and we consider in what follows the general notation o
i
= :
i
,
i = 1. : and for each xed i, :
i
= 1. :
i
. If we take a strategy of each player
then we obtain a situation (strategy) of the game : = (:
1
. . . . . :
n
) which it is
an element of the cartezian product o
1
o
n
=

iI
o
i
. For every situation
:, each player i obtains a payo H
i
(:). So, H is a function dened on the set
of all situations : and we call it the payo matrix of the player i.
Denition 1.10. The ensemble I = < 1. o
i
. H
i
. i 1 is called non-
cooperative game. Here 1 and o
i
are sets which contain natural numbers,
H
i
= H
i
(:), i 1, are real functions dened on the set o, : o, o =

iI
o
i
.

35
Remark 1.30. We call the function H
i
the payo matrix because its set of
values can be eective written as a :-dimensional matrix of type :
1
. . . . . :
n
.
So, we can accept the name matrix game when we want to underline that this
game is given by a :-dimensional matrix.
Example 1.19. Two players put on the table a coin of same kind. If the
both players choose same face, then the rst player take the two coins, and in
contrary case, the second player take the two coins. (See the Example 1.1).
The rst player is denoted by 1 and the second by 2. So, 1 = 1. 2. Each
player has two strategies, o
1
= o
2
= 1. 2. If :
1
= 1 or :
1
= 2, then the player
1 chose "heads" respectively "tails". Similarly are the values :
2
= 1 respectively
:
2
= 2 for the player 2. It follows that o = o
1
o
2
= (1. 1). (1. 2). (2. 1). (2. 2).
Then this game is
I =< 1. o
1
. o
2
. H
1
. H
2
.
The payo matrix H
1
(:) of the player 1 can be written as:
H
1
(:) =

H
1
(1. 1) H
1
(1. 2)
H
1
(2. 1) H
1
(2. 2)

1 1
1 1

.
where the rows correspond to the strategies of player 1 and the columns to the
strategies of player 2.
The payo matrix of the player 2 is
H
2
(:) =

H
2
(1. 1) H
2
(2. 1)
H
2
(1. 2) H
2
(2. 2)

1 1
1 1

and here the rows correspond to the strategies of player 2 and the columns
correspond to the strategies of player 1.
Remark 1.31. A general notation for the payoff matrices is given by the following table:

  Situation          Payoff matrix
  s_1 ... s_n        H_1 ... H_n

So, for the game considered in Example 1.19 the payoff matrices are

  Situation     Payoff matrix
  s_1   s_2     H_1    H_2
   1     1       1     -1
   1     2      -1      1
   2     1      -1      1
   2     2       1     -1
1.13 Definition of the equilibrium point

Let us consider a non-cooperative game
Γ = < I, S_i, H_i, i ∈ I >.
We suppose that the game is repeated many times.
Example 1.19 shows us that it is not advantageous for a player to apply the same strategy all the time. If, for example, player 1 applies only strategy 1, then player 2 observes this and applies strategy 2, and so player 1 loses all the time. The situation is similar if player 1 applies strategy 2 all the time, and similarly for player 2.
So, in every situation s = (s_1, s_2) one of the players can indicate a situation s' that he prefers to the situation s existing at that moment; the preferred situation is obtained by modifying only the strategy of that player.

  Given situation s   Preferred situation s'
                      for player 1   for player 2
  (1,1)                    -            (1,2)
  (1,2)                  (2,2)            -
  (2,1)                  (1,1)            -
  (2,2)                    -            (2,1)
Hence, when the game is repeated, each player should apply each strategy s_i with some probability (relative frequency) p_{i s_i}, in order to obtain, over all the games played, a payoff as large as possible, that is, to ensure the best possible average value of the game for every player. For the row matrix of all the probabilities p_{i s_i}, s_i = 1, ..., m_i, which correspond to player i, we use the notation P_i = [p_{i1}, ..., p_{i m_i}]. The vector P_i, for all values of the probabilities, is called a mixed strategy of player i. If only one probability of the vector P_i is different from 0, and it is equal to 1, then P_i is the pure strategy s_i of the player. If all strategies P_i, i = 1, ..., n, are pure strategies, then P = (P_1, ..., P_n) is a pure strategy (the situation s = (s_1, ..., s_n)) of the whole game.
We denote by J_i the row matrix whose components are all equal to 1, so we can write P_i J_i^t = 1, where t is the symbol for the transposed matrix.
We denote by P_{-i} the mixed strategy of all players except player i. We suppose that each player fixes his strategy independently of those of the other players: P_{-i} = ∏_{j≠i} P_j, where this product of vectors is understood as a cartesian product (each component with each component). When we write the elements of the vector P_{-i} we use the lexicographic ordering of the elements. For example, if P_1 = [p_11, p_12], P_2 = [p_21, p_22, p_23], P_3 = [p_31, p_32, p_33, p_34] are the mixed strategies of the players I = {1, 2, 3}, then we have
P_{-1} = P_2 × P_3 = [p_21 p_31, p_21 p_32, p_21 p_33, p_21 p_34, p_22 p_31, p_22 p_32, p_22 p_33, p_22 p_34, p_23 p_31, p_23 p_32, p_23 p_33, p_23 p_34],
P_{-2} = P_1 × P_3 = [p_11 p_31, p_11 p_32, p_11 p_33, p_11 p_34, p_12 p_31, p_12 p_32, p_12 p_33, p_12 p_34],
P_{-3} = P_1 × P_2 = [p_11 p_21, p_11 p_22, p_11 p_23, p_12 p_21, p_12 p_22, p_12 p_23].
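As a small side illustration (mine, not the text's), with hypothetical probability values, the lexicographic products above can be generated mechanically:

```python
from itertools import product
from math import prod

# Hypothetical mixed strategies for the three players of the example above.
P1 = [0.5, 0.5]
P2 = [0.2, 0.3, 0.5]
P3 = [0.1, 0.2, 0.3, 0.4]

def opponents_product(strategies, i):
    """P_{-i}: lexicographically ordered products of the probabilities of
    all players except player i (players are indexed from 0 here)."""
    others = [p for j, p in enumerate(strategies) if j != i]
    return [prod(combo) for combo in product(*others)]

P_minus_1 = opponents_product([P1, P2, P3], 0)
print(P_minus_1)       # p21*p31, p21*p32, ..., p23*p34 (12 entries)
print(sum(P_minus_1))  # 1.0, as a product of probability distributions must
```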
Definition 1.11. We say that the non-cooperative game is solved if we can determine those mixed strategies (solutions) P_i, P_i J_i^t = 1, i = 1, ..., n, for which, the vector P_{-i} being considered fixed, the payoff function E_i = P_i H_i P_{-i}^t attains its maximum value, for every i = 1, ..., n.
We denote the strategies P_i, i = 1, ..., n, as P = (P_1, ..., P_n).
The mathematical object obtained here is called an equilibrium point (Nash equilibrium) of the game.
Example 1.20. The data of the non-cooperative game from Example 1.19 can be represented in the following form (P_{-1} = P_2, P_{-2} = P_1):

  E_1:          P_{-1} = P_2
                p_21    p_22
  P_1   p_11      1      -1
        p_12     -1       1

  E_2:          P_{-2} = P_1
                p_11    p_12
  P_2   p_21     -1       1
        p_22      1      -1

In this case the corresponding system is:
p_11 + p_12 = 1,  p_21 + p_22 = 1,
E_1 = (p_21 - p_22) p_11 - (p_21 - p_22) p_12,
E_2 = -(p_11 - p_12) p_21 + (p_11 - p_12) p_22.
If (P_1, P_2) is a solution of the problem, then there is no probability vector P'_1 = [p'_11, p'_12] for which E'_1 = E_1(P'_1, P_2) > E_1 = E_1(P_1, P_2), where E'_1 = (p_21 - p_22) p'_11 - (p_21 - p_22) p'_12, and there is no probability vector P'_2 = [p'_21, p'_22] for which E'_2 = E_2(P_1, P'_2) > E_2 = E_2(P_1, P_2), where E'_2 = -(p_11 - p_12) p'_21 + (p_11 - p_12) p'_22.
For example, P_1 = [1/2, 1/2], P_2 = [1/2, 1/2] is a solution of the game, and we have E_1 = E_2 = 0.
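As a quick numerical check of Definition 1.11 for this example (my own sketch, not part of the original text), one can verify that P_1 = P_2 = (1/2, 1/2) is an equilibrium point by comparing each player's payoff with what any pure deviation would give; by linearity of the payoffs, pure deviations are enough to check.

```python
H1 = [[1, -1], [-1, 1]]                   # player 1's payoffs (rows: player 1's strategies)
H2 = [[-h for h in row] for row in H1]    # zero-sum: player 2's payoffs, same orientation

def expected(H, P1, P2):
    """E = sum over (s1, s2) of P1[s1] * P2[s2] * H[s1][s2]."""
    return sum(P1[a] * P2[b] * H[a][b] for a in range(2) for b in range(2))

P1 = [0.5, 0.5]
P2 = [0.5, 0.5]
E1, E2 = expected(H1, P1, P2), expected(H2, P1, P2)

best_dev_1 = max(expected(H1, d, P2) for d in ([1, 0], [0, 1]))
best_dev_2 = max(expected(H2, P1, d) for d in ([1, 0], [0, 1]))
print(E1, E2)                                               # 0.0 0.0
print(best_dev_1 <= E1 + 1e-12, best_dev_2 <= E2 + 1e-12)   # True True
```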
1.14 The establishing of the equilibrium points of a non-
cooperative game
The solution of the game from Example 1.20 has been obtained by an ad-hoc procedure, using the particular elements of the matrices of this game. We do not yet have a general method to solve an arbitrary non-cooperative game and to find every solution of such a game.
In order to solve the non-cooperative game in the sense of Definition 1.11, we suppose that we have obtained the mixed strategies P_i, i = 1, ..., n, and we write the payoff functions E_i in the matrix form E_i = P_i Ē_i^t, where Ē_i = [E_i, ..., E_i] is a row matrix with m_i components, all equal to E_i.
We recall that J_i is a row vector with m_i components, all equal to 1. So, by the given definition, we can write P_i H_i P_{-i}^t = P_i Ē_i^t, hence P_i (H_i P_{-i}^t - Ē_i^t) = 0, where P_i ≥ 0, i = 1, ..., n, and H_i P_{-i}^t - Ē_i^t ≤ 0.
Indeed, if the j-th component of the vector H_i P_{-i}^t - Ē_i^t were positive, then, multiplying on the left by the vector P*_i whose components are all equal to 0 except the j-th component, which is equal to 1, we would obtain E*_i = P*_i H_i P_{-i}^t > E_i. But this contradicts Definition 1.11, which says that E_i is the maximum value of the expression P_i H_i P_{-i}^t for fixed P_{-i}.
For all values of the probabilities p_{i s_i}, 0 ≤ p_{i s_i} ≤ 1, hence also for the solution that realizes the maximum, the maximum value of the payoff function E_i is attained (among other values) for a strategy s_i for which
H_i(s_i) P_{-i}^t = max_{s'_i} H_i(s'_i) P_{-i}^t.
Here s'_i is an arbitrary strategy, and we denote by H_i(s_i) P_{-i}^t, respectively H_i(s'_i) P_{-i}^t, the element with row index s_i, respectively s'_i, of the matrix H_i P_{-i}^t.
By introducing, for every player i, a row matrix T_i with independent non-negative variables t_{i s_i}, s_i = 1, ..., m_i, T_i = [t_{i1}, ..., t_{i m_i}], we can write a matrix equation which is equivalent to the inequality H_i P_{-i}^t - Ē_i^t ≤ 0, namely H_i P_{-i}^t - Ē_i^t + T_i^t = 0, or Ē_i^t - H_i P_{-i}^t = T_i^t.
We have
Theorem 1.3. The determination of the equilibrium points of a non-cooperative game consists in solving, in non-negative numbers, the system of multilinear equations: P_i J_i^t = 1, H_i P_{-i}^t - Ē_i^t + T_i^t = 0, P_i T_i^t = 0, where i = 1, ..., n.
Remark 1.32. We consider that the unknown real values E_i have been written as the difference of two non-negative values E'_i and E''_i, E_i = E'_i - E''_i, in order to have all the unknowns as non-negative numbers.
Remark 1.33. To solve the problem formulated in Theorem 1.3 we can apply any method for solving systems of equations and inequalities of arbitrary degree in non-negative numbers. Such a method is the complete elimination method.
Remark 1.34. Because the determination of the equilibrium points of a non-cooperative game consists in solving a system of multilinear equations, we can call this theory "the theory of multilinear games".
The previous presentation does not show that a solution actually exists, i.e., that the solution set is nonempty. So, the following theorem of Nash is important:
Theorem 1.4. Every non-cooperative game has a nonempty set of solutions.
We do not present the proof of this theorem here.
Example 1.21. By Theorem 1.3, the problem given in Example 1.20 is equivalent to the problem of solving, in non-negative numbers P_1 ≥ 0, P_2 ≥ 0, T_1 ≥ 0, T_2 ≥ 0, the system of multilinear equations:
P_1 J_1^t = 1,  H_1 P_2^t - Ē_1^t + T_1^t = 0,  P_1 T_1^t = 0,
P_2 J_2^t = 1,  H_2 P_1^t - Ē_2^t + T_2^t = 0,  P_2 T_2^t = 0,
where
P_1 = [p_11, p_12],  P_2 = [p_21, p_22],
P_{-1} = P_2,  P_{-2} = P_1,  J_1 = J_2 = [1, 1],
T_1 = [t_11, t_12],  T_2 = [t_21, t_22],

  H_1 = [  1  -1 ]      H_2 = [ -1   1 ]
        [ -1   1 ],           [  1  -1 ],

Ē_1 = [E'_1 - E''_1, E'_1 - E''_1],  Ē_2 = [E'_2 - E''_2, E'_2 - E''_2]
and
E'_1 ≥ 0, E''_1 ≥ 0, E'_2 ≥ 0, E''_2 ≥ 0,  E_1 = E'_1 - E''_1,  E_2 = E'_2 - E''_2,
or, in developed form:
p_11 + p_12 = 1,  p_21 + p_22 = 1,
p_21 - p_22 - E'_1 + E''_1 + t_11 = 0,  -p_21 + p_22 - E'_1 + E''_1 + t_12 = 0,
-p_11 + p_12 - E'_2 + E''_2 + t_21 = 0,  p_11 - p_12 - E'_2 + E''_2 + t_22 = 0,
p_11 t_11 = 0,  p_12 t_12 = 0,  p_21 t_21 = 0,  p_22 t_22 = 0.
By solving this system with the complete elimination method we obtain the same solution as that obtained by the ad-hoc procedure of Example 1.20.
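For readers who want to experiment, here is a brute-force alternative (my own illustration, not the complete elimination method used in the text): scan a grid of mixed strategies for the game of Example 1.21 and keep the pairs for which no pure deviation is profitable, which is exactly the complementarity requirement of Theorem 1.3.

```python
def E1(p11, p21):
    p12, p22 = 1 - p11, 1 - p21
    return (p21 - p22) * p11 - (p21 - p22) * p12

def E2(p11, p21):
    p12, p22 = 1 - p11, 1 - p21
    return -(p11 - p12) * p21 + (p11 - p12) * p22

grid = [k / 100 for k in range(101)]
eps = 1e-9
equilibria = [
    (p11, p21)
    for p11 in grid
    for p21 in grid
    if E1(p11, p21) >= max(E1(d, p21) for d in (0.0, 1.0)) - eps
    and E2(p11, p21) >= max(E2(p11, d) for d in (0.0, 1.0)) - eps
]
print(equilibria)   # [(0.5, 0.5)]: the unique equilibrium p11 = p21 = 1/2
```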
Remark 1.35. Because of the non-negativity of the unknowns we have the equivalences:
p_11 t_11 + p_12 t_12 = 0  ⟺  p_11 t_11 = 0, p_12 t_12 = 0,
p_21 t_21 + p_22 t_22 = 0  ⟺  p_21 t_21 = 0, p_22 t_22 = 0,
and so the equation P_i T_i^t = 0 can be replaced by m_i equations of the form p_{i s_i} t_{i s_i} = 0, s_i = 1, ..., m_i, for every i, i = 1, ..., n.
1.15 The establishing of the equilibrium points of a bi-
matrix game
Definition 1.12. A non-cooperative game with two players is called a bi-matrix game.
Such a game can be solved more easily. Because P_{-1} = P_2 and P_{-2} = P_1, the problem given by Theorem 1.3 can be decomposed into three independent subproblems. The subproblem (41) consists in solving, in non-negative numbers P_2, the system of linear equations

  P_2 J_2^t = 1
  H_1 P_2^t - Ē_1^t + T_1^t = 0,        (41)

the subproblem (42) consists in solving, in non-negative numbers P_1, the system of equations

  P_1 J_1^t = 1
  H_2 P_1^t - Ē_2^t + T_2^t = 0,        (42)

and both subproblems can be solved by the simplex method. Because the general solution is a linear convex combination of the basic solutions, it follows that we must select those basic solutions (P_1, P_2) which also verify the subproblem (43), given by the system of equations

  P_1 T_1^t = 0
  P_2 T_2^t = 0.        (43)

If, for an arbitrary index s_1, 1 ≤ s_1 ≤ m_1, the unknown t_{1 s_1} is a component of a basic solution of subproblem (41) and t_{1 s_1} ≠ 0 (t_{1 s_1} = 0 only in the degenerate case), then we must have p_{1 s_1} = 0. So, in all cases t_{1 s_1} ≠ 0 we must have p_{1 s_1} = 0. Similarly, if t_{2 s_2} ≠ 0 then p_{2 s_2} = 0; this property lets us find the solutions which verify the system (43).
The general solution can be obtained as a linear convex combination of all basic solutions P_1 corresponding to a fixed P_2, together with a linear convex combination of all basic solutions P_2 corresponding to a fixed P_1.
Example 1.22. The problem given in Example 1.21 refers to a bi-matrix game. The three systems are the following:

  p_21 + p_22 = 1
  p_21 - p_22 - E'_1 + E''_1 + t_11 = 0        (41')
  -p_21 + p_22 - E'_1 + E''_1 + t_12 = 0

  p_11 + p_12 = 1
  -p_11 + p_12 - E'_2 + E''_2 + t_21 = 0       (42')
  p_11 - p_12 - E'_2 + E''_2 + t_22 = 0

  p_11 t_11 = 0,  p_12 t_12 = 0,  p_21 t_21 = 0,  p_22 t_22 = 0.   (43')

To subproblem (41') there corresponds the simplex matrix given below; the row corresponding to the objective function (which is to be minimized) is identically 0.

  S_1 = [  1   1   0   0   0   0 | 1 ]
        [  1  -1  -1   1   1   0 | 0 ]
        [ -1   1  -1   1   0   1 | 0 ]
        [  0   0   0   0   0   0 | 0 ]

We obtain the following basic solutions
  X_11 = [1/2, 1/2, 0, 0, 0, 0],  X_12 = [1, 0, 1, 0, 0, 2],  X_13 = [0, 1, 1, 0, 2, 0].
Here we use the symbol X in order to have a uniform notation of the unknowns,
  x_1 = p_21, x_2 = p_22, x_3 = E'_1, x_4 = E''_1, x_5 = t_11, x_6 = t_12.
Such uniformizations will be used in what follows whenever they are useful. To subproblem (42') there corresponds the simplex matrix

  S_2 = [  1   1   0   0   0   0 | 1 ]
        [ -1   1  -1   1   1   0 | 0 ]
        [  1  -1  -1   1   0   1 | 0 ]
        [  0   0   0   0   0   0 | 0 ]

and it has the basic solutions
  X_21 = [1/2, 1/2, 0, 0, 0, 0],  X_22 = [1, 0, 1, 0, 2, 0],  X_23 = [0, 1, 1, 0, 0, 2].
We denote by X'_ij, i = 1, 2, j = 1, 2, 3, the vectors obtained by omitting the components E'_i, E''_i, i = 1, 2. We obtain
  X'_11 = [1/2, 1/2, 0, 0],  X'_12 = [1, 0, 0, 2],  X'_13 = [0, 1, 2, 0],
  X'_21 = [1/2, 1/2, 0, 0],  X'_22 = [1, 0, 2, 0],  X'_23 = [0, 1, 0, 2].
To establish the pairs (X'_1i, X'_2j) that are solutions of the bi-matrix game, the following condition must be satisfied: t_{1 s_1} ≠ 0 ⟹ p_{1 s_1} = 0 and t_{2 s_2} ≠ 0 ⟹ p_{2 s_2} = 0.
We observe that there exists only one solution, P_1 = P_2 = [1/2, 1/2], obtained for t_11 = t_12 = t_21 = t_22 = 0 and E'_1 = E''_1 = E'_2 = E''_2 = 0. So E_1 = E_2 = 0.
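The selection of compatible basic solutions used above can also be automated. The sketch below (my own, built on the generic support-enumeration idea rather than on the book's simplex tableaux) enumerates candidate supports, solves the corresponding indifference equations and keeps the probability vectors that admit no profitable deviation; on the game of Example 1.22 it returns only P_1 = P_2 = (1/2, 1/2).

```python
import numpy as np
from itertools import combinations

def support_solve(M, rows, cols):
    """Probability vector y on `cols` making all rows in `rows` of M @ y
    equal (indifference), or None if no such distribution is found."""
    eqs = [[M[rows[0], c] - M[r, c] for c in cols] for r in rows[1:]]
    eqs.append([1.0] * len(cols))
    rhs = [0.0] * (len(rows) - 1) + [1.0]
    y_sub, *_ = np.linalg.lstsq(np.array(eqs), np.array(rhs), rcond=None)
    if np.any(y_sub < -1e-9) or abs(y_sub.sum() - 1.0) > 1e-6:
        return None
    y = np.zeros(M.shape[1])
    y[list(cols)] = np.clip(y_sub, 0.0, None)
    return y

def is_equilibrium(A, B, x, y, tol=1e-8):
    return (A @ y).max() <= x @ A @ y + tol and (x @ B).max() <= x @ B @ y + tol

def equilibria(A, B):
    m, n = A.shape
    supports_1 = [I for k in range(1, m + 1) for I in combinations(range(m), k)]
    supports_2 = [J for l in range(1, n + 1) for J in combinations(range(n), l)]
    found = []
    for I in supports_1:
        for J in supports_2:
            y = support_solve(A, I, J)      # makes player 1 indifferent on I
            x = support_solve(B.T, J, I)    # makes player 2 indifferent on J
            if x is not None and y is not None and is_equilibrium(A, B, x, y):
                found.append((x, y))
    return found

A = np.array([[1.0, -1.0], [-1.0, 1.0]])    # H_1 of Example 1.22
B = -A                                      # player 2's payoffs, rows indexed by player 1
for x, y in equilibria(A, B):
    print(x, y)                             # [0.5 0.5] [0.5 0.5]
```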
1.16 The establishing of equilibrium points of an antago-
nistic game
Definition 1.13. We call an antagonistic game a bi-matrix game whose two-dimensional matrices H_1 and H_2 satisfy the relation H_1 + H_2^t = 0, where 0 is the zero matrix.
Because every equality is equivalent to two inequalities, the systems (41), (42) and (43) from Section 1.15 can be written as:

  H_1 P_2^t - Ē_1^t ≤ 0
  J_2 P_2^t ≤ 1        (44)
  J_2 P_2^t ≥ 1

  P_1 H_1 + Ē_2 ≥ 0
  P_1 J_1^t ≤ 1        (45)
  P_1 J_1^t ≥ 1

  P_1 (H_1 P_2^t - Ē_1^t) = 0
  (P_1 H_1 + Ē_2) P_2^t = 0        (46)

where Ē_1 contains as elements one and the same value E_1, and Ē_2 contains one and the same value E_2.
From subproblem (46) it results that P_1 Ē_1^t = -Ē_2 P_2^t, hence E_2 = -E_1. We can regard these values as a minimax value (the minimum of some maximum values), obtained by minimization of the function E_1 (its maximum being infinite), respectively a maximin value (the maximum of some minimum values), obtained by maximization of the function E_2 = -E_1 (its minimum being infinite). Adding to the system (44) the objective function E_1 = E'_1 - E''_1 and to the system (45) the objective function E_2 = E'_2 - E''_2, we obtain two linear programming problems, which are dual problems.
Certainly, we can use the simplified notation E = E_1 = -E_2 and consider that we determine E = MIN by using the system (44) and E = MAX by using the system (45). So, at least one particular solution of the antagonistic game can be obtained by solving only one of the systems (44), (45). The antagonistic game may have, as a bi-matrix game, other solutions as well, which result by solving the systems (41), (42) and (43) with E_2 = -E_1 = -E.
Because of the symmetry of the systems (44) and (45), setting E = E_1 = -E_2, the following theorem relative to antagonistic games (the von Neumann-Morgenstern theorem) results:
Theorem 1.5. The minimum with respect to P_2 of the maximum (minimax) of the function E(P_1, P_2) with respect to P_1, for fixed P_2, is equal to the maximum with respect to P_1 of the minimum (maximin) of the function E(P_1, P_2) with respect to P_2, for fixed P_1, namely
min_{P_2} max_{P_1} E(P_1, P_2) = max_{P_1} min_{P_2} E(P_1, P_2).   (47)
Remark 1.36. In the case of a bi-matrix game, as a generalization of the condition which appears in the antagonistic game, we can formulate the question: for which solution (P_1, P_2) does the function E_1 + E_2 reach its minimum value?
This formulation leads us to the problem of cooperation: when is there cooperation and when is there not? For the antagonistic game we have E_1 + E_2 = 0.
Example 1.23. The bi-matrix game given in Example 1.22 is an antagonistic game. Because it has only one solution, this solution can be obtained by minimizing the function E = E_1, namely E = E'_1 - E''_1; to solve the system (44) we use the simplex matrix S_1 in which the row containing only zeros is replaced by the row [0, 0, 1, -1, 0, 0 | 0] corresponding to the function to be minimized. So, we obtain the simplex matrix

  [  1   1   0   0   0   0 | 1 ]
  [  1  -1  -1   1   1   0 | 0 ]
  [ -1   1  -1   1   0   1 | 0 ]
  [  0   0   1  -1   0   0 | 0 ]

which, after the reduction, leads us to the same solution P_1 = P_2 = (1/2, 1/2), for which E'_1 = E''_1 = 0. So E = MIN = 0.
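Since, by this section, solving an antagonistic game reduces to a single linear program, a compact way to reproduce such computations is to hand the LP to a solver. The sketch below (my own; it maximizes the game value directly rather than using the exact tableau form of (44), and assumes scipy is available) recovers the solution of Example 1.23.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Row player's optimal mixed strategy and the value of the matrix game A:
    maximize v subject to A^T x >= v*1, sum x = 1, x >= 0."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T x)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]     # probabilities >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

A = np.array([[1.0, -1.0], [-1.0, 1.0]])          # H_1 of Example 1.23
x, v = solve_zero_sum(A)
print(x, v)                                       # [0.5 0.5], value 0
```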
1.17 Applications in economics
In this section we give some applications of games in economics. In each case we evaluate the payoff function of each player, which depends on the players' strategies. Here the sets of strategies are real intervals.
1.17.1 Cournot model of duopoly [21]
We consider a very simple version of Cournot's model. Let q_1 and q_2 denote the quantities of a homogeneous product produced by firms 1 and 2, respectively. Let P(Q) = a - Q be the market-clearing price when the aggregate quantity on the market is Q = q_1 + q_2. Hence we have
P(Q) = a - Q for Q < a,  P(Q) = 0 for Q ≥ a.
Assume that the total cost to firm i of producing quantity q_i is C_i(q_i) = c q_i. That is, there are no fixed costs and the marginal cost is constant at c, where we assume c < a. Suppose that the firms choose their quantities simultaneously.
We first translate the problem into a "continuous" game. For this, we specify: the players in the game (the two firms), the strategies available to each player (the different quantities it might produce), and the payoff received by each player for each combination of strategies that could be chosen by the players (a firm's payoff is its profit). We will assume that output is continuously divisible and that negative outputs are not feasible. Thus, each firm's strategy space is S_i = [0, ∞), the non-negative real numbers, in which case a typical strategy s_i is a quantity choice, q_i ≥ 0. Because P(Q) = 0 for Q ≥ a, neither firm will produce a quantity q_i > a.
The payoff to firm i, as a function of the strategies chosen by it and by the other firm, is its profit, which can be written as
π_i(q_i, q_j) = q_i [a - (q_i + q_j) - c].
As we know, an equilibrium point (Nash equilibrium) is a pair (q*_1, q*_2) such that, for each firm i, q*_i solves the optimization problem
max_{0 ≤ q_i < ∞} π_i(q_i, q*_j) = max_{0 ≤ q_i < ∞} q_i [a - (q_i + q*_j) - c].
Assuming q*_j < a - c (as will be shown to be true), the first-order condition for firm i's optimization problem is both necessary and sufficient; it gives
q_i = (a - q*_j - c)/2.   (48)
Thus, if the quantity pair (q*_1, q*_2) is to be a Nash equilibrium, the firms' quantity choices must satisfy
q*_1 = (a - q*_2 - c)/2,  q*_2 = (a - q*_1 - c)/2.
Solving this pair of equations yields q*_1 = q*_2 = (a - c)/3, which is indeed less than a - c, as assumed.
The intuition behind this equilibrium is simple.
Each firm would of course like to be a monopolist in this market, in which case it would choose q_i to maximize π_i(q_i, 0) = q_i (a - q_i - c); it would produce the monopoly quantity q_m = (a - c)/2 and earn the monopoly profit π_i(q_m, 0) = (a - c)²/4. Given that there are two firms, aggregate profits for the duopoly would be maximized by setting the aggregate quantity q_1 + q_2 equal to the monopoly quantity q_m, as would occur, for example, if q_i = q_m/2 for each i. The problem with this arrangement is that each firm has an incentive to deviate: because the monopoly quantity is low, the associated price P(q_m) is high, and at this price each firm would like to increase its quantity, in spite of the fact that such an increase in production drives down the market-clearing price. To see this formally, use (48) to check that q_m/2 is not firm 2's best response to the choice of q_m/2 by firm 1.
In the Cournot equilibrium, in contrast, the aggregate quantity is higher, so the associated price is lower, so the temptation to increase output is reduced - reduced by just enough that each firm is just deterred from increasing its output by the realization that the market-clearing price will fall.
Remark 1.37. Rather than solving for the Nash equilibrium in the Cournot game algebraically, one could instead proceed graphically, using the best-response functions
R_2(q_1) = (a - q_1 - c)/2, firm 2's best response, and
R_1(q_2) = (a - q_2 - c)/2, firm 1's best response.
A third way to solve for this Nash equilibrium is to apply the process of iterated elimination of strictly dominated strategies (see [7]).
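A small numerical companion to Remark 1.37 (my own sketch, with illustrative parameter values): iterating the two best-response functions converges to the Cournot quantities and matches the closed form (a - c)/3.

```python
a, c = 10.0, 4.0     # illustrative demand intercept and marginal cost, with c < a

def best_response(q_other):
    return max(0.0, (a - q_other - c) / 2)

q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

print(q1, q2, (a - c) / 3)    # all three equal 2.0
```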
1.17.2 Bertrand model of duopoly [21]
Bertrand's model is based on the suggestion that firms actually choose prices, rather than quantities as in Cournot's model. Bertrand's model is a different game than Cournot's model: the strategy spaces are different and the payoff functions are different. Thus we obtain another equilibrium point, but the equilibrium concept used is the Nash equilibrium defined in the previous sections.
We consider the case of differentiated products. If firms 1 and 2 choose prices p_1 and p_2, respectively, the quantity that consumers demand from firm i is
q_i(p_i, p_j) = a - p_i + b p_j,
where b > 0 reflects the extent to which firm i's product is a substitute for firm j's product. This is an unrealistic demand function, because demand for firm i's product is positive even when firm i charges an arbitrarily high price, provided firm j also charges a high enough price. We assume that there are no fixed costs of production, that marginal costs are constant at c, where c < a, and that the firms act (choose their prices) simultaneously. We translate the economic problem into a non-cooperative game. There are again two players. This time, however, the strategies available to each firm are the different prices it might charge, rather than the different quantities it might produce. We will assume that negative prices are not feasible but that any non-negative price can be charged - there is no restriction, for instance, to prices denominated in pennies. Thus each firm's strategy space can again be represented as S_i = [0, ∞), and a typical strategy s_i is now a price choice, p_i ≥ 0.
We will again assume that the payoff function for each firm is just its profit. The profit to firm i when it chooses the price p_i and its rival chooses the price p_j is
π_i(p_i, p_j) = q_i(p_i, p_j)(p_i - c) = (a - p_i + b p_j)(p_i - c).
Thus, the price pair (p*_1, p*_2) is a Nash equilibrium if, for each firm i, p*_i solves the problem
max_{0 ≤ p_i < ∞} π_i(p_i, p*_j) = max_{0 ≤ p_i < ∞} (a - p_i + b p*_j)(p_i - c).
The solution to firm i's optimization problem is
p*_i = (a + b p*_j + c)/2.
Therefore, if the price pair (p*_1, p*_2) is to be a Nash equilibrium, the firms' price choices must satisfy
p*_1 = (a + b p*_2 + c)/2  and  p*_2 = (a + b p*_1 + c)/2.
Solving this pair of equations yields
p*_1 = p*_2 = (a + c)/(2 - b).
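As a quick check (my own sketch, with illustrative parameters and 0 < b < 2), the fixed point of the two best-response equations above indeed equals (a + c)/(2 - b):

```python
a, b, c = 10.0, 0.5, 2.0

p1 = p2 = 0.0
for _ in range(200):
    p1, p2 = (a + b * p2 + c) / 2, (a + b * p1 + c) / 2

print(p1, p2, (a + c) / (2 - b))   # all three equal 8.0
```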
1.17.3 Final-offer arbitration [6]
Many public-sector workers are forbidden to strike; instead, wage disputes are settled by binding arbitration. Many other disputes, including medical malpractice cases and claims by shareholders against their stockbrokers, also involve arbitration. The two major forms of arbitration are conventional and final-offer arbitration. In final-offer arbitration, the two sides make wage offers and then the arbitrator picks one of the offers as the settlement. In conventional arbitration, in contrast, the arbitrator is free to impose any wage as the settlement.
We now derive the Nash equilibrium wage offers in a model of final-offer arbitration.
Suppose the parties to the dispute are a firm and a union and the dispute concerns wages. First, the firm and the union simultaneously make offers, denoted by w_f and w_u, respectively. Second, the arbitrator chooses one of the two offers as the settlement. Assume that the arbitrator has an ideal settlement she would like to impose, denoted by x. Assume, further, that after observing the parties' offers, w_f and w_u, the arbitrator simply chooses the offer that is closer to x: provided that w_f < w_u, the arbitrator chooses w_f if x < (w_f + w_u)/2, chooses w_u if x > (w_f + w_u)/2, and chooses w_f or w_u if x = (w_f + w_u)/2. The arbitrator knows x but the parties do not. The parties believe that x is randomly distributed according to a probability distribution denoted by F, with associated probability density function denoted by f. Thus, the parties believe that the probabilities Prob{w_f chosen} and Prob{w_u chosen} depend on the arbitrator's behavior, and can be expressed as
Prob{w_f chosen} = Prob{x < (w_f + w_u)/2} = F((w_f + w_u)/2)
and
Prob{w_u chosen} = 1 - F((w_f + w_u)/2).
Thus, the expected wage settlement is
w_f Prob{w_f chosen} + w_u Prob{w_u chosen} = w_f F((w_f + w_u)/2) + w_u [1 - F((w_f + w_u)/2)].
We assume that the firm wants to minimize the expected wage settlement imposed by the arbitrator and the union wants to maximize it.
If the pair of offers (w*_f, w*_u) is to be a Nash equilibrium of the game between the firm and the union, w*_f must solve the optimization problem
min_{w_f} { w_f F((w_f + w*_u)/2) + w*_u [1 - F((w_f + w*_u)/2)] }
and w*_u must solve the optimization problem
max_{w_u} { w*_f F((w*_f + w_u)/2) + w_u [1 - F((w*_f + w_u)/2)] }.
Thus, the wage-offer pair (w*_f, w*_u) must solve the first-order conditions for these optimization problems:
(w*_u - w*_f) (1/2) f((w*_f + w*_u)/2) = F((w*_f + w*_u)/2)   and   (49)
(w*_u - w*_f) (1/2) f((w*_f + w*_u)/2) = 1 - F((w*_f + w*_u)/2).
It results that
F((w*_f + w*_u)/2) = 1/2,   (50)
that is, the average of the offers must equal the median of the arbitrator's preferred settlement. Substituting (50) into either of the first-order conditions then yields
w*_u - w*_f = 1 / f((w*_f + w*_u)/2).   (51)
Remark 1.38. Suppose that the arbitrator's preferred settlement is normally distributed with mean m and variance σ², in which case the density function is
f(x) = (1/(σ √(2π))) e^{-(x-m)²/(2σ²)},  m ∈ R, σ > 0.
We know that, in the case of the normal distribution, the median of the distribution equals the mean m of the distribution. Thus, (50) and (51) become:
(w*_f + w*_u)/2 = m,  w*_u - w*_f = 1/f(m) = σ √(2π),
and the Nash equilibrium offers are
w*_u = m + σ √(π/2),  w*_f = m - σ √(π/2).
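A numerical illustration of Remark 1.38 (my own sketch, with made-up values of m and σ): compute the equilibrium offers and confirm conditions (50) and (51).

```python
import math

m, sigma = 50.0, 10.0

w_u = m + sigma * math.sqrt(math.pi / 2)    # union's equilibrium offer
w_f = m - sigma * math.sqrt(math.pi / 2)    # firm's equilibrium offer

avg = (w_u + w_f) / 2
f_m = math.exp(-((avg - m) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

print(math.isclose(avg, m))                 # (50): average offer = median = m
print(math.isclose(w_u - w_f, 1 / f_m))     # (51): gap between offers = 1/f(m)
```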
1.17.4 The problem of the commons [9]
Consider the n farmers in a village. Each summer, all the farmers graze their goats on the village green. Denote the number of goats the i-th farmer owns by g_i and the total number of goats in the village by G = g_1 + g_2 + ... + g_n. The cost of buying and caring for a goat is c, independent of how many goats a farmer owns. The value to a farmer of grazing a goat on the green when a total of G goats are grazing is v(G) per goat. Since a goat needs at least a certain amount of grass in order to survive, there is a maximum number of goats that can be grazed on the green, G_max: v(G) > 0 for G < G_max but v(G) = 0 for G ≥ G_max. Also, since the first few goats have plenty of room to graze, adding one more does little harm to those already grazing, but when so many goats are grazing that they are all just barely surviving (that is, G is just below G_max), then adding one more dramatically harms the rest. Formally: for G < G_max, v'(G) < 0 and v''(G) < 0.
During the spring, the farmers simultaneously choose how many goats to own. Assume goats are continuously divisible. A strategy for farmer i is the choice of a number of goats to graze on the village green, g_i. Assuming that the strategy space is [0, ∞) covers all the choices that could possibly be of interest to the farmer; [0, G_max) would also suffice. The payoff to farmer i from grazing g_i goats, when the numbers of goats grazed by the other farmers are (g_1, ..., g_{i-1}, g_{i+1}, ..., g_n), is
g_i v(g_1 + ... + g_{i-1} + g_i + g_{i+1} + ... + g_n) - c g_i.   (52)
Thus, if (g*_1, ..., g*_n) is to be a Nash equilibrium, then, for each i, g*_i must maximize (52) given that the other farmers choose (g*_1, ..., g*_{i-1}, g*_{i+1}, ..., g*_n). The first-order condition for this optimization problem is
v(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) + g_i v'(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) - c = 0.   (53)
Substituting g*_i into (53), summing over all n farmers' first-order conditions, and then dividing by n, yields
v(G*) + (1/n) G* v'(G*) - c = 0,   (54)
where G* = g*_1 + ... + g*_n.
The first-order condition (53) reflects the incentives faced by a farmer who is already grazing g_i goats but is considering adding one more (or a tiny fraction of one more). The value of the additional goat is v(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) and its cost is c. The harm to the farmer's existing goats is v'(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) per goat, or g_i v'(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) in total. The common resource is over-utilized because each farmer considers only his or her own incentives, not the effect of his or her actions on the other farmers; hence the presence of G* v'(G*)/n, rather than G* v'(G*), in (54).
Remark 1.39. The social optimum, denoted by G**, solves the problem max_{0 ≤ G < ∞} {G v(G) - G c}, the first-order condition for which is
v(G**) + G** v'(G**) - c = 0.
We have G* > G**.
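To see the over-grazing result concretely (my own sketch, with an invented value function v(G) = a - G² that satisfies v' < 0 and v'' < 0 for G > 0), one can solve condition (54) and the social-optimum condition of Remark 1.39 numerically and compare:

```python
def bisect(h, lo, hi, iters=80):
    """Root of h on [lo, hi] by bisection (assumes a sign change)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

a, c, n = 100.0, 4.0, 5
v = lambda G: a - G**2
dv = lambda G: -2 * G

equilibrium = lambda G: v(G) + (G / n) * dv(G) - c    # condition (54)
optimum = lambda G: v(G) + G * dv(G) - c              # Remark 1.39

G_star = bisect(equilibrium, 0.0, 10.0)
G_opt = bisect(optimum, 0.0, 10.0)
print(G_star, G_opt, G_star > G_opt)   # about 8.28 and 5.66: G* > G**
```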
1.18 Exercises and problems solved
1. Let be a zero-sum two-person game with the payo matrix
H
1
= =

0 8 1
6 8
8 4 10
6 6

.
Which is the payo matrix of player 2? What strategies has the player 1
and 2, respectively?
Solution. The payo matrix of player 2 is
H
2
=

0 6 8 6
8 4
1 8 10 6

.
because H
1
H
t
2
= O
4;3
.
The player 1 has four strategies, because the matrix has four rows. The
player 2 has three strategies, because in the matrix there are three columns.
2. Two players write independently, one of the numbers 1, 2 or 3. If they
have written the same number then the player 1 pays to player 2 equivalent
in unities monetary of this number. In the contrary case the player 2 pays to
player 1 this number of unities monetary that he has chosen. Which is the
payo matrix of this game?
Solution. Easily we get that the payo matrix of player 1 is
=

1 1 1
2 2 2
8 8 8

.
3. What game in previous problems has the saddle point?
Solution. For the rst game we have

1
= max
1<i<4
min
1<j<3
a
ij
= max(1. . 8. ) = .

2
= min
1<j<3
max
1<i<4
a
ij
= min(0. . 10) = .
How
1
=
2
= it results that the rst game has saddle point. It is easily
to verify that (2,2) and (4,2) are both saddle points because a
22
= a
42
= = .
Thus i
+
= 2, i
++
= 4 are optimal strategies of player 1, and ,
+
= 2 is the
optimal strategy of player 2.
For the second game we have

1
= max
1<i<4
min
1<j<3
a
ij
= max(1. 2. 8) = 1.

2
= min
1<j<3
max
1<i<4
a
ij
= min(8. 8. 2) = 2.
Thus, the second game hasnt a saddle point in the sense of pure strategies
because
1
= 1 < 2 =
2
.
4. Which are the expected payos of player 1 in the previous games?
Solution. For the rst game, let A = (r
1
. r
2
. r
3
. r
4
), 1 = (n
1
. n
2
. n
3
) be
the mixed strategies of players 1 and 2, respectively. Then the expected payo
of player 1 is
4

i=1
3

j=1
a
ij
r
i
n
j
= 0r
1
n
1
8r
1
n
2
r
1
n
3
6r
2
n
1
r
2
n
2
6r
4
n
3
.
For the second game, let A = (r
1
. r
2
. r
3
), 1 = (n
1
. n
2
. n
3
) be the mixed
strategies of players 1 and 2, respectively. Then the expected payo of player 1
is
3

i=1
3

j=1
a
ij
r
i
n
j
= r
1
n
1
r
1
n
2
r
1
n
3
2r
2
n
1
2r
2
n
2
2r
2
n
3

8r
3
n
1
8r
3
n
2
8r
3
n
3
.
5. Using the iterated elimination of strictly dominated strategies solve the
matrix game with the payo matrix
=

0 1 1
1 0 1
1 1 0

.
Solution. In this matrix the elements of rst row is smaller than the
corresponding elements of the third row. Consequently, the player 1 will never
use his rst strategy. The rst row will be eliminated. We obtain the payo
matrix

t
=

1 0 1
1 1 0

.
Now, in this matrix
t
each element of rst column is greater than the
corresponding element of the third column. Thus, the rst strategy of player 2
will never be included in any of his optimal mixed strategies, therefore, the rst
column of the matrix
t
can be deleted to obtain

tt
=

0 1
1 0

.
Similarly, we obtain successive,

ttt
= [1 0[ a:d
IV
= [0[.
Thus, the optimal (pure) strategies are A
+
= (0. 0. 1), 1
+
= (0. 0. 1) and
the value of game is = 0. We have, actually, a saddle point (i
+
. ,
+
) = (8. 8),
because a
33
= = 0.
6. Find the optimal strategies of the matrix games with the payoff matrices

  a) A = [ 2  0 ]    b) A = [ 1  2 ]    c) A = [  1  -1 ]
         [ 1  3 ];          [ 2  0 ];          [ -1   1 ].

Solution. These are 2 × 2 matrix games. Thus, writing A = [a b; c d], we can use the mixed strategies X* = (p*, 1 - p*), Y* = (q*, 1 - q*), where
p* = (d - c)/(a + d - b - c),  q* = (d - b)/(a + d - b - c),  v = (ad - bc)/(a + d - b - c).
a) We obtain
p* = (3 - 1)/(2 + 3 - 0 - 1) = 1/2,  q* = (3 - 0)/(2 + 3 - 0 - 1) = 3/4,
v = (2·3 - 0·1)/(2 + 3 - 0 - 1) = 3/2,
hence X* = (1/2, 1/2), Y* = (3/4, 1/4), v = 3/2.
b) We have
p* = (0 - 2)/(1 + 0 - 2 - 2) = 2/3,  q* = (0 - 2)/(1 + 0 - 2 - 2) = 2/3,
v = (1·0 - 2·2)/(1 + 0 - 2 - 2) = 4/3,
hence X* = (2/3, 1/3), Y* = (2/3, 1/3), v = 4/3.
c) We obtain
p* = (1 - (-1))/(1 + 1 - (-1) - (-1)) = 1/2,  q* = 1/2,  v = (1·1 - (-1)(-1))/4 = 0,
hence X* = Y* = (1/2, 1/2), v = 0.
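The three computations above apply the same closed-form formulas to different matrices, so they are easy to mechanize; the following sketch (mine, not from the text) reproduces a), b) and c):

```python
def solve_2x2(a, b, c, d):
    """Mixed strategies and value of a completely mixed 2x2 matrix game
    A = [[a, b], [c, d]], using the formulas quoted in the solution."""
    denom = a + d - b - c
    p = (d - c) / denom           # probability of row 1 for player 1
    q = (d - b) / denom           # probability of column 1 for player 2
    v = (a * d - b * c) / denom   # value of the game
    return (p, 1 - p), (q, 1 - q), v

print(solve_2x2(2, 0, 1, 3))      # a): (0.5, 0.5), (0.75, 0.25), 1.5
print(solve_2x2(1, 2, 2, 0))      # b): (2/3, 1/3), (2/3, 1/3), 4/3
print(solve_2x2(1, -1, -1, 1))    # c): (0.5, 0.5), (0.5, 0.5), 0.0
```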
7. Solve the problem 6 with the procedure described in the Remark 1.24
(the Williams method).
Solution. Let A = (r
1
. r
2
), 1 = (n
1
. n
2
) be the mixed strategies for players
1 and 2, respectively. Here r
1
r
2
= 1 and
x1
x2
=
|cd|
|ab|
, n
1
n
2
= 1,
y1
y2
=
|db|
|ca|
.
a) We obtain r
1
r
2
= 1,
x1
x2
=
|13|
|20|
= 1, hence r
1
= r
2
, r
1
= r
2
=
1
2
,
respectively n
1
n
2
= 1,
y1
y2
=
|30|
|12|
= 8, hence 8n
2
= n
1
, n
1
=
3
4
, n
2
=
1
4
. Thus
A
+
=

1
2
.
1
2

, 1
+
=

3
4
.
1
4

and

+
=

1
2
.
1
2

2 0
1 8

84
14

=
=

8
2
.
8
2

84
14

=
0
8

8
8
=
12
8
=
8
2
.
b) We have r
1
r
2
= 1,
x1
x2
=
|20|
|12|
= 2, hence r
1
= 2r
2
, r
1
=
2
3
, r
2
=
1
3
,
respectively n
1
n
2
= 1,
y1
y2
=
|02|
|21|
= 2, n
1
=
2
3
, n
2
=
1
3
. Thus A
+
= 1
+
=

2
3
.
1
3

, and

+
=

2
8
.
1
8

1 2
2 0

28
18

=
=

4
8
.
4
8

28
18

=
8
0

4
0
=
12
0
=
4
8
.
c) We have r
1
r
2
= 1,
x1
x2
=
|11|
|1(1)|
= 1, hence r
1
= r
2
, r
1
= r
2
=
1
2
,
respectively n
1
n
2
= 1,
y1
y2
=
|1(1)|
|11|
= 1, n
1
= n
2
, n
1
= n
2
=
1
2
. Thus
A
+
= 1
+
=

1
2
.
1
2

, and

+
=

1
2
.
1
2

1 1
1 1

12
12

= (0. 0)

12
12

= 0.
8. Solve the problem 6 with the graphical method described for 2 : and
:2 matrix games.
Solution. Let A = (r. 1 r), 1 = (n. 1 n) be, respectively, the mixed
strategies for the players 1 and 2. The lines ac, /d, and a/, cd respectively, will
be represented in an illustrative gure.
a) The payo matrix is =

2 0
1 8

. thus we have the lines


Figure 1.7: The problem 8. a)
The intersection points have, respectively, the abscissa r =
1
2
. n =
3
4
. hence
A
+
= (
1
2
.
1
2
). 1
+
= (
3
4
.
1
4
). =
3
2
.
b) The payo matrix is =

1 2
2 0

. thus we have the lines


Figure 1.8: The problem 8. b)
The intersection points have, respectively, the abscissa r =
2
3
. n =
2
3
. hence
A
+
= 1
+
= (
2
3
.
1
3
). =
4
3
.
c) The payo matrix is =

1 1
1 1

. thus we have the lines


Figure 1.9: The problem 8. c)
The intersection points have, respectively, the abscissa r =
1
2
. n =
1
2
. hence
A
+
= 1
+
= (
1
2
.
1
2
). = 0.
9. Using the graphical method, solve the following matrix games with the
payo matrices:
a) =

2 0 6 8
8 8 7

. b) =

6
0 4
1 8

.
Solution. a) Let A = (r. 1r) be the mixed strategy for the player 1. The
lines ac, d1, co, d/ are represented in the following gure.
Figure 1.10: The problem 9.a)
The abscissa r = r
+
of the point
t
, and the value of
t
1
t
= , can be eval-
uated by solving the system of two linear equations corresponding to strategies
two and four of player 2. The linear equations correspond to lines which pass
through points (1,3), (0,5) and respectively (1,9), (0,3) (see the heavy black line
in gure), that is

2r n =
6r n = 8.
The solution is r =
1
4
, n =
9
2
. Hence A =

1
4
.
3
4

, =
9
2
.
For the mixed strategy of player 2 we have 1 = (c
1
. c
2
. c
3
. c
4
) and the equal-
ity

1
4
.
8
4

2 0 6 8
8 8 7

c
1
c
2
c
3
c
4

=
0
2
We obtain

26
4
.
18
4
.
27
4
.
18
4

c
1
c
2
c
3
c
4

=
0
2
.
hence
26
4
c
1

18
4
c
2

27
4
c
3

18
4
c
4
=
0
2
.
Thus we have

26c
1
18c
2
28c
3
18c
4
= 18.
c
1
c
2
c
3
c
4
= 1.
c
j
_ 0. , = 1. 4
with the solution c
1
= 0, c
3
= 0, c
2
= c, c
4
= 1 c.
The optimal strategies of player 2 are 1 = (0. c. 0. 1 c), where c [0. 1[.
b) Let 1 = (n. 1 n) be the mixed strategy for the player 2. The lines a/,
cd and c1 are represented in the following gure.
Figure 1.11: The problem 9. b)
The linear equations correspond to lines which pass through points (0,4),
(1,9) and (0,8), (1,1) (see the heavy black line in gure), that is,

n . = 4
7n . = 8.
The solution is n =
1
3
, . =
17
3
. Hence 1 =

1
3
.
2
3

, =
17
3
.
For mixed strategy of player 1 we have A = (j
1
. j
2
. j
3
) and the equality
(j
1
. j
2
. j
3
)

6
0 4
1 8

18
28

=
17
8
.
We obtain
(j
1
. j
2
. j
3
)

178
178
178

=
17
8
.
hence
17
8
j
1

17
8
j
2

17
8
j
3
=
17
8
.
Thus we have

17j
1
17j
2
17j
3
= 17
j
1
j
2
j
3
= 1
j
i
_ 0. i = 1. 8
For j
1
= 0 we obtain j
2
=
7
12
. j
3
=
5
12
from the equality A
:1
= A
:2
.
namely j
1
0j
2
j
3
= 6j
1
4j
2
8j
3
=
17
3
. The optimal strategies of player
1 is A = (0.
7
12
.
5
12
).
10. Solve the matrix game with the payo matrix
=

6 0 8
8 2 8
4 6

.
Solution. We use the method described for the 8 8 matrix game. Thus
we have, with the mixed strategy A = (r
1
. r
2
. r
3
).
A
1
= 6r
1
8r
2
4r
3
. A
2
= 2r
2
6r
3
. A
3
= 8r
1
8r
2
r
3
.
The equation of the line A
1
= A
2
is 6r
1
8r
2
4r
3
= 2r
2
6r
3
. or
6r
1
10r
2
2r
3
= 0. But r
1
r
2
r
3
= 1, so we get 2r
1
6r
3
= .
The equation of the line A
2
= A
3
is 2r
2
6r
3
= 8r
1
8r
2
r
3
or
8r
1
r
2
r
3
= 0, that is 2r
1
6r
3
= .
The equation of the line A
3
= A
1
is 8r
1
8r
2
r
3
= 6r
1
8r
2
4r
3
,
or 8r
1
r
2
r
3
= 0, that is 2r
1
6r
3
= .
We obtain only the equation 2r
1
6r
3
= , hence the solutions are r
1
= j,
r
2
=
14p
6
, r
3
=
52p
6
, j [0. 1[. Thus A =

j.
14p
6
.
52p
6

, j [0. 1[. The


values A
1
, A
2
, A
3
are
284p
6
, and this is maximum
14
3
when j = 0.
Hence A
+
=

0.
1
6
.
5
6

is optimal strategy of player 1.


For player 2, let 1 = (n
1
. n
2
. n
3
) be a mixed strategy. We have
1
1
t
=
6n
1
8n
3
,
2
1
t
= 8n
1
2n
2
8n
3
,
3
1
t
= 4n
1
6n
2
n
3
.
The equation of the line
1
1
t
=
2
1
t
is 6n
1
8n
3
= 8n
1
2n
2
8n
3
or
2n
1
2n
2
= 0, hence n
1
= n
2
.
The equation of the line
2
1
t
=
3
1
t
is 8n
1
2n
2
8n
3
= 4n
1
6n
2
n
3
,
or 4n
1
8n
2
2n
3
= 0, hence 2n
1
4n
2
= n
3
. This equation is 8n
1
8n
2
= 1
and it means that the lines n
1
= n
2
and 8n
1
8n
2
= 1 are parallel.
The equation of the line
3
1
t
=
1
1
t
is 4n
1
6n
2
n
3
= 6n
1
8n
3
, or
2n
1
6n
2
2n
3
= 0, hence n
1
8n
2
n
3
= 0. This equation is 2n
1
2n
2
= 1,
and thus this line is parallel to another.
The line 8n
1
8n
2
= 1 is essential, because the intersection of the regions
1
1
and 1
2
is this line, and the region 1
2
= O. So we must consider the values

1:
1
t
and
1:
1
t
in the cases 1
1
= (
2
3
.
1
3
. 0), and 1
2
=

1
3
. 0.
2
3

. We get that
1
+
1
=

2
3
.
1
3
. 0

, 1
+
2
=

1
3
. 0.
2
3

are the optimal strategies for the player 2. Hence


1
+
= `1
+
1
(1 `)1
+
2
, ` [0. 1[, is the solution for player 2 and also, =
14
3
.
11. Using the linear programming problem, solve the following matrix games
with the payo matrices:
a) =

0 2
1

. b) =

6 0 8
8 2 8 0
4 6 4

.
Solution. We consider the linear programming problem (40)
[max[o = n
t
1
n
t
2
n
t
n
n

j=1
a
ij
n
t
j
_ 1. i = 1. :
n
t
j
_ 0. , = 1. :.
a) We have
[max[o = n
t
1
n
t
2
2n
t
2
_ 1
n
t
1
n
t
2
_ 1
n
t
1
. n
t
2
_ 0
and so the simplex matrix is

0 2 1 0 1
1 0 1 1
1 1 0 0 0

0 2 1 0 1
1 1 0 1 1
0 4 0 1 1

0 1 12 0 12
1 0 110 1 110
0 0 2 1 8

.
Thus o
max
= 8 = 1n = n = 8, n
t
1
= 110 = n
1
= 16, n
t
2
= 12 =
n
2
= 6, n
t
3
= 0 = n
3
= 0, n
t
4
= 0 = n
4
= 0, r
t
1
= 2 = r
1
= 28,
r
t
2
= 1 =r
2
= 18. We have A
+
= (28. 18), 1
+
= (16. 6) and = 8.
b) The simplex matrix in this case is

6 0 8 1 0 0 1
8 2 8 0 0 1 0 1
4 6 4 0 0 1 1
1 1 1 1 0 0 0 0

0 82 84 74 1 84 0 14
1 14 88 08 0 18 0 18
0 7 72 12 0 12 1 12
0 4 8 18 0 18 0 18

0 0 0 2814 1 014 814 17


1 0 12 8128 0 828 128 17
0 1 12 114 0 114 17 114
0 0 0 128 0 128 28 814

We have o
max
= 814 = 1n, hence n = 148, n
t
1
= 17 = n
1
= 28,
n
t
2
= 114 = n
2
= 18, n
t
3
= 0 = n
3
= 0, n
t
4
= 0 = n
4
= 0, r
t
1
= 0 = r
1
= 0,
r
t
2
= 128 =r
2
= 16, r
t
3
= 28 =r
3
= 6.
So, an optimal solution is A
+
= (0. 16. 6), 1
+
1
= (28. 18. 0. 0), = 148.
There exists another optimal solution because we have the matrix

0 0 0 2814 1 014 814 17


1 1 0 116 0 28 828 114
0 2 1 17 0 17 27 17
0 0 0 128 0 128 28 814

.
Thus n
t
1
= 114 = n
1
= 18, n
t
2
= 0 = n
2
= 0, n
t
3
= 17 = n
3
= 28,
n
t
4
= 0 =n
4
= 0, and so we have 1
+
2
= (18. 0. 28. 0).
The optimal solution of matrix game is A
+
= (0. 16. 6), 1
+
= `1
+
1
(1
`)1
+
2
, ` [0. 1[, where 1
+
1
= (28. 18. 0. 0), 1
+
2
= (18. 0. 28. 0), and = 148.
12. The payo matrix in general representation. As technological
utilization, three rms use water from the same source. Each rm has two
strategies: the rm build a station that makes water pure (strategy 1) or it uses
water that isnt pure (strategy 2). We suppose that if at most one rm uses
water which isnt pure then the water that exists it is good to it and this rm
are not expenses. If at least two rms uses water that isnt pure, then every
rm that uses water loses 3 monetary unities (u.m.). By using the station that
makes water pure, it costs 1 u.m. for the rm that do it.
Write the payo matrix of this game.
Solution. The payo matrix is given in Table 1. Let us consider, for
example, the situation (1,2,2). The rms 2 and 3 use water that isnt pure. So,
every rm has 3 u.m. as a expense (negative payo). For the rm 1 that has
the station to do water pure, there is a expense equal 1 u.m. more.
Table 1
Situation Payo matrix
:
1
:
2
:
3
H
1
H
2
H
3
1 1 1 -1 -1 -1
1 1 2 -1 -1 0
1 2 1 -1 0 -1
1 2 2 -4 -3 -3
2 1 1 0 -1 -1
2 1 2 -3 -4 -3
2 2 1 -3 -3 -4
2 2 2 -3 -3 -3
13. The payo matrix in bi-dimensional representation. Two facto-
ries produce the same type of production , respectively 1 in two assortment

1
and
2
, respectively 1
1
and 1
2
. The products are interchangeable. By
making a test in advance, we obtain that the preferences given in percentages
are the following representation:
`1 1
1
1
2

1
40 90

2
70 20
The percentages given in the above table refer to the rst factory (rst
production), the percentage for the second factory (second production) are the
complementarities percentages (face from the total percentage 100%). Write
the payo matrix.
Solution.
We have the general representation:
Situation Payo matrix
:
1
:
2
H
1
H
2
1 1 40 60
1 2 90 10
2 1 70 30
2 2 20 80
which is with the following bi-dimensional representation equivalent:
H
1
=

40 00
70 20

. H
2
=

60 80
10 80

.
14. Solving of the bi-matrix game. We consider Problem 13 with
the following modication of purchasing conditions: we remark that 50% from
those buyers that buy the product
2
, respectively 1
2
, buy the product 1
2
,
respectively
2
, too. By expressing the sales in absolute value, by considering
1000 units that have been sale in the condition of the rst version of the problem,
we ask:
1. to express the payo matrix
2. to solve the non-cooperative bi-matrix game.
Solution. We have the table:
Situation Payo matrix
:
1
:
2
H
1
H
2
1 1 400 600
1 2 900 100
2 1 700 300
2 2 600 900
where in the situation (2,2) there are
600 = 200
1
2
800. 000 = 800
1
2
200.
The bi-dimensional writing is:
H
1
=

400 000
700 600

. H
2
=

600 800
100 000

.
2. The corresponding simplex matrices are:
o
A
=

i`, 1 2 3 4 5
1 1 1 0 0 1 =
2 400 900 -1 1 0 _
3 700 600 -1 1 0 _
4 0 0 0 0 0 MIN

o
B
=

i`, 1 2 3 4 5
1 1 1 0 0 1 =
2 600 300 -1 1 0 _
3 100 900 -1 1 0 _
4 0 0 0 0 0 MIN

By solving the linear programming problems we obtain the following solu-


tions:
1 j
1
j
2
j
3
j
4
j
5
j
6
1
1
0,55 0,45 463,64 0 0 0
1
2
1 0 600 0 0 500
1
3
0 1 900 0 600 0
Q c
1
c
2
c
3
c
4
c
5
c
6
Q
1
0,5 0,5 650 0 0 0
Q
2
1 0 700 0 300 0
Q
3
0 1 900 0 0 300
The value of the game for the rst factory is 1
A
= 60 and for the second is
1
B
= 468. 4. These pairs of solutions (1. Q) are equilibrium points that verify
the condition: j
4+i
= 0 = c
i
= 0 and c
4+i
= 0 = j
i
= 0. We observe that the
single equilibrium point is (1
1
. Q
1
): 1
1
= [0. : 0. 4[, Q
1
= [0. : 0. [.
15. Let us consider the game 13, that is a antagonistic game with constant
sum 100%.
1. Solve the game.
2. Write the structure matrices.
Solution. 1. The simplex table corresponding to this game is:
1 2 3 4 5
1 1 1 0 0 1 =
2 40 90 -1 1 0 _
3 70 20 -1 1 0 _
4 0 0 1 -1 0 MIN
By solving the linear programming problem we obtain: 1 = [0. : 0. [, Q =
[0. 7: 0. 8[. The value of the game is 55%.
2. The structures matrices of the game are:

A
=
`1 1
1
1
2

1
14 13,5 27,5

2
24,5 3 27,5
38,5 16,5 55

B
=
1`
1

2
1
1
21 10,5 31,5
1
2
1,5 12 13,5
22,5 22,5 45
So, we can see that the syntetique situation expressed in percentages about
the structure of the types of production are the following:

1
: 27,5%,
2
: 27,5%, 1
1
: 31,5%, 1
2
: 13,5%
if the production of both factories is 100%. Because of antagonistic market
competition the second factory realizes a less sale that the rst, that it is 45%
from all sales.
16. Relation between information and income. Let us consider Prob-
lem 15, by supposing that the second factory, at the moment of choosing its
strategy, knows the strategy applied by the rst factory.
We ask:
1. Write the matrix of game.
2. Solve the game.
3. Compare the results obtained here with those obtained by solving Problem
15 and interpret the dierence between these two solutions.
Solution. 1. Because the second factory knows the strategy applied by the
rst factory, it can apply another two strategies obtained by combination of
strategies 1
1
and 1
2
:
strategy 1
1
respond to the strategy
1
;
strategy 1
2
respond to the strategy
2
;
strategy 1
2
respond to the strategy
1
;
strategy 1
1
respond to the strategy
2
.
We denote Q
t
= [c
t
1
. c
t
2
. c
t
3
. c
t
4
[ the strategy of the second factory in agreement
with another four strategies to respond to two strategies 1 = [j
1
. j
2
[ of the rst
factory.
We denote \
t
the value of the new game. The matrix of the game is given
by the following table:
`1 1
1
1
1
1
2
1
2
1
1
1
2
1
1
1
2

1
40 40 90 90

2
70 20 70 20
2. By elimination of the dominate column 3, the corresponding simplex table
is:
i`, 1 2 3 4 5 6
1 1 1 1 0 0 1 =
2 40 40 90 -1 1 0 _
3 70 20 20 -1 1 0 _
4 0 0 0 1 -1 0 MIN
Solving this linear programming problem we obtain: 1 = [1. 0[, Q
t
=
`
1
[0. 1. 0. 0[ `
2
[0. 4: 0. 6: 0. 0[, `
1
. `
2
_ 0, `
1
`
2
= 1,
t
= 40/.
3. To compare the results with those obtained in 15, we write the structure
matrices of the game (rows and columns for which the strategy is equal zero
will be empty).

A
=
`1 1
1
1
1
1
2
1
2

1
16`
2
40`
1
24`
2
40
16`
2
40`
1
24`
2

B
=
1`
1
1
1
: 1
1
24`
2
24`
2
1
1
: 1
2
60`
1
86`
2
60`
1
86`
2
60 60
We remark a decreasing equal \
t
\ = 1/ for the rst factory and an
increasing equal 15% for the second factory, as a result of the fact that it owns
an information important to it.
How is separated all production 100% of both factories?
The rst factory produces only the assortment
1
as 40% and the second
factory produces only the assortment 1
1
, as 24`
2
/ (inside the strategy 1
1
: 1
1
),
(60`
1
86`
2
)/ (inside the strategy 1
1
: 1
2
), namely a total of 60%.
1.19 Exercises and problems unsolved
Let be a zero-sum two-person game with the payo matrix
H
1
= =

8 6 0 6
10 6 1 8
4 8

.
Which is the payo matrix of player 2?
What strategies have the player 1 and the player 2?
2. (The Morra game) Two players show simultaneous one or two ngers from
the left hand and in the same time yells the number of ngers that the believe
that shows the opponent. If a player forecasts the number of ngers showed
by the opponent, he receives so many unities monetary as much as ngers they
showed together. If the both players forecast or neither forecast no then neither
receives nothing. Which is the payo matrix of this game?
3. What game in previous problems has the saddle point?
4. Which are the expected payos of player 1 in the previous games?
5. Using the iterated elimination of strictly dominated strategies solve the
matrix game with the payo matrix
=

1 1 2 0
8 0 2 4
4 1
2 8 1 8

.
6. Find the optimal strategies of the following matrix game with the payo
matrix:
a) =

2 8
2

. b) =

6 1
4

. c) =

2 4
8 1

.
7. Solve the problem 6 with the Williams method.
8. Solve the problem 6 with the graphical method for 2: and :2 matrix
games.
9. Using the graphical method, solve the following matrix games with the
payo matrices:
a) =

2 1 4
8 1

. b) =

2 4
8 1
1 6
0

.
10. Solve the matrix game with the payo matrix
=

1 1 2
1 1 1
2 1 0

.
11. Using the linear programming problem solve the following matrix game
with the payo matrix:
a) =

2 8 0
1 8 8
0 1 2

. b) =

7 6
0 0 4
14 1 8

.
12. A factory produces three types of production :
1
,
2
,
3
. To produce
one unit of product we use three types of materials: 1: 1
1
metal, 1
2
wooden
material, 1
3
plastic material. The expenses with pole materials in a unit of
production are given in the table:
1`
1

2

3
1
1
4 4 6
1
2
3 5 3
1
3
5 2 4
Write the matrix of the game in general representation.
13. Two branches have to do investments in four objectives. The strategy i
consists to nance the objective i, i = 1. 4. In accordance to all considerations,
the payos of the rst branch are given by the matrix:
=

0 1 1 2
1 0 8 2
0 1 2 1
2 0 0 0

.
We suppose that every branch materializes its payo in agreement with
another one: that is what the rst wins the second loses and what the rst loses
the second wins.
Write the matrix of the game in general representation.
14. Let us consider two persons playing a bi-matrix non-cooperative game,
given by the matrices
=

1 7
8 4

. 1 =

1 8
7 8

.
Solve the game.
15. In order to get an economical and social development of a town, it
appears the problem to build or not to build two economical objectives. There
are two strategies for the corresponding ministry and for the leaders of the
town: 1 the building of rst objective; 2 the building of second objective.
The people that represent the town may have two strategies: 1 they agree
with the proposal of Ministry; 2 they dont agree with it. The strategies
apply independent. The payos are given by the matrices:
=

10 2
1 1

. 1 =

2
1 1

.
Solve the non-cooperative game.
16. Let us consider Problem 12, and we ask:
16.1. What are the percentages j
1
: j
2
: j
3
that we have to make the supply
in advance (supply before to know the volume of the contracts for the next
period of time) with prime materials in order to obtain that the stock will be
surely used and to ensure a maximum value of the production?
16.2. Find a production plan corresponding to a total production of 4 mil-
lions u.m.
17. Solve the antagonistic game given in Problem 13.
Answers
1. H
2
=

8 10 4
6 6
0 1 8
6 8

: three strategies for player 1 and for strategies


for player 2.
2. H
1
= =

0 2 8 0
2 0 0 8
8 0 0 4
0 8 4 0

The rows are: 1


11
. 1
12
. 1
21
. 1
22
. where 1
11
means 1 nger, 1 yells, 1
12

1 nger, 2 yells, 1
21
2 ngers, 1 yells, 1
22
2 ngers, 2 yells.
3.
1
= max mina
ij
= 8,
2
= minmax a
ij
= 6, there isnt saddle point, in
pure strategy, for rst game;
1
= 2,
2
= 2, there isnt saddle point, in pure
strategy, for the second game.
4.

3
i=1

4
j=1
a
ij
r
i
n
j
= 8r
1
n
1
6r
1
n
2
r
3
n
4
;

4
i=1

3
j=1
a
ij
r
i
n
j
= 2r
1
n
2
8r
1
n
3
4r
4
n
3
.
5. A
+
= (0. 28. 18. 0), 1
+
= (0. 16. 6. 0), = 8.
6. a) A
+
= (14. 84), 1
+
= (84. 14), = 1
b) A
+
= (84. 14), 1
+
= (18. 78), = 174
c) A
+
= (84. 14), 1
+
= (12. 12), = 2.
9. a) A
+
= (12. 12), 1
+
= (84. 0. 14), = 2.
b) A
+
= (0. 18. 0. 28), 1
+
= (18. 28), = 18.
10. A
+
= (0. 8. 2), 1
+
= (2. 8. 0), = 1.
11. a) A
+
= (12. 0. 12), 1
+
1
= (12. 0. 12), 1
+
2
= (0. 18. 28), = 1
b) A
+
= (0. 712. 12), 1
+
= (0. 18. 28), = 178.
14. First solution: (1. Q), 1 = [1. 0[, Q = [0. 1[, 1
A
= 1
B
= 7.
Second solution: (1. Q): 1 = [0. 1[, Q = `
1
Q
1
`
2
Q
2
, Q
1
= [0. 6: 0. 4[,
Q
2
= [1. 0[, `
1
_ 0, `
2
_ 0, `
1
`
2
= 1, 1
A
= 8. 4`
1
8`
2
, 1
B
= 8.
15. (1. Q), 1 = [0. 88: 0. 67[, Q = [0. 21: 0. 70[, 1
A
= 0. 7, 1
B
= 0. 88.
16. 16.1. 1:0:0
16.2.
1
: a = 2680000`
1
2000000`
2
u.m.

2
: / = 1820000`
1
2000000`
2
u.m.
`
1
. `
2
_ 0. `
1
`
2
= 1.
17. 1 = [0. 8: 0. 11: 0. 26: 0. 88[, Q = [0. 28: 0. 88: 0. 17: 0. 17[.
The rst factory wins 0,56 u.m.
1.20 References
1. Blaga, P., Muresan, A.S., Lupas, Al., Applied mathematics, Vol. II, Ed.
Promedia Plus, Cluj-Napoca, 1999 (In Romanian)
2. Ciucu, G., Craiu, V., Stef anescu, A., Mathematical statistics and opera-
tional research, Ed. Did. Ped., Bucuresti, 1978 (In Romanian)
3. Craiu, I., Mihoc, Gh., Craiu, V., Mathematics for economists, Ed. Sti-
intic a, Bucuresti, 1971 (In Romanian)
4. Dani, E., Numerical methods in games theory, Ed. Dacia, ClujNapoca,
1983 (In Romanian)
5. Dani, E., Muresan, A.S., Applied mathematics in economy, Lito. Univ.
Babes-Bolyai, Cluj-Napoca, 1981 (In Romanian)
6. Faber, H., An analysis of final-offer arbitration, J. of Conflict Resolution, 35, 1980, 683-705
7. Gibbons, R., Games theory for applied economists, Princeton University
Press, New Jersey, 1992
8. Guiasu, S., Malita, M., Games with three players, Ed. Stiintic a, Bu-
curesti, 1973 (In Romanian)
9. Hardin, G., The tragedy of the commons, Science, 162, 1968, 1243-1248
10. Muresan, A.S., Operational research, Lito. Univ., Babes-Bolyai, Cluj-
Napoca, 1996 (In Romanian)
11. Muresan, A.S., Applied mathematics in nance, banks and exchanges,
Ed. Risoprint, Cluj-Napoca, 2000 (In Romanian)
12. Muresan, A.S., Blaga, P., Applied mathematics in economy, Vol. II, Ed.
Transilvania Press, Cluj-Napoca, 1996 (In Romanian)
13. Muresan, A.S., Rahman, M., Applied mathematics in nance, banks and
exchanges, Vol. I, Ed. Risoprint, Cluj-Napoca, 2001 (In Romanian)
14. Muresan, A.S., Rahman, M., Applied mathematics in nance, banks and
exchanges, Vol. II, Ed. Risoprint, Cluj-Napoca, 2002 (In Romanian)
15. von Neumann, J., Morgenstern, O., Theory of games and economic
behavior (3 rd edn), Princeton University Press, New Jersey, 1953
16. Onicescu, O., Strategy of games with applications to linear programming,
Ed. Academiei, Bucuresti, 1971 (In Romanian)
17. Owen, G., Game theory (2 nd edn), Academic Press, New York, 1982
18. Schatteles, T., Strategically games and economic analysis, Ed. Stiintic a,
Bucuresti, 1969 (In Romanian)
19. Tirole, J., The theory of industrial organization, M I T Press, 1988
20. Wang, J., An inductive proof of von Neumanns minimax theorem, Chi-
nese J. of Operations Research, 1 (1987), 68-70
21. Wang, J., The theory of games, Clarendon Press, Oxford, 1988
2 Static games of incomplete information
In this chapter we consider games of incomplete information (Bayesian games), that is, games in which at least one player is uncertain about another player's payoff function. One common example of a static game of incomplete information is a sealed-bid auction: each bidder knows his own valuation for the good being sold but does not know any other bidder's valuation; bids are submitted in sealed envelopes, so the players' moves can be thought of as simultaneous.
2.1 Static Bayesian games and Bayesian Nash equilibrium
In this section we define the normal-form representation of a static Bayesian game and a Bayesian Nash equilibrium in such a game. Since these definitions are abstract and a bit complex, we introduce the main ideas with a simple example, namely Cournot competition under asymmetric information.
Consider a Cournot duopoly model with inverse demand given by P(Q) = a - Q, where Q = q_1 + q_2 is the aggregate quantity on the market. Firm 1's cost function is C_1(q_1) = c q_1. Firm 2's cost function is C_2(q_2), which has the probabilistic distribution

  C_2(q_2) :  ( c_L q_2    c_H q_2 )
              (  1 - θ        θ    ),

where c_L < c_H. Furthermore, information is asymmetric: firm 2 knows its cost function and firm 1's, but firm 1 knows its own cost function and only that firm 2's marginal cost c has the probabilistic distribution

  c :  ( c_L     c_H )
       ( 1 - θ    θ  ).
This situation may arise when firm 2 is a new entrant to the industry, or has just invented a new technology. All of this is common knowledge: firm 1 knows that firm 2 has superior information, firm 2 knows that firm 1 knows this, and so on. Naturally, firm 2 may want to choose a different (and presumably lower) quantity if its marginal cost is high than if it is low. Firm 1, for its part, should anticipate that firm 2 may tailor its quantity to its cost in this way. Let q*_2(c) denote firm 2's quantity choice as a function of its cost, that is
q*_2 = q*_2(c_L) if c = c_L,  q*_2 = q*_2(c_H) if c = c_H.
Let q*_1 denote firm 1's single quantity choice. If firm 2's cost is low, it will choose q*_2(c_L) to solve the problem
max_{q_2} [(a - q*_1 - q_2) - c_L] q_2.
Similarly, if firm 2's cost is high, q*_2(c_H) will solve the problem
max_{q_2} [(a - q*_1 - q_2) - c_H] q_2.
Firm 1 knows that firm 2's cost is low with probability 1 - θ and should anticipate that firm 2's quantity choice will be q*_2(c_L) or q*_2(c_H), depending on firm 2's cost. Thus firm 1 chooses q*_1 to solve the problem
max_{q_1} (1 - θ)[(a - q_1 - q*_2(c_L)) - c] q_1 + θ[(a - q_1 - q*_2(c_H)) - c] q_1
so as to maximize its expected profit. The first-order conditions for these three optimization problems are
q*_2(c_L) = (a - q*_1 - c_L)/2,  q*_2(c_H) = (a - q*_1 - c_H)/2,
and
q*_1 = [(1 - θ)(a - q*_2(c_L) - c) + θ(a - q*_2(c_H) - c)]/2.
Assume that these first-order conditions characterize the solutions of the earlier optimization problems. Then the solutions of the three first-order conditions are
q*_2(c_L) = (a - 2c_L + c)/3 - (θ/6)(c_H - c_L),
q*_2(c_H) = (a - 2c_H + c)/3 + ((1 - θ)/6)(c_H - c_L),
and
q*_1 = [a - 2c + (1 - θ)c_L + θ c_H]/3.
Compare q*_2(c_L), q*_2(c_H) and q*_1 with the Cournot equilibrium under complete information with costs c_1 and c_2. Assuming that the values of c_1 and c_2 are such that both firms' equilibrium quantities are positive, firm i produces q*_i = (a - 2c_i + c_j)/3 in this complete-information case. In the incomplete-information case, in contrast, q*_2(c_H) is greater than (a - 2c_H + c)/3 and q*_2(c_L) is less than (a - 2c_L + c)/3. This occurs because firm 2 not only tailors its quantity to its cost but also responds to the fact that firm 1 cannot do so. If firm 2's cost is high, for example, it produces less because its cost is high, but it also produces more because it knows that firm 1 will produce the quantity that maximizes firm 1's expected profit, which is smaller than the quantity firm 1 would produce if it knew firm 2's cost to be high.
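Before moving to the general definitions, here is a small numerical check of the formulas just derived (my own sketch, with illustrative parameter values): each of the three quantities satisfies its first-order condition.

```python
a, c, c_L, c_H, theta = 10.0, 2.0, 1.0, 3.0, 0.4

q2_L = (a - 2 * c_L + c) / 3 - theta * (c_H - c_L) / 6
q2_H = (a - 2 * c_H + c) / 3 + (1 - theta) * (c_H - c_L) / 6
q1 = (a - 2 * c + (1 - theta) * c_L + theta * c_H) / 3

print(abs(q2_L - (a - q1 - c_L) / 2) < 1e-12)     # firm 2's FOC, low cost
print(abs(q2_H - (a - q1 - c_H) / 2) < 1e-12)     # firm 2's FOC, high cost
print(abs(q1 - ((1 - theta) * (a - q2_L - c) + theta * (a - q2_H - c)) / 2) < 1e-12)
```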
2.2 Normal-form representation of static Bayesian games
Recall that the ensemble Γ = ⟨I, S_i, H_i⟩, i ∈ I, is a non-cooperative game (see Definition 1.10), where S_i is player i's strategy space and H_i is player i's payoff, hence H_i(s) = H_i(s_1, s_2, ..., s_n) is player i's payoff when the players choose the strategies (s_1, s_2, ..., s_n).
Remark 2.1. The non-cooperative game can also be described as Γ = ⟨I, A_i, H_i⟩, i ∈ I, where A_i is player i's action space and H_i is player i's payoff, hence H_i(a) = H_i(a_1, a_2, ..., a_n) is player i's payoff when the players choose the actions a = (a_1, a_2, ..., a_n). In a simultaneous-move game of complete information a strategy for a player is simply an action, but in a dynamic game of complete information (a finitely or infinitely repeated game) a strategy can be different from an action. A player's strategy is a complete plan of action: it specifies a feasible action for the player in every contingency in which the player might be called upon to act. Hence, in a dynamic game a strategy is more complicated.
To prepare for our description of the timing of a static game of incomplete information, we describe the timing of a static game of complete information as follows: (1) the players simultaneously choose actions (player i chooses a_i from the feasible set A_i), and then (2) payoffs H_i(a_1, a_2, ..., a_n) are received.
Now we want to develop the normal-form representation of a static Bayesian game, namely a simultaneous-move game of incomplete information.

The first step is to represent the idea that each player knows his own payoff function but may be uncertain about the other players' payoff functions. Let player i's possible payoff functions be represented by H_i(a_1, a_2, ..., a_n; t_i), where t_i is called player i's type and belongs to a set of possible types (or type space) T_i. Each type t_i corresponds to a different payoff function that player i might have. Given this definition of a player's type, saying that player i knows his own payoff function is equivalent to saying that player i knows his type. Likewise, saying that player i may be uncertain about the other players' payoff functions is equivalent to saying that player i may be uncertain about the types of the other players, denoted by t_{−i} = (t_1, ..., t_{i−1}, t_{i+1}, ..., t_n).
We use T_{−i} to denote the set of all possible values of t_{−i}, and we use the probability distribution p_i(t_{−i} | t_i) to denote player i's belief about the other players' types, t_{−i}, given player i's knowledge of his own type, t_i.

Remark 2.2. In most applications the players' types are independent, in which case p_i(t_{−i} | t_i) doesn't depend on t_i, so we can write player i's belief as p_i(t_{−i}).
Definition 2.1. The normal-form representation of an n-player static Bayesian game specifies the players' action spaces A_1, A_2, ..., A_n, their type spaces T_1, T_2, ..., T_n, their beliefs p_1, p_2, ..., p_n, and their payoff functions H_1, H_2, ..., H_n.
Remark 2.3. We use Γ = ⟨I, A_i, T_i, p_i, H_i⟩, i ∈ I, to denote an n-player static Bayesian game.
Remark 2.4. Player i's type, t_i, is privately known by player i, determines player i's payoff function, H_i(a_1, a_2, ..., a_n; t_i), and is a member of the set of possible types T_i. Player i's belief p_i(t_{−i} | t_i) describes i's uncertainty about the n − 1 other players' possible types, t_{−i}, given i's own type, t_i.
Example 2.1. In the Cournot game the firms' actions are their quantity choices, q_1 and q_2. Firm 2 has two possible cost functions and thus two possible profit or payoff functions:

H_2(q_1, q_2; c_L) = [(a − q_1 − q_2) − c_L] q_2

and

H_2(q_1, q_2; c_H) = [(a − q_1 − q_2) − c_H] q_2.

Firm 1 has only one possible payoff function

H_1(q_1, q_2; c) = [(a − q_1 − q_2) − c] q_1.

Thus, firm 1's type space is T_1 = {c}, and firm 2's type space is T_2 = {c_L, c_H}.
Example 2.2. Suppose that player i has two possible payoff functions. We would say that player i has two types, t_{i1} and t_{i2}, that player i's type space is T_i = {t_{i1}, t_{i2}}, and that player i's two payoff functions are H_i(a_1, a_2, ..., a_n; t_{i1}) and H_i(a_1, a_2, ..., a_n; t_{i2}). We can use the idea that each of a player's types corresponds to a different payoff function the player might have to represent the possibility that the player might have different sets of feasible actions, as follows. Suppose that player i's set of feasible actions is {a, b} with probability q and {a, b, c} with probability 1 − q. Then we can say that i has two types, and we can define i's feasible set of actions to be {a, b, c} for both types but define the payoff from taking action c to be −∞ for type t_{i1}.
Remark 2.5. The timing of a static Bayesian game is as follows:
(1) nature draws a type vector t = (t_1, t_2, ..., t_n), where t_i is drawn from the set of possible types T_i;
(2) nature reveals t_i to player i but not to any other player;
(3) the players simultaneously choose actions, player i choosing a_i from the feasible set A_i;
(4) payoffs H_i(a_1, a_2, ..., a_n; t_i) are received.
Because nature reveals player i's type to player i but not to player j in step (2), player j doesn't know the complete history of the game when actions are chosen in step (3).
Remark 2.6. There are games in which player i has private information not only about his own payoff function but also about another player's payoff function. We capture this possibility by allowing player i's payoff to depend not only on the actions (a_1, a_2, ..., a_n) but also on all the types (t_1, t_2, ..., t_n). We write this payoff as H_i(a_1, a_2, ..., a_n; t_1, t_2, ..., t_n).
Remark 2.7. The second technical point involves the beliefs, p_i(t_{−i} | t_i). We will assume that it is common knowledge that in step (1) of the timing of a static Bayesian game, nature draws a type vector t = (t_1, t_2, ..., t_n) according to the prior probability distribution p(t). When nature then reveals t_i to player i, he can compute the belief p_i(t_{−i} | t_i) using Bayes' rule:

p_i(t_{−i} | t_i) = p(t_{−i}, t_i) / p(t_i) = p(t_{−i}, t_i) / Σ_{t_{−i} ∈ T_{−i}} p(t_{−i}, t_i).
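As a small illustration of this computation, the sketch below derives a player's posterior belief over the other player's types from a joint prior. The joint distribution used here is a hypothetical example, not one taken from the text.

```python
# A minimal sketch of the belief computation in Remark 2.7, for two players.
# The joint prior p(t1, t2) below is a hypothetical example.
prior = {
    ('t1a', 't2a'): 0.3,
    ('t1a', 't2b'): 0.2,
    ('t1b', 't2a'): 0.1,
    ('t1b', 't2b'): 0.4,
}

def belief_of_player1(t1, prior):
    """Player 1's posterior over player 2's types given his own type t1 (Bayes' rule)."""
    marginal = sum(p for (s1, _), p in prior.items() if s1 == t1)   # p(t1)
    return {s2: p / marginal for (s1, s2), p in prior.items() if s1 == t1}

print(belief_of_player1('t1a', prior))   # {'t2a': 0.6, 't2b': 0.4}
```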
2.3 Definition of Bayesian Nash equilibrium
First, we define the players' strategy spaces in a static Bayesian game. We know that a player's strategy is a complete plan of action, specifying a feasible action in every contingency in which the player might be called on to act. Given the timing of a static Bayesian game, in which nature begins the game by drawing the players' types, a (pure) strategy for player i must specify a feasible action for each of player i's possible types.

Definition 2.2. In the static Bayesian game Γ, a strategy for player i is a function s_i, where for each type t_i ∈ T_i, s_i(t_i) specifies the action from the feasible set A_i that type t_i would choose if drawn by nature.
The strategy spaces aren't given in the normal-form representation of the Bayesian game. Instead, in a static Bayesian game the strategy spaces are constructed from the type and action spaces: player i's set of possible (pure) strategies, S_i, is the set of all possible functions with domain T_i and range A_i.
Remark 2.8. In the discussion of dynamic games of incomplete information we will distinguish between two categories of strategies. In a separating strategy, each type t_i ∈ T_i chooses a different action a_i ∈ A_i. In a pooling strategy, all types choose the same action. We introduce the distinction here only to help describe the wide variety of strategies that can be constructed from a given pair of type and action spaces, T_i and A_i.
Example 2.3. In the asymmetric-information Cournot game in Example 2.1 the solution consists of three quantity choices: q_2^*(c_L), q_2^*(c_H) and q_1^*. In terms of Definition 2.2 of a strategy, the pair (q_2^*(c_L), q_2^*(c_H)) is firm 2's strategy and q_1^* is firm 1's strategy. Firm 2 will choose a different quantity depending on its cost. It is important to note, however, that firm 1's single quantity choice should take into account that firm 2's quantity will depend on firm 2's cost in this way. Thus, if our equilibrium concept is to require that firm 1's strategy be a best response to firm 2's strategy, then firm 2's strategy must be a pair of quantities, one for each possible cost type, else firm 1 simply cannot compute whether its strategy is indeed a best response to firm 2's.
Given the definition of a strategy in a Bayesian game, we turn next to the definition of a Bayesian Nash equilibrium. The central idea is both simple and familiar: each player's strategy must be a best response to the other players' strategies. That is, a Bayesian Nash equilibrium is simply a Nash equilibrium in a Bayesian game.

Definition 2.3. In the static Bayesian game Γ the strategies s^* = (s_1^*, s_2^*, ..., s_n^*) are a (pure-strategy) Bayesian Nash equilibrium if for each player i and for each of i's types t_i ∈ T_i, s_i^*(t_i) solves the problem

max_{a_i ∈ A_i} Σ_{t_{−i} ∈ T_{−i}} H_i(s_1^*(t_1), ..., s_{i−1}^*(t_{i−1}), a_i, s_{i+1}^*(t_{i+1}), ..., s_n^*(t_n); t) p_i(t_{−i} | t_i).

Remark 2.9. In a Bayesian Nash equilibrium no player wants to change his strategy, even if the change involves only one action by one type.
Remark 2.10. One can show that in a finite static Bayesian game there exists a Bayesian Nash equilibrium, perhaps in mixed strategies.
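For a finite static Bayesian game, the condition in Definition 2.3 can be checked directly by enumeration. The sketch below does this for a small hypothetical game (two equally likely types for player 1, one type for player 2, two actions each); the payoff functions are invented purely for the illustration and are not taken from the text.

```python
from itertools import product

# Hypothetical finite Bayesian game used only to illustrate Definition 2.3.
T1, T2 = ['a', 'b'], ['x']
A1, A2 = [0, 1], [0, 1]
prior = {('a', 'x'): 0.5, ('b', 'x'): 0.5}

def u1(a1, a2, t1, t2):
    # type 'a' wants to match player 2's action, type 'b' wants to mismatch
    return (2 if a1 == a2 else 0) if t1 == 'a' else (2 if a1 != a2 else 0)

def u2(a1, a2, t1, t2):
    return 1 if a1 == a2 else 0

def is_bne(s1, s2):
    """Check Definition 2.3: every type of every player plays an expected best response."""
    for t1 in T1:  # player 1 knows t1 and averages over t2 with p(t2 | t1)
        cond = {t2: prior[(t1, t2)] for t2 in T2}
        norm = sum(cond.values())
        payoff = lambda a1: sum(p / norm * u1(a1, s2[t2], t1, t2) for t2, p in cond.items())
        if payoff(s1[t1]) < max(payoff(a1) for a1 in A1) - 1e-12:
            return False
    for t2 in T2:  # player 2 knows t2 and averages over t1 with p(t1 | t2)
        cond = {t1: prior[(t1, t2)] for t1 in T1}
        norm = sum(cond.values())
        payoff = lambda a2: sum(p / norm * u2(s1[t1], a2, t1, t2) for t1, p in cond.items())
        if payoff(s2[t2]) < max(payoff(a2) for a2 in A2) - 1e-12:
            return False
    return True

# A pure strategy is a function from types to actions; enumerate all of them.
strategies1 = [dict(zip(T1, acts)) for acts in product(A1, repeat=len(T1))]
strategies2 = [dict(zip(T2, acts)) for acts in product(A2, repeat=len(T2))]
equilibria = [(s1, s2) for s1 in strategies1 for s2 in strategies2 if is_bne(s1, s2)]
print(equilibria)
```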
2.4 The revelation principle
An important tool for designing games when the players have private information, due to Myerson [ ] in the context of Bayesian games, is the revelation principle. It can be applied in the auction and bilateral-trading problems described in the previous sections, as well as in a wide variety of other problems. Before we state and prove the revelation principle for static Bayesian games, we sketch the way the revelation principle is used in the auction and bilateral-trading problems.

Consider a seller who wishes to design an auction to maximize his expected revenue. In the auction described above, the highest bidder paid money to the seller and received the good, but there are many other possibilities. The bidders might have to pay an entry fee. More generally, some of the losing bidders might have to pay money, perhaps in amounts that depend on their own and others' bids. Also, the seller might set a reservation price, a floor below which bids will not be accepted. More generally, the good might stay with the seller with some probability, and might not always go to the highest bidder when the seller does release it.
The seller can use the revelation principle to simplify this problem in two ways. First, the seller can restrict attention to the following class of games:
1) The bidders simultaneously make claims about their types (i.e., their valuations). Bidder i can claim to be any type τ_i from i's set of feasible types T_i, no matter what i's true type, t_i, is.
2) Given the bidders' claims (τ_1, τ_2, ..., τ_n), bidder i pays x_i(τ_1, τ_2, ..., τ_n) and receives the good with probability q_i(τ_1, τ_2, ..., τ_n).
For each possible combination of claims (τ_1, τ_2, ..., τ_n), the sum of the probabilities q_1(τ_1, τ_2, ..., τ_n) + ... + q_n(τ_1, τ_2, ..., τ_n) must be less than or equal to one. The second way the seller can use the revelation principle is to restrict attention to those direct mechanisms in which it is a Bayesian Nash equilibrium for each bidder to tell the truth, that is, to payment and probability functions

x_1(τ_1, τ_2, ..., τ_n), ..., x_n(τ_1, τ_2, ..., τ_n);  q_1(τ_1, τ_2, ..., τ_n), ..., q_n(τ_1, τ_2, ..., τ_n)

such that each player i's equilibrium strategy is to claim τ_i(t_i) = t_i for each t_i ∈ T_i.
Definition 2.4. A static Bayesian game in which each player's only action is to submit a claim about his type is called a direct mechanism. A direct mechanism in which truth-telling is a Bayesian Nash equilibrium is called incentive-compatible.
Remark 2.11. Outside the context of auction design, the revelation principle can again be used in these two ways. Any Bayesian Nash equilibrium of any Bayesian game can be represented by a new Bayesian Nash equilibrium in an appropriately chosen new Bayesian game, where by "represented" we mean that for each possible combination of the players' types (t_1, t_2, ..., t_n), the players' actions and payoffs in the new equilibrium are identical to those in the old equilibrium. No matter what the original game, the new Bayesian game is always a direct mechanism; no matter what the original equilibrium, the new equilibrium in the new game is always truth-telling.
The following result holds.

Theorem 2.1. (The revelation principle). Any Bayesian Nash equilibrium of any Bayesian game can be represented by an incentive-compatible direct mechanism.
Proof. Consider the Bayesian Nash equilibrium s^* = (s_1^*, s_2^*, ..., s_n^*) in the Bayesian game Γ = ⟨I, A_i, T_i, p_i, H_i⟩, i ∈ I. We will construct a direct mechanism with a truth-telling equilibrium that represents s^*. The appropriate direct mechanism is a static Bayesian game with the same type spaces and beliefs as Γ but with new action spaces and new payoff functions.

The new action spaces are simple. Player i's feasible actions in the direct mechanism are claims about i's possible types. That is, player i's action space is T_i. The new payoff functions are more complicated. They depend not only on the original game Γ, but also on the original equilibrium in that game, s^*. The idea is to use the fact that s^* is an equilibrium in Γ to ensure that truth-telling is an equilibrium of the direct mechanism, as follows. The fact that s^* is a Bayesian Nash equilibrium of Γ means that for each player i, s_i^* is i's best response to the other players' strategies (s_1^*, ..., s_{i−1}^*, s_{i+1}^*, ..., s_n^*). Hence, for each of i's types t_i ∈ T_i, s_i^*(t_i) is the best action for i to choose from A_i, given that the other players' strategies are (s_1^*, ..., s_{i−1}^*, s_{i+1}^*, ..., s_n^*). Thus, if i's type is t_i and we allow i to choose an action from a subset of A_i that includes s_i^*(t_i), then i's optimal choice remains s_i^*(t_i), again assuming that the other players' strategies are unchanged. The payoff functions in the direct mechanism are chosen so as to confront each player with a choice of exactly this kind.

We define the payoffs in the direct mechanism by substituting the players' type reports in the new game, τ = (τ_1, τ_2, ..., τ_n), into their equilibrium strategies from the old game, s^*, and then substituting the resulting actions in the old game, s^*(τ) = (s_1^*(τ_1), s_2^*(τ_2), ..., s_n^*(τ_n)), into the payoff functions from the old game. Formally, i's payoff function is

V_i(τ, t) = H_i(s^*(τ); t),

where t = (t_1, t_2, ..., t_n).

We conclude the proof by showing that truth-telling is a Bayesian Nash equilibrium of this direct mechanism. By claiming to be type τ_i from T_i, player i is in effect choosing to take the action s_i^*(τ_i) from A_i. If all the other players tell the truth, then they are in effect playing the strategies (s_1^*, ..., s_{i−1}^*, s_{i+1}^*, ..., s_n^*). But we argued earlier that if they play these strategies, then when i's type is t_i the best action for i to choose is s_i^*(t_i). Thus, if the other players tell the truth, then when i's type is t_i the best type to claim to be is t_i. That is, truth-telling is an equilibrium. Hence, it is a Bayesian Nash equilibrium of the static Bayesian game Γ' = ⟨I, T_i, T_i, p_i, V_i⟩, i ∈ I, for each player i to play the truth-telling strategy τ_i(t_i) = t_i for every t_i ∈ T_i.
In [4], Harsanyi suggested that player j's mixed strategy represents player i's uncertainty about j's choice of a pure strategy, and that j's choice in turn depends on the realization of a small amount of private information.

A mixed-strategy Nash equilibrium in a game of complete information can be interpreted as a pure-strategy Bayesian Nash equilibrium in a closely related game with a little bit of incomplete information. The crucial feature of a mixed-strategy Nash equilibrium is not that player j chooses a strategy randomly, but rather that player i is uncertain about player j's choice; this uncertainty can arise either because of randomization or because of a little incomplete information, as in the following example.
Example 2.4. Consider a bi-matrix game (like the Battle of the Sexes) in which players 1 and 2, although they have known each other for quite some time, aren't quite sure of each other's payoffs. Suppose that player 1's payoff if both choose the first strategy is 2 + t_1, where t_1 is privately known by player 1; player 2's payoff if both choose the second strategy is 2 + t_2, where t_2 is privately known by player 2; and t_1, t_2 are independent draws from a uniform distribution on [0, x].

In terms of the static Bayesian game in normal form Γ = ⟨{1, 2}, A_1, A_2, T_1, T_2, p_1, p_2, H_1, H_2⟩, the action spaces are A_1 = A_2 = {1, 2}, the type spaces are T_1 = T_2 = [0, x], the beliefs are p_1(t_2) = p_2(t_1) = 1/x for all t_1 and t_2, and the payoffs are as follows:

Situation (s_1, s_2)    Payoffs (H_1, H_2)
(1, 1)                  (2 + t_1, 1)
(1, 2)                  (0, 0)
(2, 1)                  (0, 0)
(2, 2)                  (1, 2 + t_2)

We will construct a pure-strategy Bayesian Nash equilibrium of this incomplete-information static game, in which player 1 plays strategy 1 if t_1 exceeds a critical value, c_1, and plays strategy 2 otherwise, and player 2 plays strategy 2 if t_2 exceeds a critical value, c_2, and plays strategy 1 otherwise. In such an equilibrium, player 1 plays strategy 1 with probability (x − c_1)/x and player 2 plays strategy 2 with probability (x − c_2)/x. We will show that as the incomplete information disappears, that is, as x approaches zero, the players' behavior in this pure-strategy Bayesian Nash equilibrium approaches their behavior in the mixed-strategy Nash equilibrium of the original game of complete information. The original game has the payoff matrices

H_1 = ( 2  0        H_2 = ( 1  0
        0  1 ),             0  2 ),

and there are two pure-strategy Nash equilibria, (1, 1) and (2, 2), and a mixed-strategy Nash equilibrium in which player 1 plays strategy 1 with probability 2/3 and player 2 plays strategy 2 with probability 2/3. Indeed, both probabilities (x − c_1)/x and (x − c_2)/x approach 2/3 as x approaches zero.
Suppose that players 1 and 2 play the strategies just described. For a given value of x, we will determine values of c_1 and c_2 such that these strategies are a Bayesian Nash equilibrium. Given player 2's strategy, player 1's expected payoffs from playing strategy 1 and from playing strategy 2 are

(c_2 / x)(2 + t_1) + (1 − c_2 / x) · 0 = (c_2 / x)(2 + t_1)

and

(c_2 / x) · 0 + (1 − c_2 / x) · 1 = 1 − c_2 / x,

respectively. Thus playing strategy 1 is optimal if and only if

t_1 ≥ x / c_2 − 3 = c_1.

Similarly, given player 1's strategy, player 2's expected payoffs from playing strategy 2 and from playing strategy 1 are

(1 − c_1 / x) · 0 + (c_1 / x)(2 + t_2) = (c_1 / x)(2 + t_2)

and

(1 − c_1 / x) · 1 + (c_1 / x) · 0 = 1 − c_1 / x,

respectively. Thus, playing strategy 2 is optimal if and only if

t_2 ≥ x / c_1 − 3 = c_2.

These relationships yield c_1 = c_2 and c_2^2 + 3 c_2 − x = 0. Solving the quadratic then shows that the probability that player 1 plays strategy 1, namely (x − c_1)/x, and the probability that player 2 plays strategy 2, namely (x − c_2)/x, both equal

1 − (−3 + √(9 + 4x)) / (2x),

which approaches 2/3 as x approaches zero. Thus, as the incomplete information disappears, the players' behavior in this pure-strategy Bayesian Nash equilibrium of the incomplete-information game approaches their behavior in the mixed-strategy Nash equilibrium of the original game of complete information.
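A quick numerical check of this limit is given below; it simply evaluates the closed-form probability derived above for a few values of x.

```python
import math

# As the incomplete information vanishes (x -> 0), the probability (x - c)/x of
# playing one's preferred strategy approaches 2/3, where c solves c^2 + 3c - x = 0.
def preferred_prob(x):
    c = (-3 + math.sqrt(9 + 4 * x)) / 2      # positive root of c^2 + 3c - x = 0
    return (x - c) / x

for x in [1.0, 0.1, 0.01, 0.001]:
    print(x, preferred_prob(x))
# The printed probabilities approach 2/3 (approximately 0.6667) as x -> 0.
```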
2.5 Exercises and problems solved
1. (An auction) There are two bidders, i = 1, 2. Bidder i has a valuation v_i for the good; that is, if bidder i gets the good and pays the price p, then i's payoff is v_i − p. The two bidders' valuations are independently and uniformly distributed on [0, 1]. Bids are constrained to be nonnegative. The bidders simultaneously submit their bids. The higher bidder wins the good and pays the price she bid; the other bidder gets and pays nothing. In the case of a tie, the winner is determined by a flip of a coin. The bidders are risk-neutral. All of this is common knowledge. Formulate this problem as a static Bayesian game, and find a Bayesian Nash equilibrium.

Solution. In terms of a static Bayesian game Γ = ⟨I, A_1, A_2, T_1, T_2, p_1, p_2, H_1, H_2⟩, where I = {1, 2}, the action space is A_i = [0, ∞); that is, player i's action is to submit a nonnegative bid, b_i, and his type is his valuation v_i, hence the type space is T_i = [0, 1]. We must identify the beliefs and the payoff functions. Because the valuations are independent, player i believes that v_j is uniformly distributed on [0, 1], no matter what the value of v_i. Player i's payoff function H_i : A_1 × A_2 × T_1 × T_2 → R is given by the relationship

H_i(b_1, b_2, v_1, v_2) = { v_i − b_i,       if b_i > b_j,
                            (v_i − b_i)/2,   if b_i = b_j,
                            0,               if b_i < b_j.        (42)
To derive a Bayesian Nash equilibrium of this game we construct the players' strategy spaces. We know that in a static Bayesian game a strategy is a function from the type space to the action space, b_i : T_i → A_i, v_i ↦ b_i(v_i), where b_i(v_i) specifies the bid that each of i's types (valuations) would choose. In a Bayesian Nash equilibrium, player 1's strategy b_1(v_1) is a best response to player 2's strategy b_2(v_2), and vice versa. The pair of strategies (b_1(v_1), b_2(v_2)) is a Bayesian Nash equilibrium if for each v_i in [0, 1], b_i(v_i) solves the problem

max_{b_i} (v_i − b_i) P{b_i > b_j(v_j)} + (1/2)(v_i − b_i) P{b_i = b_j(v_j)},   i = 1, 2.
We simplify the exposition and calculations by looking for a linear equilibrium b_1(v_1) = a_1 + c_1 v_1 and b_2(v_2) = a_2 + c_2 v_2. For a given value of v_i, player i's best response solves the problem

max_{b_i} (v_i − b_i) P{b_i > a_j + c_j v_j},

where we have used the fact that P{b_i = b_j(v_j)} = 0, because b_j(v_j) = a_j + c_j v_j and v_j is uniformly distributed, so b_j is uniformly distributed. Since it is pointless for player i to bid above j's maximum bid, we have a_j ≤ b_i ≤ a_j + c_j, so

P{b_i > a_j + c_j v_j} = P{v_j < (b_i − a_j)/c_j} = (b_i − a_j)/c_j.

Player i's best response is therefore

b_i(v_i) = { (v_i + a_j)/2,   if v_i ≥ a_j,
             a_j,             if v_i < a_j.          (43)
We now argue that a_j ≤ 0. If 0 < a_j < 1, then there are some values of v_i such that v_i < a_j, in which case b_i(v_i) isn't linear; rather, it is flat at first and positively sloped later. Since we are looking for a linear equilibrium, we therefore rule out 0 < a_j < 1, focusing instead on a_j ≥ 1 and a_j ≤ 0. The former cannot occur in equilibrium: since it is optimal for a higher type to bid at least as much as a lower type's optimal bid, we have c_j ≥ 0, but then a_j ≥ 1 would imply that b_j(v_j) ≥ v_j, which cannot be optimal. Thus, if b_i(v_i) is to be linear, then we must have a_j ≤ 0, in which case b_i(v_i) = (v_i + a_j)/2, so a_i = a_j/2 and c_i = 1/2. We can repeat the same analysis for player j under the assumption that player i adopts the strategy b_i(v_i) = a_i + c_i v_i. This yields a_i ≤ 0, a_j = a_i/2, and c_j = 1/2. Combining these two sets of results then yields a_i = a_j = 0 and c_i = c_j = 1/2. That is, b_i(v_i) = v_i/2.
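The following Monte Carlo sketch checks this best response numerically: with the rival bidding half of a uniformly distributed valuation, a grid search over bids for a fixed valuation should return a bid close to half that valuation. The sample size, grid, and the particular value v_i = 0.8 are illustrative choices.

```python
import random

# With bidder j bidding b_j = v_j / 2 and v_j ~ U[0, 1], bidder i's expected
# payoff is (v_i - b) * P(b > v_j / 2), which is maximized at b = v_i / 2.
random.seed(0)
rival_vals = [random.random() for _ in range(200_000)]

def expected_payoff(v_i, b):
    wins = sum(1 for v_j in rival_vals if b > v_j / 2)
    return (v_i - b) * wins / len(rival_vals)

v_i = 0.8
grid = [k / 100 for k in range(0, 81)]
best_bid = max(grid, key=lambda b: expected_payoff(v_i, b))
print(best_bid)   # close to v_i / 2 = 0.4, up to simulation and grid error
```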
Remark 2.12. Note well that we aren't restricting the players' strategy spaces to include only linear strategies. Rather, we allow the players to choose arbitrary strategies but ask whether there is an equilibrium that is linear. It turns out that because the players' valuations are uniformly distributed, a linear equilibrium not only exists but is unique. We found that b_i(v_i) = v_i/2; that is, each player submits a bid equal to half her valuation. Such a bid reflects the fundamental trade-off a bidder faces in an auction: the higher the bid, the more likely the bidder is to win; the lower the bid, the larger the gain if the bidder does win.
Remark 2.13. One might wonder whether there are other Bayesian Nash equilibria of the game treated in the first problem, and also how equilibrium bidding changes as the distribution of the bidders' valuations changes. Neither of these questions can be answered using the technique just applied: it is fruitless to try to guess all the functional forms other equilibria of this game might have, and a linear equilibrium doesn't exist for any other distribution of valuations. In the next problem we derive a symmetric Bayesian Nash equilibrium (namely, the players' strategies are identical: there is a single function b(v_i) such that player 1's strategy b_1(v_1) is b(v_1) and player 2's strategy b_2(v_2) is b(v_2), and this single strategy is a best response to itself), again for the case of uniformly distributed valuations. Under the assumption that the players' strategies are strictly increasing and differentiable, we show that the unique symmetric Bayesian Nash equilibrium is the linear equilibrium. The technique we use can easily be extended to a broad class of valuation distributions, as well as to the case of n bidders.
2. Are there other Bayesian Nash equilibria in the game of problem 1? Derive a symmetric Bayesian Nash equilibrium.
Solution. As we have just mentioned in Remark 2.13, it is fruitless to try to guess all the functional forms other equilibria might have. Suppose player j adopts the strategy b, and assume that b is strictly increasing and differentiable. Then for a given value of v_i, player i's optimal bid solves the problem

max_{b_i} (v_i − b_i) P{b_i > b(v_j)}.

Let b^{−1}(b_j) denote the valuation that bidder j must have in order to bid b_j; that is, b^{−1}(b_j) = v_j if b_j = b(v_j). Since v_j is uniformly distributed on [0, 1], P{b_i > b(v_j)} = P{b^{−1}(b_i) > v_j} = b^{−1}(b_i). The first-order condition for player i's optimization problem is therefore

−b^{−1}(b_i) + (v_i − b_i) (d/db_i) b^{−1}(b_i) = 0.
This first-order condition is an implicit equation for bidder i's best response to the strategy b played by bidder j, given that bidder i's valuation is v_i. If the strategy b is to be a symmetric Bayesian Nash equilibrium, we require that the solution to the first-order condition be b(v_i): that is, for each of bidder i's possible valuations, bidder i doesn't wish to deviate from the strategy b, given that bidder j plays this strategy. To impose this requirement, we substitute b_i = b(v_i) into the first-order condition, yielding

−b^{−1}(b(v_i)) + (v_i − b(v_i)) (d/db_i) b^{−1}(b(v_i)) = 0.

Of course, b^{−1}(b(v_i)) = v_i. Furthermore, (d/db_i) b^{−1}(b(v_i)) = 1 / b'(v_i). That is, (d/db_i) b^{−1}(b_i) measures how much bidder i's valuation must change to produce a unit change in the bid, whereas b'(v_i) measures how much the bid changes in response to a unit change in the valuation. Thus, b must satisfy the first-order differential equation

−v_i + (v_i − b(v_i)) (1 / b'(v_i)) = 0,

which is more conveniently expressed as v_i b'(v_i) + b(v_i) = v_i. The left-hand side of this differential equation is (d/dv_i)(v_i b(v_i)). Integrating both sides of the equation therefore yields

v_i b(v_i) = (1/2) v_i^2 + k,

where k is a constant of integration. To eliminate k, we need a boundary condition. Fortunately, simple economic reasoning provides one: no player should bid more than his valuation. Thus, we require b(v_i) ≤ v_i for every v_i. In particular, we require b(0) ≤ 0. Since bids are constrained to be nonnegative, this implies that b(0) = 0, so k = 0 and b(v_i) = v_i/2, as claimed.
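The differential equation and its boundary condition can also be checked symbolically; a minimal sketch using sympy is given below.

```python
import sympy as sp

# Symbolic check of the first-order differential equation derived above:
# v * b'(v) + b(v) = v, with boundary condition b(0) = 0.
v = sp.symbols('v', positive=True)
b = sp.Function('b')

general = sp.dsolve(sp.Eq(v * b(v).diff(v) + b(v), v), b(v))
print(general)            # b(v) = C1/v + v/2; the boundary condition forces C1 = 0

# Verify that b(v) = v/2 indeed satisfies the equation.
candidate = v / 2
assert sp.simplify(v * sp.diff(candidate, v) + candidate - v) == 0
```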
3. (A double auction) Consider a trading game called a double auction. The seller names an asking price, p_s, and the buyer simultaneously names an offer price, p_b. If p_b ≥ p_s, then trade occurs at price p = (p_b + p_s)/2; if p_b < p_s, then no trade occurs. The buyer's valuation for the seller's good is v_b, the seller's is v_s. These valuations are private information and are drawn from independent uniform distributions on [0, 1]. If the buyer gets the good for price p, then the buyer's utility is v_b − p; if there isn't trade, then the buyer's utility is zero. If the seller sells the good for price p, then the seller's utility is p − v_s; if there isn't trade, then the seller's utility is zero. Find the Bayesian Nash equilibria.
Solution. In this static Bayesian game, a strategy for the buyer is a function p_b specifying the price the buyer will offer for each of the buyer's possible valuations, namely p_b(v_b). Likewise, a strategy for the seller is a function p_s specifying the price the seller will demand for each of the seller's valuations, namely p_s(v_s). A pair of strategies (p_b(v_b), p_s(v_s)) is a Bayesian Nash equilibrium if the following two conditions hold. For each v_b in [0, 1], p_b(v_b) solves the problem

max_{p_b} [ v_b − (p_b + E[p_s(v_s) | p_b ≥ p_s(v_s)]) / 2 ] P(p_b ≥ p_s(v_s)),

where E[p_s(v_s) | p_b ≥ p_s(v_s)] is the expected price the seller will demand, conditional on the demand being less than the buyer's offer p_b. For each v_s in [0, 1], p_s(v_s) solves the problem

max_{p_s} [ (p_s + E[p_b(v_b) | p_b(v_b) ≥ p_s]) / 2 − v_s ] P(p_b(v_b) ≥ p_s),

where E[p_b(v_b) | p_b(v_b) ≥ p_s] is the expected price the buyer will offer, conditional on the offer being greater than the seller's demand p_s.
There are many Bayesian Nash equilibria of this game. Consider the following one-price equilibrium, for example, in which trade occurs at a single price if it occurs at all. For any value x in [0, 1], let the buyer's strategy be to offer x if v_b ≥ x and to offer zero otherwise, and let the seller's strategy be to demand x if v_s ≤ x and to demand one otherwise. Given the buyer's strategy, the seller's choices amount to trading at x or not trading, so the seller's strategy is a best response to the buyer's because the seller-types who prefer trading at x to not trading do so, and vice versa. The analogous argument shows that the buyer's strategy is a best response to the seller's, so these strategies are indeed a Bayesian Nash equilibrium. In this equilibrium, trade occurs for the (v_s, v_b) pairs that can be indicated in a figure; trade would be efficient for all (v_s, v_b) pairs such that v_b ≥ v_s, but it doesn't occur in the two regions for which v_b ≥ v_s and v_b < x, or v_b ≥ v_s and v_s > x.
We now derive a linear Bayesian Nash equilibrium of the double auction. As in the previous problem, we aren't restricting the players' strategy spaces to include only linear strategies. Rather, we allow the players to choose arbitrary strategies but ask whether there is an equilibrium that is linear. Many other equilibria exist besides the one-price equilibria and the linear equilibrium, but the linear equilibrium has interesting efficiency properties, which we describe later.

Suppose the seller's strategy is p_s(v_s) = a_s + c_s v_s. Then p_s is uniformly distributed on [a_s, a_s + c_s], so the first problem becomes

max_{p_b} [ v_b − (1/2)(p_b + (a_s + p_b)/2) ] (p_b − a_s) / c_s,

the first-order condition for which yields

p_b = (2/3) v_b + (1/3) a_s.

Thus, if the seller plays a linear strategy, then the buyer's best response is also linear. Analogously, suppose the buyer's strategy is p_b(v_b) = a_b + c_b v_b. Then p_b is uniformly distributed on [a_b, a_b + c_b], so the second problem becomes

max_{p_s} [ (1/2)(p_s + (p_s + a_b + c_b)/2) − v_s ] (a_b + c_b − p_s) / c_b,

the first-order condition for which yields

p_s = (2/3) v_s + (1/3)(a_b + c_b).

Thus, if the buyer plays a linear strategy, then the seller's best response is also linear. If the players' linear strategies are to be best responses to each other, the relationship for p_b implies that c_b = 2/3 and a_b = a_s/3, and the relationship for p_s implies that c_s = 2/3 and a_s = (a_b + c_b)/3. Therefore, the linear equilibrium strategies are

p_b(v_b) = (2/3) v_b + 1/12

and

p_s(v_s) = (2/3) v_s + 1/4.
Recall that trade occurs in the double auction if and only if p_b ≥ p_s. The last relationships show that trade occurs in the linear equilibrium if and only if

v_b ≥ v_s + 1/4.
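As a rough numerical companion to the efficiency discussion that follows, the sketch below simulates the expected gains from trade under the linear equilibrium and under a one-price equilibrium; the choice x = 1/2 for the one-price equilibrium is an illustrative assumption.

```python
import random

# Simulated expected gains from trade (buyer's plus seller's payoff) under the
# linear equilibrium and under a one-price equilibrium with x = 1/2.
random.seed(1)
N = 500_000
x = 0.5
gains_linear = gains_one_price = 0.0

for _ in range(N):
    v_s, v_b = random.random(), random.random()
    # linear equilibrium: trade iff p_b >= p_s, i.e. iff v_b >= v_s + 1/4
    if v_b >= v_s + 0.25:
        gains_linear += v_b - v_s
    # one-price equilibrium: trade iff v_b >= x and v_s <= x
    if v_b >= x and v_s <= x:
        gains_one_price += v_b - v_s

print(gains_linear / N, gains_one_price / N)
# With x = 1/2 the linear equilibrium yields higher expected gains, in line
# with the comparison discussed below.
```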
A figure of this situation reveals that seller-types above 3/4 make demands above the buyer's highest offer, p_b(1) = 3/4, and buyer-types below 1/4 make offers below the seller's lowest demand, p_s(0) = 1/4. One can depict which valuation pairs trade in the one-price and in the linear equilibrium, respectively. In both cases, the most valuable possible trade, namely v_s = 0 and v_b = 1, does occur. But the one-price equilibrium misses some valuable trades (such as v_s = 0 and v_b = x − ε, where ε is small) and achieves some trades that are worth next to nothing (such as v_s = x − ε and v_b = x + ε). The linear equilibrium, in contrast, misses all trades worth next to nothing but achieves all trades worth at least 1/4. This suggests that the linear equilibrium may dominate the one-price equilibria in terms of the expected gains the players receive, but it also raises the possibility that the players might do even better in an alternative equilibrium.

In [9] Myerson and Satterthwaite show that, for the uniform valuation distributions considered here, the linear equilibrium yields higher expected gains for the players than any other Bayesian Nash equilibrium of the double auction (including but far from limited to the one-price equilibria). This implies that there isn't a Bayesian Nash equilibrium of the double auction in which trade occurs if and only if v_b ≥ v_s, that is, if and only if it is efficient.

They also show that this latter result is very general: if v_b is continuously distributed on [x_b, y_b] and v_s is continuously distributed on [x_s, y_s], where y_b ≥ x_s and y_s ≥ x_b, then there isn't a bargaining game that the buyer and seller would willingly play that has a Bayesian Nash equilibrium in which trade occurs if and only if it is efficient.
Remark 2.14. The revelation principle can be used to prove this general result, and the result can then be translated into Hall and Lazear's employment model. If the firm has private information about the worker's marginal product (m) and the worker has private information about his outside opportunity (v), then there isn't a bargaining game that the firm and the worker would willingly play that produces employment if and only if it is efficient, that is, if and only if m ≥ v.
2.6 Exercises and problems unsolved
1. Consider a Cournot duopoly operating in a market with inverse demand P(Q) = a − Q, where Q = q_1 + q_2 is the aggregate quantity on the market. Both firms have total costs C_i(q_i) = c q_i, but demand is uncertain: it is low, a = a_L, with probability 1 − θ, and high, a = a_H, with probability θ. Furthermore, information is asymmetric: firm 1 knows whether demand is high or low, but firm 2 doesn't. All of this is common knowledge. The two firms simultaneously choose quantities. What are the strategy spaces for the two firms? Make assumptions concerning a_H, a_L, θ, and c such that all equilibrium quantities are positive. What is the Bayesian Nash equilibrium of this game?
2. Consider the following asymmetric-information model of Bertrand duopoly with differentiated products. Demand for firm i is q_i(p_i, p_j) = a − p_i − b_i p_j. Costs are zero for both firms. The sensitivity of firm i's demand to firm j's price is either high or low. That is, b_i is either b_H or b_L, where b_H > b_L > 0. For each firm, b_i = b_H with probability θ and b_i = b_L with probability 1 − θ, independent of the realization of b_j. Each firm knows its own b_i but not its competitor's. All of this is common knowledge. What are the action spaces, type spaces, beliefs, and utility functions in this game? What are the strategy spaces? What conditions define a symmetric pure-strategy Bayesian Nash equilibrium of this game? Solve for such an equilibrium.
3. Find all the pure-strategy Bayesian Nash equilibria in the following static Bayesian game:
1. Nature determines whether the payoffs are as in Game 1 or as in Game 2, each game being equally likely.
2. Player 1 learns whether nature has drawn Game 1 or Game 2, but player 2 doesn't.
3. Player 1 chooses either T or B; player 2 simultaneously chooses either L or R.
4. Payoffs are given by the game drawn by nature.

         L      R                 L      R
  T    1, 1   0, 0         T    0, 0   0, 0
  B    0, 0   0, 0         B    0, 0   2, 2
       Game 1                    Game 2
4. Recall from Section 1.1 of Chapter 1 that Matching Pennies has no pure-strategy Nash equilibrium but has one mixed-strategy Nash equilibrium: each player plays H with probability 1/2.

                    Player 2
                    H         T
  Player 1   H    1, −1     −1, 1
             T    −1, 1     1, −1

Provide a pure-strategy Bayesian Nash equilibrium of a corresponding game of incomplete information such that as the incomplete information disappears, the players' behavior in the Bayesian Nash equilibrium approaches their behavior in the mixed-strategy Nash equilibrium of the original game of complete information.
5. Consider a first-price, sealed-bid auction in which the bidders' valuations are independently and uniformly distributed on [0, 1]. Show that if there are n bidders, then the strategy of bidding (n − 1)/n times one's valuation is a symmetric Bayesian Nash equilibrium of this auction.
6. Consider a first-price, sealed-bid auction in which the bidders' valuations are independently and identically distributed according to the strictly positive density f(v_i) on [0, 1]. Compute a symmetric Bayesian Nash equilibrium for the two-bidder case.
7. Reinterpret the buyer and seller in the double auction analyzed in problem 3 (A double auction) from Section 2.5 as a firm that knows a worker's marginal product (m) and a worker who knows his outside opportunity (v), respectively. In this context, trade means that the worker is employed by the firm, and the price at which the parties trade is the worker's wage, w. If there is trade, then the firm's payoff is m − w and the worker's is w; if there isn't trade, then the firm's payoff is zero and the worker's is v. Suppose that m and v are independent draws from a uniform distribution on [0, 1], as in the text. For purposes of comparison, compute the players' expected payoffs in the linear equilibrium of the double auction. Now consider the following two trading games as alternatives to the double auction.

Game I: Before the parties learn their private information, they sign a contract specifying that if the worker is employed by the firm then the worker's wage will be w, but also that either side can escape from the employment relationship at no cost. After the parties learn the values of their respective pieces of private information, they simultaneously announce either that they Accept the wage w or that they Reject that wage. If both announce Accept, then trade occurs; otherwise it doesn't. Given an arbitrary value of w from [0, 1], what is the Bayesian Nash equilibrium of this game? Draw a diagram showing the type-pairs that trade. Find the value of w that maximizes the sum of the players' expected payoffs and compute this maximized sum.

Game II: Before the parties learn their private information, they sign a contract specifying that the following dynamic game will be used to determine whether the worker joins the firm and, if so, at what wage. After the parties learn the values of their respective pieces of private information, the firm chooses a wage w to offer the worker, which the worker then accepts or rejects. Try to analyze this game using backwards induction. Given w and v, what will the worker do? If the firm anticipates what the worker will do, then given m, what will the firm do? What is the sum of the players' expected payoffs?
2.7 References
1. Dani, E., Numerical method in games theory, Ed. Dacia, Cluj-Napoca, 1983
2. Dani, E., Muresan, A.S., Applied mathematics in economy, Lito. Univ. Babes-Bolyai, Cluj-Napoca, 1981
3. Gibbons, R., Game theory for applied economists, Princeton University Press, New Jersey, 1992
4. Harsanyi, J., Games with randomly distributed payoffs: A new rationale for mixed strategy equilibrium points, International Journal of Game Theory, 2, 1973, 1-23
5. Muresan, A.S., Operational research, Lito. Univ. Babes-Bolyai, Cluj-Napoca, 1996
6. Muresan, A.S., Applied mathematics in finance, banks and exchanges, Ed. Risoprint, Cluj-Napoca, 2000
7. Muresan, A.S., Applied mathematics in finance, banks and exchanges, Vol. I, Ed. Risoprint, Cluj-Napoca, 2001
8. Muresan, A.S., Applied mathematics in finance, banks and exchanges, Vol. II, Ed. Risoprint, Cluj-Napoca, 2002
9. Myerson, R., Satterthwaite, M., Efficient mechanisms for bilateral trading, Journal of Economic Theory, 28, 1983, 265-281
10. Owen, G., Game theory (2nd edn.), Academic Press, New York, 1982
11. Wang, J., The theory of games, Clarendon Press, Oxford, 1988
Part II
THE ABSTRACT THEORY OF
GAMES
3 Generalized games and abstract economies
Fixed point theorems are the basic mathematical tools for showing the existence of solutions in game theory and economics. While I have tried to integrate the mathematics and the applications, this chapter isn't a comprehensive introduction to either general equilibrium theory or game theory. Here only finite-dimensional spaces are used. While many of the results presented here are true in arbitrary locally convex spaces, no attempt has been made to cover the infinite-dimensional results.

The main bibliographical source for this chapter is Border's book [10], which I have used in my lectures with the students in Computer Science from the Faculty of Mathematics and Informatics. Also, we use recent results obtained by Aliprantis, Tourky, Yannelis, Maugeri, Ray, D'Agata, Oetli, Schlager, Agarwal, O'Regan, Rim, Kim, Husai, Tarafdar, Llinares, Muresan, and so on.
3.1 Introduction
The fundamental idealization made in modelling an economy is the notion of a commodity. We suppose that it is possible to classify all the different goods and services in the world into a finite number, m, of commodities, which are available in infinitely divisible units. The commodity space is then R^m. A vector in R^m specifies a list of quantities of each commodity. It is commodity vectors that are exchanged, manufactured and consumed in the course of economic activity, not individual commodities, although a typical exchange involves a zero quantity of most commodities. A price vector lists the value of a unit of each commodity and so belongs to R^m. Thus the value of the commodity vector x at price p is Σ_{i=1}^m p_i x_i = p·x.

The principal participants in an economy are the consumers. We will assume that there is a given finite number of consumers. Not every commodity vector is admissible as a final consumption for a consumer. The set X_i ⊂ R^m of all admissible consumption vectors for consumer i is his consumption set. There are a variety of restrictions that might be embodied in the consumption set. One possible restriction that might be placed on admissible consumption vectors is that they be nonnegative. Under another interpretation, negative quantities of a commodity in a final consumption vector mean that the consumer is supplying the commodity as a service.

In a private ownership economy consumers are also partially characterized by their initial endowment of commodities. This is represented as a point w_i in the commodity space. In a market economy a consumer must purchase his consumption vector at the market prices. The set of admissible commodity vectors that he can afford at prices p given an income M_i is called his budget set and is just {x ∈ X_i | p·x ≤ M_i}. The budget set might well be empty. The problem faced by a consumer in a market economy is to choose a consumption vector, or a set of them, from the budget set. To do this, the consumer must have some criterion for choosing. One way to formalize the criterion is to assume that the consumer has a utility index, that is, a real-valued function u_i : X_i → R, x ↦ u_i(x). The idea is that a consumer would prefer to consume vector x rather than vector y if u_i(x) > u_i(y) and would be indifferent if u_i(x) = u_i(y). The solution to the consumer's problem is then to find all vectors x which maximize u_i on the budget set. The set of solutions to a consumer's problem for given prices is his demand set.

The supplier's problem is simple. Suppliers are motivated by profits. Each supplier j has a production set Y_j of technologically feasible supply vectors. A supply vector specifies the quantities of each commodity supplied and the amount of each commodity used as an input. Inputs are denoted by negative quantities and outputs by positive ones. The profit or net income associated with supply vector y at price p is just Σ_{i=1}^m p_i y_i = p·y. The supplier's problem is then to choose a y from the set of technologically feasible supply vectors which maximizes the associated profit. The set of profit-maximizing production vectors is the supply set.

A variation on the notion of a noncooperative game is that of an abstract economy. In an abstract economy, the set of strategies available to a player depends on the strategy choices of the other players. An example is the problem of finding an equilibrium price vector for a market economy. This can be converted into a game where the strategy sets of consumers are their consumption sets and those of suppliers are their production sets.
3.2 Equilibrium of excess demand correspondences
There is a fundamental theorem for proving the existence of a market equilibrium of an abstract economy [10]. If ζ is the excess demand multivalued mapping, then p is an equilibrium price if 0 ∈ ζ(p). The price p is a free disposal equilibrium price if there is a z ∈ ζ(p) such that z ≤ 0.

Theorem 3.1. (Gale-Debreu-Nikaido Lemma). Let ζ : Δ → R^m be an upper hemi-continuous multivalued mapping with nonempty compact convex values such that for all p ∈ Δ,

p·z ≤ 0 for each z ∈ ζ(p).

Then the set {p ∈ Δ | ζ(p) ∩ (−R^m_+) ≠ ∅} of free disposal equilibrium prices is nonempty and compact.
Proof. For each p ∈ Δ set

U(p) = {q ∈ Δ | q·z > 0 for all z ∈ ζ(p)}.

Then U(p) is convex for each p and p ∉ U(p), and U^{−1}(q) is open for each q. Indeed, if p ∈ U^{−1}(q), we have that q·z > 0 for all z ∈ ζ(p); then, since ζ is upper hemi-continuous, ζ^+[{x | q·x > 0}] is a neighborhood of p contained in U^{−1}(q).

Now p is U-maximal if and only if

for each q ∈ Δ, there is a z ∈ ζ(p) with q·z ≤ 0.

It is known that if C ⊂ R^m is a closed convex cone and K ⊂ R^m is compact and convex, then K ∩ C* ≠ ∅ (where C* is the polar cone of C) if and only if

(∀) q ∈ C, (∃) z ∈ K : q·z ≤ 0.

Applying this with C = R^m_+ (so that C* = −R^m_+) and K = ζ(p), we see that p is U-maximal if and only if ζ(p) ∩ (−R^m_+) ≠ ∅. Thus, by Sonnenschein's theorem, {p | ζ(p) ∩ (−R^m_+) ≠ ∅} is nonempty and compact.
Theorem 3.2. (Neuefeind Lemma). Let S = {p | p ∈ R^m, p > 0, Σ_{i=1}^m p_i = 1}. Let ζ : S → R^m be upper hemi-continuous with nonempty closed convex values, satisfy the strong form of Walras' law

p·z = 0 for all z ∈ ζ(p),

and the boundary condition:

there is a p* ∈ S and a neighborhood V of Δ \ S in Δ such that for all p ∈ V ∩ S, p*·z > 0 for all z ∈ ζ(p).

Then the set {p | p ∈ S, 0 ∈ ζ(p)} of equilibrium prices for ζ is compact and nonempty.
Proof. Define the binary relation U on Δ by

p ∈ U(q) iff [p·z > 0 for all z ∈ ζ(q) and p, q ∈ S] or [p ∈ S and q ∈ Δ \ S].

We first show that the U-maximal elements are precisely the equilibrium prices. Suppose that p is U-maximal, that is, U(p) = ∅. Since U(q) = S ≠ ∅ for all q ∈ Δ \ S, it follows that p ∈ S. Since p ∈ S and U(p) = ∅,

for each q ∈ S, there is a z ∈ ζ(p) with q·z ≤ 0.   (*)

Now (*) implies 0 ∈ ζ(p). Suppose, by way of contradiction, that 0 ∉ ζ(p). Then, since {0} is compact and convex and ζ(p) is closed and convex, by the separating hyperplane theorem there is a p̂ ∈ R^m satisfying p̂·z > 0 for all z ∈ ζ(p). Put p_λ = λp̂ + (1 − λ)p. Then for z ∈ ζ(p), p_λ·z = λp̂·z + (1 − λ)p·z = λp̂·z > 0 for λ > 0. (Recall that p·z = 0 for z ∈ ζ(p) by Walras' law.) For λ > 0 small enough, p_λ > 0, so that the normalized price vector q_λ = (Σ_i p_λ^i)^{−1} p_λ ∈ S and q_λ·z > 0 for all z ∈ ζ(p), which violates (*).

Conversely, if p is an equilibrium price, then 0 ∈ ζ(p), and since q·0 = 0 for all q, it follows that U(p) = ∅.

Next we verify that U satisfies the hypotheses of Sonnenschein's theorem.
(ia) p ∉ U(p): for p ∈ S this follows from Walras' law; for p ∈ Δ \ S, p ∉ S = U(p).
(ib) U(p) is convex: for p ∈ S, let q_1, q_2 ∈ U(p), that is, q_1·z > 0 and q_2·z > 0 for z ∈ ζ(p); then [λq_1 + (1 − λ)q_2]·z > 0 as well. For p ∈ Δ \ S, U(p) = S, which is convex.
(ii) If q ∈ U^{−1}(p), then there is a p' with q ∈ int U^{−1}(p'). There are two cases: (a) q ∈ S and (b) q ∈ Δ \ S.
(iia) q ∈ S ∩ U^{−1}(p). Then p·z > 0 for all z ∈ ζ(q). Let H = {x | p·x > 0}, which is open. Then, by upper hemi-continuity, ζ^+[H] is a neighborhood of q contained in U^{−1}(p).
(iib) q ∈ (Δ \ S) ∩ U^{−1}(p). By the boundary condition in the statement of the theorem, q ∈ int U^{−1}(p*).
Theorem 3.3. (Grandmont's Lemma). Let S = {p | p ∈ R^m, p > 0, Σ_{i=1}^m p_i = 1}. Let ζ : S → R^m be upper hemi-continuous with nonempty compact convex values, satisfy the strong form of Walras' law

p·z = 0 for all z ∈ ζ(p),

and the boundary condition:

for every sequence q_n → q ∈ Δ \ S and z_n ∈ ζ(q_n), there is a p ∈ S (which may depend on {z_n}) such that p·z_n > 0 for infinitely many n.

Then ζ has an equilibrium price p, that is, 0 ∈ ζ(p).
Proof. Set K_n = co{x ∈ S | dist(x, Δ \ S) ≥ 1/n}. Then (K_n) is an increasing family of compact convex sets and S = ∪_n K_n. Let C_n be the cone generated by K_n. Use Debreu's theorem to conclude that for each n there is q_n ∈ K_n such that ζ(q_n) ∩ C_n^* ≠ ∅. Let z_n ∈ ζ(q_n) ∩ C_n^*. Suppose that q_n → q ∈ Δ \ S. Then, by the boundary condition, there is a p ∈ S such that p·z_n > 0 infinitely often. But for large enough n, p ∈ K_n ⊂ C_n. Since z_n ∈ C_n^*, it follows that p·z_n ≤ 0, a contradiction.

It follows then that no subsequence of (q_n) converges to a point in Δ \ S. Since Δ is compact, some subsequence must converge to some p ∈ S. Since ζ is upper hemi-continuous with compact values, by the sequential characterization of hemi-continuity there is a subsequence of (z_n) converging to some z̄ ∈ ζ(p). This z̄ lies in ∩_n C_n^* = −R^m_+. This fact, together with the strong form of Walras' law, implies that z̄ = 0.
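To make the condition 0 ∈ ζ(p) concrete, the sketch below computes the (single-valued) excess demand of a small two-good exchange economy with Cobb-Douglas consumers and locates the market-clearing price on the simplex by grid search. The endowments and preference parameters are hypothetical, and the example is only an illustration; it is not part of the theorems above.

```python
# Illustrative two-good exchange economy with two Cobb-Douglas consumers.
# Its excess demand z(p) satisfies Walras' law p.z(p) = 0; we locate a price p
# on the simplex with z(p) close to 0, i.e. an equilibrium price.
endowments = [(1.0, 0.0), (0.0, 1.0)]   # consumer h owns endowment w_h
alphas = [0.3, 0.6]                      # Cobb-Douglas expenditure share of good 1

def excess_demand(p1):
    p = (p1, 1.0 - p1)
    z = [-sum(w[g] for w in endowments) for g in range(2)]   # minus total supply
    for w, a in zip(endowments, alphas):
        income = p[0] * w[0] + p[1] * w[1]
        z[0] += a * income / p[0]
        z[1] += (1 - a) * income / p[1]
    return z

# grid search for the market-clearing price of good 1
best = min((abs(excess_demand(k / 1000)[0]), k / 1000) for k in range(1, 1000))
print(best)   # z_1(p) is about 0 near p1 = 0.46; Walras' law then gives z_2(p) = 0 too
```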
3.3 Existence of equilibrium for abstract economies
3.3.1 Preliminaries
Let A be a subset of a topological space X. We shall denote by 2^A the family of all subsets of A and by cl A the closure of A in X. If A is a subset of a vector space, we shall denote by co A the convex hull of A. If A is a nonempty subset of a topological vector space X and S, T : A → 2^X are multivalued mappings, then co T, cl T, T ∩ S : A → 2^X are the multivalued mappings defined by (co T)(x) = co T(x), (cl T)(x) = cl T(x) and (T ∩ S)(x) = T(x) ∩ S(x) for each x ∈ A, respectively. Let B be a nonempty subset of A. Denote the restriction of T to B by T|_B.

Let X be a nonempty subset of a topological vector space and x ∈ X. Let φ : X → 2^X be a given multivalued mapping. A multivalued mapping φ_x : X → 2^X is said to be an O-majorant of φ at x if there exists an open neighborhood N_x of x in X such that
(a) for each z ∈ N_x, φ(z) ⊂ φ_x(z),
(b) for each z ∈ N_x, z ∉ cl co φ_x(z), and
(c) φ_x|_{N_x} has open graph in N_x × X.
The multivalued mapping φ is said to be O-majorised if for each x ∈ X with φ(x) ≠ ∅ there exists an O-majorant of φ at x.

It is clear that every multivalued mapping φ having an open graph with x ∉ cl co φ(x) for each x ∈ X is an O-majorised multivalued mapping. However, the following simple multivalued mapping is O-majorised without having an open graph: the multivalued mapping φ : X = (0, 1) → 2^X defined by

φ(x) = (0, x^2] for each x ∈ X.

Then φ hasn't an open graph, but φ_x(z) = (0, z) for all z ∈ X is an O-majorant of φ at any x ∈ X.

We now state the following definition.
Definition 3.1. Let X and Y be two topological spaces. Then a multivalued mapping T : X → 2^Y is said to be upper semicontinuous (respectively, almost upper semicontinuous) if for each x ∈ X and each open set V in Y with T(x) ⊂ V, there exists an open neighborhood U of x in X such that T(y) ⊂ V (respectively, T(y) ⊂ cl V) for each y ∈ U.

Remark 3.1. An upper semicontinuous multivalued mapping is clearly almost upper semicontinuous. From the definition, if T is almost upper semicontinuous, then cl T is also almost upper semicontinuous. And it should be noted that we don't need the closedness assumption on T(x) for each x ∈ X in these definitions. The following example shows an almost upper semicontinuous multivalued mapping which isn't upper semicontinuous.

Example 3.1. Let X = [0, ∞) and φ : X → 2^X be defined by

φ(2) = (1, 3),  and  φ(x) = [1, 3] if x ≠ 2.

Then φ isn't upper semicontinuous at 2, since for the open neighborhood (1, 3) of φ(2) there doesn't exist any neighborhood U of 2 such that φ(y) ⊂ (1, 3) for all y ∈ U; however, φ(y) ⊂ [1, 3] = cl (1, 3) for all y in any neighborhood of 2. Therefore φ is almost upper semicontinuous.
Now we give the following general definitions of equilibrium theory in mathematical economics. Let I be a finite set of agents. For each i ∈ I, let X_i be a nonempty set of actions.

Definition 3.2. An abstract economy (or generalized game) Γ = (X_i, A_i, B_i, P_i)_{i∈I} is defined as a family of ordered quadruples (X_i, A_i, B_i, P_i), where X_i is a nonempty topological vector space (a choice set), A_i, B_i : Π_{j∈I} X_j → 2^{X_i} are constraint multivalued mappings and P_i : Π_{j∈I} X_j → 2^{X_i} is a preference multivalued mapping. An equilibrium for Γ (Shafer-Sonnenschein type) is a point x̂ ∈ X = Π_{i∈I} X_i such that for each i ∈ I, x̂_i ∈ cl B_i(x̂) and P_i(x̂) ∩ A_i(x̂) = ∅.
Remark 3.2. When A_i = B_i for each i ∈ I, our definitions of an abstract economy and an equilibrium coincide with the standard definitions of Shafer-Sonnenschein.

For each i ∈ I, P'_i : X → 2^X will denote the multivalued mapping defined by P'_i(x) = {y | y ∈ X, y_i ∈ P_i(x)} (= π_i^{−1}(P_i(x)), where π_i : X → X_i is the i-th projection).

And we shall use the following notation: X_{−i} = Π_{j∈I, j≠i} X_j, and let π_i : X → X_i and π_{−i} : X → X_{−i} be the projections of X onto X_i and X_{−i}, respectively. For any x ∈ X, we simply denote π_{−i}(x) ∈ X_{−i} by x_{−i}, and write x = (x_i, x_{−i}).
In [28] Greenberg introduced a further generalized concept of equilibrium, as follows. Under the same settings as above, let w = {φ_i}_{i∈I} be a family of functions φ_i : X → R_+ for each i ∈ I.

Definition 3.3. A w-quasi-equilibrium for Γ is a point x̂ ∈ X such that for all i ∈ I,
(1) x̂_i ∈ cl A_i(x̂),
(2) P_i(x̂) ∩ A_i(x̂) = ∅ and/or φ_i(x̂) = 0.

Remark 3.3. Quasi-equilibria can be of special interest for economies with a tax authority, where the result of Shafer-Sonnenschein cannot be applied.
Next we give another definition of equilibrium for an abstract economy given by utility functions. Following Debreu, an abstract economy Γ = (X_i, A_i, H_i)_{i∈I} is defined as a family of ordered triples (X_i, A_i, H_i), where X_i is a nonempty topological vector space (a choice set), A_i : Π_{j∈I} X_j = X → 2^{X_i} is a constraint multivalued mapping and H_i : Π_{j∈I} X_j → R is a utility function (payoff function).

Definition 3.4. An equilibrium for Γ (Nash type) is a point x̂ ∈ X such that for each i ∈ I, x̂_i ∈ cl A_i(x̂) and

H_i(x̂) = H_i(x̂_i, x̂_{−i}) = inf{ H_i(x̂_1, ..., x̂_{i−1}, z, x̂_{i+1}, ..., x̂_n) | z ∈ cl A_i(x̂) }.

Remark 3.4. It should be noted that if A_i(x) = X_i for all x ∈ X, then the concept of an equilibrium for Γ coincides with the well-known Nash equilibrium. The two types of equilibrium points coincide when the preference multivalued mapping P_i is defined by

P_i(x) = {z_i ∈ X_i | H_i(x_{−i}, z_i) < H_i(x)} for each x ∈ X.
3.3.2 A generalization of Himmelberg's fixed point theorem
We begin with the following lemma.

Lemma 3.1. Let X be a nonempty subset of a topological space and D be a nonempty compact subset of X. Let T : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, T(x) is closed. Then T is upper semicontinuous.

Proof. For any x ∈ X, let U be an open neighborhood of T(x) in D. Since T(x) is closed in D, there exists an open neighborhood V of T(x) such that T(x) ⊂ V ⊂ cl V ⊂ U. Since T is almost upper semicontinuous at x, for such an open neighborhood V of T(x) we can find an open neighborhood W of x such that T(y) ⊂ cl V ⊂ U for all y ∈ W. Therefore T is upper semicontinuous at x.
Remark 3.5. For an upper semicontinuous multivalued mapping T : X → 2^Y, co T and cl co T aren't necessarily upper semicontinuous in general, even if X = Y is compact convex in a locally convex Hausdorff topological vector space. However, almost upper semicontinuity is preserved, as follows.

Lemma 3.2. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let T : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, co T(x) ⊂ D. Then cl co T is almost upper semicontinuous.

Proof. For any x ∈ X, let U be an open set containing cl co T(x). Since cl co T(x) is closed in D, we can find an open convex neighborhood V of 0 such that

cl co T(x) + V ⊂ cl(cl co T(x) + V) = cl co T(x) + cl V ⊂ U.

Clearly W = cl co T(x) + V is an open convex set containing cl co T(x) and W ⊂ U. Since T is almost upper semicontinuous, there exists an open neighborhood N of x in X such that T(y) ⊂ cl W for all y ∈ N. Since W is convex, cl co T(y) ⊂ cl W ⊂ cl U for all y ∈ N. Therefore cl co T is almost upper semicontinuous.
Remark 3.6. In Lemma 3.2 we don't know whether the multivalued mapping co T is almost upper semicontinuous, even when T is upper semicontinuous.

We now prove the following generalization of Himmelberg's fixed point theorem.

Theorem 3.4. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let S, T : X → 2^D be almost upper semicontinuous multivalued mappings such that
(1) for each x ∈ X, ∅ ≠ co S(x) ⊂ T(x),
(2) for each x ∈ X, T(x) is closed.
Then there exists a point x̂ ∈ D such that x̂ ∈ T(x̂).

Proof. For each x ∈ X, since co S(x) ⊂ T(x) and T(x) is closed, we have cl co S(x) ⊂ T(x). By Lemma 3.2, the multivalued mapping cl co S : X → 2^D is also almost upper semicontinuous, so that by Lemma 3.1, cl co S is upper semicontinuous with closed convex values in D. Therefore, by Himmelberg's fixed point theorem, there exists a point x̂ ∈ D such that x̂ ∈ cl co S(x̂) ⊂ T(x̂), which completes the proof.

Corollary 3.1. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let S : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, co S(x) is a nonempty subset of D. Then there exists a point x̂ ∈ D such that x̂ ∈ cl co S(x̂).

Proof. Define a multivalued mapping T : X → 2^D by T(x) = cl co S(x) for all x ∈ X. Then by Lemma 3.2, T is almost upper semicontinuous. Clearly the pair (S, T) satisfies all the conditions of Theorem 3.4, so that there exists a point x̂ ∈ D such that x̂ ∈ T(x̂).

When S = T in Theorem 3.4, we obtain Himmelberg's fixed point theorem as a corollary.

Corollary 3.2. Let X be a convex subset of a locally convex Hausdorff topological vector space and D be a nonempty compact subset of X. Let T : X → 2^D be an upper semicontinuous multivalued mapping such that for each x ∈ X, T(x) is a nonempty closed convex subset of D. Then there exists a point x̂ ∈ D such that x̂ ∈ T(x̂).
3.3.3 Existence of equilibria in abstract economies
In this section we consider both kinds of economy described in the prelimi-
naries (that is, an abstract economy given by preference multivalued mappings
(Shafer-Sonnenschein type) in compact setting and an abstract economy given
by utility functions (Nash type) in non-compact settings) and prove the exis-
tence of equilibrium points or quasi-equilibrium points for either case by using
the xed point theorems in previous section.
93
First, using O-majorised multivalued mappings we shall prove an equilibrium
existence of a compact abstract economy, which generalizes the powerful result
of Shafer-Sonnenschein. For simplicity, we may assume that
i
= 1
i
for each
i 1 in a abstract economy.
Theorem 3.5. Let I = (A
i
.
i
. 1
i
)
iI
be an abstract economy where 1 is a
countable set such that for each i 1,
(1) A
i
is a nonempty compact convex subset of a metrisable locally convex
Hausdor topological vector space,
(2) for each r A =

iI
A
i
.
i
(r) is nonempty convex,
(3) the multivalued mapping c|
i
: A 2
Xi
is continuous,
(4) the multivalued mapping 1
i
is O-majorised.
Then I has an equilibrium choice r A, that is, for each i 1, r
i
c|
i
( r)
and
i
( r)

1
i
( r) = O.
Proof. Let i 1 be xed. Since 1
i
is O- majorised, for each r A,
there exists a multivalued mapping c
x
: A 2
Xi
and an open neighborhood
l
x
of r in A such that 1
i
(.) c
x
(.) and .
i
c| co c
x
(.) for each . l
x
,
and c
x
[l
x
has an open graph in l
x
A
i
. By compactness of A, the family
{l
x
[r A} of an open cover of A contains a nite subcover {l
xj
[, J}, where
J = 1. 2. .... :. For each , J, we now dene c
j
: A 2
Xi
by
c
j
(.) =

c
xj
(.). i1 . l
xj
A
i
. i1 . l
xj
.
(44)
and next we dene 1
i
: A 2
Xi
by
1
i
(.) =

jJ
c
j
(.)
for each . A.
For each . A, there exists / J such that . l
x
k
so that .
i

c| co c
x
k
(.) = c| co c
k
(.); thus .
i
c| co 1
i
(.). We now show that the graph
of 1
i
is open in A A
i
. For each (.. r) oraj/ o1 1
i
, since A =

jJ
l
xj
,
there exists i
1
. ...i
k
J such that . l
xi
1

...

l
xi
k
. Then we can nd
an open neighborhood l of . in A such that l l
xi
1

...

l
xi
k
. Since
c
xi
1
(.)

...

c
xi
k
(.) is an open subset of A
i
containing r, there exists an open
neighborhood \ of r in A
i
such that r \ c
xi
1
(.)

...

c
xi
k
(.). Therefore
we have an open neighborhood l \ of (.. r) such that l \ oraj/ o1 1
i
.
so that the graph of 1
i
is open in A A
i
. And it is clear that 1
i
(.) 1
i
(.)
for each . A.
Next, since A A
i
is compact and metrisable, so is perfectly normal. Since
the graph of 1
i
is open in A A
i
, by a result of Dugundji, there exists a
continuous function C
i
: A A
i
[0. 1[ such that C
i
(r. n) = 0 for all (r. n)
oraj/ o1 1
i
and C
i
(r. n) = 0 for all (r. n) oraj/ o1 1
i
. For each i 1, we
dene a multivalued mapping 1
i
: A 2
Xi
by
1
i
(r) = n[n c|
i
(r). C
i
(r. n) = max
zclAi(x)
C
i
(r. .).
94
Then by a result of Aubin and Ekeland, 1
i
is upper semicontinuous and for
each r A, 1
i
(r) is nonempty closed. Then a multivalued mapping G : A
2
X
dened by G(r) =

iI
1
i
(r) is also upper semicontinuous by a result of
Fan and G(r) is a nonempty compact subset of A for each r A. Therefore
by Corollary 3.1, there exists a point r A such that r c| co G( r); that is,
r c| co G( r)

iI
c| co 1
i
( r). Since 1
i
( r) c|
i
( r) and
i
( r) is convex,
c| co 1
i
( r) c|
i
( r). Therefore r
i
c|
i
( r) for each i 1. It remains to show
that
i
( r)

1
i
( r) = O. If .
i

i
( r)

1
i
( r) = O. then C
i
( r. .
i
) 0 so that
C
i
( r. .
t
i
) 0 for all .
t
i
1
i
( r). This implies that 1
i
( r) 1
i
( r). which implies
r
i
c| co 1
i
( r) c| co 1
i
( r): this is a contradiction. So the theorem is proved.

Remark 3.7. In a nite dimensional space, for a compact set , co is


compact and convex. Therefore when A
i
is a subset of R
n
, we can relax the
assumption (b) of the denition of O-majorant as follows without aecting the
conclusion of Theorem 3.5:
(b) for each .
x
, . co c
x
(.).
And in this case, Theorem 3.5 generalizes a Shafer-Sonnenscheins theorem
in two aspects, that is, (i) 1
i
need not have open graph and (ii) an index set 1
may not be nite.
Using the concept of w-quasi-equilibrium described in the preliminaries, we
further generalize Theorem 3.5 as follows:
Theorem 3.6. Let I = (A
i
.
i
. 1
i
)
iI
be an abstract economy where 1 is a
countable set such that for each i 1,
(1)A
i
is a nonempty compact convex subset of a metrisable locally convex
Hausdor topological vector space,
(2) c
i
: A =

iI
A
i
R
+
is a nonnegative real-valued lower semicontin-
uous function,
(3) for each r A,
i
(r) is nonempty convex,
(4) the multivalued mapping c|
i
: A 2
Xi
is continuous for all r with
c
i
(r) 0 and is almost upper semicontinuous for all r with c
i
(r) = 0.
(5) the multivalued mapping 1
i
is O-majorised.
Then I has a w-quasi-equilibrium choice r A, that is, for each i 1,
(a) r
i
c|
i
( r),
(b)
i
( r)

1
i
( r) = O and/or c
i
( r) = 0.
Proof. We can repeat the proof of Theorem 3.5 again. In the proof of
Theorem 3.5, for each i 1 we shall replace the multivalued mapping 1
i
by a new multivalued mapping 1
+
i
: A 2
Xi
dened by 1
+
i
(r) = n[n
c|
i
(r). C
i
(r. n) c
i
(r) = max
zcl Ai(x)
C
i
(r. .) c
i
(r) for each r A.
Since r[r A. c
i
(r) 0 is open, 1
+
i
is also upper semicontinuous. In fact,
for any open set \ containing 1
+
i
(r), if c
i
(r) = 0 then 1
+
i
(r) = c|
i
(r) \ .
Since c|
i
is upper semicontinuous, there exists an open neighborhood \ of r
such that 1
+
i
(n) c|
i
(n) \ for all n \; if c
i
(r) 0, then by a result of
Aubin and Ekeland, 1
+
i
(r) = 1
i
(r) is also upper semicontinuous at r, so that
there exists an open neighborhood \ of r such that 1
i
(n) \ for each n \.
Then \
t
= \

.[. A. c
i
(.) 0 is an open neighborhood of r such that
1
+
i
(n) \ for each n \
t
. Therefore 1
+
i
is upper semicontinuous.
95
Then G =

iI
1
i
: A 2
X
is also upper semicontinuous by a result of
Fan, and G(r) is a nonempty compact subset of A for each r A. Therefore
by the same proof as in Theorem 3.5, there exists a point r A such that
r
i
c|
i
( r) for each i 1. Finally, if c
i
( r) = 0, then the conclusion (b) holds.
In case c
i
( r) 0, if .
i

i
( r)

1
i
( r) = O, then C
i
( r. .
i
) 0 for all .
t
i
1
i
( r).
This implies that 1
i
( r) 1
i
( r), which implies r
i
c| co 1
i
( r) c| co 1
i
( r);
this is a contradiction. Therefore we have
i
( r)

1
i
( r) = O.
In most results on the existence of equilibria for abstract economies the
underlying spaces (commodity spaces or choice sets) are always compact and
convex. However, in recent papers, the underlying spaces arent always com-
pact and it should be noted that we will encounter many kinds of multivalued
mappings in various economic situations; so it is important that we shall con-
sider several types of multivalued mappings and obtain some existence results
in non-compact settings. Now we prove the quasi-equilibrium existence theorem
of Nash type non-compact abstract economy.
Theorem 3.7. Let 1 be any (possibly uncountable) index set and for each
i 1, let A
i
be a convex subset of a locally convex Hausdor topological vector
space 1
i
and 1
i
be a nonempty compact subset of A
i
. For each i 1, let 1
i
:
A =

iI
A
i
R be a continuous function and c
i
: A R
+
be a nonnegative
real-valued lower semicontinuous function. For each i 1, o
i
: A 2
Di
be a
continuous multivalued mapping for all r A with c
i
(r) 0 and be almost
upper semicontinuous for all r A with c
i
(r) = 0 such that
(1) o
i
(r) is a nonempty closed convex subset of 1
i
,
(2) r
i
1
i
(r
i
. r
i
) is quasi-convex on o
i
(r).
Then there exists an equilibrium point r 1 =

iI
1
i
such that for each
i 1,
(a) r
i
o
i
( r),
(b) 1
i
( r
i
. r
i
) = inf
zSi(^ x)
1
i
( r
i
. .) and/or c
i
( r) = 0.
Proof. For each i 1, we now dene a multivalued mapping \
i
: A 2
Xi
by
\
i
(r) = n [ n o
i
(r). 1
i
(r
i
. n)c
i
(r) = inf
zSi(x)
1
i
(r
i
. .)c
i
(r).
Since r [ r A. c
i
(r) 0 is open, for each r A with c
i
(r) 0, \
i
is upper semicontinuous at r by a result of Aubin and Ekeland and the same
argument of the proof of Theorem 3.6; and for each r A with c
i
(r) = 0,
\
i
(r) = o
i
(r) so that \
i
is also upper semicontinuous at r. Therefore for each
r A, \
i
is upper semicontinuous at r and \
i
(r) is nonempty compact and
convex.
Now we dene \ : A 2
D
by
\ (r) =

iI
\
i
(r)
for each r A.
Then by a result of Fan, \ is also upper semicontinuous, and \ (r) is a
nonempty compact convex subset of 1 for each r A. Therefore, by Corollary
96
3.2 there exists a point r 1 such that r \ ( r), that is, for each i 1, we
have
(a) r
i
\
i
( r) o
i
( r) and
(b) 1
i
( r
i
. r
i
) = inf
zSi(^ x)
1
i
( r
i
. .) and/or c
i
( r) = 0.
3.3.4 Nash equilibrium of games and abstract economies
Each strategy vector determines an outcome (which may be a lottery in some
models). Players have preferences over outcomes and this induce preferences
over strategy vectors. For convenience we will work with preferences over strat-
egy vectors. There are two ways we might do this. The rst is to describe
player is preferences by a binary relation l
i
dened on A. Then l
i
(r) is
the set of all strategy vectors preferred to r. Since player i only has control
over the i-th component of r, we will nd it more useful to describe player is
preferences in terms of the good reply set. Given a strategy vector r A and
a strategy n
i
A
i
, let r[n
i
denote the strategy vector obtained from r when
player i chooses n
i
and other players keep their choices xed. Let us say that n
i
is a good reply for player i to strategy vector r if r[n
i
l
i
(r). This denes
a multivalued mapping l
i
: A ( A
i
, called the good reply multivalued map-
ping, by l
i
(r) = n
i
[ n
i
A
i
. r[n
i
l
i
(r). It will be convenient to describe
preferences in terms of the good reply multivalued mapping l
i
rather than the
preference relation l
i
. Note however that we lose some information by doing
this. Given a good reply multivalued mapping l
i
it will not generally possible
to reconstruct the preference relation l
i
, unless we know that l
i
is transitive,
and we will not make this assumption. Thus a game in strategic form is a
tuple (1. (A
i
). (l
i
)) where each l
i
:

jI
A
j
(A
i
.
A shortcoming of this model of a game is that frequently there are situations
in which the choices of players cannot be made independently. A simplied
example is the pumping of oil out of a common oil eld by several producers.
Each producer chooses an amount r
i
to pump out and sell. The price depends
on the total amount sold. Thus each producer has partial control of the price
and hence of their prots. But the r
i
cannot be chosen independently because
their sum cannot exceed the total amount of oil in the ground. To take such
possibilities into account we introduce a multivalued mapping 1
i
: A ( A
i
which tells which strategies are actually feasible for player i, given the strategy
vector of the others. (We have written 1
i
as a function of the strategies of all
the players including i as a technical convenience. In modelling most situations,
1
i
will be independent of player is choice.) The jointly feasible strategy vectors
are thus the xed points of the multivalued mapping 1 =

iI
1
i
: A (
A. A game with the added feasibility or constraint multivalued mapping is
called a generalized game or abstract economy. It is specied by a tuple
(1. (A
i
). (1
i
). (l
i
)) where 1
i
: A (A
i
and l
i
: A (A
i
.
A Nash equilibrium of a strategic form game or abstract economy is a
strategy vector r for which no player has a good reply. For a game an equilibrium
is an r A such that l
i
(r) = O for each i. For an abstract economy an
equilibrium is an r A such that r 1(r) and l
i
(r)

1
i
(r) = O for each i.
97
Nash proves the existence of equilibria for games where the players prefer-
ences are representable by continuous quasi-concave utilities and the strategy
sets are simplexes. Debreu proves the existence of equilibrium for abstract
economies. He assumes that strategy sets are contractible polyhedra and that
the feasibility multivalued mapping have closed graph and the maximized utility
is continuous and that the set of utility maximizers over each constraint set is
contractible. These assumptions are joint assumptions on utility and feasibility
and the simplest way to make separate assumptions is to assume that strategy
sets are compact and convex and that utilities are continuous and quasi-concave
and that the constraint multivalued mappings are continuous with compact con-
vex values. Then the maximum theorem guarantees continuity of maximized
utility and convexity of the feasible sets and quasi-concavity imply convexity
(and hence contractibility) of the set of maximizers. Arrow and Debreu used
Debreus result to prove the existence of Walrasian equilibrium of an economy
and coined the term abstract economy.
Gale and Mas-Colell prove a lemma which allows them to prove the exis-
tence of equilibrium for a game without ordered preferences. They assume that
strategy sets are compact convex sets and that the good reply multivalued map-
pings are convex valued and have open graph. Shafer and Sonnenschein prove
the existence of equilibria for abstract economies without ordered preferences.
They assume that the good reply multivalued mappings have open graph and
satisfy the convexity/irreexivity condition r
i
co l
i
(r). They also assume
that the feasibility multivalued mappings are continuous with compact convex
values. This result doesnt strictly generalize Debreus result since convexity
rather than contractibility assumptions are made.
Theorem 3.8 (Gale, Mas-Colell). Let A =

iI
A
i
, A
i
being a non-
empty, compact, convex subset of R
ki
, and let l
i
: A ( A
i
be a multivalued
mapping satisfying
(i) l
i
(r) is convex for all r A,
(ii) l

i
(r
i
) is open in A for all r
i
A
i
.
Then there exists r A such that for each i, either r
i
l
i
(r) or l
i
(r) = O.
Proof. Let \
i
= r [ l
i
(r) = O. Then \
i
is open by (ii) and l
i
[
Wi
: \
i
(
A
i
satises the hypotheses of the selection theorem, so there is a continuous
function 1
i
: \
i
A
i
with 1
i
(r) l
i
(r). Dene the multivalued mapping

i
: A (A
i
by

i
(r) =

1
i
(r). i1 r \
i
A
i
. i1 r \
i
.
(45)
Then
i
is upper hemi-continuous with nonempty compact and convex val-
ues, and thus so is =

iI

i
: A (A. Thus by the Kakutani theorem, has
a xed point r. If
i
( r) = A
i
, then r
i

i
( r) implies r
i
= 1
i
( r) l
i
( r). If

i
( r) = A
i
, then it must be that l
i
( r) = O. (Unless of course A
i
is a singleton,
in which case r
i
=
i
( r).)
Remark 3.8. The previous theorem possesses a trivial extension. Each l
i
is assumed to satisfy (i) and (ii) so that the selection theorem may be employed.
98
If some l
i
is already a singleton-valued multivalued mapping, then the selec-
tion problem is trivial.Thus we may allow some of the l
i
s to be continuous
singleton-valued multivalued mapping instead, and the conclusion follows. The
next Corollary is derived from Theorem 3.8 by assuming each r
i
l
i
(r) and
concludes that there exists some r such that l
i
(r) = O for each i. Assuming
that l
i
(r) is never empty yields a result equivalent to a Fans result.
Corollary 3.3. For each i, let l
i
: A ( A
i
have open graph and satisfy
r
i
co l
i
(r) for each r. Then there exists r A with l
i
(r) = O for all i.
Proof. Because A
i
is convex subset the multivalued mapping co l
i
satisfy
the hypotheses of Theorem 3.8 so there is r A such that for each i, r
i

co l
i
(r) or co l
i
(r) = O. Since r
i
co l
i
(r) by hypothesis, we have co l
i
(r) =
O, so l
i
(r) = O.
Theorem 3.9. (Shafer-Sonnenschein) Let (1. (A
i
). (1
i
). (l
i
)) be an ab-
stract economy such that for each i,
(i) A
i
R
ki
is nonempty, compact and convex
(ii) 1
i
is a continuous multivalued mapping with nonempty compact convex
values
(iii) Gr l
i
is open in A A
i
(iv) r
i
co l
i
(r) for all r A.
Then there is an equilibrium.
Proof. Dene i
i
: AA
i
R
+
by i
i
(r. n
i
) = di:t[(r. n
i
). (Gr l
i
)
c
[. Then
i
i
(r. n
i
) 0 if and only if n
i
l
i
(r) and i
i
is continuous since Gr l
i
is open.
Dene H
i
: A (A
i
by
H
i
(r) = n
i
[ n
i
A
i
. n
i
:ari:i.c: i
i
(r. ) o: 1
i
(r).
Then H
i
has nonempty compact values and is upper hemi-continuous and
hence closed. (To see that H
i
is upper hemi-continuous, apply the maximum
theorem to the multivalued mapping (r. n
i
) (r 1
i
(r) and the function
i
i
.) Dene G : A ( A by G(r) =

iI
co H
i
(r). Then by a well known
results, G is upper hemi-continuous with compact convex values and so satises
the hypotheses of the Kakutani xed point theorem, so there is r A with r
G( r). Since H
i
( r) 1
i
( r) which is convex, r
i
G
i
( r) = co H
i
( r) 1
i
( r). We
now show l
i
( r)

1
i
( r) = O. Suppose not, that is, there is .
i
l
i
( r)

1
i
( r).
Then since .
i
l
i
( r) we have i
i
( r. .
i
) 0, and since H
i
( r) consists of the
maximizers of i
i
( r. ) on 1
i
( r), we have that i
i
( r. n
i
) 0 for all n
i
H
i
( r).
This says that n
i
l
i
( r) for all n
i
H
i
( r). Thus H
i
( r) l
i
( r), so r
i

G
i
( r) = co H
i
( r) co l
i
( r), which contradicts (iv). Thus l
i
( r)

1
i
( r) = O.

Remark 3.9. The multivalued mappings H


i
used in the proof of previ-
ous theorem arent natural constructions, which is the cleverness of Shafer and
Sonnenscheins proof. The natural approach would be to use the best reply
multivalued mappings, r ( r
i
[ l
i
(r[r
i
)

1
i
(r) = O. These multivalued
mappings are compact-valued and upper hemi-continuous. They may fail to
be convex-valued, however.Mas-Colell gives an example for which the best reply
multivalued mapping hasnt connected-valued submultivalued mapping. Taking
99
the convex hull of the best reply multivalued mapping doesnt help, since a xed
point of convex hull multivalued mapping may fail to be an equilibrium.
Another natural approach would be to use the good reply multivalued map-
ping r (co l
i
(r)

1
i
(r). This multivalued mapping, while convex-valued,
isnt closed-valued, and so the Kakutani theorem doesnt apply. What Shafer
and Sonnenschein do is choose a multivalued mapping that is a submultival-
ued mapping of a good reply set when it is nonempty and equal to the whole
feasible strategy set otherwise. Under stronger assumptions on the 1
i
multi-
valued mappings this approach can be made to work without taking a proper
subset of the good reply set. The additional assumptions on 1
i
are the fol-
lowing. First, 1
i
(r) is assumed to be topologically regular for each r, that
is, 1
i
(r) = c| [i:t 1
i
(r)[. Second, the multivalued mapping r ( i:t 1
i
(r)
is assumed to have open graph. The requirement of open graph is stronger
than lower hemi-continuity. These assumptions were used by Borglin and Kei-
ding who reduced the multi-player abstract economy to a 1-person game. The
proof below adds an additional player to the abstract economy by introducing
an abstract auctioneer, and incorporates the feasibility constraints onto the
preferences which converts it into a game. Both the topological regularity and
open graph assumptions are satised by budget multivalued mappings, provided
income is always greater than the minimum consumption expenditures on the
consumption set. The proof is closely related to the arguments used by Gale
and Mas-Colell to reduce an economy to a noncooperative game.
Theorem 3.10. (A special case of Shafer-Sonnenschein theorem).
Let (1. (A
i
). (1
i
). (l
i
)) be an abstract economy such that for each i we have
(i) A
i
R
ki
is nonempty, compact and convex
(ii) 1
i
is an upper hemi-continuous multivalued mapping with nonempty
compact convex values satisfying, for all r, 1
i
(r) = c| [i:t 1
i
(r)[ and r (
i:t 1
i
(r) has open graph
(iii) Gr l
i
is open in A A
i
(iv) for all r. r
i
co l
i
(r).
Then there is an equilibrium, that is, an r
+
A such that for each i,
r
+
i
1
i
(r
+
). a:d l
i
(r
+
)

1
i
(r
+
) = O.
Proof. We dene a game as follows. Put 7
0
=

iI
A
i
. For i 1 put
7
i
= A
i
, and set 7 = 7
0

iI
7
i
. A typical element of 7 will be denoted
(r. n), where r 7
0
and n

iI
7
i
. Dene preference multivalued mappings
j
i
: 7 (7
i
as follows. Dene j
0
by j
o
(r. n) = n, and for i 1 set
j
i
(r. n) =

1
i
(r). i1 n
i
1
i
(r)
co l
i
(n)

i:t 1
i
(r). i1 n
i
1
i
(r).
(46)
Note that j
0
is continuous and never empty-valued and that for i 1 the
multivalued mapping j
i
is convex-valued and satises n
i
j
i
(r. n). Also for
i 1, the graph of j
i
is open. To see this set
100

i
= (r. n. .
i
) [ .
i
i:t 1
i
(r). 1
i
= (r. n. .
i
) [ n
i
1
i
(r).
C
i
= (r. n. .
i
) [ .
i
co l
i
(n).
and note that
Gr j
i
= (
i

1
i
)

(
i

C
i
).
The set
i
is open because i:t 1
i
has open graph and C
i
is open by hy-
pothesis (iii). The set 1
i
is also open. If n
i
1
i
(r), then there is a closed
neighborhood \ of n
i
such that 1
i
(r) \
c
, and upper hemi-continuity of 1
i
then gives the desired result.
Thus the hypothesis of Remark 3.8 is satised and so there exists (r
+
. n
+
) 7
such that
r
+
j
0
(r
+
. n
+
). (+)
and for i 1
j
i
(r
+
. n
+
) = O. (++)
Now (*) implies r
+
= n
+
; and since 1
i
(r) is never empty, (**) becomes
co l
i
(r
+
)

i:t 1
i
(r
+
) = O. 1or i 1.
Thus l
i
(r
+
)

i:t 1
i
(r
+
) = O. But 1
i
(r
+
) = c| [i:t 1
i
(r
+
)[ and l
i
(r
+
) is
open, so l
i
(r
+
)

1
i
(r
+
) = O; that is, r
+
is an equilibrium.
3.3.5 Walrasian equilibrium of an economy
We now have several tools for proving the existence of a Walrasian equilibrium
of an economy. We will focus on two approaches. These are: the excess demand
approach and the abstract economy approach. The excess demand approach
utilizes the Debreu-Gale-Nikaido lemma, namely Theorem 3.1. The abstract
economy approach converts the problem of nding a Walrasian equilibrium of
the economy into the problem of nding the Nash equilibrium of an associated
abstract economy.
The central diculty of the excess demand approach involves proving the
upper hemi-continuity of the excess demand multivalued mapping.
The abstract economy approach explicitly introduces a ctitious agent, the
auctioneer, into the picture and models the economy as an abstract economy
or generalized game. The strategies of consumers are consumption vectors, the
strategies of suppliers are production vectors, and the strategies of the auction-
eer are prices. The auctioneers preferences are to increase the value of excess
demand. A Nash equilibrium of the abstract economy corresponds to a Wal-
rasian equilibrium of the original economy. The principal diculty to overcome
in applying the existence theorems for abstract economies is the fact that they
require compact strategy sets and the consumption and production sets arent
101
compact. This problem is dealt with by showing that any equilibrium must lie in
a compact set, then truncating the consumption and production sets and show-
ing that the Nash equilibrium of the truncated abstract economy is a Walrasian
equilibrium of the original economy.
We now recall some notations and denitions need in what follows.
Let R
m
denote the commodity space. For i = 1. 2. .... : let A
i
R
m
denote
the i-th consumers consumption set, n
i
R
n
his private endowment, and l
i
his preference relation on A
i
. For , = 1. 2. .... / let 1
j
denote the ,-th suppliers
production set. Set A =

n
i=1
A
i
. n =

n
i=1
n
i
. and 1 =

k
j=1
1
j
. Let a
i
j
denote the share of consumer i in the prots of supplier ,. An economy is then
described by a tuple ((A
i
. n
i
. l
i
). (a
i
j
)).
Denition 3.5. An attainable state of the economy is a tuple ((r
i
). (n
j
))

n
i=1
A
i

k
j=1
1
j
. satisfying
n

i=1
r
i

j=1
n
j
n = 0.

Let 1 denote the set of attainable states and let


' = ((r
i
). (n
j
)) [ ((r
i
). (n
j
)) (R
m
)
n+k
.
n

i=1
r
i

j=1
n
j
n = 0.
Then 1 = (

n
i=1
A
i

k
j=1
1
j
)

'. Let A
t
i
be the projection of 1 on A
i
,
and let 1
t
j
be the projection of 1 on 1
j
.
Denition 3.6. A Walrasian free disposal equilibriumis a price j
+
^
together with an attainable state ((r
+
i
). (n
+
j
)) satisfying:
(i) For each , = 1. 2. .... /,
j
+
n
+
j
_ j
+
n
j
. 1or a|| n
j
1
j
.
(ii) For each i = 1. 2. .... :.
r
+
i
1
i
. a:d l
i
(r
+
i
)

1
i
= O.
where
1
i
= r
i
[ r
i
A
i
. j
+
r
i
_ j
+
n
i

k

j=1
a
i
j
(j
+
n
+
j
).
Lemma 3.3. Let the economy ((A
i
. n
i
. l
i
). (1
j
). (a
i
j
)) satisfy:
For i = 1. 2. .... :,
(1) A
i
is closed, convex and bounded from below, and n
i
A
i
.
For , = 1. 2. .... / that
(2) 1
j
is closed, convex and 0 1
j
.
102
(3) 1

R
m
+
= 0.
(4) 1

(1 ) = 0.
Then the set 1 of attainable states is compact and nonempty. Furthermore,
0 1
t
j
. , = 1. 2. .... /.
Suppose in addition, that the following two assumptions hold. For each i =
1. 2. .... :,
(5) there is some r
t
i
A
i
satisfying n
i
r
t
i
.
(6) 1 R
m
+
.
Then r
t
i
A
t
i
. i = 1. 2. .... :.
Proof. Clearly ((n
i
). (0
j
)) 1. so 1 is nonempty and 0 1
t
j
. The set 1 of
attainable states is clearly closed, being the intersection of two closed sets. So,
it is suces to show that 1 = 0. where 1 is asymptotic cone of 1 (the
set of all possible limits of sequences of the form `
n
r
n
, where each r
n
1
and `
n
| 0.) By a well known result, we have
1 (
n

i=1
A
i

j=1
1
j
)

'.
Also, we have
(
n

i=1
A
i

j=1
1
j
)
n

i=1
(A
i
)
k

j=1
(1
j
).
Since each A
i
is bounded below there is some /
i
R
m
such that A
i

/
i
R
m
+
. Thus A
i
(/
i
R
m
+
) = R
m
+
= R
m
+
. Also, we have 1
j
1.
Again, since ' n is a cone, ' = ' n. Thus we can show 1 = 0 if
we can show that
(
n

i=1
R
m
+

k

j=1
1 )

(' n) = 0.
In other words, we need to show that if r
i
R
m
+
. i = 1. 2. .... :. and n
j
1.
, = 1. 2. .... / and

n
i=1
r
i

k
j=1
n
j
= 0. then r
1
= ... = r
n
= n
1
= ... =
n
k
= 0. Now

n
i=1
r
i
_ 0. so that

k
j=1
n
j
_ 0 too. Since 1 is a convex
cone,

k
j=1
n
j
1. Since 1

R
m
+
= 0.

n
i=1
r
i

k
j=1
n
j
= 0 implies

n
i=1
r
i
= 0 =

k
j=1
n
j
. Now r
i
_ 0 and

n
i=1
r
i
= 0 clearly imply that r
i
= 0,
i = 1. 2. .... :. Rewriting

k
j=1
n
j
= 0 yields n
i
= (

j,=i
n
j
). Both n
i
and this
last sum belong to 1 as 1 1 . Thus n
i
1

(1 ) so n
i
= 0. This is true
for all i = 1. 2. .... /.
Now assume that (5) and (6) hold. By (5),

n
i=1
r
t
i
<

n
i=1
n
i
. Set n
t
=

n
i=1
r
t
i

n
i=1
n
i
. Then n
t
< 0. so by (6) there are n
t
j
. ,. 1. 2. .... /. satisfying
n
t
=

k
j=1
n
t
j
. Then ((r
t
i
). (n
t
j
)) 1. so r
t
i
A
t
i
.
Under the hypotheses of Lemma 3.3 the set 1 of attainable states is compact.
Thus for each consumer i, there is a compact convex set 1
i
containing A
t
i
in
103
its interior. Set A
tt
i
= 1
i

A
i
. Then A
tt
i
i:tA
t
i
. Likewise, for each supplier ,
there is a compact convex set C
j
containing 1
t
j
in its interior. Set 1
tt
j
= C
j

1
j
.
Theorem 3.11. Let the economy ((A
i
. n
i
. l
i
). (1
j
). (a
i
j
)) satisfy:
For i = 1. 2. .... :,
(1) A
i
is closed, convex and bounded from below, and n
i
A
i
.
(2) There is some r
t
i
A
i
satisfying n
i
r
t
i
.
(3) (a) l
i
has open graph, (b) r
i
co l
i
(r
i
), (c) r
i
c| l
i
(r
i
).
For each , = 1. 2. .... /,
(4) 1
j
is closed and convex and 0 1
j
.
(5) 1

R
m
+
= 0.
(6) 1

(1 ) = 0.
(7) 1 R
m
+
.
Then there is a free disposal equilibrium of the economy.
Proof. Dene an abstract economy as follows. Player 0 is the auctioneer.
His strategy set is ^
m1
. the closed standard (:1)-simplex. These strategies
will be price vectors. The strategy set of consumer i will be A
t
i
. The strategy
set of supplier , is 1
t
j
. A typical strategy vector is thus of the form (j. (r
i
). (n
j
)).
The auctioneers preferences are represented by the multivalued mapping
l
0
: ^

iI
A
t
i

jJ
1
t
j
(^ dened by
l
0
(j. (r
i
). (n
j
)) = c [ c ^. c (

iI
r
i

jJ
n
j
n)
j (

iI
r
i

jJ
n
j
n).
Thus the auctioneer prefers to raise the value of excess demand. Observe
that l
0
has open graph, convex upper contour sets and j l
0
(j. (r
i
). (n
j
)).
Supplier ,
+
s preferences are represented by the multivalued mapping \
j
:
^

iI
A
tt
i

jJ
1
tt
j
(1
tt
j
dened by
\
j
(j. (r
i
). (n
j
)) = n
tt
j
[ n
tt
j
1
tt
j
. j n
tt
j
j n
j
.
Thus suppliers prefer larger prots. These multivalued mappings have open
graph, convex upper contour sets and satisfy n
j
\
j
(j. (r
i
). (n
j
)).
The preferences of consumer i
+
are represented by multivalued mapping
l
t
i
: ^

iI
A
tt
i

jJ
1
tt
j
(A
i
dened by
l
t
i
(j. (r
i
). (n
j
)) = co l
i
(r
i
).
This multivalued mapping has open graph, convex upper contour sets and
satises r
i
l
t
i
(j. (r
i
). (n
j
)).
The feasibility multivalued mappings are as follows. For suppliers and the
auctioneer, they are constant multivalued mappings and the values are equal
to their entire strategy sets. Thus they are continuous with compact convex
values. For consumers things are more complicated. Start by setting
j
(j) =
max
yjYj
j n
j
. By the maximum theorem this is a continuous function. Since
0 1
t
j
.
j
(j) is always nonnegative. Set
104
1
i
(j. (r
i
). (n
j
)) = r
tt
i
[ r
tt
i
A
tt
i
. j r
tt
i
_ j n
i

k

j=1
a
i

j

j
(j).
Since
j
(j) is nonnegative and r
t
i
< n
i
in A
tt
i
, j r
t
i
< j n
i
for any
j ^. Thus 1
i
is lower hemi-continuous and nonempty-valued. Since A
tt
i

is compact, 1
i
is upper hemi-continuous, since it clearly has closed graph.
Thus for each consumer, the feasibility multivalued mapping is a continuous
multivalued mapping with nonempty comapct convex values.
The abstract economy so constructed satises all the hypotheses of the
Shafer-Sonnenschein theorem and so has a Nash equilibrium. Translating the de-
nition of Nash equilibrium to the case at hand yields the existence of (j
+
. (r
+
i
). (n
t
j
))
^

iI
A
tt
i

jJ
1
tt
j
satisfying
(i) c (

iI
r
+
i

jJ
n
t
j
n) _ j
+
(

iI
r
+
i

jJ
n
t
j
n) for all c ^.
(ii) j
+
n
t
j
_ j
+
n
j
for all n
j
1
tt
j
. , = 1. 2. .... /.
(iii) r
+
i
1
i
and co l
i
(r
+
i
)

1
i
= O. i = 1. 2. .... :. where
1
i
= r
i
[ r
i
A
tt
i
. j
+
r
i
_ j
+
n
i

k

j=1
a
i
j
(j
+
n
tt
j
).
Let '
i
= j
+
n
i

k
j=1
a
i
j
(j
+
n
tt
j
). Then in fact, each consumer spends all
his income, so that we have the budget equality j
+
r
+
i
= '
i
. Suppose not. Then
since l
i
(r
+
i
) is open and r
+
i
c| l
i
(r
+
i
). it would follow that l
i
(r
+
i
)

1
i
= O. a
contradiction.
Summing up the budget equalities and using

n
i=1
a
i
j
= 1 for each , yields
j
+

n
i=1
r
+
i
= j
+
(

k
j=1
n
tt
j
n). so that
j
+
(

iI
r
+
i

jJ
n
t
j
n) = 0.
This and (i) yield

iI
r
+
i

jJ
n
t
j
n _ 0.
We next show that j
+
n
t
j
_ j
+
n
j
for all n
j
1
j
. Suppose not, and let
j
+
n
tt
j
j
+
n
t
j
. Since 1
j
is convex, `n
tt
j
(1 `)n
t
j
1
j
. and it too yields a
higher prot than n
t
j
. But for ` small enough, `n
tt
j
(1 `)n
t
j
1
tt
j
. because
1
t
j
is in the interior of C
j
. This contradicts (ii).
By (7) .
+
=

iI
r
+
i


jJ
n
t
j
n 1. so that there exists n
tt
j
1
j
,
, = 1. 2. .... / satisfying .
+
=

jJ
n
t
j
. Set n
+
j
= n
t
j
n
tt
j
. Since each n
t
j
maximizes
j
+
n
j
over 1
j
, than

jJ
n
t
j
maximizes j
+
n over 1. But since j
+
.
+
= 0.

jJ
n
+
j
also maximizes j
+
over 1 . But then each n
+
j
must also maximizes
j
+
n
j
over 1
j
. Thus we have so far shown that j
+
n
+
j
_ j
+
n
j
for all n
j
1
j
.
, = 1. 2. .... /. By construction, we have that ((r
+
i
). (n
+
j
)) 1. To show that
105
(j
+
. (r
+
i
). (n
+
j
)) is indeed a Walrasian free disposal equilibrium it remains to be
proven that for each i,
l
i
(r
+
i
)

r
i
[ r
i
A
i
. j
+
r
i
_ j
+
n
i

k

j=1
a
i
j
(j
+
n
+
j
) = O.
Suppose that there is some r
t
i
belonging to this intersection. Then for small
enough ` 0. `r
t
i
(1`)r
+
i
A
tt
i
and since r
+
i
c| l
i
(r
+
i
), `r
tt
i
(1`)r
+
i

co l
i
(r
+
i
)

1
i
. contradicting (iii). Thus ((r
+
i
). (n
+
j
)) is a Walrasian free disposal
equilibrium.
Theorem 3.12. Let the economy ((A
i
. n
i
. l
i
). (1
j
). (a
i
j
)) satisfy the hy-
potheses of Theorem 3.11 and further assume that there is a continuous quasi-
concave utility n
i
satisfying l
i
(r
i
) = r
t
i
[ r
t
i
A
i
. n
i
(r
t
i
) n
i
(r
i
). Then the
economy has a Walrasian free disposal equilibrium.
Proof. Let 1
tt
j
be as in proof of previous theorem. We dene the multivalued
mapping
j
as follows
j
: ^ (1
tt
j
by

j
(j) = n
j
[ n
j
1
tt
j
. j n
j
_ j n
tt
j
1or a|| n
tt
j
1
tt
j
.
Dene
j
: ^ R. by
j
(j) = max
yjYj
j n
j
. By the maximum theorem,
j
is upper hemi-continuous with nonempty compact values and
j
is continuous.
Since 0 1
j
.
j
is nonnegative. Since 1
tt
j
is convex,
j
(j) is convex too.
Let A
tt
i
be as in proof of the previous theorem and dene
i
: ^ (A
tt
i
by

i
(j) = r
i
[ r
i
A
tt
i
. j r
i
_ j n
i

jJ
a
i
j

j
(j).
As in proof of previous theorem the existence of r
tt
i
< n
i
in A
tt
i
implies that

i
is a continuous multivalued mapping with nonempty values. Since A
tt
i
is
compact and convex,
i
has compact convex values. Dene j
i
: ^ (A
tt
i
by
j
i
(j) = r
i
. [ r
i

i
(j). n
i
(r
i
) _ n
i
(r
tt
i
) 1or a|| r
tt
i

i
(j).
By a theorem of Berge, j
i
is an upper hemi-continuous multivalued mapping
with nonempty compact values. Since n
i
is quasi-concave, j
i
has convex values.
Set
7(j) =
n

i=1
j
i
(j)
k

j=1

j
(j) n.
This 7 is upper hemi-continuous and has nonempty compact convex values.
Also for any . 7(j), j . _ 0. To see this just add up the budget multivalued
mappings for each consumer. By theorem 3.1, there is some j
+
^ and .
+

7(j
+
). satisfying .
+
_ 0. Thus there are r
+
i
j
i
(j
+
) and n
+
j

j
(j
+
) such that
n

i=1
r
+
i

k

j=1
n
+
j
n _ 0.
106
It follows just as in proof of previous theorem that ((r
+
i
). (n
+
j
)) is a Walrasian
free disposal equilibrium.
Remark 3.10. The literature on Walrasian equilibrium is enormous. Two
standard texts in the eld are Debreu and Arrow-Hahn.
3.3.6 Equilibria for abstract economies
The object of this subsection is to use new xed-point theorems of the authors
Agarwal and Regan to establish the existence of equilibrium points of abstract
economies. These results improve, extend and complement those in the litera-
ture.
Throughout in this subsection, 1 will be a countable set of agents and we
describe an abstract economy by I = (Q
i
. 1
i
. 1
i
)
iI
where for each i 1. Q
i
is a choice (or strategy) set, 1
i
:

iI
Q
i
= Q 2
Qi
(nonempty subsets of
Q
i
) is a constraint multivalued mapping, and 1
i
: Q 2
Qi
is a preference
multivalued mapping; here Q
i
will be a subset of a Frchet space (complete,
metrizable locally convex vectorial topological space) 1
i
for each i 1. A point
r Q is called an equilibrium point of I if for each i 1. we have
r
i
1
i
(r) a:d 1
i
(r)

1
i
(r) = O:
here r
i
is the projection of r on 1
i
.
Theorem 3.13. Let \ be a closed, convex subset of a Frchet space 1 with
r
0
\. Suppose that there is an upper semicontinuous map 1 : \ C1(\)
(here C1(\) denotes the family of nonempty, compact, convex subsets of \)
with the following condition holding:
_ \. = co (r
0

1())
nit/ = C a:d C _ con:ta/|c. i:j|ic: i: co:jact. (+)
Then 1 has a xed point in \.
Remark 3.11. Suppose in addition in Theorem 3.13, we assume
1or a:n _ \. nc /ac 1(

) _ 1().
then we could replace (*) with
C _ \ con:ta/|c. C = co(r
0

1(C)) i:j|ic: C i: co:jact. (++)


and the result in Theorem 3.13 is again true.
Now Theorem 3.13 together with Remark 3.11 yields the following theorem
of M onch type for single valued maps.
107
Theorem 3.14. Let \ be a closed, convex subset of a Frchet space 1 with
r
0
\. Suppose that there is a continuous map 1 : \ \ with the following
condition holding:
C _ \ con:ta/|c. C = co(r
0

1(C)) i:j|ic: C i: co:jact.


Then 1 has a xed point in \.
Next we present a xed point result of Furi-Pera type.
Theorem 3.15. Let 1 be a Frchet space with Q a closed, convex subset
of 1 and 0 Q. Suppose 1 : Q C1(1) is a compact upper semicontinuous
map with the following condition holding:
i1 (r
n
. `
n
)
n`1
i: a :ccnc:cc i: 0Q[0. 1[ co:croi:o to (r. `)
nit/ r ` 1(r) a:d 0 _ ` _ 1. t/c: t/crc cri:t: :
0
1. 2. ...
nit/ `
n
1(r
n
) _ Q 1or : _ :
0
.
Then 1 has a xed point in Q.
Remark 3.12. In Theorem 3.15, if 1 is a Hilbert space, then one could
replace 1 : Q C1(1) a compact map in Theorem 3.15 with 1 : Q C1(1)
a one-set contractive, condensing map with 1(Q) a bounded set in 1.
Let 7 be a subset of a Hausdor topological space 1
1
and \ a subset of a
topological vector space 1
2
. We say 1 1T1(7. \) if \ is convex and there
exists a map 1 : 7 \ with
co (1(r)) _ 1(r). 1or a|| r 7. 1(r) = O 1or cac/ r 7.
and the bres
1
1
(n) = . [ . 7. n 1(.)
are open (in 7) for each n \.
The following selection theorem hold
Theorem 3.16. Let 7 be a nonempty, paracompact Hausdor topological
space and \ a nonempty, convex subset of a Hausdor topological vector space.
Suppose 1 1T1(7. \). Then 1 has a continuous selection, that is, there
exists a continuous single valued map 1 : 7 \ of 1.
The following result is a xed point theorem of Furi-Pera type for DTK
maps.
Theorem 3.17. Let 1 be a countable index set and Q
i

iI
a family of
nonempty closed, convex sets each in a Frchet 1
i
. Let Q =

iI
Q
i
and
assume 0 Q. For each i 1, let 1
i
1T1(Q. 1
i
) be a compact map. Let
1 : Q 2
E
(here 1 =

iI
1
i
) be given by
108
1(r) =

iI
1
i
(r). 1or r Q.
and suppose the following condition holds:
i1 (r
n
. `
n
)
n`1
i: a :ccnc:cc i: 0Q[0. 1[ co:croi:o to (r. `)
nit/ r ` 1(r) a:d 0 _ ` _ 1. t/c: t/crc cri:t: :
0
1. 2. ...
nit/ `
n
1(r
n
) _ Q 1or : _ :
0
.
Then 1 has a xed point in Q.
Remark 3.13. In Theorem 3.17, if 1
i
is a Hilbert space for each i 1,
then one could replace 1
i
, a compact map for each i 1 in Theorem 3.17 with
1 : Q 2
E
. a one-set contractive, condensing map with 1(Q) a bounded set
in 1.
We will now use the above xed point results to obtain equilibrium theorems
for an abstract economy.
Theorem 3.18. Let 1 be a countable set and I = (Q
i
. 1
i
. 1
i
)
iI
an abstract
economy such that for each i 1, the following conditions hold:
(1) Q
i
is a nonempty closed, convex subset of a Frchet space 1
i
,
(2) 1
i
: Q C1(Q
i
) is upper semicontinuous; here C1(Q
i
) denotes the
family of nonempty, compact, convex subsets of Q
i
,
(3) l
i
= r [ r Q. 1
i
(r)

1
i
(r) = O is open in Q,
(4) 1
i
[
Ui
: l
i
2
Ei
is upper semicontinuous with 1
i
(r) closed and convex
for each r l
i
,
(5) r
i
1
i
(r)

1
i
(r). for each r Q: here r
i
is the projection of r on 1.
In addition, suppose r
0
Q with
(6) _ Q. _ co(r
0

1()) with = C and C _ countable, implies


is compact
holding; here 1 : Q 2
Q
is given by
1(r) =

iI
1
i
(r). 1or r Q.
Then I has a equilibrium point. That is, for each i 1, we have
r
i
1
i
(r) a:d 1
i
(r)

1
i
(r) = O:
here r
i
is the projection of r on 1
i
.
Proof. Fix i 1. Let G
i
: l
i
2
Qi
be given by
G
i
(r) = 1
i
(r)

1
i
(r).
which is upper semicontinuous. Let H
i
: Q 2
Qi
be dened by
109
H
i
(r) = G
i
(r). i1 r l
i
. a:d H
i
(r) = 1
i
(r). i1 r l
i
.
which is upper semicontinuous with nonempty, compact, convex values (note
G
i
(r) _ 1
i
(r) for r l
i
).
Let H : Q 2
Q
be dened by
H(r) =

iI
H
i
(r).
We have that H : Q C1(Q) is upper semicontinuous. We wish to apply
Theorem 3.13 to H. To see this, let _ Q with = co(r
0

H())., = C
and C _ countable. Then since
H(r) _ 1(r). 1or r
(note H
i
(r) _ 1
i
(r). for r Q), we have
_ co(r
0

1()).
Now (6) guarantees that is compact. Theorem 3.13 guarantees that there
exists r Q with r H(r). From (5), we have r l
i
for each i 1. As a
result, for each i 1, we have r
i
1
i
(r) and 1
i
(r)

1
i
(r) = O: here r
i
is the
projection of r on 1
i
.
Remark 3.14. If 1(1) _ 1(1) for any 1 _ Q. then could replace (6) in
Theorem 3.18 with (see Remark 3.11)
C _ Q con:ta/|c. C _ co(r
0

1(C)) i:j|ic: C i: co:jact.

Theorem 3.19. Let 1 be a countable set and I = (Q


i
. 1
i
. 1
i
)
iI
an abstract
economy such that for each i 1, the following conditions hold:
(1) Q
i
is a nonempty closed, convex subset of a Frchet space 1
i
,
(2) 1
i
: Q C1(Q
i
) is upper semicontinuous, compact map,
(3) l
i
= r [ r Q. 1
i
(r)

1
i
(r) = O is open in Q,
(4) 1
i
[
Ui
: l
i
2
Ei
is upper semicontinuous with 1
i
(r) closed and convex
for each r l
i
,
(5) r
i
1
i
(r)

1
i
(r). for each r Q: here r
i
is the projection of r on 1.
In addition, suppose 0 Q with
(6)
i1 (r
n
. `
n
)
n`1
i: a :ccnc:cc i: 0Q[0. 1[ co:croi:o to (r. `)
nit/ r ` 1(r) a:d 0 _ ` _ 1. t/c: t/crc cri:t: :
0
1. 2. ...
nit/ `
n
1(r
n
) _ Q 1or : _ :
0
110
holding; here 1 : Q 2
E
(here 1 =

iI
1
i
) is given by
1(r) =

iI
1
i
(r).
Then I has an equilibrium point r Q. That is, for each i 1, we have
r
i
1
i
(r). a:d 1
i
(r)

1
i
(r) = O:
here r
i
is the projection of r on 1
i
.
Proof. Fix i 1 and let H
i
be as in Theorem 3.18. The same reasoning as
in Theorem 3.18 guarantees that H
i
: Q C1(1
i
) is upper semicontinuous.
Let H : Q 2
E
be as in proof of previous theorem. Notice H : Q C1(1)
is an upper semicontinuous, compact map (use (2) with H
i
(r) _ 1
i
(r) for
r Q). We wish to apply Theorem 3.15. To see this, suppose (r
n
. `
n
)
n`1
is
a sequence in 0Q [0. 1[ converging to (r. `) with r `H(r) and 0 _ ` < 1.
Then since H(r) _ 1(r) for r Q. we have r `1(r). Now (6) guarantees that
there exists :
0
1. 2. ... with `
n
1
n
(r
n
) _ Q for each : _ :
0
. Consequently,
`
n
H
n
(r
n
) _ Q for each : _ :
0
. Theorem 3.15 guarantees that there exists
r Q with r H(r). and it is easy to check, as in Theorem 3.18, that r is an
equilibrium point of I.
Remark 3.15. Notice (6) can be replaced by
i1 (r
n
. `
n
)
n`1
i: a :ccnc:cc i: 0Q[0. 1[ co:croi:o to (r. `)
nit/ r ` H(r) a:d 0 _ ` _ 1. t/c: t/crc cri:t: :
0
1. 2. ...
nit/ `
n
H(r
n
) _ Q 1or : _ :
0
.
where H is given in proof of Theorem 3.18, and the result in Theorem 3.19
is again true.
Remark 3.16. If 1
i
is a Hilbert space for each i 1, then one could replace
1
i
: Q C1(1
i
) a compact map for each i 1 in (2) with 1 : Q 2
E
a
one-set contractive, condensing map with 1(Q) a bounded set in 1.
Next we present a generalization of Theorems 3.18 and 3.19.
Theorem 3.20. Let 1 be a countable set and I = (Q
i
. 1
i
. 1
i
)
iI
an abstract
economy. Assume for each i 1 that (1), (2), (3) and (5) of Theorem 3.18 hold.
In addition, suppose for each i 1 that there exists an upper semicontinuous
selector
w
i
: l
i
2
Qi
o1 1
i

1
i
[
Ui
: l
i
2
Qi
(7) with w
i
(r) closed and convex for each r l
i
is satised. Then I has an equilibrium point r Q. That is, for each i 1,
we have
111
r
i
1
i
(r) a:d 1
i
(r)

1
i
(r) = O:
here r
i
is the projection of r on 1
i
.
Proof. Fix i 1. Let H
i
: Q 2
Qi
be dened by
H
i
(r) = w
i
(r). i1 r l
i
.
and
H
i
(r) = 1
i
(r). i1 r l
i
.
This H
i
: Q C1(Q
i
) is upper semicontinuous (note w
i
(r) _ 1
i
(r) for
r l
i
). Essentially the same reasoning as in Theorem 3.18 onwards establishes
result.
Remark 3.17. If 1
i
[
Ui
: l
i
2
Ei
is upper semicontinuous with 1
i
(r)
closed and convex for each r l
i
, then of course (7) holds.
If 1
i

1
i
[
Ui
: l
i
2
Qi
is lower semicontinuous with 1
i
(r) closed and convex
for each r l
i
, then (7) holds.
Theorem 3.21. Let 1 be a countable set and I = (Q
i
. 1
i
. 1
i
)
iI
an abstract
economy. Assume for each i 1 that (1), (2), (3) and (5) of Theorem 3.19 hold.
In addition, suppose for each i 1 that there exists an upper semicontinuous
selector
w
i
: l
i
2
Ei
o1 1
i

1
i
[
Ui
: l
i
2
Ei
(8) with w
i
(r) closed and convex for each r l
i
is satised. Also assume 0 Q with (6) holding. Then I has an equilibrium
point.
Proof. Fix i 1 and let H
i
be as in Theorem 3.20. Essentially the same
reasoning as in Theorem 3.19 establishes the result.
The theorems so far in this subsection assume l
i
is open in Q. Our next
two results consider the case when l
i
is closed in Q.
Theorem 3.22. Let 1 be a countable set and I = (Q
i
. 1
i
. 1
i
)
iI
an abstract
economy such that for each i 1, the following conditions hold:
(1) Q
i
is a nonempty, closed, convex subset of a Frchet space 1
i
,
(2) 1
i
: Q C1(Q
i
) is lower semicontinuous,
(3) l
i
= r [ r Q. 1
i
(r)

1
i
(r) = O is closed in Q,
(4) there exists a lower semicontinuous selector w
i
: l
i
2
Qi
of 1
i

1
i
[
Ui
:
l
i
2
Qi
with w
i
(r) closed and convex for each r l
i
,
and
(5) r
i
1
i
(r)

1
i
(r) for each r Q: here r
i
is the projection of r on 1
i
.
In addition, suppose r
0
Q with
(6) _ Q. _ co(r
0

1()) with = C and C _ countable, implies


is compact
holding; here 1 : Q 2
Q
is given by
112
1(r) =

iI
1
i
(r).
Then I has an equilibrium point.
Proof. Fix i 1 and let H
i
: Q 2
Qi
be given by
H
i
(r) = w
i
(r). i1 r l
i
H
i
(r) = 1
i
(r). i1 r l
i
.
This H
i
: Q C1(Q
i
) is lower semicontinuous. Then, there exists an upper
semicontinuous selector 1
i
: Q C1(Q
i
) of H
i
. Let 1 : Q 2
Q
be given by
1(r) =

iI
1
i
(r). 1or r Q.
Now 1 : Q C1(Q) is upper semicontinuous. We wish to apply Theorem
3.13 to 1. To see this, let _ Q with = co(r
0

1()), = C and C _
countable. Then since
1(r) _ 1(r). 1or r Q
(note 1
i
(r) _ H
i
(r) _ 1
i
(r), for r Q), we have
_ co(r
0

1()).
Now (5) guarantees that is compact. Theorem 3.13 guarantees that there
exists r Q with r 1(r). Now if r l
i
for some i 1, then
r
i
1
i
(r) _ H
i
(r) = w
i
(r)
(here r
i
is the projection of r on 1
i
), and so r
i
1
i
(r)

1
i
(r), a contra-
diction. As a result r l
i
for each i 1, so r
i
1
i
(r) and 1
i
(r)

1
i
(r) = O.

Remark 3.18. If 1
i

1
i
[
Ui
: l
i
2
Qi
is lower semicontinuous with 1
i
(r)
closed and convex for each r l
i
, then (4) is clearly satised.
Theorem 3.23. Let 1 be a countable set and I = (Q
i
. 1
i
. 1
i
)
iI
an abstract
economy such that for each i 1, the following conditions hold:
(1) Q
i
is a nonempty, closed, convex subset of a Frchet space 1
i
,
(2) 1
i
: Q C1(1
i
) is lower semicontinuous, compact map,
(3) l
i
= r [ r Q. 1
i
(r)

1
i
(r) = O is closed in Q,
(4) there exists a lower semicontinuous selector w
i
: l
i
2
Ei
of 1
i

1
i
[
Ui
:
l
i
2
Ei
with w
i
(r) closed and convex for each r l
i
,
and
(5) r
i
1
i
(r)

1
i
(r) for each r Q: here r
i
is the projection of r on 1
i
.
In addition, suppose 0 Q with
(6)
i1 (r
n
. `
n
)
n`1
i: a :ccnc:cc i: 0Q[0. 1[ co:croi:o to (r. `)
113
nit/ r ` 1(r) a:d 0 _ ` _ 1. t/c: t/crc cri:t: :
0
1. 2. ...
nit/ `
n
1(r
n
) _ Q 1or : _ :
0
holding; here 1 : Q 2
E
(here 1 =

iI
1
i
) is given by
1(r) =

iI
1
i
(r).
Then I has an equilibrium point r Q.
Proof. Fix i 1 and let H
i
be as in Theorem 3.22. The same reasoning as in
Theorem 3.22 guarantees that H
i
: Q C1(1
i
) is upper semicontinuous, and
that there exists an upper semicontinuous selector 1
i
: Q C1(1
i
) of H
i
. Let
1 : Q 2
E
be as in proof of previous theorem. Notice 1 : Q C1(1) is an
upper semicontinuous, compact map (use (2) with 1
i
(r) _ 1
i
(r) for r Q). We
wish to apply Theorem 3.22. To see this, suppose (r
n
. `
n
)
n`1
is a sequence
in 0Q [0. 1[ converging to (r. `) with r `1(r) and 0 _ ` < 1. Then since
1(r) _ 1(r) for r Q. we have r `1(r). Now (6) guarantees that there
exists :
0
1. 2. ... with `
n
1
n
(r
n
) _ Q for each : _ :
0
. Consequently,
`
n
1
n
(r
n
) _ Q for each : _ :
0
. Theorem 3.22 guarantees that there exists
r Q with r 1(r). and it is easy to check, as in Theorem 3.22, that r is an
equilibrium point of I.
Next we discuss an abstract economy I = (Q
i
. 1
i
. G
i
. 1
i
)
iI
(here 1 is count-
able) where for each i 1, Q
i
_ 1
i
is the choice set, 1
i
. G
i
:

iI
Q
i
= Q 2
Ei
are constraint multivalued mapping, and 1
i
: Q 2
Ei
is a preference multi-
valued mapping. A point r Q is called an equilibrium point of I if for each
i 1, we have
r
i
c|
Ei
G
i
(r) = G
i
(r) a:d 1
i
(r)

1
i
(r) = O
(here r
i
is the projection of r on 1
i
). The results which follows improve
those of Regan, Ding, Kim, Tan, Yannelis and Prabhaker. We establish a new
xed point result for DTK maps.
Theorem 3.24. Let 1 be a countable index set and Q
i

iI
a family of
nonempty closed, convex sets each in a Frchet 1
i
. For each i 1, let G
i

1T1(Q. Q
i
) where Q =

iI
Q
i
. Assume r
0
Q and suppose G : Q 2
Q
,
dened by G(r) =

iI
G
i
(r) for r Q, satises the following condition:
C _ Q con:ta/|c. C _ co(r
0

G(C)) i:j|ic: C i: co:jact.


Then G has a xed point in Q.
Proof. Since Q is a subset of a metrizable space 1 =

iI
1
i
we have that
Q is paracompact. Fix i 1. Then G
i
1T1(Q. Q
i
) together with Theorem
3.16 guarantees that there exists a continuous selector o
i
: Q Q
i
of G
i
. Let
o : Q Q be dened by
114
o(r) =

iI
o
i
(r). 1or r Q.
Notice G : Q Q is continuous and o is a selector of G. We now show
i1 C _ Q i: con:ta/|c a:d C = co(r
0

o(C)) t/c: C i: co:jact.


To see this, notice if C _ Q is countable and C = co(r
0

o(C)), then
since o is a selector of G, we have
C _ co(r
0

G(C)).
Now the condition in state of theorem implies C is compact. Theorem 3.14
guarantees that there exists r Q with r = o(r). That is,
r = o(r) =

iI
o
i
(r) _

iI
G
i
(r) = G(r).

Theorem 3.25. Let 1 be a countable set and I = (Q


i
. 1
i
. G
i
. 1
i
)
iI
an
abstract economy such that for each i 1, the following conditions hold:
(1) Q
i
is a nonempty, closed, convex subset of a Frchet space 1
i
.
(2) for each r Q, 1
i
(r) = O and co(1
i
(r)) _ G
i
(r).
(3) for each n
i
Q
i
. the set [(co1
i
)
1
(n
i
)

'
i
[

1
1
i
(n
i
) is open in Q:
here '
i
= r [ r Q. 1
i
(r)

1
i
(r) = O.
(4) G
i
: Q 2
Qi
, and
(5) r
i
co(1
i
(r)) for each r Q; here r
i
is the projection of r on 1
i
.
In addition, suppose r
0
Q with
(6) C _ Q countable, C _ co(r
0

G(C)) implies C is compact


holding; here G : Q 2
Q
is given by
(7) G(r) =

iI
G
i
(r). 1or r Q.
Then I has an equilibrium point r Q. That is, for each i 1, we have
r
i
G
i
(r) a:d 1
i
(r)

1
i
(r) = O:
here r
i
is the projection of r on 1
i
.
Proof. For each i 1, let

i
= r [ r Q. 1
i
(r)

1
i
(r) = O.
and for each r Q, let
1(r) = i [ i 1. 1
i
(r)

1
i
(r) = O.
For each i 1, dene multivalued mappings
i
. 1
i
: Q 2
Qi
by

i
(r) = co1
i
(r)

1
i
(r). i1 i 1(r) (t/at i:. r
i
).
115
a:d
i
(r) = 1
i
(r). i1 i 1(r).
and
1
i
(r) = co1
i
(r)

G
i
(r). i1 i 1(r) (t/at i:. r
i
).
a:d 1
i
(r) = G
i
(r). i1 i 1(r).
It is easy to see (use (2)) and the denition of 1(r) that for each i 1 and
r Q that
co(
i
(r)) _ 1
i
(r) a:d
i
(r) = O.
Also, for each i 1 and n
i
Q
i
we have

1
i
(n
i
) = r [ r Q. n
i

i
(r) =
r [ r
i
. n
i
co1
i
(r)

1
i
(r)

r [ r '
i
. n
i
1
i
(r) =
[r [ r
i
. n
i
co1
i
(r)

r [ r
i
. n
i
1
i
(r)[

r [ r '
i
. n
i
1
i
(r) =
[(co1
i
)
1
(n
i
)

1
1
i
(n
i
)[

[1
1
i
(n
i
)

'
i
[ =
[(co1
i
)
1
(n
i
)

1
1
i
(n
i
)[

[1
1
i
(n
i
)

'
i
[ =
[(co1
i
)
1
(n
i
)

'
i
[

1
1
i
(n
i
).
which is open in Q. Thus, 1
i
1T1(Q. Q
i
). Let 1 : Q 2
Q
be dened
by
1(r) =

iI
1
i
(r). 1or r Q.
We now show
C _ Q con:ta/|c. C _ co(r
0

1(C)) i:j|ic: C i: co:jact.


To see this, let C _ Q be countable with C _ co(r
0

1(C)). Now since


1(r) _ G(r) for r Q (note for each i 1 that 1
i
(r) _ G
i
(r) for r Q), we
have
116
C _ co(r
0

G(C)).
Now (6) implies C is compact, so we have the above implication. Theorem
3.24 guarantees that there exists r Q with r 1(r), that is r
i
1
i
(r) for
each i 1; note if i 1(r) for some i 1, then 1
i
(r)

1
i
(r) = O and so
r
i
co(1
i
(r))

G
i
(r). In particular, r
i
co(1
i
(r)). and this contradicts (5).
Thus, i 1(r) for all i 1. Consequently, 1
i
(r)

1
i
(r) = O and r
i
G
i
(r) for
all i 1.
Theorem 3.26. Let 1 be a countable set and I = (Q
i
. 1
i
. G
i
. 1
i
)
iI
an
abstract economy such that for each i 1, the following conditions hold:
(1) Q
i
is a nonempty, closed, convex subset of a Frchet space 1
i
.
(2) for each r Q, 1
i
(r) = O and co(1
i
(r)) _ G
i
(r).
(3) for each n
i
1
i
. the set [(co1
i
)
1
(n
i
)

'
i
[

1
1
i
(n
i
) is open in Q: here
'
i
= r [ r Q. 1
i
(r)

1
i
(r) = O.
(4) G
i
: Q 2
Ei
, is a compact map, and
(5) r
i
co(1
i
(r)) for each r Q; here r
i
is the projection of r on 1
i
.
In addition, suppose 0 Q with
(6)
i1 (r
n
. `
n
)
n`1
i: a :ccnc:cc i: 0Q[0. 1[ co:croi:o to (r. `)
nit/ r ` G(r) a:d 0 _ ` _ 1. t/c: t/crc cri:t: :
0
1. 2. ...
nit/ `
n
G(r
n
) _ Q 1or : _ :
0
holding; here G : Q 2
E
(here 1 =

iI
1
i
) is given by
G(r) =

iI
G
i
(r).
Then I has an equilibrium point r Q. That is, for each i 1, we have
r
i
G
i
(r) a:d 1
i
(r)

1
i
(r) = O:
here r
i
is the projection of r on 1
i
.
Proof. For each i 1, let
i
.
i
and 1
i
be as in Theorem 3.25. Essentially
the same reasoning as in Theorem 3.25 guarantees that 1
i
1T1(Q. 1
i
) for
each i 1. Also note that 1
i
is a compact map for each i 1. Let 1 : Q 2
E
be as in proof of previous theorem. We wish to apply Theorem 3.17. To see
this, suppose (r
n
. `
n
)
n`1
is a sequence in 0Q [0. 1[ converging to (r. `)
with r `1(r) and 0 _ ` < 1. Then, since 1(r) _ G(r) for r Q, we
have r `G(r). Now (6) guarantees that there exists :
0
1. 2. ... with
`
n
G(r
n
) _ Q for each : _ :
0
. Consequently, `
n
1(r
n
) _ Q for each
: _ :
0
. Theorem 3.17 guarantees that there exists r Q with r 1(r). and it
is easy to check, as in Theorem 3.25, that r is an equilibrium point of I.
117
Remark 3.19. If 1
i
is a Hilbert space for each i 1, then one could replace
G
i
: Q 2
Ei
a compact map for each i 1 in (4) with G : Q 2
E
a one-set
contractive, condensing map with G(Q) a bounded set in 1.
Finally, in this subsection, we present two more results for upper semicon-
tinuous maps which extend the well known results in literature.
Theorem 3.27. Let 1 be a countable set and I = (Q
i
. 1
i
. G
i
. 1
i
)
iI
an
abstract economy such that for each i 1, the following conditions hold:
(1) Q
i
is a nonempty, closed, convex subset of a Frchet space 1
i
.
(2) 1
i
: Q 2
Qi
is such that co(1
i
(r)) _ G
i
(r).
(3) G
i
: Q 2
Qi
and G
i
(r) is convex for each r Q.
(4) the multivalued mapping G
i
: Q C1(Q
i
). dened by G
i
(r) = c|
Qi
G
i
(r).
is upper semicontinuous,
(5) for each n
i
Q
i
, 1
1
i
(n
i
) is open in Q,
(6) for each n
i
Q
i
, 1
1
i
(n
i
) is open in Q, and
(7) r
i
co(1
i
(r)) for each r Q: here r
i
is the projection of r on 1
i
.
In addition, suppose r
0
Q with
(8) _ Q. _ co(r
0

G()) with = C and C _ countable, implies


is compact
holding; here G : Q 2
Q
is given by
(9)
G(r) =

iI
G
i
(r). 1or r Q.
Then I has an equilibrium point r Q. That is, for each i 1, we have
r
i
G
i
(r) a:d 1
i
(r)

1
i
(r) = O.
Proof. Fix i 1 and let c
i
: Q 2
Qi
be dened by
c
i
(r) = co(1
i
(r))

co(1
i
(r)). 1or r Q.
and
l
i
= r [ r Q. c
i
(r) = O.
Now (5) and (6) together with a result of Yannelis and Prabhaker imply for
each n Q
i
that (co1
i
)
1
(n) and (co1
i
)
1
(n) are open in Q. As a result, for
each n Q we have that
c
1
i
(n) = (co1
i
)
1
(n)

(co1
i
)
1
(n)
is open in Q. Now it is easy to check that
l
i
=

yQi
c
1
i
(n).
118
and as a result, we have that l
i
is open in Q. Since l
i
is a subset of a
metrizable space 1 =

iI
1
i
, we have that l
i
is paracompact. Notice as well
that
c
i
= c
i
[
Ui
: l
i
2
Qi
has convex values. Also for n Q
i
, we have
c
1
i
(n) = r [ r l
i
. n c
i
(r) =
r [ r Q. n c
i
(r)

l
i
= (c
i
)
1
(n)

l
i
.
so c
1
i
(n) is open in l
i
. Theorem 3.16 guarantees that there exists a con-
tinuous selection 1
i
: l
i
2
Qi
of c
i
. For each i 1, let H
i
: Q 2
Qi
be given
by
H
i
(r) = 1
i
(r). i1 r l
i
. a:d H
i
(r) = G
i
(r). i1 r l
i
.
This H
i
is upper semicontinuous (note for each r l
i
that 1
i
(r) _
c
i
(r) _ co(1
i
(r)) _ G
i
(r)). Also notice (4) guarantees that H
i
: Q C1(Q
i
).
Let H : Q 2
Q
be given by
H(r) =

iI
H
i
(r). 1or r Q.
This H : Q C1(Q) is upper semicontinuous. We wish to apply Theorem
3.13 to H. To see this, let _ Q with = co(r
0

H()), = C, and
C _ countable. Then since H
i
(r) _ G
i
(r) for each r Q, we have
H(r) _

iI
G
i
(r) = G(r). 1or r Q.
Thus
_ co(r
0

G()).
so (8) guarantees that is compact. Theorem 3.13 guarantees that there
exists r Q with r H(r). If r l
i
for some i, then
r
i
= 1
i
(r) co(1
i
(r))

co(1
i
(r)) _ co(1
i
(r)).
This contradicts (7). Thus, for each i 1, we must have r l
i
, so r
i

G
i
(r) and co(1
i
(r))

co(1
i
(r)) = O. Our result follows since
1
i
(r)

1
i
(r) _ co(1
i
(r))

co(1
i
(r)).

Remark 3.20. Notice that (5) and (6) in the last theorem could be replaced by
(10) for each i ∈ I and each y_i ∈ Q_i, (co F_i)^{-1}(y_i) ∩ (co P_i)^{-1}(y_i) is open in Q,
and the result is again true.
Theorem 3.28. Let I be a countable set and Γ = (Q_i, F_i, G_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i,
(2) F_i : Q → 2^{E_i} is such that co(F_i(x)) ⊆ G_i(x),
(3) G_i : Q → 2^{E_i} and G_i(x) is convex for each x ∈ Q,
(4) the multivalued mapping Ḡ_i : Q → CK(E_i), defined by Ḡ_i(x) = cl_{E_i} G_i(x), is upper semicontinuous,
(5) for each y_i ∈ E_i, F_i^{-1}(y_i) is open in Q,
(6) for each y_i ∈ E_i, P_i^{-1}(y_i) is open in Q, and
(7) x_i ∉ co(P_i(x)) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose x_0 ∈ Q is such that
(8) if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1) converging to (x, λ) with x ∈ λG(x) and 0 ≤ λ ≤ 1, then there exists n_0 ∈ {1, 2, ...} with λ_n G(x_n) ⊆ Q for n ≥ n_0
holds; here G : Q → 2^E (with E = ∏_{i∈I} E_i) is given by
G(x) = ∏_{i∈I} Ḡ_i(x).
Then Γ has an equilibrium point x ∈ Q; that is, for each i ∈ I we have
x_i ∈ Ḡ_i(x) and F_i(x) ∩ P_i(x) = ∅;
here x_i is the projection of x on E_i.
Proof. Let φ_i, U_i, H_i and H be as in the previous theorem. Essentially the same reasoning as in the previous theorem guarantees that H : Q → CK(E) is upper semicontinuous. Notice also that H is compact. We wish to apply Theorem 3.15 to H. To see this, suppose {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1) converging to (x, λ) with x ∈ λH(x) and 0 ≤ λ < 1. Then, since H(x) ⊆ G(x) for x ∈ Q, we have x ∈ λG(x). Now (8) guarantees that there exists n_0 ∈ {1, 2, ...} with λ_n G(x_n) ⊆ Q for each n ≥ n_0. Consequently, λ_n H(x_n) ⊆ Q for each n ≥ n_0. Theorem 3.15 guarantees that there exists x ∈ Q with x ∈ H(x). □
Theorem 3.29. Let I be a countable set and Γ = (Q_i, D_i, F_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is convex,
(2) D_i is a nonempty compact subset of Q_i,
(3) for each x ∈ Q, F_i(x) is a nonempty convex subset of D_i,
(4) for each x_i ∈ D_i, F_i^{-1}(x_i) ∩ [U_i ∪ P_i^{-1}(x_i)] contains a relatively open subset O_{x_i} of co D such that ∪_{x_i∈D_i} O_{x_i} = co D, where U_i = {x ∈ Q : F_i(x) ∩ P_i(x) = ∅} and D = ∏_{i∈I} D_i;
(5) for each x = (x_i) ∈ Q, x_i ∉ co P_i(x).
Then Γ has an equilibrium point.
Proof. For each i ∈ I, let
G_i = {x ∈ Q : F_i(x) ∩ P_i(x) ≠ ∅},
and for each x ∈ Q, let
I(x) = {i ∈ I : F_i(x) ∩ P_i(x) ≠ ∅}.
Now for each i ∈ I we define a multivalued mapping T_i : Q → 2^{D_i} by
T_i(x) = co P_i(x) ∩ F_i(x), if i ∈ I(x), and T_i(x) = F_i(x), if i ∉ I(x).
Clearly, for each x ∈ Q, T_i(x) is a nonempty convex subset of D_i. Also, for each y_i ∈ D_i,
T_i^{-1}(y_i) = [(co P_i)^{-1}(y_i) ∩ F_i^{-1}(y_i) ∩ G_i] ∪ [F_i^{-1}(y_i) ∩ U_i]
⊇ [P_i^{-1}(y_i) ∩ F_i^{-1}(y_i) ∩ G_i] ∪ [F_i^{-1}(y_i) ∩ U_i]
= [P_i^{-1}(y_i) ∩ F_i^{-1}(y_i)] ∪ [F_i^{-1}(y_i) ∩ U_i] = F_i^{-1}(y_i) ∩ [U_i ∪ P_i^{-1}(y_i)].
We note that the first inclusion follows from the fact that for each y_i ∈ D_i, P_i^{-1}(y_i) ⊆ (co P_i)^{-1}(y_i), because P_i(x) ⊆ (co P_i)(x) for each x ∈ Q; the last two equalities use U_i = Q \ G_i. Furthermore, by virtue of (4), for each y_i ∈ D_i, T_i^{-1}(y_i) contains a relatively open set O_{y_i} of Q such that ∪_{y_i∈D_i} O_{y_i} = co D. Hence, by a result of Husain and Tarafdar, there exists a point x = (x_i) such that x_i ∈ T_i(x) for each i ∈ I. By condition (5) and the definition of T_i, it now easily follows that x ∈ Q is an equilibrium point of Γ. □
Corollary 3.4. Let I be a countable set and Γ = (Q_i, D_i, F_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is convex,
(2) D_i is a nonempty compact subset of Q_i,
(3) for each x ∈ Q, F_i(x) is a nonempty convex subset of D_i,
(4) the set G_i = {x ∈ Q : F_i(x) ∩ P_i(x) ≠ ∅} is a closed subset of Q,
(5) for each y_i ∈ D_i, P_i^{-1}(y_i) is a relatively open subset of G_i and F_i^{-1}(y_i) is a relatively open subset of Q,
(6) for each x = (x_i) ∈ Q, x_i ∉ co P_i(x).
Then there is an equilibrium point of the economy Γ.
Proof. Since P_i^{-1}(y_i) is relatively open in G_i, there is an open subset V_i of Q with P_i^{-1}(y_i) = G_i ∩ V_i. Hence, for y_i ∈ D_i, with U_i = Q \ G_i,
P_i^{-1}(y_i) ∪ U_i = (G_i ∩ V_i) ∪ U_i = Q ∩ (V_i ∪ U_i).
Thus
F_i^{-1}(y_i) ∩ [U_i ∪ P_i^{-1}(y_i)] = (V_i ∪ U_i) ∩ F_i^{-1}(y_i) = O_{y_i},
say, is a relatively open subset of Q for each y_i ∈ D_i, since V_i, U_i and F_i^{-1}(y_i) are open subsets of Q. Now it follows that ∪_{y_i∈D_i} O_{y_i} ⊇ co D. The corollary is thus a consequence of Theorem 3.29. □
3.4 Existence of first-order locally consistent equilibria
3.4.1 Introduction
A first-order locally consistent equilibrium (1-LCE) of a game is a configuration of strategies at which the first-order condition for payoff maximization is simultaneously satisfied for all players. The economic motivation for introducing this equilibrium concept is that oligopolistic firms do not know their effective demand function: at any given status quo, each firm knows only the linear approximation of its demand curve and believes it to be the demand curve it faces. In what follows, in order to distinguish between the abstract concept of 1-LCE, that is, a configuration of a game in which the first-order condition for payoff maximization is satisfied for all players, and its economic interpretation, that is, a profit-maximizing configuration in a market or in an economy in which firms know only the linear approximation of their demand functions, the latter equilibrium concept will be called a first-order locally consistent economic equilibrium (1-LCEE) (see [1], [22], [23]).
3.4.2 First-order equilibria for non-cooperative games
Consider the following non-cooperative game Γ = (I, (S_i), (H_i))_{i∈I}, where I = {1, 2, ..., n} is the index set of players, S_i is the strategy set of player i, and H_i is the payoff function of player i. Set S = ∏_{i∈I} S_i and S_{-i} = ∏_{j∈I, j≠i} S_j. The generic element of the set S (respectively S_i, respectively S_{-i}) is denoted by x (resp. x_i, resp. x_{-i}). Denote by D_{x_i}H_i the derivative of H_i with respect to x_i. The derivative of H_i with respect to x_i calculated at the point x is denoted by D_{x_i}H_i(x).
A.1. (∀) i ∈ I, S_i is a convex and compact subset of a Banach space.
A.2. (∀) i ∈ I, the function H_i : S → R is continuous; moreover, for every x ∈ S, the derivative D_{x_i}H_i exists and is continuous, that is, there exists an open set V_i^0 ⊇ S_i and an extension of the function H_i to V_i^0 which is continuously differentiable with respect to x_i.
Definition 3.7. A 1-LCE for the game Γ is a configuration x* ∈ S such that:
(i) if x*_i ∈ S_i \ ∂S_i, then D_{x_i}H_i(x*) = 0;
(ii) if x*_i ∈ ∂S_i, then there exists a neighborhood N(x*_i) of x*_i in S_i such that D_{x_i}H_i(x*)(x_i − x*_i) ≤ 0 for every x_i ∈ N(x*_i).
Condition (ii) means that if x*_i belongs to the boundary of the strategy set, then either it satisfies the first-order condition for payoff maximization, or it is a local maximum. Notice that Definition 3.7 is in line with the usual idea that at 1-LCEs players carry out local experiments by employing the linear approximations of some appropriate function, in this case the payoff function.
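To make Definition 3.7 concrete, the sketch below (not part of the original text; the payoff functions, the interval strategy sets and the numerical tolerances are assumptions chosen for the illustration) checks conditions (i) and (ii) numerically when every strategy set is S_i = [0, 1] ⊂ R.

# A minimal numerical check of Definition 3.7 for S_i = [0, 1] subsets of R.
# The payoffs H_i and the candidate configurations are illustrative assumptions.
import numpy as np

def dH(i, x, H, h=1e-6):
    """Central-difference approximation of D_{x_i} H_i at x."""
    xp, xm = x.copy(), x.copy()
    xp[i] += h
    xm[i] -= h
    return (H[i](xp) - H[i](xm)) / (2 * h)

def is_1_lce(x, H, lo=0.0, hi=1.0, tol=1e-5):
    """Check conditions (i) and (ii) of Definition 3.7 on S_i = [lo, hi]."""
    for i in range(len(x)):
        d = dH(i, x, H)
        if lo < x[i] < hi:          # interior point: first-order condition
            if abs(d) > tol:
                return False
        elif x[i] == lo:            # left endpoint: feasible x_i - x*_i >= 0, so need d <= 0
            if d > tol:
                return False
        else:                       # right endpoint: feasible x_i - x*_i <= 0, so need d >= 0
            if d < -tol:
                return False
    return True

# Assumed toy game: H_1(x) = x_1 - x_1^2/2 - x_1 x_2/4, and symmetrically for player 2.
H = [lambda x: x[0] - x[0] ** 2 / 2 - x[0] * x[1] / 4,
     lambda x: x[1] - x[1] ** 2 / 2 - x[0] * x[1] / 4]
# Interior candidate solving D_{x_i}H_i = 0, i.e. 1 - x_i - x_j/4 = 0: x_1 = x_2 = 0.8.
print(is_1_lce(np.array([0.8, 0.8]), H))   # True
print(is_1_lce(np.array([0.5, 0.5]), H))   # False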
Given a configuration x^0 ∈ S, interpreted as the status quo, define the function L_i : S × S_i → R as follows: L_i(x^0, x_i) = H_i(x^0) + D_{x_i}H_i(x^0)(x_i − x^0_i). With some abuse of language, the following fictitious n-person non-cooperative game Γ^c = (I, (S_i), (H_i), (L_i))_{i∈I} will be associated to the game Γ. In the game Γ^c, given the status quo x^0, the best strategy for player i is the solution to the following problem:
(P_i)  max L_i(x^0, x_i), such that x_i ∈ S_i.
Denote by B_i(x^0) the set of solutions to problem (P_i).
If we interpret the game Γ^c as an oligopolistic game among firms which choose, for example, the level of production, then the behavioral hypothesis underlying problem (P_i) is that, given the status quo, firms maximize the linear approximation of their profit functions.
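Since L_i(x^0, ·) is affine in x_i, problem (P_i) over an interval is solved simply by looking at the sign of the slope D_{x_i}H_i(x^0). The sketch below is not from the original text; the interval bounds and the slope values are assumptions for illustration.

# Best reply of problem (P_i) when S_i = [a, b] in R: maximize the affine function
#   L_i(x0, x_i) = H_i(x0) + slope * (x_i - x0_i),  slope = D_{x_i}H_i(x0).
def best_reply_interval(slope, a, b, tol=1e-12):
    """Return representative solutions of (P_i) on [a, b] for a given slope."""
    if slope > tol:
        return [b]          # strictly increasing: right endpoint
    if slope < -tol:
        return [a]          # strictly decreasing: left endpoint
    return [a, b]           # constant: every point is optimal (endpoints shown)

print(best_reply_interval(+2.0, 0.0, 1.0))  # [1.0]
print(best_reply_interval(-0.5, 0.0, 1.0))  # [0.0]
print(best_reply_interval(0.0, 0.0, 1.0))   # [0.0, 1.0]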
Definition 3.8. An equilibrium for the game Γ^c is a configuration x* ∈ S such that x*_i ∈ B_i(x*) for every i ∈ I. Denote by E(Γ^c) the set of equilibria of the game Γ^c, and by LCE(Γ) the set of 1-LCEs of the game Γ.
Theorem 3.30. Under A.1 and A.2, LCE(Γ) ≠ ∅.
Proof. First we show that E(Γ^c) = LCE(Γ). Suppose that x* ∈ E(Γ^c). It is sufficient to show that x* satisfies the following conditions:
(i) if x*_i ∈ S_i \ ∂S_i, then D_{x_i}H_i(x*) = 0;
(ii) if x*_i ∈ ∂S_i, then D_{x_i}H_i(x*)(x_i − x*_i) ≤ 0 for every x_i ∈ S_i.
To this end, suppose that x*_i ∈ S_i \ ∂S_i but D_{x_i}H_i(x*) ≠ 0. Since x*_i is an interior point of S_i, the linearity of L_i implies that there exists a point x'_i ∈ ∂S_i such that L_i(x*, x'_i) > L_i(x*, x*_i), which is a contradiction. Suppose now that x*_i ∈ ∂S_i and D_{x_i}H_i(x*)(x_i − x*_i) > 0 for some x_i ∈ S_i. Clearly, x*_i does not solve problem (P_i), a contradiction. Summarizing, x* ∈ LCE(Γ).
Finally, suppose that x* ∈ LCE(Γ). Then x* satisfies conditions (i) and (ii) in Definition 3.7. If x*_i ∈ S_i \ ∂S_i, then D_{x_i}H_i(x*) = 0; therefore L_i(x*, x_i) = H_i(x*) for every x_i ∈ S_i. It follows that x*_i solves problem (P_i). Consider now the case x*_i ∈ ∂S_i with D_{x_i}H_i(x*)(x_i − x*_i) ≤ 0 for every x_i in some neighborhood N(x*_i) of x*_i. By linearity, one obtains that D_{x_i}H_i(x*)(x_i − x*_i) ≤ 0 for every x_i ∈ S_i. Thus, also in this case x*_i solves problem (P_i). Therefore, x* ∈ E(Γ^c).
Now it is sufficient to show that E(Γ^c) ≠ ∅. By A.2 it follows that the function L_i : S × S_i → R is continuous. Thus, by Berge's maximum theorem the multivalued mapping B_i : S → 2^{S_i} is upper hemicontinuous. It is also convex-valued because of the linearity of L_i. Define the multivalued mapping B : S → 2^S by B = ∏_{i∈I} B_i. Because of A.1, the Bohnenblust–Karlin fixed point theorem ensures that there exists x* ∈ S such that x* ∈ B(x*). Thus, x* ∈ E(Γ^c). □
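The equivalence E(Γ^c) = LCE(Γ) is easy to see numerically at an interior point: if D_{x_i}H_i(x*) = 0, then L_i(x*, ·) is constant, so x*_i trivially solves (P_i). The quadratic payoffs below are an assumed example, not taken from the text.

# Illustration of E(Gamma^c) = LCE(Gamma) at an interior point.
import numpy as np

def H(i, x):
    # Assumed Cournot-style payoff: price 10 - x_1 - x_2, unit cost 2.
    return x[i] * (10.0 - x[0] - x[1]) - 2.0 * x[i]

def dH(i, x, h=1e-6):
    xp, xm = np.array(x, float), np.array(x, float)
    xp[i] += h; xm[i] -= h
    return (H(i, xp) - H(i, xm)) / (2 * h)

# Interior 1-LCE of this game: 8 - 2 x_i - x_j = 0  =>  x_1 = x_2 = 8/3.
x_star = [8.0 / 3.0, 8.0 / 3.0]
print([round(dH(i, x_star), 6) for i in (0, 1)])   # both (numerically) zero

# The linearised payoff L_i(x*, x_i) = H_i(x*) + dH_i(x*)(x_i - x*_i) is then constant,
# so x*_i maximises it: x* is also an equilibrium of the fictitious game Gamma^c.
L = lambda i, xi: H(i, x_star) + dH(i, x_star) * (xi - x_star[i])
print(round(L(0, 0.0), 4), round(L(0, 5.0), 4), round(L(0, x_star[0]), 4))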
3.4.3 Existence of a first-order economic equilibrium
Next, in the following example, we prove the existence of a first-order locally consistent economic equilibrium in a model of monopolistic competition similar to that of Bonanno and Zeeman.
Example 3.2. We consider a monopolistically competitive market with n price-making firms, i ∈ I, I = {1, 2, ..., n}. The cost function of firm i is C_i(q_i) = c_i q_i, where q_i is the level of output of firm i and c_i is a positive number. We assume that firm i may choose any price in the interval J_i = [c_i, P_i]. Set J = ∏_{i∈I} J_i and J_{-i} = ∏_{j∈I, j≠i} J_j. The price set by firm i is denoted by p_i. Denote by p_{-i} the (n−1)-dimensional vector whose elements are the prices set by all firms except the i-th one. Set p = (p_i, p_{-i}). The function D_i : J → R is the demand function of firm i, and it is denoted by D_i(p). The true profits of the firms are given by
H_i(p) = D_i(p)(p_i − c_i). □
Next, one shows that there exists a first-order locally consistent economic equilibrium for the above monopolistic market. We suppose that:
A.1. For every i ∈ I, the function D_i is continuous on J, and the derivative ∂D_i/∂p_i : J → R exists and is continuous.
A.2. For every p_{-i} ∈ J_{-i}, if D_i(p'_i, p_{-i}) = 0 for some p'_i ∈ J_i \ {P_i}, then (∂D_i/∂p_i)(p'_i, p_{-i}) ≤ 0 and D_i(p''_i, p_{-i}) = 0 for every p''_i ≥ p'_i.
Here it is possible that, for every price in J_i, firm i's market demand is zero.
Remark 3.21. We shall assume that firms maximize their conjectural profit function, calculated by taking into account the linear approximation of their demand function. Given the status quo p^0 ∈ J, the conjectural demand of firm i is
D̂_i(p_i, p^0) := D_i(p^0) + (∂D_i/∂p_i)(p^0)(p_i − p^0_i),
and the conjectural profit is
H*_i(p_i, p^0) := D̂_i(p_i, p^0)(p_i − c_i).
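Computationally, the conjectural objects of Remark 3.21 are just a first-order Taylor expansion of the true demand around the status quo. The sketch below is an illustration only; the linear demand function, costs and status quo are assumptions, not data from the text (D̂_i is written D_hat in the code).

# Conjectural demand and conjectural profit of Remark 3.21.
def conjectural_demand(p_i, p0, i, D, dD_i):
    """D_hat_i(p_i, p0) = D_i(p0) + dD_i/dp_i(p0) * (p_i - p0_i)."""
    return D(i, p0) + dD_i(i, p0) * (p_i - p0[i])

def conjectural_profit(p_i, p0, i, D, dD_i, c):
    """H*_i(p_i, p0) = D_hat_i(p_i, p0) * (p_i - c_i)."""
    return conjectural_demand(p_i, p0, i, D, dD_i) * (p_i - c[i])

# Assumed linear duopoly demand: D_i(p) = 10 - 2 p_i + p_j, unit costs c = (1, 1).
D = lambda i, p: 10.0 - 2.0 * p[i] + p[1 - i]
dD_i = lambda i, p: -2.0
c = (1.0, 1.0)

p0 = (3.0, 3.0)
print(conjectural_demand(3.0, p0, 0, D, dD_i))       # at the status quo: D_0(p0) = 7.0
print(conjectural_profit(3.5, p0, 0, D, dD_i, c))    # D_hat = 6.0, so 6.0 * 2.5 = 15.0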
Definition 3.9. A first-order locally consistent economic equilibrium is a vector p* ∈ J such that for every i ∈ I we have
H*_i(p*_i, p*) ≥ H*_i(p_i, p*), for every p_i ∈ J_i. □
Definition 3.9 means that at equilibrium firms are maximizing their conjectural profit function. It is easily seen that if p* is a first-order locally consistent economic equilibrium then:
i) D̂_i(p*_i, p*) = D_i(p*), and
ii) (∂D̂_i/∂p_i)(p*) = (∂D_i/∂p_i)(p*).
Condition i) means that at equilibrium the conjectural demand must be equal to the true demand. Condition ii) means that at equilibrium the slope of the true demand function is equal to the slope of the conjectural demand.
We have
Theorem 3.31. Under A.1 and A.2 there exists a first-order locally consistent economic equilibrium.
Proof. By setting S_i = J_i and x_i = p_i, i ∈ I, the industry we are considering reduces to the game Γ considered above. Under A.1 and A.2 the game Γ clearly has a first-order locally consistent equilibrium x* = (x*_i)_{i∈I}. Set p*_i = x*_i, i ∈ I. Thus, to prove Theorem 3.31 it is sufficient to prove that if (p*_i)_{i∈I} is a first-order locally consistent equilibrium then it satisfies the condition in Definition 3.9. We have to consider three possible cases:
a) p*_i = P_i, b) p*_i = c_i, c) p*_i ∉ ∂J_i, i ∈ I.
Case a). p*_i = P_i. Assumption A.2 ensures that D_i(p*) = (∂D_i/∂p_i)(p*) = 0. It follows that D̂_i(p_i, p*) = 0 for p_i ∈ J_i. Therefore H*_i(p*_i, p*) = H*_i(p_i, p*) = 0 for p_i ∈ J_i. Thus, the condition in Definition 3.9 is satisfied.
Case b). p*_i = c_i. Two cases can occur:
b_1) (∂H_i/∂p_i)(p*) = 0;
b_2) (∂H_i/∂p_i)(p*) ≠ 0.
In case b_1) it is not possible that D_i(p*) > 0. In fact, if it were so, one would have (∂H_i/∂p_i)(p*) = D_i(p*) > 0, which is a contradiction. If D_i(p*) = 0, then H*_i(p*_i, p*) ≥ H*_i(p_i, p*) for p_i ∈ J_i, since H*_i(p*_i, p*) = 0 and H*_i(p_i, p*) = ((∂D_i/∂p_i)(p*)(p_i − p*_i))(p_i − c_i) ≤ 0, because p*_i = c_i and (∂D_i/∂p_i)(p*) ≤ 0 from assumption A.2.
In case b_2), since p* is a first-order locally consistent equilibrium, it must satisfy the condition (D_i(p*) + (∂D_i/∂p_i)(p*)(p*_i − c_i))(p_i − p*_i) ≤ 0 for p_i ∈ N(c_i), where N(c_i) is a right neighborhood of c_i. Because p*_i = c_i, one has D_i(p*)(p_i − p*_i) ≤ 0 for p_i ∈ J_i. This implies that D_i(p*) = 0, and therefore, by A.2, that (∂D_i/∂p_i)(p*) ≤ 0 and that D_i(p_i, p*_{-i}) = 0 for every p_i ∈ J_i \ {c_i}.
We shall prove that H*_i(p*_i, p*) ≥ H*_i(p_i, p*) for p_i ∈ J_i. In fact, H*_i(p*_i, p*) = 0, while H*_i(p_i, p*) = (D_i(p*) + (∂D_i/∂p_i)(p*)(p_i − p*_i))(p_i − c_i) = (∂D_i/∂p_i)(p*)(p_i − c_i)^2 ≤ 0 for every p_i ∈ J_i \ {c_i}, by the above argument. Thus, also in this case the condition of Definition 3.9 is satisfied.
Case c). p*_i ∈ J_i \ ∂J_i. By the definition of first-order locally consistent equilibrium, one must have (∂H_i/∂p_i)(p*) = 0. Two cases can occur:
c_1) D_i(p*) > 0, and
c_2) D_i(p*) = 0.
In case c_1), by noticing that (∂H_i/∂p_i)(p*) = 0 implies (∂D_i/∂p_i)(p*) < 0 and that (∂^2 H*_i/∂p_i^2)(p*_i, p*) = 2(∂D_i/∂p_i)(p*), one can conclude that (∂H_i/∂p_i)(p*) = 0 implies (∂^2 H*_i/∂p_i^2)(p*_i, p*) < 0. Since H*_i(·, p*) is a quadratic function of p_i whose derivative at p*_i equals (∂H_i/∂p_i)(p*) = 0, it is therefore maximized over J_i at p*_i, and the condition in Definition 3.9 is satisfied.
In case c_2), if we prove that (∂D_i/∂p_i)(p*) = 0, we have completed the proof, because in this case H*_i(p*_i, p*) = H*_i(p_i, p*) = 0 for p_i ∈ J_i. Suppose, on the contrary, that (∂D_i/∂p_i)(p*) < 0; then (∂H_i/∂p_i)(p*)(p_i − p*_i) = (D_i(p*) + (∂D_i/∂p_i)(p*)(p*_i − c_i))(p_i − p*_i) = (∂D_i/∂p_i)(p*)(p*_i − c_i)(p_i − p*_i) > 0 for p_i < p*_i, contradicting the hypothesis that p* is a first-order locally consistent equilibrium. Thus, also in this last case the condition in Definition 3.9 is satisfied. The proof is complete.
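For a numerical sanity check of Theorem 3.31, one can take an assumed symmetric linear-demand market (the demand, costs and price interval below are illustrative assumptions, not data from the text). With linear demand, D̂_i coincides with D_i, so a 1-LCEE with an interior price simply satisfies the first-order condition of the true profit, and conditions i)–ii) after Definition 3.9 hold identically.

# Assumed symmetric market: D_i(p) = 10 - 2 p_i + p_j, C_i(q) = 1 * q, J_i = [1, 6].
import numpy as np

def demand(i, p):
    return 10.0 - 2.0 * p[i] + p[1 - i]

def profit(i, p):
    return demand(i, p) * (p[i] - 1.0)

# First-order condition: dH_i/dp_i = D_i(p) - 2 (p_i - 1) = 0.
# By symmetry p_1 = p_2 = p:  (10 - p) - 2 (p - 1) = 0  =>  p = 4 (interior in [1, 6]).
p_star = np.array([4.0, 4.0])

h = 1e-6
for i in (0, 1):
    pp, pm = p_star.copy(), p_star.copy()
    pp[i] += h; pm[i] -= h
    print(round((profit(i, pp) - profit(i, pm)) / (2 * h), 5))   # 0.0: interior 1-LCEE

# Conditions i) and ii): with linear demand the conjectural demand and its slope
# agree with the true demand and slope at p* by construction.
print(demand(0, p_star))   # true (and conjectural) demand at p*: 6.0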
Remark 3.22. In [9], Bonanno and Zeeman have provided a general existence result for first-order locally consistent equilibria in an abstract game-theoretic setting, and they employ their existence result to prove the existence of a first-order locally consistent equilibrium in a monopolistically competitive industry with price-making firms.
3.4.4 First-order equilibria for an abstract economy
We consider an abstract economy with production, with m firms and n goods, given by
E = (G, I, J, (u_i)_{i∈I}, (X_i)_{i∈I}, (ω_i)_{i∈I}, (θ_i)_{i∈I}, (Y_j)_{j∈J}),
where G, I and J are the index sets of goods, households and firms, respectively.
Given the production profile y = (y_1, y_2, ..., y_m) ∈ Y, where Y = ∏_{j∈J} Y_j, the intermediate endowment of consumer i is ω^0_i(y) = ω_i + Σ_{j∈J} θ_{ij} y_j. We denote by D_i(p, y) the individual demand mapping, and by z(p, y) the aggregate excess demand mapping of the economy at the price p ∈ Δ ⊆ R^n_+, given the production profile y. The symbol W(y) indicates the set of Walrasian prices associated with the production profile y, that is, W(y) = {p ∈ Δ : z(p, y) = 0}.
We set Ω = {y : ω^0_i(y) > 0, i ∈ I}.
We suppose that:
A1. For all i ∈ I, u_i is such that D_i(p, y) is single-valued, strictly positive and of class C^∞ on R^n_{++} × Ω.
A2. Y ⊆ Ω. Moreover, Y is compact, and Y_j is a convex set, j ∈ J.
A3. If W(y) is nonempty, then W(y) is a singleton.
A4. For all y ∈ Ω, the rank of D_p̃ z̃[p(y), y] is n − 1, where z̃ is the function z without its last component, p(y) ∈ W(y), and D_p̃ is the derivative with respect to the first n − 1 components of p.
The producer j calculates his profits on the basis of the linear approximation of the effective demand function
p*_j(y_j, y^0) = p(y^0) + (y_j − y^0_j) D_{y_j} p(y^0)^T,
where y^0 is a status quo, D_{y_j} denotes the derivative with respect to y_j, and the symbol T indicates matrix transposition.
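The perceived prices are again a first-order expansion, this time of the Walrasian price map p(·) in the direction of firm j's own production plan. The sketch below only illustrates the formula; the price vector, Jacobian and production plans are assumptions for the example.

# Perceived price map  p*_j(y_j, y0) = p(y0) + (y_j - y0_j) D_{y_j} p(y0)^T.
import numpy as np

def perceived_price(y_j, y0_j, p_y0, Dp_y0):
    """p_y0: price vector p(y0); Dp_y0: Jacobian of p with respect to y_j at y0."""
    return p_y0 + (y_j - y0_j) @ Dp_y0.T

# Assumed data: two goods, p(y0) = (5, 3), Jacobian dp/dy_j = -0.5 * I
# (prices fall when firm j expands output of either good).
p_y0 = np.array([5.0, 3.0])
Dp_y0 = -0.5 * np.eye(2)
y0_j = np.array([1.0, 1.0])
y_j = np.array([2.0, 1.0])

print(perceived_price(y_j, y0_j, p_y0, Dp_y0))   # [4.5, 3.0]
print(perceived_price(y0_j, y0_j, p_y0, Dp_y0))  # at the status quo: the true p(y0)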
Definition 3.10. A first-order locally consistent economic equilibrium for the economy E is a configuration (p*, (y*_j)_{j∈J}) ∈ Δ × Y such that
p*_j(y*_j, y*) y*_j ≥ p*_j(y_j, y*) y_j, for every y_j ∈ Y_j, j ∈ J. □
This definition means that at a first-order locally consistent economic equilibrium firms are maximizing their profits according to their perceived demand functions. It is easily seen that if (p*, (y*_j)_{j∈J}) is a first-order locally consistent economic equilibrium then
(a) p*_j(y*_j, y*) = p(y*), j ∈ J, and
(b) D_{y_j} p*_j(y*_j, y*) = D_{y_j} p(y*), j ∈ J.
Condition (a) means that at a first-order locally consistent economic equilibrium perceived prices are equal to the true ones, while condition (b) means that the slopes of the perceived demand curves are equal to the slopes of the true demand curves.
We have
Theorem 3.32. If assumptions A1-A4 hold and c D_{y_j} p(y) c^T ≤ 0 for every y ∈ Y and every c ∈ R^n, then the economy E has a first-order locally consistent economic equilibrium.
Proof. If we set S_j = Y_j, x_j = y_j and H_j(y_j, y) = p*_j(y_j, y) y_j, j ∈ J, the economy E reduces to the game Γ introduced in the first subsection of this section. Under assumptions A1-A4, p(y) is C^1, and this game clearly has a first-order locally consistent equilibrium (x*_j)_{j∈J}. We set y*_j = x*_j, j ∈ J. In order to prove the theorem it is sufficient to prove that y* = (y*_j)_{j∈J} satisfies the condition in Definition 3.10.
To this end, note that since y* is a first-order locally consistent equilibrium, y*_j maximizes the linearized payoff over Y_j, that is,
[p(y*) + y*_j D_{y_j} p(y*)](y_j − y*_j) ≤ 0,
or
p(y*)(y_j − y*_j) ≤ y*_j D_{y_j} p(y*)(y*_j − y_j), for y_j ∈ Y_j.
We prove the assertion if we show that y* satisfies the following condition
p(y*) y*_j ≥ p(y*) y_j + (y_j − y*_j) D_{y_j} p(y*)^T y_j, for y_j ∈ Y_j,
that is,
[p(y*) + y_j D_{y_j} p(y*)](y_j − y*_j) ≤ 0, for y_j ∈ Y_j.
From the first member of the last relationship, taking into account the previous relationship, one obtains
p(y*)(y_j − y*_j) + y_j D_{y_j} p(y*)(y_j − y*_j) ≤ y*_j D_{y_j} p(y*)(y*_j − y_j) + y_j D_{y_j} p(y*)(y_j − y*_j) = (y*_j − y_j) D_{y_j} p(y*)(y*_j − y_j) ≤ 0, for y_j ∈ Y_j,
by assumption. This ends the proof.
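The closing step of the proof uses only that the quadratic form c ↦ c D_{y_j} p(y) c^T is nonpositive. A quick numerical sanity check with an assumed negative semidefinite Jacobian (all data below are assumptions):

# Check of the final inequality (y*_j - y_j) D_{y_j} p(y*) (y*_j - y_j)^T <= 0
# when the Jacobian is negative semidefinite.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Dp = -A @ A.T                      # negative semidefinite by construction

for _ in range(5):
    c = rng.standard_normal(3)     # plays the role of y*_j - y_j
    print(c @ Dp @ c <= 1e-12)     # always True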
3.5 Existence of equilibrium in generalized games with non-convex strategy spaces
3.5.1 Introduction
The generalized game concept (or abstract economy) extends the notion of Nash non-cooperative game, in which each player's strategy set depends on the choices of all the other players. This concept was introduced by Debreu, who proved the existence of equilibrium in generalized games under general assumptions. Arrow and Debreu applied this result to obtain the existence of competitive equilibrium by considering convex strategy subsets of a finite dimensional space and a finite number of agents with continuous quasi-concave utility functions. Since then, the Arrow-Debreu result has been extended in several directions by assuming weaker hypotheses on strategy spaces, agent preferences, and so on. We mention the works of Gale and Mas-Colell, who consider preference relations which are not necessarily transitive or complete; Shafer and Sonnenschein and Border, who modify the continuity conditions on the constraint and preference multivalued mappings; as well as Borglin and Keiding, Yannelis and Prabhakar, and Tarafdar, who consider infinite dimensional strategy spaces or an infinite number of agents. Most of these existence theorems are proven by assuming convexity conditions on the strategy spaces as well as on the constraint multivalued mapping, which allow one to apply well-known fixed point theorems, such as those of Brouwer, Kakutani or Browder. The purpose of this section is to present generalizations of some of these results on the existence of equilibrium in generalized games by relaxing the convexity conditions. In order to do that, we make use of a new abstract convexity notion, called mc-spaces, which generalizes usual convexity as well as other abstract convexity structures. These results cover situations in which neither strategy spaces nor preferences are convex.
3.5.2 Abstract convexity
This subsection is devoted to introducing the new notion of abstract convexity, mc-spaces, which will be used throughout the section. Formally, an abstract convexity on a set X is a family C = {C_i}_{i∈I} of subsets of X stable under arbitrary intersections, that is, ∩_{i∈J} C_i ∈ C for all J ⊆ I, and containing the empty set and the total set, ∅, X ∈ C. The notion of mc-spaces is based on the idea of replacing the linear segments which join any pair of points (or the convex hull of a finite set of points) in the usual convexity by a path (respectively, a set) that plays its role.
Definition 3.11. A topological space X is an mc-space, or has an mc-structure, if for any nonempty finite subset A of X there exist an ordering on it, namely A = {a_0, a_1, ..., a_n}, a set of elements b_0, b_1, ..., b_n ∈ X (not necessarily different), and a family of functions
P_i^A : X × [0, 1] → X, i = 0, 1, ..., n,
such that
1. P_i^A(x, 0) = x, P_i^A(x, 1) = b_i, for all x ∈ X.
2. The function G^A : [0, 1]^n → X given by
G^A(t_0, t_1, ..., t_{n-1}) = P_0^A(...(P_{n-1}^A(P_n^A(b_n, 1), t_{n-1}), ...), t_0)
is continuous.
Remark 3.23. Note that if P_i^A(x, t) is continuous in t, then P_i^A(x, [0, 1]) represents a continuous path which joins x and b_i. These paths depend, in some sense, on the points which are considered, as well as on the finite subset which contains them. Thus, the function G^A can be interpreted as follows: P_{n-1}^A(b_n, t_{n-1}) = y_{n-1} represents a point of the path which joins b_n with b_{n-1}; P_{n-2}^A(y_{n-1}, t_{n-2}) = y_{n-2} is a point in the path which joins y_{n-1} with b_{n-2}, and so on. So G^A can be seen as a composition of these paths and can be considered as an abstract convex combination of the finite set A.
Given an mc-structure, it is possible to define an abstract convexity by means of the family of those sets which are stable under the function G^A. In order to define this convexity, we need some previous concepts.
Definition 3.12. If X is an mc-space, Z a subset of X, and ⟨X⟩ denotes the family of nonempty finite subsets of X, then for every A ∈ ⟨X⟩ such that A ∩ Z ≠ ∅, with A ∩ Z = {a_{i_0}, a_{i_1}, ..., a_{i_m}} (i_0 < i_1 < ... < i_m), we define the restriction of the function G^A to Z as follows:
G^{A|Z} : [0, 1]^m → X,
G^{A|Z}(t) = P^A_{i_0}(...(P^A_{i_{m-1}}(P^A_{i_m}(b_{i_m}, 1), t_{i_{m-1}}), ...), t_{i_0}),
where the P^A_{i_k} are the functions associated with the elements a_{i_k} ∈ A ∩ Z.
By making use of this notion, we can define mc-sets, which generalize usual convex sets.
Definition 3.13. A subset Z of an mc-space X is an mc-set if and only if
for all A ∈ ⟨X⟩ with A ∩ Z ≠ ∅, G^{A|Z}([0, 1]^m) ⊆ Z,
where m = |A ∩ Z| − 1.
Since the family of mc-sets is stable under arbitrary intersections, it defines an abstract convexity on X. Furthermore, we can define the mc-hull operator in the usual way:
C_mc(Z) = ∩ {B : Z ⊆ B, B is an mc-set}.
Then it is obvious that
for all A ∈ ⟨X⟩ with A ∩ Z ≠ ∅, G^{A|Z}([0, 1]^m) ⊆ C_mc(Z).
Remark 3.24. If X is a convex subset of a topological vector space, then for any finite subset {a_0, a_1, ..., a_n} we can define the functions P_i^A(x, t) = (1 − t)x + t a_i, which represent the segment joining a_i and x as t runs over [0, 1]. In this case, the image of the composition G^A([0, 1]^n) coincides with the convex hull of A, so mc-sets generalize convex sets. Other abstract convexity structures which are generalized by the notion of mc-structure are simplicial convexities, c-spaces or H-spaces, G-convex spaces, and so on.
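To make the composition G^A concrete, the sketch below implements the convex-space special case described in Remark 3.24, where P_i^A(x, t) = (1 − t)x + t a_i and b_i = a_i; the triangle vertices and the sampling grid are assumptions chosen for the illustration.

# G^A in the convex case of Remark 3.24:
#   G^A(t_0, ..., t_{n-1}) = P_0(... P_{n-1}(P_n(a_n, 1), t_{n-1}) ..., t_0),
# with P_i(x, t) = (1 - t) x + t a_i.  Its image lies in the convex hull of A.
import numpy as np
from itertools import product

def G_A(ts, A):
    """Compose the paths of Definition 3.11 for the finite set A (convex case)."""
    x = A[-1]                              # P_n(a_n, 1) = a_n
    for t, a in zip(reversed(ts), reversed(A[:-1])):
        x = (1.0 - t) * x + t * a          # P_i(x, t_i)
    return x

A = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1)]]   # assumed triangle
grid = np.linspace(0.0, 1.0, 5)
pts = [G_A(ts, A) for ts in product(grid, repeat=len(A) - 1)]

# Every image point lies in the triangle x >= 0, y >= 0, x + y <= 1 (the convex hull of A).
print(all(p[0] >= -1e-12 and p[1] >= -1e-12 and p.sum() <= 1 + 1e-12 for p in pts))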
It is important to point out that in some applications the space is required to satisfy local properties, so we also introduce the notion of local convexity in the context of mc-spaces.
Definition 3.14. A metric mc-space (X, d) is a locally mc-space if and only if for all ε > 0, B(K, ε) = {x ∈ X : d(x, K) < ε} is an mc-set whenever K is an mc-set.
It is not hard to prove that the product of mc-spaces is an mc-space, and that the product of a countable quantity of locally mc-spaces is also a locally mc-space.
Next, the notions of KF-multivalued mapping and KF-majorized multivalued mapping, introduced by Borglin and Keiding, are defined in the context of mc-spaces.
Definition 3.15. If X is an mc-space, then an mc-set valued multivalued mapping A : X → 2^X is a KF*-multivalued mapping if for all x ∈ X, A^{-1}(x) is open and x ∉ A(x). A multivalued mapping P : X → 2^X is called KF*-majorized if there is a KF*-multivalued mapping A : X → 2^X (a majorant) such that P(x) ⊆ A(x) for all x ∈ X.
The local version of KF*-majorization is defined as follows.
Definition 3.16. If X is an mc-space, then a multivalued mapping A : X → 2^X is a locally KF*-majorized multivalued mapping if for all x ∈ X such that A(x) ≠ ∅ there exist an open neighborhood V_x of x and a KF*-multivalued mapping A_x : X → 2^X such that
A(z) ⊆ A_x(z), for all z ∈ V_x. □
3.5.3 Fixed point results
We present now some fixed point results which will be applied to prove the existence of equilibrium in generalized games. The following lemma of Llinares states the existence of a continuous selection, with a fixed point, of the mc-hull of a multivalued mapping defined on an mc-space.
Lemma 3.4. Let X be a compact topological mc-space and F : X → 2^X a nonempty-valued multivalued mapping such that whenever y ∈ F^{-1}(x) there exists some x' ∈ X with y ∈ int F^{-1}(x'). Then there exist a nonempty finite subset A of X and a continuous function f : X → X satisfying:
1. (∃) x* ∈ X such that x* = f(x*);
2. (∀) x ∈ X, f(x) ∈ G^{A|F(x)}([0, 1]^m).
The next result is an extension of Browder's theorem. The proof is obtained immediately by applying Lemma 3.4.
Theorem 3.33. If X is a compact topological mc-space and F : X → 2^X is a multivalued mapping with open inverse images and nonempty mc-set values, then F has a continuous selection and a fixed point.
A consequence of Theorem 3.33 is that any KF*-multivalued mapping defined from a compact topological mc-space into itself has a point with empty image. In the context of binary relations, the existence of points with empty images in the multivalued mapping of upper contour sets is equivalent to the existence of a maximal element (it is enough to consider F(x) as the set of alternatives better than x).
Corollary 3.5. If X is a compact topological mc-space and F : X → 2^X is a KF*-multivalued mapping, then there exists x* ∈ X such that F(x*) = ∅.
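In the maximal-element reading just described, if P(x) = {y : u(y) > u(x)} is the strict upper contour map of a continuous function u on a compact set, then P(x*) is empty exactly at a maximiser x* of u. The discretised sketch below (the utility and the grid are assumptions) only illustrates this reading; it does not verify the mc-set hypotheses of Corollary 3.5.

# Maximal-element reading of Corollary 3.5 on a discretised X = [0, 1].
import numpy as np

grid = np.linspace(0.0, 1.0, 101)          # discretisation of X
u = lambda x: x * (1.0 - x)                # assumed continuous utility

def P(x):
    """Strict upper contour set of x on the grid."""
    return grid[u(grid) > u(x)]

empty_at = [float(x) for x in grid if P(x).size == 0]
print(empty_at)                            # [0.5]: the maximiser of u on the grid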
In order to extend the previous result to locally KF*-majorized multivalued mappings, we first present the following lemma.
Lemma 3.5. If X is a compact topological mc-space and P : X → 2^X is a locally KF*-majorized multivalued mapping, then there exists a KF*-multivalued mapping φ : X → 2^X such that P(x) ⊆ φ(x) for all x ∈ X.
Proof. Consider D = {x ∈ X : P(x) ≠ ∅} and, for each x ∈ D, choose a KF*-multivalued mapping φ_x majorant of P at x and an open neighborhood G_x of x. The set G = ∪_{x∈D} G_x is paracompact, so the open covering {G_x}_{x∈D} of G has a closed locally finite refinement {G'_x}.
For each x ∈ G define the set
J(x) = {x_i : x ∈ G'_{x_i}},
and the following multivalued mapping:
φ(x) = ∩_{x_i∈J(x)} φ_{x_i}(x), if x ∈ G, and φ(x) = ∅, if x ∉ G.
Now we are going to see that the multivalued mapping φ is the required KF*-multivalued mapping. It is clear that φ has no fixed point, since the φ_{x_i} are KF*-multivalued mappings; φ has mc-set values by construction; and it satisfies P(x) ⊆ φ(x) for all x ∈ X. Finally, to see that φ has open lower sections, consider
x ∈ φ^{-1}(y), that is, y ∈ φ(x) = ∩_{x_i∈J(x)} φ_{x_i}(x),
so that y ∈ φ_{x_i}(x) for every x_i ∈ J(x).
Since the φ_{x_i} are KF*-multivalued mappings, they have open lower sections, so for each x_i ∈ J(x) there exists an open neighborhood V^i_x of x such that
V^i_x ⊆ φ_{x_i}^{-1}(y), for every x_i ∈ J(x).
By considering V'_x = ∩_{x_i∈J(x)} V^i_x, V'_x is an open neighborhood of x, since J(x) is finite. Moreover,
x ∉ ∪_{x_i∉J(x)} G'_{x_i}
(which is a closed set, since {G'_{x_i}} is a locally finite refinement by closed sets). Therefore, there exists an open set V*_x containing x such that
V*_x ∩ [∪_{x_i∉J(x)} G'_{x_i}] = ∅,
so J(w) ⊆ J(x) for each w ∈ V*_x. But then,
V'_x ∩ V*_x = V_x ⊆ φ_{x_i}^{-1}(y), for every x_i ∈ J(x),
and
y ∈ φ_{x_i}(w), for every w ∈ V_x and every x_i ∈ J(x),
that is,
y ∈ ∩_{x_i∈J(x)} φ_{x_i}(w) ⊆ ∩_{x_i∈J(w)} φ_{x_i}(w) = φ(w), for every w ∈ V_x;
therefore
V_x ⊆ φ^{-1}(y),
and we conclude that φ has open lower sections. □
As a consequence of Lemma 3.5 we now state the extension of Corollary 3.5 to locally KF*-majorized multivalued mappings.
Theorem 3.34. If X is a compact topological mc-space and P : X → 2^X is a locally KF*-majorized multivalued mapping, then there exists x* ∈ X such that P(x*) = ∅.
3.5.4 Existence of equilibrium
In this subsection we analyze the existence of equilibrium for generalized games in the context of mc-spaces, by considering conditions similar to those of Borglin and Keiding and of Tulcea. We use the usual notation for generalized games. The first result is a version of Borglin and Keiding's result in the context of mc-spaces.
Lemma 3.6. If X is a compact topological mc-space, B : X → 2^X a nonempty mc-set valued multivalued mapping such that B^{-1}(x) is open for all x ∈ X, and P : X → 2^X a locally KF*-majorized multivalued mapping, then there exists x* ∈ X such that
x* ∈ B(x*) and B(x*) ∩ P(x*) = ∅.
Proof. From Lemma 3.5, without loss of generality we can assume that the multivalued mapping P is a KF*-multivalued mapping. Define the multivalued mapping φ : X → 2^X by
φ(x) = B(x), if x ∉ B(x), and φ(x) = B(x) ∩ P(x), if x ∈ B(x).
In order to see that the multivalued mapping φ is a KF*-multivalued mapping, consider x ∈ X such that φ(x) ≠ ∅ (if φ(x) = ∅ for some x, we have the conclusion). It is easy to see that φ has no fixed points and has mc-set values. To see that it has open lower sections, consider x ∈ φ^{-1}(y), that is, y ∈ φ(x).
On the one hand, if x ∉ B(x), then it is possible to choose a neighborhood W_x of x such that
z ∉ B(z), for every z ∈ W_x.
Moreover, since y ∈ φ(x) = B(x), that is, x ∈ B^{-1}(y), which is open, there exists an open set V_x containing x such that V_x ⊆ B^{-1}(y). If we take U = W_x ∩ V_x, then U ⊆ φ^{-1}(y).
On the other hand, if x ∈ B(x), then y ∈ φ(x) = B(x) ∩ P(x), so
x ∈ B^{-1}(y) ∩ P^{-1}(y),
which are open sets; therefore, there exists an open set V_x containing x such that
V_x ⊆ B^{-1}(y) ∩ P^{-1}(y) ⊆ φ^{-1}(y).
So the multivalued mapping φ is a KF*-multivalued mapping, and by applying Corollary 3.5 we have the conclusion. □
The next result shows that the previous lemma remains valid for a generalized game with a finite number of agents.
Lemma 3.7. If for each i = 1, 2, ..., n, X_i is a compact topological mc-space, X = ∏_{i=1}^n X_i, B_i : X → 2^{X_i} is a nonempty mc-set valued multivalued mapping with open lower sections, and P_i : X → 2^{X_i} is a locally KF*-majorized multivalued mapping, then there exists x* ∈ X such that
x*_i ∈ B_i(x*) and B_i(x*) ∩ P_i(x*) = ∅, i = 1, 2, ..., n.
Proof. Consider the multivalued mapping B : X → 2^X defined as follows:
y ∈ B(x) if and only if y_i ∈ B_i(x), i = 1, 2, ..., n,
that is, B(x) = ∏_{i=1}^n B_i(x). So the multivalued mapping B has nonempty mc-set values and open lower sections.
From Lemma 3.5, without loss of generality we can assume that the multivalued mappings P_i are KF*-multivalued mappings. Moreover, for each i = 1, 2, ..., n, we define the following multivalued mappings:
a) P*_i : X → 2^X such that y ∈ P*_i(x) if and only if y_i ∈ P_i(x);
b) P : X → 2^X in the following way:
P(x) = ∩_{i∈I(x)} P*_i(x), if I(x) ≠ ∅, and P(x) = ∅, if I(x) = ∅,
where I(x) = {i ∈ I : B_i(x) ∩ P_i(x) ≠ ∅}.
Next, we are going to see that P is locally KF*-majorized. To do that, consider x ∈ X such that P(x) ≠ ∅; then there exists i_0 ∈ I(x) such that B_{i_0}(x) ∩ P_{i_0}(x) ≠ ∅. Since the set
{x ∈ X : B_{i_0}(x) ∩ P_{i_0}(x) ≠ ∅}
is open, there exists a neighborhood W of x such that B_{i_0}(z) ∩ P_{i_0}(z) ≠ ∅ for every z ∈ W, that is, i_0 ∈ I(z), so
P(z) = ∩_{i∈I(z)} P*_i(z) ⊆ P*_{i_0}(z), for every z ∈ W.
Moreover, since P_{i_0} is a KF*-multivalued mapping, P*_{i_0} is a KF*-multivalued mapping, and therefore the multivalued mapping P is locally majorized by P*_{i_0}. Therefore, by applying the previous lemma to the multivalued mappings B and P we obtain the conclusion (note that x* ∈ B(x*) together with B(x*) ∩ P(x*) = ∅ forces I(x*) = ∅, which is precisely the desired conclusion). □
In order to analyze the existence of equilibrium with a countable number of agents, we use the following approximation result.
Lemma 3.8. Let X be a compact topological metric space and Y a locally mc-space. If F : X → 2^Y is an upper hemicontinuous multivalued mapping with mc-set values, then for every ε > 0 there exists an mc-set valued multivalued mapping H_ε : X → 2^Y with open graph such that
Gr(F) ⊆ Gr(H_ε) ⊆ B(Gr(F), ε).
Proof. Since the multivalued mapping F is upper hemicontinuous, we know that for every ε > 0 and every x ∈ X there exists 0 < δ(x) < ε such that
F(z) ⊆ B(F(x), ε/2), for every z ∈ B(x, δ(x)).
So the family {B(x, δ(x)/2)}_{x∈X} is an open covering of X, which is compact; thus there exists a finite subcovering {B(x_i, δ(x_i)/2)}_{i=1}^n. Consider δ_i = δ(x_i)/2, define for all x ∈ X the set I(x) = {i : x ∈ B̄(x_i, δ_i)}, and consider the following multivalued mapping:
H_ε(x) = ∩_{i∈I(x)} B(F(x_i), ε/2).
It is clear that H_ε is mc-set valued; moreover, it has open graph. Indeed, for every x we have x ∉ ∪_{i∉I(x)} B̄(x_i, δ_i), which is a closed set; therefore, there exists ρ > 0 such that B(x, ρ) ∩ [∪_{i∉I(x)} B̄(x_i, δ_i)] = ∅, so I(z) ⊆ I(x) for all z ∈ B(x, ρ) and
H_ε(x) = ∩_{i∈I(x)} B(F(x_i), ε/2) ⊆ ∩_{i∈I(z)} B(F(x_i), ε/2) = H_ε(z),
and H_ε(x) is open because it is a finite intersection of open sets.
Moreover, since
H_ε(x) ⊆ H_ε(z) for every (z, y) ∈ B(x, ρ) × H_ε(x),
we have B(x, ρ) × H_ε(x) ⊆ Gr(H_ε); that is, Gr(H_ε) is open.
Furthermore, Gr(F) ⊆ Gr(H_ε) ⊆ B(Gr(F), ε), since for all x ∈ X,
F(x) ⊆ B(F(x_i), ε/2), for each i ∈ I(x);
therefore
F(x) ⊆ ∩_{i∈I(x)} B(F(x_i), ε/2) = H_ε(x);
thus Gr(F) ⊆ Gr(H_ε), and it is easy to see that Gr(H_ε) ⊆ B(Gr(F), ε). □
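The construction of H_ε can be visualised in one dimension: cover X by finitely many balls and, at each point, intersect the ε/2-enlargements of the images at the ball centres containing that point. The sketch below is a rough one-dimensional analogue only (intervals play the role of mc-sets); the map F, the grid of centres, δ and ε are all assumptions.

# Rough 1-D sketch of the H_eps construction of Lemma 3.8; F maps [0, 1] into
# closed intervals represented as (lo, hi) pairs.
import numpy as np

F = lambda x: (x, x + 0.5)                      # assumed interval-valued map
eps = 0.2
centres = np.linspace(0.0, 1.0, 11)             # x_i of a finite subcover
delta = 0.05                                    # delta_i = delta(x_i)/2, chosen small

def H_eps(x):
    """Intersection of the eps/2-enlargements B(F(x_i), eps/2) over the balls containing x."""
    los, his = [], []
    for xi in centres:
        if abs(x - xi) <= delta:                # i belongs to I(x)
            lo, hi = F(xi)
            los.append(lo - eps / 2.0)
            his.append(hi + eps / 2.0)
    return max(los), min(his)                   # intersection of finitely many intervals

x = 0.33
lo, hi = H_eps(x)
flo, fhi = F(x)
print(lo <= flo and fhi <= hi)                  # F(x) is contained in H_eps(x): True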
Theorem 3.35. If X is a compact locally mc-space, B : X → 2^X is a nonempty mc-set valued multivalued mapping with closed graph, P : X → 2^X is a locally KF*-majorized multivalued mapping, and the set {x ∈ X : B(x) ∩ P(x) = ∅} is closed in X, then there exists x* ∈ X such that
x* ∈ B(x*) and B(x*) ∩ P(x*) = ∅.
Proof. By applying Lemma 3.8, we have that for every ε > 0 there exists H_ε such that
Gr(B) ⊆ Gr(H_ε) ⊆ B(Gr(B), ε),
where H_ε is an open graph multivalued mapping whose values are mc-sets. If we consider (X, H_ε, P) and apply Lemma 3.6, we can ensure that there exists an element x_ε such that
x_ε ∈ H_ε(x_ε) and H_ε(x_ε) ∩ P(x_ε) = ∅.
Let {ε_n} be a sequence converging to 0; by reasoning as above we obtain a sequence {x_{ε_n}}_{n∈N} such that
for every n ∈ N, [B(x_{ε_n}) ∩ P(x_{ε_n})] ⊆ [H_{ε_n}(x_{ε_n}) ∩ P(x_{ε_n})] = ∅,
and since this sequence belongs to a compact set, due to
x_{ε_n} ∈ {x ∈ X : B(x) ∩ P(x) = ∅},
there exists a subsequence converging to a point x*, which is an element of this set since it is closed.
In order to prove that x* is a fixed point of B, note that for every n ∈ N,
(x_{ε_n}, x_{ε_n}) ∈ Gr(H_{ε_n}) ⊆ B(Gr(B), ε_n),
and since Gr(B) is a compact set, (x_{ε_n}, x_{ε_n}) converges to (x*, x*) ∈ Gr(B). □
Next, a result on the existence of equilibrium in generalized games with a countable number of agents is presented.
Theorem 3.36. Let Γ = (S_i, B_i, P_i)_{i∈I} be a generalized game such that I is a countable set of indexes and, for each i ∈ I, the following is satisfied: S_i is a nonempty compact locally mc-space; B_i is a closed graph multivalued mapping such that B_i(x) is a nonempty mc-set for every x ∈ X (where X = ∏_{i∈I} S_i); P_i is a locally KF*-majorized multivalued mapping; and the set {x ∈ X : B_i(x) ∩ P_i(x) = ∅} is closed in X. Then there exists an equilibrium for the generalized game.
Proof. Consider the multivalued mapping B : X → 2^X defined as follows:
y ∈ B(x) if and only if y_i ∈ B_i(x), for every i ∈ I,
that is, B(x) = ∏_{i∈I} B_i(x).
So the multivalued mapping B has closed graph and nonempty mc-set values. Moreover, for each i ∈ I we define the following multivalued mappings:
a) P*_i : X → 2^X such that y ∈ P*_i(x) if and only if y_i ∈ P_i(x);
b) P : X → 2^X in the following way:
P(x) = ∩_{i∈I(x)} P*_i(x), if I(x) ≠ ∅, and P(x) = ∅, if I(x) = ∅,
where I(x) = {i ∈ I : B_i(x) ∩ P_i(x) ≠ ∅}.
In order to see that P is locally KF*-majorized, consider x ∈ X such that P(x) ≠ ∅; then there exists i_0 ∈ I(x) such that B_{i_0}(x) ∩ P_{i_0}(x) ≠ ∅. Since the set
{x ∈ X : B_{i_0}(x) ∩ P_{i_0}(x) ≠ ∅}
is open, there exists a neighborhood W of x such that B_{i_0}(z) ∩ P_{i_0}(z) ≠ ∅ for every z ∈ W, that is, i_0 ∈ I(z), so
P(z) = ∩_{i∈I(z)} P*_i(z) ⊆ P*_{i_0}(z), for every z ∈ W.
Moreover, from Lemma 3.5 and without loss of generality, we can assume that the multivalued mapping P_{i_0} is a KF*-multivalued mapping, so P*_{i_0} is the KF*-multivalued mapping which majorizes P.
Finally, we show that the set {x ∈ X : B(x) ∩ P(x) = ∅} is closed. For each i ∈ I we define the following multivalued mapping Q_i : X → 2^{S_i}:
Q_i(x) = B_i(x) ∩ P_i(x), if i ∈ I(x), and Q_i(x) = B_i(x), if i ∉ I(x).
It is clear that
B(x) ∩ P(x) = ∏_{i∈I} Q_i(x), if I(x) ≠ ∅, and B(x) ∩ P(x) = ∅, otherwise.
The multivalued mappings Q_i : X → 2^{S_i} have nonempty values; thus B(x) ∩ P(x) = ∅ if and only if I(x) = ∅.
Therefore, we have
{x ∈ X : B(x) ∩ P(x) = ∅} = {x ∈ X : I(x) = ∅} = ∩_{i∈I} {x ∈ X : B_i(x) ∩ P_i(x) = ∅}.
Hence {x ∈ X : B(x) ∩ P(x) = ∅} is closed, because it is the intersection of closed sets. So, by applying the previous theorem, we obtain that there exists an element x* ∈ X such that
x* ∈ B(x*) and B(x*) ∩ P(x*) = ∅,
so I(x*) = ∅ and, finally,
x*_i ∈ B_i(x*) and B_i(x*) ∩ P_i(x*) = ∅, for every i ∈ I. □
3.6 References
1. D'Agata, A., Existence of first-order locally consistent equilibria, Annales d'Economie et de Statistique, 43 (1996), 171-179
2. Agarwal, R.P., O'Regan, D., A note on equilibria for abstract economies, Mathematical and Computer Modelling, 34 (2001), 331-343
3. Aliprantis, C.D., Tourky, R., Yannelis, N.C., Cone conditions in general equilibrium theory, Journal of Economic Theory, 92 (2000), 96-121
4. Aliprantis, C.D., Tourky, R., Yannelis, N.C., The Riesz-Kantorovich formula and general equilibrium theory, Journal of Mathematical Economics, 34 (2000), 55-76
5. Arrow, K.J., Debreu, G., Existence of an equilibrium for a competitive economy, Econometrica, 22 (1954), 265-290
6. Arrow, K.J., Hahn, F., General competitive analysis, Holden-Day, San Francisco, 1971
7. Aubin, J.P., Ekeland, I., Applied nonlinear analysis, John Wiley and Sons, New York, 1984
8. Berge, C., Topological spaces, Macmillan, New York, 1963
9. Bonanno, G., Zeeman, C.E., Limited knowledge of demand and oligopoly equilibria, J. Econom. Theory, 35 (1985), 276-283
10. Border, K.C., Fixed point theorems with applications to economics and game theory, Cambridge University Press, 1985
11. Borglin, A., Keiding, H., Existence of equilibrium actions and of equilibrium: A note on the new existence theorems, J. Math. Econom., 3 (1976), 313-316
12. Browder, F.E., The fixed point theory of multi-valued mappings in topological vector spaces, Math. Annalen, 177 (1968), 283-301
13. Debreu, G., New concepts and techniques for equilibrium analysis, International Economic Review, 3 (1962), 257-273
14. Ding, X., Kim, W., Tan, K., A selection theorem and its applications, Bull. Austral. Math. Soc., 46 (1992), 205-212
15. Gale, D., Mas-Colell, A., An equilibrium existence theorem for a general model without ordered preferences, J. Math. Econom., 2 (1975), 9-15
16. Grandmont, J.M., Temporary general equilibrium theory, Econometrica, 45 (1977), 535-572
17. Himmelberg, C.J., Fixed points of compact multifunctions, J. Math. Anal. Appl., 38 (1972), 205-207
18. Husain, T., Tarafdar, E., A selection and a fixed point theorem and an equilibrium of an abstract economy, Internat. J. Math. and Math. Sci., 18, 1 (1995), 179-184
19. Kakutani, S., A generalization of Brouwer's fixed point theorem, Duke Mathematical Journal, 8 (1941), 416-427
20. Llinares, J.V., Existence of equilibrium in generalized games with non-convex strategy spaces, CEPREMAP, No. 9801 (1998), 1-14
21. Maugeri, A., Time dependent generalized equilibrium problems, Rendiconti del Circolo Matematico di Palermo, 58 (1999), 197-204
22. Muresan, A.S., First-order equilibria for an abstract economy, I, Bul. Stiint. Univ. Baia Mare, Ser. B, Matematica-Informatica, Vol. XIV, 2 (1998), 191-196
23. Muresan, A.S., First-order equilibria for an abstract economy, II, Acta Technica Napocensis, Ser. Applied Mathematics and Mechanics, 41 (1998), 201-204
24. Neuefeind, W., Notes on existence of equilibrium proofs and the boundary behavior of supply, Econometrica, 48 (1980), 1831-1837
25. Nikaido, H., Convex structures and economic theory, Academic Press, New York, 1968
26. Oettli, W., Schlager, D., Generalized vectorial equilibria and generalized monotonicity, in Functional analysis with current applications in science, technology and industry (Aligarh, 1996), 145-154, Pitman Res. Notes Math. Ser., 377, Longman, Harlow, 1998
27. Petrusel, A., Multifunctions and applications, Cluj University Press, Cluj-Napoca, 2002 (in Romanian)
28. Ray, I., On games with identical equilibrium payoffs, Economic Theory, 17 (2001), 223-231
29. Rim, D.I., Kim, W.K., A fixed point theorem and existence of equilibrium for abstract economies, Bull. Austral. Math. Soc., 45 (1992), 385-394
30. Rus, A.I., Generalized contractions and applications, Cluj University Press, Cluj-Napoca, 2001
31. Rus, A.I., Iancu, C., Mathematical modelling, Transilvania Press, Cluj-Napoca, 2000 (in Romanian)
32. Shafer, W.J., Sonnenschein, H., Equilibrium in abstract economies without ordered preferences, J. Math. Econom., 2 (1975), 345-348
33. Tarafdar, E., A fixed point theorem and equilibrium point in abstract economy, J. Math. Econom., 20 (1991), 211-218
34. Tulcea, C.I., On the approximation of upper semi-continuous correspondences and equilibrium of generalized games, J. Math. Anal. Appl., 136 (1988), 267-289
35. Yannelis, N., Prabhakar, N., Existence of maximal elements and equilibria in linear topological spaces, J. Math. Econom., 12 (1983), 233-246