
Introduction to Formal Political Theory

Marcos Menchaca

Version 1.2
Contents
I Game Theory 1
1 Preference Theory and Game Forms 3
1.1 Mathematical Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Modeling Satisfaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Types of Games in Game Theory . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Utility Functions in Game Theory . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Applying Game Theory to Extensive Form Games . . . . . . . . . . . . . . . 6
1.6 The Ordinal Property of Utility Functions . . . . . . . . . . . . . . . . . . . . 7
1.7 Starting to Solve a Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.8 Preferences Over Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2 Extensive Form Games 11
2.1 Actions versus Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Backward Induction and Rollback Equilibrium . . . . . . . . . . . . . . . . . 12
2.3 Multiple Rollback Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 How Many Strategies a Player Can Have . . . . . . . . . . . . . . . . . . . . 17
3 Choice Under Uncertainty 19
3.1 Basic Probability Theory and Expected Utility Theory . . . . . . . . . . . . . 19
3.2 Rollback Equilibrium in Games with Nature Nodes . . . . . . . . . . . . . . 22
3.3 Ex Post Mistake . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4 Efficient Outcomes 27
4.1 Pareto Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Strong Pareto Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5 Dominated Strategies 35
5.1 Dominated Actions in Normal Form Games . . . . . . . . . . . . . . . . . . . 35
5.2 Dominated Strategies in Extensive Form Games . . . . . . . . . . . . . . . . 37
6 Nash Equilibrium 39
6.1 Defining Nash Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.2 Discrete and Finite Players and Choices . . . . . . . . . . . . . . . . . . . . . 40
6.3 Making Sense of Nash Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . 42
6.4 The Connection Between Dominated Strategies and Nash Equilibrium . . . 42
6.5 A Collective Action Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7 Mixed Strategy Nash Equilibrium 45
7.1 Defining Mixed Strategy Nash Equilibrium . . . . . . . . . . . . . . . . . . . 45
7.2 Calculating Mixed Strategy Nash Equilibrium . . . . . . . . . . . . . . . . . 45
7.3 The Existence of a Nash Equilibrium . . . . . . . . . . . . . . . . . . . . . . . 47
7.4 Expected Utility of Playing a Mixed Strategy . . . . . . . . . . . . . . . . . . 48
8 Variables in Game Theory 51
8.1 Extensive Form Games with Variables . . . . . . . . . . . . . . . . . . . . . . 51
8.2 Commitment in Extensive Form Games . . . . . . . . . . . . . . . . . . . . . 52
9 Information Games 55
9.1 Information Sets in Extensive Form Games . . . . . . . . . . . . . . . . . . . 55
9.2 Signaling Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
10 Game Form Transformation 59
10.1 Transforming Extensive Form to Normal Form Games . . . . . . . . . . . . . 59
11 Repeated Games 61
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
11.2 Discounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
11.3 Infinitely Repeated Prisoner's Dilemma . . . . . . . . . . . . . . . . . . . . . 62
11.3.1 One-Shot Deviation Principle . . . . . . . . . . . . . . . . . . . . . . . 63
11.3.2 Tit for Tat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
II Social Choice Theory 67
12 Social Preferences and Voting Models 69
12.1 An Introduction to Social Choice Theory and Its Importance . . . . . . . . . 69
12.2 Individual and Social Preferences for Candidates . . . . . . . . . . . . . . . . 69
12.3 Electoral Regimes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
12.3.1 Majority Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
12.3.2 Plurality Voting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
12.3.3 Borda Count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
12.3.4 Approval Voting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
12.3.5 Single Transferable Vote . . . . . . . . . . . . . . . . . . . . . . . . . . 73
12.4 Sincere and Strategic Voting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
12.5 Agenda Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
12.6 Strong Pareto Efficiency of Candidates . . . . . . . . . . . . . . . . . . . . . 76
13 Spatial Preference Models 79
13.1 Median Voter Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
13.1.1 Means and Medians . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
13.1.2 The Hotelling-Downs Model . . . . . . . . . . . . . . . . . . . . . . . 80
14 Political Power 83
14.1 Shapley-Shubik Power Index . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
III Appendices 85
A Practice Problems 87
A.1 Reversing the Incumbent-Challenger Game . . . . . . . . . . . . . . . . . . . 87
A.2 A Standard Extensive Form Game . . . . . . . . . . . . . . . . . . . . . . . . 87
A.3 A Different Extensive Form Game . . . . . . . . . . . . . . . . . . . . . . . . 87
A.4 A Little More Challenging Extensive Form Game . . . . . . . . . . . . . . . . 88
A.5 The Budget Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.6 A Game with a Nature Node . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
A.7 Another Standard Extensive Form Game . . . . . . . . . . . . . . . . . . . . 90
A.8 Another Different Extensive Form Game . . . . . . . . . . . . . . . . . . . . . 91
A.9 A Little More Challenging Extensive Form Game . . . . . . . . . . . . . . . . 92
A.10 A Normal Form Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
A.11 A Different Normal Form Game . . . . . . . . . . . . . . . . . . . . . . . . . . 92
A.12 Yet Another Normal Form Game . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.13 Repeated Prisoner's Dilemma . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.14 Asymmetric Repeated Prisoner's Dilemma . . . . . . . . . . . . . . . . . . . 94
A.15 A Game with Information Sets . . . . . . . . . . . . . . . . . . . . . . . . . . 94
B Practice Problems Solutions 97
B.1 Reversing the Incumbent-Challenger Game . . . . . . . . . . . . . . . . . . . 97
B.2 A Standard Extensive Form Game . . . . . . . . . . . . . . . . . . . . . . . . 97
B.3 A Different Extensive Form Game . . . . . . . . . . . . . . . . . . . . . . . . 98
B.4 A Little More Challenging Extensive Form Game . . . . . . . . . . . . . . . . 98
B.5 The Budget Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
B.6 A Game with a Nature Node . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
B.7 Another Standard Extensive Form Game . . . . . . . . . . . . . . . . . . . . 102
B.8 Another Different Extensive Form Game . . . . . . . . . . . . . . . . . . . . . 103
B.9 A Little More Challenging Extensive Form Game . . . . . . . . . . . . . . . . 104
B.10 A Normal Form Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
B.11 A Different Normal Form Game . . . . . . . . . . . . . . . . . . . . . . . . . . 105
B.12 Yet Another Normal Form Game . . . . . . . . . . . . . . . . . . . . . . . . . 106
B.13 Repeated Prisoner's Dilemma . . . . . . . . . . . . . . . . . . . . . . . . . . 106
B.14 Asymmetric Repeated Prisoner's Dilemma . . . . . . . . . . . . . . . . . . . 107
B.15 A Game with Information Sets . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Bibliography 108
Introduction
I am writing this book for students who are taking POL SCI 30 at UCLA. However, I
suspect that other people studying formal political theory can benefit from the text as
well. This is a working document, and there will be many revisions. So if you find any
mistakes, or if you think that something is not explained well enough, shoot me an email
(menchaca@ucla.edu) and I will either correct it or explain why it is not a mistake.
There are three main versions of PS 30. Although this book does not follow each
version exactly, it probably helps Barry O'Neill's students the most. Nevertheless, most
of the topics apply to all of the versions of PS 30, so a prudent student will be able to find
the topic the professor is teaching and pick up what she needs. Again, I am always
available by email to answer any questions that you may have.
There are many good game theory and social choice theory books out there. There are
even some that combine both of these topics very well at the graduate level; see McCarty
and Meirowitz (2007). However, there are not many good undergraduate textbooks in
formal political theory, which requires knowledge of both game theory and social choice
theory. Barry and Kathy use Dixit, Skeath, and Reiley (2009), but I find that it is too
wordy and does not have enough worked-out examples, and undergraduates especially
need that type of training. There are other game theory textbooks out there: Kreps (1990),
Morrow (1994), and Gibbons (1992). Probably the most extensive game theory textbook
for undergraduates is Osborne (2004). But that book is so extensive that it is often hard
to find the thing that you are trying to learn about.
Part I
Game Theory
Chapter 1
Preference Theory and Game Forms
1.1 Mathematical Prerequisites
The basic tool for analysis we will use is the set X = {. . .}. A set is undefined in mathematics
and logic, but we all have a basic understanding of what it means. In game theory,
we usually analyze things of discrete value: X = {x_1, x_2, . . . , x_n}. But we can also
analyze sets of continuous values: for example, X = [0, 1]. The choice set, X, is all of the
alternatives that a person can choose from, and we will assume that it is a discrete set.
An economic agent (a mathematical entity that makes decisions) has preferences over the
elements of X; she might prefer x_1 to x_2, for example. We say that her preferences are
complete if, for every x_1 and x_2 in X, either x_1 is preferred to x_2, or x_2 is preferred
to x_1, or both. Preferences are transitive if, for all x_1, x_2, and x_3 in X, whenever x_1
is preferred to x_2 and x_2 is preferred to x_3, then x_1 is preferred to x_3. We say that
preferences are rational if they are complete and transitive.
In game theory, we often have to combine different sets. Suppose that we have two
sets X = {Bob, Fred} and Y = {Amy, Katy}. Then the Cartesian product of these two sets
is (Bob, Amy), (Bob, Katy), (Fred, Amy), and (Fred, Katy). Thus we have the set of all the
possible combinations of elements of the two sets. For those who want it, the general
definition of the Cartesian product of two sets is X × Y = {(x, y) | x ∈ X and y ∈ Y}.
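If it helps to see this computationally, Python's standard library builds Cartesian products directly (this example is mine, not the book's):

```python
from itertools import product

X = ["Bob", "Fred"]
Y = ["Amy", "Katy"]

# The Cartesian product: every (x, y) pair with x from X and y from Y.
pairs = list(product(X, Y))
print(pairs)
# [('Bob', 'Amy'), ('Bob', 'Katy'), ('Fred', 'Amy'), ('Fred', 'Katy')]
```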
1.2 Modeling Satisfaction
Those of you who come from economics should be familiar with utility. In order for
economists to use utility theory, they develop a concept called the utility function.¹ It is
simply a function with a domain over the set of choices or outcomes and a range over the
real numbers, u : X → R, that expresses the level of satisfaction each good gives that
individual. Here, the set X = {x_1, x_2, . . . , x_n} is just the choice or consumption set of the
agent. It's composed of all the possibilities from which the person has to choose. It can
also be multi-variable, where the consumer has to choose different levels of two or more
goods. Hence, it could be the case that the consumption set is composed of two goods
that can have various levels: X = {(x_1, z_1), (x_2, z_2), . . .}. A utility function represents the
preferences of a consumer by giving a number to how much satisfaction he receives from
that good in relation to all of the other goods. A util does not have any significant value on
its own unless it has other utils to which we can compare it. We will model people's
behavior through economic agents. Remember that an agent is a mathematical entity, and
he or she can only do what the utility function induces him or her to do.

¹Utility theory was developed by Jeremy Bentham and John Stuart Mill in order to analyze choice
through deductive reasoning.
The foundation of game theory is the utility function. This is a function, u(·), that has a
domain over alternatives, like an option a versus an option b, and assigns a real number
to each of these options. An economic agent prefers option a to option b if she assigns a
higher number to a in her utility function: u(b) < u(a). If the person is indifferent between
the two goods (he receives the same level of satisfaction from both), then u(a) = u(b). The
formal definition of a utility function is deceptively basic.

A function, u(·), is a utility function if, for any two alternatives a and b, a is
preferred to b if and only if u(a) > u(b).

This seems very simple, and in many ways it is. However, utility theory is what drives
almost all of modern economic thought. We can only represent rational preferences with
a utility function. That means that if preferences are either not complete or not transitive,
then we cannot represent them with a utility function.
Utility functions have several different names in game theory: payoffs, preferences,
values, goals, etc. But they all mean the same thing, and as long as you understand the
discussion above you will not be confused by what we call it. When an agent receives a
higher level of utility, we say that he is better off.
Example (Choosing Dinner): Suppose that our economic agent, Heather, is eating out at
a restaurant and is looking at a menu. She has only three choices: chicken, beef, or fish.
Suppose that she enjoys chicken less than she enjoys beef, and she enjoys beef less than
she enjoys fish. And, being consistent, she enjoys chicken less than fish. Then we can
create a utility function for Heather:

u(chicken) = 1
u(beef) = 2
u(fish) = 3

This expresses Heather's preferences because it orders her levels of satisfaction: u(chicken) <
u(beef) < u(fish). Hence, Heather chooses fish for dinner, since it gives her the most satisfaction.
This may seem like a trivial example, but it's intended to give you a good idea
of what a utility function actually does. It simply represents a phenomenon from real life.
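Heather's decision can be sketched in code: the agent simply picks the alternative her utility function ranks highest (the dictionary representation is mine, not the book's).

```python
# Heather's utility function over her three menu choices.
u = {"chicken": 1, "beef": 2, "fish": 3}

# The agent chooses the alternative with the highest utility number.
choice = max(u, key=u.get)
print(choice)  # fish
```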
1.3 Types of Games in Game Theory
So you might be wondering where the numbers in all of these games come
from. The answer is that they are simply utility numbers constructed from the theories we
have about how people will act in that particular situation. There are two types of games
that you will encounter in game theory: normal form and extensive form.² Normal form
is a simultaneous-move game (where everyone moves at the same time). Extensive form
is when one player moves after another (or at least there is a sequence of moves). Later
on you will learn about repeated games that go on forever. These are not extensive form
games; they are simply normal form games played over and over again. An example of
a normal form game is Figure 1.1. The numbers inside the boxes are the utility numbers
for both players in the game: the blue numbers represent the kicker and the red represent
the goalie. We will always write the first player's utility numbers first (before the comma)
and the second player's second (after the comma). An example of an extensive form game
is that in Figure A.1. The numbers at the end of the tree are the utility numbers for each player.
                    goalie
              left        right
kicker  Left   0,1         1,0
        Right  1,0         0,1

Figure 1.1: A typical normal form game.
1.4 Utility Functions in Game Theory
In game theory, the choices are strategic. So it is not enough to say that a person will be
satisfied at a particular level if she decides on a particular choice. It also depends on what
all of the other players are doing. Before, we stated the utility function as u : X → R,
but now a person's utility depends on what the other people are doing. If there are two
players in the game and both are moving simultaneously, we now state the utility
function as u : X_1 × X_2 → R, where the symbol × stands for the Cartesian product. In
general, if there are n players in the game, then the utility function for a particular person,
say person i, is stated as

u_i : X_1 × X_2 × · · · × X_n → R

So the level of satisfaction for player 1 depends not only on his own actions but also on
the actions of all of the other players.
We can start with a simple example. Suppose that two soccer players are playing a
penalty kick: one goalie and one kicker. Denote the kicker as k and the goalie as g.
The choices for both players are either to go right or left: X_g = {right, left} and X_k =
{Right, Left}. Then the Cartesian product, X_g × X_k, is the set

(right, Right)
(right, Left)
(left, Right)
(left, Left)

And we can define the utility function for the kicker as

u_k(right, Right) = 0
u_k(right, Left) = 1
u_k(left, Right) = 1
u_k(left, Left) = 0

And we can define the utility function for the goalie as

u_g(right, Right) = 1
u_g(right, Left) = 0
u_g(left, Right) = 0
u_g(left, Left) = 1

However, sometimes players don't move simultaneously but sequentially. In these cases,
we must define utilities over strategies and not just actions.

²Normal form games are also sometimes called strategic form games in some textbooks.
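As a concrete sketch (the dictionary encoding is mine, not the book's), the two penalty-kick utility functions above are maps from action profiles, not single actions, to numbers:

```python
# u_k and u_g assign a number to each (goalie action, kicker action) profile.
u_k = {("right", "Right"): 0, ("right", "Left"): 1,
       ("left", "Right"): 1, ("left", "Left"): 0}

# The goalie's utilities from the text happen to equal 1 minus the kicker's.
u_g = {profile: 1 - payoff for profile, payoff in u_k.items()}

# The kicker scores exactly when the goalie guesses the wrong side.
print(u_k[("right", "Left")], u_g[("right", "Left")])  # 1 0
```

Note that u_k and u_g sum to 1 at every profile: the penalty kick is strictly competitive.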
1.5 Applying Game Theory to Extensive Form Games
Learning how to solve sequential move games is one of the simplest things to learn in
game theory, and Kathy will show you how to do this in her second lecture. The solution
concept is called backwards induction (or rollback equilibrium, as Dixit, Skeath and Reiley
like to call it).
For example, suppose that we have a game like that in Figure 1.2. This has the same
actions as the penalty kick game but different payoffs, so this is not the same game.
Player 1 moves first and chooses between l and r. Then player 2 moves and chooses
between L and R. But player 2's choice depends on what player 1 chose first. If player 1
picked l, then player 2 prefers R. If player 1 picked r, then player 2 prefers L. But we can
be more specific than this.
Player 2's most preferred outcome is for player 1 to play l and then he plays R.
His second most preferred outcome is for player 1 to play r and then he picks
L. His third most preferred outcome is for player 1 to play r and then he picks R. His least
preferred outcome is for player 1 to play l and then he picks L. This is expressed in
player 2's utility function. Player 1's most preferred outcome is for her to pick r and then
for player 2 to pick L. Her second most preferred outcome is for her to pick r and then for
player 2 to pick R. Her third most preferred outcome is for her to pick l and for player 2
to pick L. Finally, her least favorite outcome is for her to pick l and then for player 2 to
pick R.

[Game tree: Player 1 chooses l or r. After l, player 2 chooses L (payoffs 1,0) or R (0,4).
After r, player 2 chooses L (4,3) or R (2,2).]
Figure 1.2: An extensive form game.

Using backwards induction, we see that if player 1 chooses l then player 2 will choose
R, and if player 1 chooses r then player 2 will choose L. Knowing what player 2 will do
given player 1's action, player 1 can now see that she has the choice between choosing
l and getting a payoff of 0 or choosing r and getting a payoff of 4. So she chooses r.
1.6 The Ordinal Property of Utility Functions
It is important to understand the ordinal property of utility functions. This means that
the actual numbers of the utility function do not have meaning by themselves.
Consider our example of eating out in a restaurant and having to choose between beef,
chicken, or fish. In Table 1.1, I give four forms of the same ordinal utility function.
The first one, u_1, is simply the one you saw above. But if I change the numbers to
those in u_2, we will have a negative number in the range. Some people might think that
this is significant and means that this person would rather eat nothing than
eat chicken, which is his worst choice. This is because the −10 seems really bad: it
is below zero and the other numbers are very large. But this notion that negative utility
numbers are bad is incorrect. You can see that his preferences have not changed at all from
u_1. We have simply re-scaled his preferences (notice that we have not changed the order
at all). Hence, −10 is just the same as 1 to this person. And we can go even further:
consider the preferences given in u_3. Here, 10 is the best that this person can do, so
it is actually a good number with this utility function. Notice that now −1000 seems
like a very large level of dissatisfaction. However, if we look at the example utility u_4,
we see that −1000 is actually the best that this person can do. So a negative number

          Chicken    Beef     Fish
u_1            1        2        3
u_2          −10       20      100
u_3        −1000     −100       10
u_4       −10000    −9000    −1000

Table 1.1: Different forms for the same ordinal utility function.

does not mean this person is necessarily unsatisfied with that choice. This brings up
a very important point: utility numbers have meaning only in relation to other utility
numbers for that particular economic agent. This is called the ordinal property of utility,
which is in contrast to the cardinal property of some functions. However, we will see
when some cardinal properties may matter to us, particularly when using expected utility
theory and when we solve for the mixed strategy Nash equilibrium of a game.³ But even
then the utility function will maintain some ordinal properties; this will be the case when
multiplying the utility numbers by some constant. Moreover, we do not compare utility
numbers between agents.
1.7 Starting to Solve a Game
Many of the homework and exam questions that Kathy will give you will involve translating a
word problem into game theory. This usually does not require sophisticated mathematical
knowledge but rather sharp analytical skills. Before analyzing any type of game, you
will need to ask yourself three fundamental questions:

1. Who are the players?

2. What are their actions?

3. What are their payoffs?

If you don't know the answer to any one of these questions, then you cannot move
forward in solving the game. However, knowing the answers to all of these questions does
not guarantee that you will be able to solve a game. You may need more information that is
specific to that game type. For example, in extensive form games you will need to know
who moves first, second, third, etc.

³You don't know what mixed strategies are yet because we haven't covered them. But don't worry, we
will get to them soon enough.
1.8 Preferences Over Outcomes
Players do not have preferences over actions. Rather, they have preferences over the
actual outcomes that those actions induce. Different strategies may induce different outcomes.
However, even if two or more outcomes are different, they may seem the same
to one particular player.
[Diagram: Strategies → Actions → Outcome]
Figure 1.3: From strategies to outcomes.
Chapter 2
Extensive Form Games
2.1 Actions versus Strategies
This is one of the hardest parts to learn in beginning game theory. In extensive form
games, actions are not enough to explain what is going on. Thus, we will need to extend
our analysis from the choice set to strategies. An action is a possible choice at
one particular decision node. The formal definition of a strategy is the Cartesian product
of the choices that player i has at all (here, m) of his decision nodes:
S_i = X_i^1 × X_i^2 × · · · × X_i^m. You should always keep the following definition in mind.

A strategy for a player is a complete plan of action for every one of his decision
nodes.

So for extensive form games, the utility function for person i is stated as

u_i : S_1 × S_2 × · · · × S_n → R

You probably don't have a strong understanding of what a strategy is right now, but it is
vitally important that you understand what it means. Strategies are fundamentally different
from actions. The best way to learn is through examples.
As one could guess from plain English, an action is something that a player
does. A strategy is a plan of action; it is constructed before anyone actually does anything.
It is a set of decisions for each of the player's decision nodes. For example, in Figure A.1
we see that "Right" is an action, but "if Up then Right and if Down then Right" is a plan
for each of player 2's decision nodes.
The important word to recognize in the definition of a strategy is "complete." This
means that each player must know what to do at every decision node, even if it is never
reached. Hence, if you write a strategy on a homework or exam, I should be able to tell
what choice each player will make at each decision node that belongs to him or her. If
there is a node for that player at which you do not specify what he or she would do,
then the strategy you wrote down is incomplete and really isn't a strategy at all.
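Since a strategy is one choice from the Cartesian product of a player's per-node choice sets, we can enumerate a player's strategies mechanically. A sketch (the function name is mine), for a player who, as in the Figure A.1 example, has two decision nodes, one reached after Up and one after Down:

```python
from itertools import product

def strategies(choices_at_nodes):
    # One strategy = one choice selected at every one of the player's nodes.
    return list(product(*choices_at_nodes))

# Player 2 has two decision nodes; at each he can go Left or Right.
p2 = strategies([["Left", "Right"], ["Left", "Right"]])
print(len(p2))  # 4
print(p2[1])    # ('Left', 'Right')
```

The four tuples correspond to the four contingent plans "Left if Up, Left if Down", "Left if Up, Right if Down", and so on.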
2.2 Backward Induction and Rollback Equilibrium
Backwards induction is a very intuitive process once you have seen someone else solve a
game for you. But just in case you forgot, here is the method for finding it:

1. At each node directly before the terminal nodes, find the player's optimal action at
that node;

2. Given the results in step 1, find each player's optimal action at the nodes before those
in step 1;

3. Repeat this process until you reach the beginning of the game (the initial node).
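The three steps above amount to a recursion over the game tree. Here is a minimal sketch (the tree encoding and all names are mine, not the book's notation), run on the payoffs of Figure 2.1. Note that it returns only the actions along the equilibrium path; a full rollback equilibrium strategy also specifies the optimal actions off the path (here, Right if Down).

```python
def rollback(node):
    """Return (payoffs, path) for the subtree rooted at node."""
    player, branches = node
    if player is None:              # terminal node: branches is the payoff tuple
        return branches, []
    best = None
    for action, child in branches.items():
        payoffs, path = rollback(child)
        # The mover picks the action maximizing his own payoff (step 1),
        # given what backward induction already found deeper in the tree.
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [(player, action)] + path)
    return best

leaf = lambda *p: (None, p)

# Figure 2.1: player 0 picks Up/Down, then player 1 picks Left/Right.
game = (0, {"Up":   (1, {"Left": leaf(16, 9), "Right": leaf(-3, 4)}),
            "Down": (1, {"Left": leaf(4, 1),  "Right": leaf(0, 12)})})

payoffs, path = rollback(game)
print(payoffs)  # (16, 9)
print(path)     # [(0, 'Up'), (1, 'Left')]
```

This reproduces the rollback outcome of Figure 2.1: player 1 goes Up and player 2 answers with Left.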
The rollback equilibrium is simply a written statement of what you found by solving the game
with backward induction. This method always works for extensive form games without any
information sets.¹ Here is the formal definition:

A rollback equilibrium is a set of strategies, one for each player, that satisfies the backward
induction solution to the extensive form game.
Many students are unsatisfied with the definition of rollback equilibrium because it includes
actions that are not played. They want to specify only the actions that are played in
equilibrium. First of all, notice that rollback equilibrium is defined with strategies. Thus,
writing it without a complete contingent plan of action would not constitute a strategy
and thus would be a violation of the definition. But then one could ask, why are strategies
defined the way they are? The reason we define them that way in game theory is
that we must know what the players would choose at the end in order to figure out
what action a player will choose in the beginning. This is extremely useful in political
science. Remember that in the social sciences we do not only want to explain what does
happen; we also sometimes want to explain what does not happen. An event that
didn't happen is called a counterfactual. For example, the U.S. and the Soviet Union did
not engage in a nuclear war during the Cuban missile crisis. But it is not enough for social
scientists to say that it didn't happen; we want to explain why it didn't happen. In order
to do that, we need to theorize about what would have happened if the U.S. had initiated war
with the Soviets. Explaining how Russia would have responded if the U.S. initiated war is crucial
to the explanation of why there was not any nuclear war at all. Think of the rollback equilibrium
of Figure 2.1 as saying, "player 1 is choosing Up because he is happier when player 2
chooses Left after he chooses Up than he is when he chooses Down and player 2 then
chooses Right."
In game theory, we have technical terms for what does happen and what does not
happen. So we separate the actions that are played from those that are not.
¹All of the games that you will see in Kathy Bawn's version of PS 30 won't have any information sets in
them.
Actions on the equilibrium path are the ones that are played in the rollback equilibrium.
Actions off the equilibrium path are the ones that are not played in the
rollback equilibrium.

So when you hear game theorists talking about actions "off the equilibrium path," you
know they are talking about actions that are not played in the rollback equilibrium. But
these actions are still part of the equilibrium strategy. I show what the equilibrium path
and the off-the-equilibrium path look like in Figure 2.4.
Example: Consider the extensive form game in Figure 2.1, with no story behind it. You
could have a game of this type on your midterm: two players and two actions
per player. Of course the actions and payoffs will be different, but it will be the same
format, and your answers should mimic what I have written below. Suppose that the exam
asks you to write the strategies for each player and then write the rollback equilibrium.
Moreover, suppose that you are also asked to write out your answer in words (this simply
means that you will not receive credit for shading the game tree). Then your answers for
each player's set of strategies should look like the following.
[Game tree: Player 1 chooses Up or Down. After Up, player 2 chooses Left (payoffs 16,9)
or Right (−3,4). After Down, player 2 chooses Left (4,1) or Right (0,12).]
Figure 2.1: An extensive form game. The heavy shaded branches represent an action that
is played in the rollback equilibrium.
Player 1 strategies:
1. Up
2. Down
Player 2 strategies:
1. Left if Up, Left if Down
2. Left if Up, Right if Down
3. Right if Up, Left if Down
4. Right if Up, Right if Down
And the rollback equilibrium will always have one strategy from each player's set of
strategies. So for this particular example, your answer should look like the following:
Rollback equilibrium: (Up; Left if Up, Right if Down)
Example: Consider the game in Figure 2.2. Player 1 moves first and has the choice of
either blue or green. If player 1 chooses blue, then player 2 moves next and has the choice
of either brown or red. If player 2 chooses red, then player 1 moves again and has the
choice of either orange or black.
[Game tree: Player 1 chooses blue or green; green ends the game. After blue, player 2
chooses brown or red; brown ends the game. After red, player 1 chooses orange or black.
The terminal payoffs are 6,30; 11,25; 19,−2; 10,40.]
Figure 2.2: Another extensive form game.
In order to construct a strategy, we have to say what each player will do at every
node. To help you do this, I have developed the following process. First, find all of the
decision nodes for player 1. Then pretend that the nodes and decisions for player 2 aren't
there, as represented in Figure 2.3(a). Then make a list of the Cartesian product of the two
choice sets for player 1. Then note all of the decision nodes for player 2, and pretend that
player 1's decision nodes aren't there, as represented in Figure 2.3(b). Then make a list of
all of the decisions for player 2 at that node. The resulting list should be the following:
Player 1
1. blue, orange if red
2. blue, black if red
3. green, orange if red
4. green, black if red
Player 2
1. brown if blue
2. red if blue
Some people may object to this because, if player 1 chooses green, then the game ends:
player 2 never gets to make any choice at all, and hence player 1 never gets to make
a second choice of either orange or black. They might argue that because the players never
get to those decision nodes, they shouldn't have to specify what they would do there,
because those decisions are never made. However, it actually makes no sense if you don't
include what player 2 plans to do and what player 1 plans to do at his last decision node,
because these choices will impact how player 1 makes his decision between blue and
green. Remember that we are trying to explain why people make decisions and not just
what happens. At any rate, the definition of a strategy requires you to specify a decision
at every node, and you should always follow the definitions exactly in game theory.
[Two copies of the Figure 2.2 game tree: in panel (a) player 2's decision node is faded out,
leaving only player 1's nodes; in panel (b) player 1's decision nodes are faded out, leaving
only player 2's node.]
Figure 2.3: Visualizing how to write strategies.
Example: Sometimes when writing out a strategy there can be ambiguity over which
action goes with which node. For example, consider Figure 2.4. Player 1 has only two
strategies, l and r, and they are without ambiguity. But player 2 has four strategies that
could cause ambiguity: there might be confusion over the node at which the stated actions
are being chosen. When we say that player 2 will move L, do we mean he will move L at
the node on the right or on the left? We must specify at which node he is moving.
Traditionally, we specify a node by saying how the player got to that particular node. So
for player 2 to be able to choose at the node on the left, player 1 has to choose l; we are
not saying that player 1 will choose or would have chosen l, we are simply saying that if
player 1 chooses l then player 2 will choose L. The best way to write out a full strategy in
this case is

L if l, R if r

Notice how I write them very simply for conciseness. Thus, a fully specified strategy
profile for the whole game will look like one of the following:
l; L if l, L if r
l; L if l, R if r
l; R if l, L if r
l; R if l, R if r
r; L if l, L if r
r; L if l, R if r
r; R if l, L if r
r; R if l, R if r
[Figure 2.4 shows a game tree: player 1 chooses l or r; after l, player 2 chooses L (payoff 1,0) or R (payoff 0,4); after r, player 2 chooses L (payoff 4,3) or R (payoff 2,2). The branches r and L-after-r lie on the equilibrium path; the others are off the equilibrium path.]
Figure 2.4: Rollback: (r; R if l, L if r).
Notice that an action that is included in a player's rollback equilibrium is always
shaded in an extensive form game. For games without any information sets, you can
use the following rule to be sure that you always write down the rollback equilibrium
correctly, as in Figure 2.4.

If you shade an action for a player when solving by backward induction, you
should include the action of the shaded branch in that player's rollback equilibrium
strategy.
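Backward induction itself can be sketched as a short recursive procedure. This is a minimal sketch under my own tree representation, using the payoffs of Figure 2.4:

```python
def rollback(node):
    """Solve a game tree by backward induction.
    A terminal node is a payoff tuple; a decision node is a dict with
    'player' (0 or 1), 'name', and 'moves' mapping actions to subtrees."""
    if isinstance(node, tuple):
        return node, {}
    plan, best_payoffs, best_action = {}, None, None
    for action, child in node["moves"].items():
        child_payoffs, subplan = rollback(child)
        plan.update(subplan)
        # Keep the action that maximizes the moving player's own payoff.
        if best_payoffs is None or child_payoffs[node["player"]] > best_payoffs[node["player"]]:
            best_payoffs, best_action = child_payoffs, action
    plan[node["name"]] = best_action
    return best_payoffs, plan

# The game of Figure 2.4
game = {"player": 0, "name": "1", "moves": {
    "l": {"player": 1, "name": "2 after l", "moves": {"L": (1, 0), "R": (0, 4)}},
    "r": {"player": 1, "name": "2 after r", "moves": {"L": (4, 3), "R": (2, 2)}},
}}
payoffs, plan = rollback(game)
print(payoffs)  # (4, 3): the rollback equilibrium is (r; R if l, L if r)
```

The plan records a chosen (shaded) branch at every node, which is exactly what a rollback equilibrium strategy must specify.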
Example: This is the game that we played in class, shown in Figure 2.5. First,
I put a dollar down and ask whether the first person will take it or not: T or N. If the first
person decides N, then I put another dollar down and the second person decides
whether to take the two dollars or not: t or n. If the second person decides n, then
I put another dollar down and the first person decides between T and N again. This
process continues up to seven dollars. We can use backward induction to figure out
the rollback equilibrium of this game. I will cheat and use a form of shorthand to write
it out: (T, T, T, T; t, t, t).
[Figure 2.5 shows the centipede dollar game: players 1 and 2 alternate moves, starting with player 1, at seven decision nodes. Taking at successive nodes yields payoffs (1,0), (0,2), (3,0), (0,4), (5,0), (0,6), and (7,0); if player 1 passes at the last node, the payoffs are (0,7).]
Figure 2.5: The centipede dollar game.
2.3 Multiple Rollback Equilibria
So far, we have been talking about rollback equilibrium in the singular. But there can
also be multiple rollback equilibria. For example, in Figure 2.6(a) player 2
is indifferent between choosing C and D if player 1 chooses a. Thus, as is shown in 2.6(a),
the strategy profile (a; C if a, D if b) is a rollback equilibrium. But notice that, as is shown
in Figure 2.6(b), the strategy profile (b; D if a, D if b) is also a rollback equilibrium. So
there are two rollback equilibria in this game. In any homework or exam, you should
always specify all of the possible rollback equilibria for a game unless you are told to do
otherwise.
[Figure 2.6 shows the same game tree twice: player 1 chooses a or b; after a, player 2 chooses C (payoff 6,2) or D (payoff 3,2); after b, player 2 chooses C (payoff 9,1) or D (payoff 5,4). Panel (a) shades the equilibrium (a; C if a, D if b), and panel (b) shades the equilibrium (b; D if a, D if b).]
Figure 2.6: A game with indifference and multiple rollback equilibria.
2.4 How Many Strategies a Player Can Have
Suppose that a single player has a total of K nodes. At node k there are x_k
choices, and this is true for k = 1, 2, . . . , K. Then the number of strategies that
this player can have is equal to the product of the K terms

x_1 × x_2 × · · · × x_K

Example: In Figure 2.4, player 2 has 2 nodes with 2 actions at each one. Thus, he can have
a total of 2 × 2 = 4 strategies.
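In code this is a one-line product over the nodes (the function name is my own):

```python
from math import prod

def num_strategies(choices_per_node):
    """Number of strategies = product of the number of choices at each node."""
    return prod(choices_per_node)

print(num_strategies([2, 2]))     # player 2 in Figure 2.4: 4 strategies
print(num_strategies([3, 2, 2]))  # a player with nodes of 3, 2, and 2 choices: 12
```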
Chapter 3
Choice Under Uncertainty
3.1 Basic Probability Theory and Expected Utility Theory
Now we will move to the part of the course where outcomes are uncertain, but we
always know the probability of every outcome. In game theory, you will sometimes
encounter nature nodes. Nature is not a player in the sense of having preferences over
outcomes but rather just a tool to describe an event whose uncertain outcomes are
assigned probabilities.
Suppose that there are possible outcomes x_1, x_2, . . . , x_n. A probability
distribution over these outcomes is a set of numbers, p_1, p_2, . . . , p_n, corresponding
to each of the possible events respectively. Probability theory requires that these
numbers satisfy two conditions:

1. 0 ≤ p_i ≤ 1 for all i; and
2. p_1 + p_2 + · · · + p_n = 1.
Now we can analyze how agents make decisions when outcomes are uncertain. The
relevant quantity is the expected value, or weighted average, of the utilities. The general
definition of expected utility when there are n possible outcomes is given by

Eu(x) = p_1 u(x_1) + p_2 u(x_2) + · · · + p_n u(x_n)
You will probably not need to remember this long, cumbersome definition. In most
examples that you will encounter in this class, you will have two cases: x_1 and x_2 with
respective probabilities p_1 and p_2. Since p_1 + p_2 = 1, we have p_1 = 1 - p_2. We then
only need to denote one number, p, so that p_1 = p and p_2 = 1 - p. This simplifies the
analysis, and you should know the following definition for when there are only two
possible outcomes:
1. This can become quite technical, and there are many references in economics. I only give a cursory
introduction here, and you may want to consult some established textbooks if you wish to do a more formal
analysis for advanced study.
If u(x_1) is the utility associated with outcome x_1, which occurs with probability p,
and u(x_2) is the utility associated with outcome x_2, which occurs with probability
(1 - p), then the expected utility is Eu(x) = p u(x_1) + (1 - p) u(x_2).
Example: Suppose that we have the following lottery: I will flip an unbiased coin and
give you $0 if it lands on heads and $1 if it lands on tails. Assume that your utility over
money is linear: u(x) = x.

Event        x_1 = $0    x_2 = $1
Utility      0           1
Probability  0.5         0.5

Then the expected utility from this lottery is

Eu(x) = 0.5(0) + 0.5(1) = 0.5

So your expected ex ante utility is 50 cents. Of course, in reality you will either get $0 or $1
(we call this the ex post utility). But the event has not happened yet, so you can only base
your decision on the probabilities of what might happen.
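The calculation above is easy to mechanize. A small sketch (the function name is my own):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs, probabilities summing to 1."""
    return sum(p * u for p, u in outcomes)

# The fair-coin lottery with linear utility u(x) = x
print(expected_utility([(0.5, 0), (0.5, 1)]))  # 0.5
```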
So why do we care about expected utility? We care because we want to compare
decisions when both choices have outcomes that are uncertain but have probability dis-
tributions over the events. And we also want to compare a decision where an outcome is
uncertain to a decision where an outcome is deterministic (something happens for sure).
Thus, we can define formally how a player uses expected utility.
A player makes an expected utility calculation by doing the following:
1. if a and b are choices both with uncertain outcomes, then a is preferred to b if
and only if Eu (a) > Eu (b); or
2. if a has an uncertain outcome and b has a certain one, then a is preferred to b
if and only if Eu (a) > u (b).
Example: We can also use expected utility in extensive form games. Suppose that we
have a game like the one in Figure 3.1(a), where nature moves after person 1. Then the
expected utility calculations of person 1 are

Eu(a) = 0.5(-5) + 0.5(3) = -1
Eu(b) = 0.5(1) + 0.5(0) = 0.5

Thus, person 1 chooses option b since Eu(a) < Eu(b). Notice that the expected utility of
playing a is negative. This is fine because utility can be positive or negative. However, it
would not be fine if any of the probabilities were negative.
[Figure 3.1 shows two versions of a game in which person 1 chooses a or b and then nature (N) moves u or d. Choosing a yields utility -5 after u and 3 after d; choosing b yields 1 after u and 0 after d. In panel (a) every probability is 0.5; in panel (b) the probabilities after a are 0.3 for u and 0.7 for d.]
Figure 3.1: A game with nature nodes. The N stands for when nature moves.
Negative probabilities are not something that you have to be concerned about now,
because with these types of games you are always given the probabilities. However,
later on you will learn about mixed strategies, where you will have to find the
probabilities yourself. So make sure you keep the following in mind:

Negative expected utility is possible, but negative probabilities are never possible.

This is a very important point. You need to pay attention to it.
Example: Now consider the choices person 1 faces in Figure 3.1(b). The utilities are the
same, but the probabilities when choosing a have changed to a 0.3 chance of going u and
a 0.7 chance of going d. Person 1's expected utility calculation is

Eu(a) = 0.3(-5) + 0.7(3) = 0.6
Eu(b) = 0.5(1) + 0.5(0) = 0.5

Thus, person 1 chooses option a since Eu(a) > Eu(b).
Example: Consider again the situation in Figure 3.1(a), but with a slight change. Now
assume that, instead of choosing u and d each with probability 0.5, nature chooses u
with probability p and d with probability 1 - p at each node. At what value of p is person
1 indifferent between choosing option a and option b? We need to set the two expected
utility functions equal to each other. This gives us an equation in which p is the only
unknown. Then we solve the equation for p:

Eu(a) = Eu(b)
p(-5) + (1 - p)(3) = p(1) + (1 - p)(0)
3 - 8p = p
p = 1/3

So at p = 1/3 person 1 is indifferent between choosing option a and choosing option b.

[Figure 3.2 shows a game with only one uncertain outcome: person 1 chooses a or b; choosing a leads to a nature node where u occurs with probability p (utility 8) and d with probability 1 - p (utility 3); choosing b gives utility 5 for sure.]
Figure 3.2: A game with only one uncertain outcome.

Example: Consider the game in Figure 3.2. At what value of p is this person indifferent
between taking the risk and deciding on the sure thing? We need to calculate

Eu(a) = u(b)
p(8) + (1 - p)(3) = 5
p = 2/5

So at p = 2/5 this person is indifferent between taking the risk and choosing the sure thing.
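Both indifference problems reduce to solving one linear equation, which can be checked in code. A sketch under my own naming, using the payoffs 8, 3, and 5 from Figure 3.2:

```python
def indifference_p(u_up, u_down, u_sure):
    """Solve p*u_up + (1 - p)*u_down = u_sure for p.
    Rearranged: p*(u_up - u_down) = u_sure - u_down."""
    return (u_sure - u_down) / (u_up - u_down)

p = indifference_p(8, 3, 5)
print(p)  # 0.4, i.e., p = 2/5
```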
3.2 Rollback Equilibrium in Games with Nature Nodes
Figure 3.3 presents a game in which player 1 faces uncertainty but player 2 always knows
nature's move after the fact (ex post).
[Figure 3.3 shows a game tree: player 1 chooses A or B. After A, nature (N) chooses G with probability p or W with probability 1 - p, and then player 2 chooses C or D; the payoffs are 8,-1 after (A, G, C), 0,4 after (A, G, D), 6,3 after (A, W, C), and 2,0 after (A, W, D). After B, player 2 chooses C (payoff 5,-2) or D (payoff 4,7).]
Figure 3.3: Nature node.
Notice that player 2 will always choose the strategy (D if G, C if W, D if B) regardless
of anything else. But what is player 1's strategy? To answer this we need to calculate his
expected utility, and we only need to do it for him because he is the only one who faces
uncertainty in the game. Knowing that player 2's strategy is (D if G, C if W, D if B), player
1 calculates

Eu_1(A) = 0p + (1 - p)6 = 6 - 6p

If player 1 chooses B, then player 2 will choose D and player 1 will get a payoff of 4.
Hence, in order for player 1 to choose A it must be the case that

Eu_1(A) > u_1(B)
6 - 6p > 4
1/3 > p
In order to find the rollback equilibrium, we must analyze two cases:

Case 1 When p > 1/3 the rollback equilibrium is (B; D if G, C if W, D if B).
Case 2 When p < 1/3 the rollback equilibrium is (A; D if G, C if W, D if B).
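The case analysis can be verified with a short sketch (my own construction), holding player 2 fixed at (D if G, C if W, D if B) as in Figure 3.3:

```python
def player1_choice(p):
    """Player 1's rollback choice in Figure 3.3, given player 2
    plays (D if G, C if W, D if B)."""
    eu_A = p * 0 + (1 - p) * 6  # A yields 0 after G, 6 after W
    u_B = 4                     # B yields 4, since player 2 picks D
    return "A" if eu_A > u_B else "B"

print(player1_choice(0.2))  # A, since p < 1/3
print(player1_choice(0.5))  # B, since p > 1/3
```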
3.3 Ex Post Mistake
Even if Eu(a) > u(b), in the end it may not actually turn out that the utility from a is
higher than that of b. That is, the ex post utility is not always higher than the ex ante
utility. We can only say that it will be higher on average.
A person commits an ex post mistake if

1. she picked a (which has an uncertain outcome) over b (which has a certain
outcome) because Eu(a) > u(b), but in the end it turned out that the utility
she received from a is lower than the utility she could have received from b;
or
2. she picked b (which has a certain outcome) over a (which has an uncertain
outcome) because Eu(a) < u(b), but in the end it turned out that the utility
she received from b is lower than the utility she could have received from a.
Example: Consider the game in Figure 3.3. What would an ex post mistake be in this
game? Suppose that p = 2/3, which means that we are in case 1. Then player 1 will pick B
over A because Eu_1(A) < u_1(B). But suppose that in the end nature chooses W. Player
1 received a utility of 4 when she could have received a utility of 6 had she known that
nature was going to pick W. But she does not know beforehand, so her best decision was
to pick B. It feels like she made a mistake even though she made the right decision:
if nature chooses W, then player 1 knows after the fact (ex post) that it would have been
a better choice to pick A and receive a utility of 6 instead of 4. But ex ante
(before the fact) she made the best decision. What is the probability that this ex post
mistake happens? It is 1/3, because nature chooses W with probability 1/3 while player 1
always chooses B and receives a utility of 4. The probability of an ex post mistake is
the probability of getting a lower utility than the alternate choice would have given.
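The probability of this ex post mistake can also be checked by simulation. This is a rough illustrative sketch of the Figure 3.3 situation with p = 2/3 (the function name is my own):

```python
import random

def simulate_mistakes(p=2/3, trials=100_000, seed=1):
    """Player 1 picks B (utility 4). An ex post mistake occurs whenever
    nature picks W, since A would then have yielded 6 > 4."""
    rng = random.Random(seed)
    # nature picks W with probability 1 - p
    mistakes = sum(1 for _ in range(trials) if rng.random() > p)
    return mistakes / trials

print(simulate_mistakes())  # roughly 1/3
```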
Example: Now let's change the game a little, to that in Figure 3.4, and again assume that
p = 2/3. The payoffs after B have been changed so that player 1 now receives a utility of
-1 from choosing B. Player 1 will therefore choose A if

Eu_1(A) > -1
6 - 6p > -1
7/6 > p

which is always true, because p is a probability and can never be greater than 1. Hence,
player 1 will always choose A regardless of the probability and can never make an ex post
mistake. This is because he always receives a higher utility by choosing A (either 0 or 6)
than by choosing B. A player can never make an ex post mistake if one of his choices
always gives him a higher utility than the alternate choice.
[Figure 3.4 shows the game of Figure 3.3 with the payoffs after B changed: after B, player 2 now chooses C (payoff -1,7) or D (payoff 4,-2), so player 2 picks C and player 1's payoff from choosing B is -1; the subtree after A is unchanged.]
Figure 3.4: No ex post mistake.

Example: Consider the game in Figure 3.2, and suppose that p = 2/3. Then the person
makes the expected utility calculation

Eu(a) = (2/3)(8) + (1/3)(3) = 6.33

Then this person would choose a over b since 6.33 > 5. But what would an ex post mistake
in this game be? If nature chooses d, then this person could have received a utility of 5 by
picking b but received a utility of 3 instead. This ex post mistake happens with
probability 1/3.
Chapter 4
Efficient Outcomes
4.1 Pareto Optimality
The motivation for Pareto optimality is the question: is the outcome efficient, or is it
wasteful? Sometimes Pareto optimality is called Pareto efficiency; economists make no
distinction between the two. They are one and the same. But before we move on to the
definition of Pareto optimality, we must first define Pareto dominance.

Outcome A Pareto dominates outcome B if each player is at least as well off in A
as in B and there is at least one player who is strictly better off.
To put this into mathematical terms, suppose that we have players i = 1, 2, and each
player can play a strategy, s_i, from his strategy possibility set: s_i ∈ S_i. The utility for
player i depends on the strategies of both players: u_i(s_1, s_2). An outcome induced
by a set of strategies, (s_1, s_2), Pareto dominates another outcome induced by another set
of strategies, (s_1', s_2'), if u_i(s_1, s_2) ≥ u_i(s_1', s_2') for all i and there is at least one i
such that u_i(s_1, s_2) > u_i(s_1', s_2'). Note that this is different from Pareto optimality.
In fact, Pareto optimality uses the definition of Pareto dominance in its own definition.

An outcome is Pareto optimal if it is not Pareto dominated by any other outcome.

Although this definition is stated for only two players, it generalizes readily to more
players. Even in this simplified version, the definition may be hard to follow because of
the notation. You can ignore the formal definitions altogether and just remember the
following one (but you should memorize one definition of Pareto optimality either way).

An outcome is Pareto optimal if no one can be made better off without making
someone else strictly worse off.
Even though I have given you two definitions of Pareto optimality, they are two ways
of saying the same thing and will never contradict each other. Such will be the case for all
of the concepts in this class that have two or more definitions. But how can you use
these definitions when solving problems on your homework or exams? It's trickier than it
may seem. The best way to find out whether an outcome is Pareto optimal is to use the
following rule.

To determine if an outcome is Pareto optimal, you simply ask: is there another
outcome in which someone is made better off without making anyone else worse
off? If the answer is no, then the outcome is Pareto optimal. If the answer is yes,
then the outcome is not Pareto optimal.

Notice that Pareto optimality in general does not imply equality or fairness. It only
asks whether the outcome is efficient, that is, not wasteful. For example, many economists
have pointed out that slavery was Pareto optimal; the slave cannot be made better off
without making the slave owner worse off. But no one will argue that American slavery
was fair.
[Figure 4.1 plots divisions of the cake with c_1 on the horizontal axis and c_2 on the vertical axis, each running from 0 to 1. Point B is at (0.4, 0.5), point A at (0.5, 0.5), and point C at (0.4, 0.6). The green region strictly below the line c_1 + c_2 = 1 is Pareto dominated; the line itself, including its endpoints, is the set of Pareto optimal divisions.]
Figure 4.1: The Pareto optimality set for dividing a cake of size 1.
Example: Suppose that we have a cake of size 1 and we are deciding how much to give
to person 1, c_1, and to person 2, c_2. This is shown in Figure 4.1. Suppose that we decided
to give a proportion of 0.4 to person 1 and a proportion of 0.5 to person 2, which is
represented by point B in Figure 4.1. Then we still have a 0.1 proportion of the cake left
over, so I can make someone better off without making anyone else worse off. I can
give the leftover 0.1 to person 2, so that he has 0.6 of the cake, which is represented by
point C; this does not make person 1 any worse off, so it Pareto dominates point B. Or I
could give the leftover 0.1 to person 1 without making person 2 any worse off, which is
represented by point A. The green area represents all of the points that are Pareto dominated
Player 1
Player 2
C D
c 3,3 1,4
d 4,1 2,2
Figure 4.2: The prisoner's dilemma.
by some other point. Notice that a green point is also Pareto dominated by another green
point above it. The thick black line, including the endpoints, represents the set of points
that are Pareto optimal. This is called the Pareto frontier in economics.
Example: The game in Figure 4.2 is called the prisoner's dilemma. You will learn the
story behind it later, so I will just show you the game here. I will ask the question above
for each of the outcomes in this game.
1. For (1, 4), is there another outcome that makes someone better off without making
anyone else worse off? No. Even though moving to (3, 3) will make player 1 better
off, it makes player 2 worse off. Moving to (4, 1) makes player 1 better off but makes
player 2 worse off. Moving to (2, 2) makes player 1 better off but makes player 2
worse off. Thus, the outcome with utilities (1, 4) is Pareto optimal because in order
to make someone better off we have to make someone else worse off.
2. For (4, 1), is there any other outcome that makes someone better off without making
anyone else worse off? No. Even though we can make player 2 better off by moving
from (4, 1) to (1, 4) player 1 is strictly worse off. Moving to (3, 3) will make player
2 better off but will make player 1 worse off. Moving to (2, 2) will make player 2
better off but will make player 1 worse off. Thus, the outcome with utilities (4, 1)
is Pareto optimal because in order to make someone better off we have to make
someone else worse off.
3. For (3, 3), is there any other outcome that makes someone better off without making
anyone else worse off? No. Even though we can move from (3, 3) to (4, 1) and make
player 1 better off, player 2 is worse off. Moving to (1, 4) makes player 2 better off
but makes player 1 worse off. Moving to (2, 2) makes both players worse off. Thus,
the outcome with utilities (3, 3) is Pareto optimal because in order to make someone
better off we have to make someone else worse off.
4. For (2, 2), is there any other outcome that makes someone better off without making
anyone else worse off? Yes. Moving from (2, 2) to (3, 3) makes both player 1
and player 2 better off. Thus, the outcome with utilities (2, 2) is not Pareto optimal
because we can make someone better off without making anyone else worse off.
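The rule in the shaded box is mechanical enough to put into code. Here is a minimal sketch (function names are my own) applied to the prisoner's dilemma payoffs of Figure 4.2:

```python
def pareto_dominates(a, b):
    """True if payoff profile a Pareto dominates b: no one worse off,
    someone strictly better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def is_pareto_optimal(outcome, outcomes):
    """An outcome is Pareto optimal if no other outcome Pareto dominates it."""
    return not any(pareto_dominates(o, outcome) for o in outcomes)

outcomes = [(3, 3), (1, 4), (4, 1), (2, 2)]  # prisoner's dilemma payoffs
for o in outcomes:
    print(o, is_pareto_optimal(o, outcomes))
# (3, 3), (1, 4), and (4, 1) are Pareto optimal; (2, 2) is not
```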
Example: Now let us change the prisoner's dilemma just a little, as shown in Figure 4.3.
I will go through the same procedure.
1. For (1, 4), is there another outcome that makes someone better off without making
anyone else worse off? No. Even though moving to (3, 2) will make player 1 better
off, it makes player 2 worse off. Moving to (4, 1) makes player 1 better off but makes
player 2 worse off. Moving to (2, 2) makes player 1 better off but makes player 2
worse off. Thus, the outcome with utilities (1, 4) is Pareto optimal because in order
to make someone better off we have to make someone else worse off.
2. For (4, 1), is there any other outcome that makes someone better off without making
anyone else worse off? No. Even though we can make player 2 better off by moving
from (4, 1) to (1, 4) player 1 is strictly worse off. Moving to (3, 2) will make player
2 better off but will make player 1 worse off. Moving to (2, 2) will make player 2
better off but will make player 1 worse off. Thus, the outcome with utilities (4, 1)
is Pareto optimal because in order to make someone better off we have to make
someone else worse off.
3. For (3, 2), is there any other outcome that makes someone better off without making
anyone else worse off? No. Even though we can move from (3, 2) to (4, 1) and make
player 1 better off, player 2 is worse off. Moving to (1, 4) makes player 2 better off
but makes player 1 worse off. Moving to (2, 2) makes player 1 worse off. Thus, the
outcome with utilities (3, 2) is Pareto optimal because in order to make someone
better off we have to make someone else worse off.
4. For (2, 2), is there any other outcome that makes someone better off without making
anyone else worse off? Yes. Moving from (2, 2) to (3, 2) will make player 1 better off
without making player 2 any worse off. Thus, the outcome with utilities (2, 2) is not
Pareto optimal because we can make someone better off without making anyone
else worse off.
Player 1
Player 2
C D
c 3,2 1,4
d 4,1 2,2
Figure 4.3: The prisoner's dilemma revised.
I have not shown you what a Nash equilibrium is in my notes yet. But you will learn
later that (d, D) is the Nash equilibrium of the original prisoner's dilemma game in Figure
4.2. It is, however, Pareto dominated by the outcome induced by (c, C). There is something
to learn from this example. In particular, you should remember:
Example: Consider the extensive form game in Figure 4.4. We can use the same procedure.
1. For (1, 0), is there any other outcome that makes someone better off without making
anyone else worse off? Yes: the outcome with utilities (2, 2) makes both players
better off. Thus, the outcome with utilities (1, 0) is not Pareto optimal.
2. For (0, 4), is there another outcome that makes someone better off without making
anyone else worse off? No. Even though moving from (0, 4) to (4, 3) makes player 1
better off, player 2 is worse off. Moving to (1, 0) makes player 1 better off but makes
player 2 worse off. Moving to (2, 2) makes player 1 better off but makes player 2
worse off. Thus, the outcome with utilities (0, 4) is Pareto optimal because no one
can be made better off without making someone else worse off.
3. For (4, 3), is there another outcome that makes someone better off without making
anyone else worse off? No. Even though moving from (4, 3) to (0, 4) makes player
2 better off, it makes player 1 worse off. Moving to (1, 0) makes both players worse
off. Moving to (2, 2) also makes both players worse off. Thus, the outcome with
utilities (4, 3) is Pareto optimal because no one can be made better off without making
someone else worse off.
4. For (2, 2), is there another outcome that makes someone better off without making
anyone else worse off? Yes. Moving from (2, 2) to (4, 3) makes both players better
off. Thus, the outcome with utilities (2, 2) is not Pareto optimal because someone
can be made better off without making anyone else worse off.
Notice that (2, 2) Pareto dominates (1, 0) but is not Pareto optimal itself. This
highlights that Pareto dominance and Pareto optimality are two different concepts.
[Figure 4.4 shows the game tree of Figure 2.4: player 1 chooses l or r; after l, player 2 chooses L (payoff 1,0) or R (payoff 0,4); after r, player 2 chooses L (payoff 4,3) or R (payoff 2,2).]
Figure 4.4: An extensive form game.
There are many myths floating around about Pareto optimality that I hope the above
examples have shown to be false. First, there is a myth that there can
be only one Pareto optimal outcome. I think people believe this because of the word
optimal in Pareto optimality. But remember that the definition of Pareto optimality
allows for multiple Pareto optimal outcomes. Second, there is a myth that some Pareto
optimal outcomes are better than others. This is also false. As stated above, Pareto
optimality has nothing to do with fairness. So just because one Pareto optimal outcome
might seem more practical than all of the others, that does not make the others any less
Pareto optimal.
4.2 Strong Pareto Optimality
We can rene our denition of Pareto optimality so that it is much stricter. This adjust-
ment of Pareto optimality is dened with strict inequality. You should recognize how
close it is to the denition above.
Outcome A strongly Pareto dominates outcome B if both players are made better
off in A than they are in B.
Again, assume that there are only two players, as in the case above.
An outcome induced by a set of strategies, (s_1, s_2), strongly Pareto dominates another
outcome induced by another set of strategies, (s_1', s_2'), if u_i(s_1, s_2) > u_i(s_1', s_2')
for all i.

An outcome is strongly Pareto optimal if it is not strongly Pareto dominated by
any other outcome.

Or you can use the following equivalent definition.

An outcome is strongly Pareto optimal if all players cannot simultaneously be
made better off.

This definition is usually used in social choice theory; economists usually just use
regular Pareto optimality as defined above. Here is an easy way to determine whether
any outcome is strongly Pareto optimal.

To determine if an outcome is strongly Pareto optimal, you simply ask: can both
players be made better off in another outcome? If the answer is no, then the
outcome is strongly Pareto optimal. If the answer is yes, then the outcome is not
strongly Pareto optimal.
Example: Consider Figure 4.2. I will use the procedure stated above in the shaded box
for determining whether an outcome is strongly Pareto optimal.
1. For (1, 4), is there another outcome that makes both players better off? No. Thus, it
is strongly Pareto efficient.
2. For (4, 1), is there another outcome that makes both players better off? No. Thus, it
is strongly Pareto efficient.
3. For (3, 3), is there another outcome that makes both players better off? No. Thus, it
is strongly Pareto efficient.
4. For (2, 2), is there another outcome that makes both players better off? Yes: the
outcome with payoffs (3, 3) makes both players better off. Thus, it is strongly Pareto
dominated and therefore not strongly Pareto efficient.
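Compared with ordinary Pareto dominance, only the inequality changes, which a short sketch makes concrete (function names are my own; payoffs from Figure 4.2):

```python
def strongly_pareto_dominates(a, b):
    """True if every player is strictly better off under profile a than under b."""
    return all(x > y for x, y in zip(a, b))

def is_strongly_pareto_optimal(outcome, outcomes):
    return not any(strongly_pareto_dominates(o, outcome) for o in outcomes)

outcomes = [(3, 3), (1, 4), (4, 1), (2, 2)]
print([is_strongly_pareto_optimal(o, outcomes) for o in outcomes])
# only (2, 2) fails: (3, 3) makes both players strictly better off
```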
Example: Consider Figure 4.3. It is almost the same game as before, but here I have
changed one of the 3s into a 2 for (c, C). Use the same procedure as before.
1. For (1, 4), is there another outcome that makes both players better off? No. Thus, it
is strongly Pareto efficient.
2. For (4, 1), is there another outcome that makes both players better off? No. Thus, it
is strongly Pareto efficient.
3. For (3, 2), is there another outcome that makes both players better off? No. Thus, it
is strongly Pareto efficient.
4. For (2, 2), is there another outcome that makes both players better off? No. Thus,
it is strongly Pareto efficient. Some people might incorrectly think that the outcome
with payoffs (3, 2) strongly Pareto dominates the outcome with payoffs (2, 2). But
only one player is made better off, and both players must be made better off for
(2, 2) to be strongly Pareto dominated.
Example: Consider Figure 4.4. Notice that the outcome with payoffs (1, 0) is strongly
Pareto dominated by the outcome with payoffs (2, 2), and so it is not strongly Pareto
optimal. Neither is the outcome with payoffs (2, 2), because it is strongly Pareto
dominated by the outcome with payoffs (4, 3). But (4, 3) is not strongly Pareto dominated
by any other payoffs, so the outcome that gives these payoffs is strongly Pareto optimal.
Furthermore, (0, 4) is not strongly Pareto dominated by any other payoffs, so the outcome
that induces it is strongly Pareto optimal as well.

Theorem: If an outcome's payoff profile contains some player's highest utility,
then the outcome is necessarily strongly Pareto efficient.
Chapter 5
Dominated Strategies
5.1 Dominated Actions in Normal Form Games
Now we will turn to dominated strategies. Here we begin by analyzing normal form
games, and later we will turn to extensive form games. Again, here are some broad
definitions, which may or may not help some of you. This topic is probably best learned
through the examples I will give you. Nevertheless, I still think it is worthwhile to dwell
on the definitions.
Strategy a strongly dominates strategy a' if u(a, b) > u(a', b) for every possible
strategy, b, of the other player.

Thus, a player will never choose strategy a' over strategy a, because he always does
better by choosing a regardless of what the other player does. For example, in Figure
5.1, 1c strongly dominates 1b for player 1 because the utility that player 1 gets from 1c is
always greater than the utility that he gets from 1b, no matter what player 2 does.
Strategy a weakly dominates strategy a' if u(a, b) ≥ u(a', b) for every strategy, b,
of the other player and there is at least one b such that u(a, b) > u(a', b).
Hence, there can be indifference over some outcomes with a weakly dominated
strategy, but there must be at least one action of the other player against which the
dominated strategy gives a strictly lower payoff. With a strongly dominated strategy, by
contrast, the dominated strategy must give a strictly lower payoff no matter what the
other player does. This is how the two definitions differ.

Since a player will never play a strongly or weakly dominated strategy, we can
eliminate it from the game and then check whether the other player has a dominated
strategy in the new game. If so, then we can delete that strategy from the game and obtain
yet another new game. We can continue this process until we cannot eliminate any more
dominated strategies. This process is called iterative deletion of dominated strategies. We
can iteratively delete only strongly dominated strategies, or iteratively delete both
strongly and weakly dominated strategies.
Example: This example is shown in Figure 5.1. For player 1, 1c strongly dominates 1b.
Thus, we eliminate 1b from the original game and get a new game; this step is shown in
Figure 5.1 by drawing a whole new matrix without 1b. Then for player 2, 2d strongly
dominates 2c, so we eliminate 2c and get the new game that Figure 5.2 shows. Now
Figure 5.3 shows that for player 1, 1c strongly dominates 1a, so we can eliminate 1a from
the game. Then for player 2, 2d strongly dominates 2b. Then for player 1, 1c strongly
dominates 1d. Then for player 2, 2d strongly dominates 2a. And finally we have our
solution by iteratively eliminating strongly dominated strategies.
Pl. 1
Pl. 2
2a 2b 2c 2d
1a 3,3 1,0 2,-1 0,4
1b 4,8 3,6 -1,9 1,5
1c 7,-1 5,2 3,0 2,3
1d 0,7 6,1 7,-2 1,2
Pl. 1
Pl. 2
2a 2b 2c 2d
1a 3,3 1,0 2,-1 0,4
1c 7,-1 5,2 3,0 2,3
1d 0,7 6,1 7,-2 1,2
Figure 5.1: The first iterative elimination.
Pl. 1
Pl. 2
2a 2b 2d
1a 3,3 1,0 0,4
1c 7,-1 5,2 2,3
1d 0,7 6,1 1,2
Pl. 1
Pl. 2
2a 2b 2d
1c 7,-1 5,2 2,3
1d 0,7 6,1 1,2
Figure 5.2: The second elimination.
Pl. 1
Pl. 2
2a 2d
1c 7,-1 2,3
1d 0,7 1,2
Pl. 1
Pl. 2
2a 2d
1c 7,-1 2,3
Pl. 1
Pl. 2
2d
1c 2,3
Figure 5.3: The third and fourth elimination.
Example: This example is shown in Figure 5.4. You should use the following format to
write down your answer:
1. For player 1, strategy Y strongly dominates strategy W.
2. Now for player 2, strategy B strongly dominates strategy A.
3. Now for player 1, strategy X strongly dominates strategy Y.
4. Now for player 2, strategy B strongly dominates strategy C.
5. Now for player 2, strategy B strongly dominates strategy D.
6. Now for player 1, strategy Z strongly dominates strategy X. This leaves us with only
one cell left: (Z, B). This is the only cell that survives iterative deletion of strongly
dominated strategies.
Pl. 1 \ Pl. 2     A       B       C       D
W                0,0     1,-3    1,1     -1,1
X                3,1     5,2     4,1     2,0
Y                7,2     4,3     2,-7    0,1
Z                0,1     6,4     3,2     1,2

Figure 5.4: Just another game.
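The elimination process above can also be automated. Here is a minimal sketch of my own (not part of the text) that iteratively deletes strictly dominated pure strategies; the payoff matrices encode the game in Figure 5.4.

```python
# Iterative deletion of strictly dominated strategies in a bi-matrix game.
# Payoffs for Figure 5.4: u1[r][c] is player 1's payoff, u2[r][c] player 2's.
u1 = [[0, 1, 1, -1], [3, 5, 4, 2], [7, 4, 2, 0], [0, 6, 3, 1]]
u2 = [[0, -3, 1, 1], [1, 2, 1, 0], [2, 3, -7, 1], [1, 4, 2, 2]]
row_names, col_names = ["W", "X", "Y", "Z"], ["A", "B", "C", "D"]

def dominated(payoff, own, others):
    """Return an index in `own` strictly dominated against every index in `others`."""
    for s in own:
        for t in own:
            if t != s and all(payoff(t, o) > payoff(s, o) for o in others):
                return s
    return None

R, C = list(range(4)), list(range(4))
while True:
    r = dominated(lambda i, j: u1[i][j], R, C)   # a dominated row for player 1?
    if r is not None:
        R.remove(r)
        continue
    c = dominated(lambda j, i: u2[i][j], C, R)   # a dominated column for player 2?
    if c is not None:
        C.remove(c)
        continue
    break

surviving = [(row_names[i], col_names[j]) for i in R for j in C]
print(surviving)  # → [('Z', 'B')]
```

Note that the code records the surviving strategy pair (Z, B), not the payoff cell (6, 4), which matches the point made in the warning below the figure.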
If you think that the answer to the previous problem is (6, 4), then you need to review what a strategy is. This will never be counted as full credit on a question like this. Also, simply crossing out rows and columns and then circling the cell that survives is insufficient as an answer. You need to write down the process of elimination.
5.2 Dominated Strategies in Extensive Form Games
Notice how they are called dominated strategies but we have only used actions until now. That is because we have only used normal form games as examples. So the strategies have been the same as the actions. But players can also have dominated strategies in extensive form games. And that is why we call them dominated strategies and not dominated actions.
Example: Consider the game in Figure 5.5. You can see that player 1's strategy of Up strongly dominates his strategy of Down. This is because he will always receive a higher payoff by playing Up (either 16 or 9) regardless of what player 2 does than he would if he played Down.
1
├─ Up ───→ 2
│          ├─ Left  → 16,9
│          └─ Right → 9,4
└─ Down ─→ 2
           ├─ Left  → 4,1
           └─ Right → 0,12

Figure 5.5: An extensive form game with dominated strategies.

Example: Consider the game tree in Figure 5.6. Notice that for player 1 the strategy (blue, if red then orange) strongly dominates the strategies (green, if red then orange), (green, if red then black) and (blue, if red then black). We can check that this strategy gives player 1 a higher utility than the other strategies regardless of what player 2 does. First notice that player 1 will always choose orange over black. Now notice that player 1 will receive a utility of either 11 or 19 if he chooses blue, but only 6 if he chooses green. Thus, (blue, if red then orange) strongly dominates all of the others.
1
├─ blue ──→ 2
│           ├─ brown → 11,25
│           └─ red ──→ 1
│                      ├─ orange → 19,-2
│                      └─ black  → 10,40
└─ green ─→ 6,30

Figure 5.6: Another extensive form game.
Chapter 6
Nash Equilibrium
6.1 Dening Nash Equilibrium
This will be the most basic type of equilibrium that you will learn. The formal definition is so important that I will state it here for those of you who can understand it. Now, instead of using strategies, S_i, I will go back to actions, X_i. Remember that X_i is the choice set for player i. A pure action profile, (x*_1, ..., x*_n), is a Nash equilibrium if

    u_i(x*_i, x*_{-i}) ≥ u_i(x_i, x*_{-i})

for every x_i ∈ X_i, and where x*_{-i} simply means all of the other actions of the profile (x*_1, ..., x*_n) other than x*_i.
This definition is hard to understand for most people. However, the concept of Nash equilibrium is very easy for many types of games. You should think about Nash equilibrium in the following way:

In a Nash equilibrium, each player is playing a best response given the actions of the other players.

I expect all of you to be able to memorize this informal definition by the end of the quarter. So what this is saying is that for a particular player, taking the actions of all the other players as fixed, that particular player is doing the best that he can do. If this is true for all players, then the action profile is a Nash equilibrium. Or some people like the following way to think about Nash equilibrium:

In a Nash equilibrium, no player wants to change his action given the actions of the other players.

This is saying that there is not even one player that wants to deviate from his original action. You can check if an action profile is a Nash equilibrium this way also, but it takes longer than doing the algorithm with bi-matrix games. There are some games in which this is the only way in which you can solve for the Nash equilibrium set of strategies. These two definitions are one and the same (one is simply a re-statement of the other) and will never contradict each other. It will be helpful for you to memorize at least one of these definitions.
6.2 Discrete and Finite Players and Choices

I am writing this because most game theory examples that people see are really easy and they can be solved by following a very simple algorithm. This is not to say that game theory does not get complicated. It does indeed, and I will show you how it can. But for understanding the basic concepts and solving for the basic types of equilibria, these are very simple.

In this section, we will assume that the set of players is discrete: the players can be labeled by the numbers 1, 2, ..., n. And it is important that the number of players, n, is finite: n < ∞. Furthermore, we will assume that the set of actions for each player, a_1, ..., a_k, is discrete and that the number of choices for each player is also finite: k < ∞.
Finding Pure Strategy Nash Equilibrium for Finite Games

Here is the algorithm for solving for pure strategy Nash equilibria for a game in a normal form matrix:

Step 1 For player 1, take all of his payoffs in the first column, find the maximum, and underline it. There may be a tie, and if so just underline all the ones that tie.

Step 2 For player 1, repeat Step 1 for all of the columns.

Step 3 For player 2, take all of his payoffs in the first row, find the maximum, and underline it. There may be ties here as well.

Step 4 For player 2, repeat Step 3 for all of the rows.

Step 5 Find all of the cells that have both numbers underlined. These are the pure strategy Nash equilibria of the game.

There may be no pure strategy Nash equilibrium of a game. We can only guarantee a (possibly mixed strategy) Nash equilibrium if there is a finite set of players and actions.
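The underlining algorithm is easy to automate. Here is a minimal sketch of my own (not part of the text): mark player 1's best responses column by column and player 2's row by row, then intersect.

```python
# Pure strategy Nash equilibria by the underlining algorithm:
# a cell is a NE when player 1's payoff is maximal in its column
# and player 2's payoff is maximal in its row.
def pure_nash(u1, u2):
    n_rows, n_cols = len(u1), len(u1[0])
    best1 = {(i, j) for j in range(n_cols) for i in range(n_rows)
             if u1[i][j] == max(u1[r][j] for r in range(n_rows))}
    best2 = {(i, j) for i in range(n_rows) for j in range(n_cols)
             if u2[i][j] == max(u2[i][c] for c in range(n_cols))}
    return sorted(best1 & best2)

# The Gibbons example from Figure 6.1 (rows T, M, B; columns L, C, R):
u1 = [[0, 4, 5], [4, 0, 5], [3, 3, 6]]
u2 = [[4, 0, 3], [0, 4, 3], [5, 5, 6]]
print(pure_nash(u1, u2))  # → [(2, 2)], i.e. the cell (B, R)
```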
Finding Nash Equilibria Example

Here is an example I found in Gibbons. The payoff bi-matrix is as follows.

Player 1 \ Player 2     L       C       R
T                      0,4     4,0     5,3
M                      4,0     0,4     5,3
B                      3,5     3,5     6,6

Figure 6.1: A normal form game.

Step 1 For player 1, take all of his payoffs in the first column: 0, 4, and 3. Now pick the maximum, 4, and underline it.
Figure 6.2: Player 1's best responses. (Three copies of the matrix from Figure 6.1: in panel (a) player 1's maximal payoff in the first column is underlined, in panel (b) the second column, and in panel (c) the third column.)
Step 2 For player 1, repeat Step 1 for columns 2 and 3.
1. For column 2, the numbers are 4, 0, and 3. The maximum is 4; so we underline it.
2. For column 3, the numbers are 5, 5, and 6. The maximum is 6; so we underline it.
Step 3 For player 2, take all of his payoffs in the first row: 4, 0, and 3. Now pick the maximum, 4, and underline it.
Step 4 For player 2, repeat Step 3 for rows 2 and 3.
1. For row 2, the numbers are 0, 4, and 3. The maximum is 4; so we underline it.
2. For row 3, the numbers are 5, 5, and 6. The maximum is 6; so we underline it.
Step 5 Find all of the cells that have both numbers underlined. The only one that fits this criterion is (B, R), with corresponding payoffs of 6,6.

Remember that this algorithm is only for bi-matrix games with two players. If the game is of any other sort, you may not be able to apply this algorithm.
Example: Find the Nash equilibria for the following Battle of the Sexes game. A husband and wife work at opposite sides of town and had decided that they are to meet for a date. They knew that they were supposed to go to either the comedy club on the husband's side of town or the opera on the wife's side. But they both forgot which one, and this is a time before phones, so they cannot contact each other. The husband prefers to go to the comedy club while the wife prefers to go to the opera. But they both would rather be together than alone. The game is represented in Figure 6.5. The Nash equilibria of the game are (comedy, Comedy) and (opera, Opera).
Figure 6.3: Player 2's best responses. (Three copies of the matrix from Figure 6.1: in panel (a) player 2's maximal payoff in the first row is underlined, in panel (b) the second row, and in panel (c) the third row.)
Player 1 \ Player 2     L       C       R
T                      0,4     4,0     5,3
M                      4,0     0,4     5,3
B                      3,5     3,5     6,6

Figure 6.4: The Nash equilibrium. (Both payoffs in the cell (B, R) are underlined.)
6.3 Making Sense of Nash Equilibrium
Sometimes Nash equilibria can be objectionable to what people would consider common sense. But Nash equilibrium is not always what you think should happen. You must remember that it is a rigorous definition. Consider the Stag-Hunt game in Figure 6.6. You will find that the two Nash equilibria of this game are (stag, Stag) and (hare, Hare). However, (stag, Stag) strictly Pareto dominates (hare, Hare). Hence, some people might ask why the players would even choose (hare, Hare) if it is worse for both of them compared to (stag, Stag). They might even argue that this should not be a Nash equilibrium at all. But look at the definition of Nash equilibrium. Can either player unilaterally deviate from (hare, Hare) and be better off? The answer is a resounding no. Thus, if no player wants to unilaterally deviate from that strategy profile, it constitutes a Nash equilibrium. You can think of this game as both players having a mutually most preferred option but also facing the risk that the other player does not pick that option.
6.4 The Connection Between Dominated Strategies and Nash Equilibrium

Is there a connection between dominated strategies and Nash equilibrium? Yes. But we must distinguish between strongly dominated strategies and weakly dominated strategies.
Husband \ Wife     Comedy    Opera
comedy              2,1       0,0
opera               0,0       1,2

Figure 6.5: Battle of the Sexes game.

Player 1 \ Player 2     Stag    Hare
stag                    4,4     1,3
hare                    3,1     2,2

Figure 6.6: The Stag-Hunt game.
THEOREM: If we only iteratively delete strongly dominated strategies and end up with only one cell left in the matrix, then that cell is the only Nash equilibrium in the game.

Thus, for the example in Figure 5.1, we know that we found the only Nash equilibrium of the whole game, and you can check this. But what if we deleted some weakly dominated strategies? Then we cannot use this theorem, because there might be other Nash equilibria in the game.

THEOREM: If we iteratively delete weakly dominated strategies and end up with only one cell left in the matrix, then that cell is a Nash equilibrium, but there may be others in the game.

The reason why there might be more Nash equilibria in the game is that the order of deletion matters for weakly dominated strategies.
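Here is a tiny game of my own (not from the text) that makes the order-dependence concrete: player 1's strategies T and B are both weakly dominated by M, but which one you delete first determines which of player 2's columns becomes dominated next, and hence which cells survive.

```python
# A small 3x2 game (hypothetical, for illustration) showing that the ORDER
# of deleting weakly dominated strategies matters.
# Rows T, M, B for player 1; columns L, R for player 2; payoffs (u1, u2).
game = {
    ("T", "L"): (1, 1), ("T", "R"): (0, 0),
    ("M", "L"): (1, 1), ("M", "R"): (2, 1),
    ("B", "L"): (0, 0), ("B", "R"): (2, 1),
}

def weakly_dominated(rows, cols, player):
    """Strategies of `player` weakly dominated by some other available strategy."""
    own, other = (rows, cols) if player == 1 else (cols, rows)
    if player == 1:
        pay = lambda s, o: game[(s, o)][0]
    else:
        pay = lambda s, o: game[(o, s)][1]
    return {s for s in own for t in own
            if t != s
            and all(pay(t, o) >= pay(s, o) for o in other)
            and any(pay(t, o) > pay(s, o) for o in other)}

# M weakly dominates both T and B for player 1:
print(sorted(weakly_dominated(["T", "M", "B"], ["L", "R"], 1)))  # → ['B', 'T']
# Delete T first: L becomes weakly dominated, leaving (M,R) and (B,R).
print(weakly_dominated(["M", "B"], ["L", "R"], 2))  # → {'L'}
# Delete B first instead: R becomes weakly dominated, leaving (T,L) and (M,L).
print(weakly_dominated(["T", "M"], ["L", "R"], 2))  # → {'R'}
```

The two deletion orders end in disjoint sets of surviving cells, which is exactly why the weak-dominance theorem above only promises that the surviving cell is *a* Nash equilibrium.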
6.5 A Collective Action Problem
These types of games are common in political science. Suppose that there are a finite number, M, of people living in a community. The people can make a decision about leisurely driving. Person i chooses x_i ∈ {0, 1}: he can either drive one mile, x_i = 1, or not drive one mile, x_i = 0. If he drives, he gains satisfaction from driving of b(x_i) = x_i. But he also suffers a dissatisfaction from the smog created by driving, c x_i / M. However, this is only if he drives. If another person drives, say person j, then he suffers a dissatisfaction of c x_j / M without gaining any of the benefit that person j does from driving. Hence, his total dissatisfaction with both of these people driving is c x_i / M + c x_j / M = (c/M)(x_i + x_j). Thus, his net level of satisfaction with everyone involved is

    U_i(x_1, ..., x_M) = x_i − (c/M) Σ_{k=1}^{M} x_k

Assume that 1 < c < M. You should be able to find all of the Nash equilibria of this game. Remember that a strategy profile, x* = (x*_1, ..., x*_M), is a set of strategies for all the players. What is a strategy profile, x*, that constitutes a Nash equilibrium?
Chapter 7
Mixed Strategy Nash Equilibrium
7.1 Defining Mixed Strategy Nash Equilibrium

All of the Nash equilibria we have studied thus far are called pure strategy Nash equilibria. But there is another kind. In a mixed strategy Nash equilibrium, players choose a probability distribution over their strategies.

In a mixed strategy Nash equilibrium (MSNE), each player chooses his action through a probability distribution which makes the other player indifferent between choosing his actions.
Again, for most of the instances in this class you will be given two players with two actions each. This simplifies the matter, and you can use the following generalization of a two person, two action game (a 2 × 2 game). Assume that there are only two players in the game, each with two possible actions: player 1 has actions A and B and picks the probability p of choosing action A, and player 2 has actions C and D and picks a probability q of choosing action C. Let player 1's actions of A and B have the respective utilities of (a_1, a_2) and (b_1, b_2), with the first number indicating the utility player 1 receives if player 2 chooses C and the second indicating the utility he receives if player 2 plays D. And let player 2's actions of C and D have the respective utilities of (c_1, c_2) and (d_1, d_2). This is represented by the generic 2 × 2 game in Figure 7.1. Then, if a mixed strategy Nash equilibrium exists, the equilibrium probability q should make player 1's expected utilities equal, EU_1(A) = EU_1(B), and the equilibrium probability p should make player 2's expected utilities equal: EU_2(C) = EU_2(D). This is the only way to satisfy the definition of a MSNE.
7.2 Calculating Mixed Strategy Nash Equilibrium
Using the definition of mixed strategies from above, we can always calculate MSNE probabilities in simple 2 × 2 games. And we can also verify whether a probability distribution satisfies the definition of a mixed strategy Nash equilibrium or not. Let's use the 2 × 2 game
                        Pl. 2
                   q            1-q
                   C             D
Pl. 1   p   A   a_1, c_1     a_2, d_1
      1-p   B   b_1, c_2     b_2, d_2

Figure 7.1: The generic 2 × 2 game.
setup found in Figure 7.1. Then a probability value q, with 0 < q < 1, of player 2 choosing action C will always satisfy

    EU_1(A) = EU_1(B)
    q a_1 + (1 − q) a_2 = q b_1 + (1 − q) b_2

if it is a mixed strategy Nash equilibrium. The same logic applies to the probability distribution over player 1's actions. Similarly, the value of p, with 0 < p < 1, of player 1 choosing action A must satisfy

    EU_2(C) = EU_2(D)
    p c_1 + (1 − p) c_2 = p d_1 + (1 − p) d_2

in order to be a MSNE. If you think that you found a MSNE, and you check your solutions against these equalities and one of them doesn't match, then you did something wrong in trying to find the MSNE.
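Because each indifference condition is linear in a single probability, the generic 2 × 2 case can be solved in closed form. Here is a sketch of my own (the function name and setup are mine, not the text's), using the (a, b, c, d) labels of Figure 7.1:

```python
# Solve the MSNE indifference conditions for the generic 2x2 game of Figure 7.1.
# q makes player 1 indifferent between A and B; p makes player 2 indifferent
# between C and D.
from fractions import Fraction

def msne(a1, a2, b1, b2, c1, c2, d1, d2):
    denom_q = (a1 - a2) - (b1 - b2)
    denom_p = (c1 - c2) - (d1 - d2)
    if denom_q == 0 or denom_p == 0:
        return None  # indifference is impossible for at least one player
    q = Fraction(b2 - a2, denom_q)
    p = Fraction(d2 - c2, denom_p)
    if 0 < p < 1 and 0 < q < 1:
        return p, q
    return None  # "probabilities" outside (0, 1): no MSNE

# The S/N example below: A=S, B=N for player 1, and C=S, D=N for player 2.
print(msne(1, 0, 5, -10, 1, 0, 5, -10))  # → (Fraction(5, 7), Fraction(5, 7))
```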
Example: The best way to show you is by example. Consider the following game. Suppose that each player wants to play a mixed strategy. Player 1 chooses S with probability p and N with probability 1 − p. Player 2 chooses S with probability q and N with probability 1 − q.

                   q         1-q
                   S          N
        p   S    1,1         0,5
      1-p   N    5,0      -10,-10
Then if player 1 plays S his expected payoff would be

    EU_1(S) = 1q + 0(1 − q) = q

And if player 1 plays N then his expected utility is

    EU_1(N) = 5q − 10(1 − q) = 15q − 10

Hence, if EU_1(S) > EU_1(N) then player 1 will play S. And if EU_1(S) < EU_1(N) then player 1 will play N. But if EU_1(S) = EU_1(N) then player 1 is indifferent between playing S and N. This is what player 2 wants if he is going to play a mixed strategy. Thus, player 2 will set

    EU_1(S) = EU_1(N)
    q = 15q − 10
    10 = 14q
    5/7 = q

By a similar logic, in order for player 2 to be indifferent, player 1 must set

    EU_2(S) = EU_2(N)
    p = 15p − 10
    p = 5/7

Thus, the mixed strategy Nash equilibrium (MSNE) of this game is

1. Player 1 plays S with probability p = 5/7 and N with probability 1 − p = 2/7; and
2. Player 2 plays S with probability q = 5/7 and N with probability 1 − q = 2/7.

Notice that this is a Nash equilibrium because neither player wants to deviate from his strategy given the strategy of the other player. You should ask yourself, can either player make himself better off by changing his strategy given what the other player is doing? By doing this, you can convince yourself that this is an equilibrium.
Example: Now, suppose that we have the following game.

                   q         1-q
                   C          D
        p   C    3,3         0,5
      1-p   D    5,0         2,2

Then if we follow the same procedure, the indifference equation for player 1 becomes 3q + 0(1 − q) = 5q + 2(1 − q), which reduces to 0 = 2 and has no solution; the same happens for player 2. Hence, there does not exist a p such that player 2 is indifferent between playing C and D, and there does not exist a q such that player 1 is indifferent between playing C and D. Thus, we cannot have a mixed strategy Nash equilibrium in this game. (This makes sense: D strictly dominates C for both players, so no player would ever put positive probability on C.)
7.3 The Existence of a Nash Equilibrium
The Nash equilibrium derives its name from John Nash, the famous mathematician, who proved that a Nash equilibrium always exists for games with a finite set of players and actions. The games that we have been studying so far fall into this category. However, consider the following example in Figure 7.2.

Kicker \ Goalie     left    right
Left                0,1     1,0
Right               1,0     0,1

Figure 7.2: The penalty kick game.

If you did the algorithm correctly for this game, you should have found that there is no pure strategy Nash equilibrium: no equilibrium in which each player chooses one action or the other with certainty. Does this falsify Nash's theorem? Not at all. In fact, there does exist a Nash equilibrium in this game. Remember that a mixed strategy Nash equilibrium counts just as much as a pure strategy Nash equilibrium. In fact, you can think of a pure strategy as a mixed strategy in which a player plays one action with probability 1 and all of the other actions with probability 0. The following is Nash's theorem.

THEOREM: If a game has a finite set of players and a finite set of actions, then there will always exist a Nash equilibrium (either pure strategy, mixed strategy, or both).
7.4 Expected Utility of Playing a Mixed Strategy
What payoff can a player expect to receive if she plays a MSNE? Will this payoff be better in expectation than the one she could receive by playing a PSNE, if one exists in the game? We can determine the answers to both of these questions. Consider the generic 2 × 2 game in Figure 7.1. What is player 1's expected utility if both players play the mixed strategy Nash equilibrium? It is given by

    EU_1(MSNE) = p[q a_1 + (1 − q) a_2] + (1 − p)[q b_1 + (1 − q) b_2]

and for player 2 it is given by

    EU_2(MSNE) = q[p c_1 + (1 − p) c_2] + (1 − q)[p d_1 + (1 − p) d_2].

If a MSNE exists in a 2 × 2 game, then it will always satisfy this condition.
Example: Consider the game in Figure 6.5. The Wife sets q such that

    2q + 0(1 − q) = 0q + 1(1 − q)
    q = 1/3

and the Husband sets p such that

    1p + 0(1 − p) = 0p + 2(1 − p)
    p = 2/3.

First we can check to see whether the expected utilities equate for each player. We will check the Husband first.

    EU_H(c) = 2q = 2(1/3) = 2/3
    EU_H(o) = 1 − q = 1 − 1/3 = 2/3.

So EU_H(c) = EU_H(o) = 2/3, as it should be if this is a MSNE. Now let's check the Wife.

    EU_W(c) = p = 2/3
    EU_W(o) = 2 − 2p = 2 − 2(2/3) = 2/3.

So EU_W(c) = EU_W(o) = 2/3, as we would hope. Now what is the expected utility of the Husband if they play the MSNE? It is

    EU_H(MSNE) = (2/3)[(1/3)(2) + (2/3)(0)] + (1/3)[(1/3)(0) + (2/3)(1)] = 2/3
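These calculations are easy to confirm numerically. A quick check of my own for the Battle of the Sexes numbers:

```python
# Verify the Battle of the Sexes MSNE: Husband plays comedy w.p. p,
# Wife plays Comedy w.p. q.  Exact arithmetic via Fraction avoids
# floating-point noise.
from fractions import Fraction

p, q = Fraction(2, 3), Fraction(1, 3)

# Husband's payoffs (rows comedy/opera, columns Comedy/Opera): 2,0 / 0,1
EU_H_c = 2 * q + 0 * (1 - q)
EU_H_o = 0 * q + 1 * (1 - q)
assert EU_H_c == EU_H_o == Fraction(2, 3)   # Husband is indifferent

# Wife's payoffs: 1,0 / 0,2
EU_W_c = 1 * p + 0 * (1 - p)
EU_W_o = 0 * p + 2 * (1 - p)
assert EU_W_c == EU_W_o == Fraction(2, 3)   # Wife is indifferent

# Expected utility of the Husband from playing the MSNE:
EU_H = p * EU_H_c + (1 - p) * EU_H_o
print(EU_H)  # → 2/3
```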
Chapter 8
Variables in Game Theory
8.1 Extensive Form Games with Variables
Utility numbers can be represented by variables or have variables in them. I think the
best way for you to learn this is to see an example without any story behind it.
1
├─ l → 2
│      ├─ L → 12-C, 9
│      └─ R → 3, V
└─ r → 2
       ├─ L → 4, 6
       └─ R → 0, 1

Figure 8.1: A game with variables.
Consider the game in Figure 8.1. Player 1 has a cost of C > 0 for one of the outcomes and player 2 has a value of V (where V can be any real number) for another outcome. We have to analyze cases and subcases in this example. It can be tricky; so always pay attention to what you are doing.

Case 1 If V > 9, then player 2's strategy is R if l, L if r. Notice that player 2 will always choose L if r, regardless of the value of V. Also notice that player 1's value of C is irrelevant; he will always choose r. Thus, the rollback equilibrium is always (r; R if l, L if r) in this case.

Case 2 If V < 9, then player 2's strategy is L if l, L if r. Now player 1's value of C matters. We will have two subcases of rollback that depend on C.

1. If C > 8, then player 1's strategy is r and the rollback equilibrium is (r; L if l, L if r).
2. If C < 8, then player 1's strategy is l and the rollback equilibrium is (l; L if l, L if r).
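A useful habit is to re-run backward induction for sample parameter values on each side of the cutoffs. Here is a sketch of my own for the game in Figure 8.1 (the function and the sample values are mine):

```python
# Backward induction for Figure 8.1 as a function of C and V.
# After l: player 2 picks L -> (12-C, 9) or R -> (3, V).
# After r: player 2 picks L -> (4, 6) or R -> (0, 1).
def rollback(C, V):
    # Player 2 maximizes his own (second) payoff at each of his nodes.
    after_l = max([(12 - C, 9, "L"), (3, V, "R")], key=lambda t: t[1])
    after_r = max([(4, 6, "L"), (0, 1, "R")], key=lambda t: t[1])
    # Player 1 anticipates this and maximizes his own (first) payoff.
    move1 = "l" if after_l[0] > after_r[0] else "r"
    return (move1, f"{after_l[2]} if l", f"{after_r[2]} if r")

print(rollback(C=5, V=20))   # Case 1 (V > 9):   ('r', 'R if l', 'L if r')
print(rollback(C=10, V=5))   # Case 2, C > 8:    ('r', 'L if l', 'L if r')
print(rollback(C=5, V=5))    # Case 2, C < 8:    ('l', 'L if l', 'L if r')
```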
8.2 Commitment in Extensive Form Games
Consider two countries, G and W, who are fighting over the use of an island that connects both of their lands with bridges. They both value the island at v. Their value of not owning the island is 0. If they both go to war, they will experience a cost: each has a cost (disutility) of c, for some c > 0. Country G moves first. It can either attack, A, or leave country W alone, L. If G leaves country W alone, then W retains the island. If it decides to attack, then W can choose to defend, D, or retreat, R. If it defends the island, it will certainly lose the war, since it has a weaker army. If it retreats, then G owns the island without any war. This game is represented in Figure 8.2, and the rollback equilibrium is (A; R if A). And country W's utility in the rollback equilibrium is 0.
G
├─ A → W
│      ├─ D → v-c, -c
│      └─ R → v, 0
└─ L → 0, v

Figure 8.2: The original game.
But now let's consider the possibility that country W can change the game by burning the bridge that allows them to retreat. Thus, they have the option to burn, B, or not burn, N. If they burn the bridge, they will have no option but to fight. This game is depicted in Figure 8.3, where player W now moves first.

W
├─ B → G
│      ├─ A → -c, v-c
│      └─ L → v, 0
└─ N → G
       ├─ A → W
       │      ├─ D → -c, v-c
       │      └─ R → 0, v
       └─ L → v, 0

Figure 8.3: The new game. Since the first mover has changed from the original game, I have switched the order of the payoffs.

How does this change the game? It depends on the relationship between v and c. We can write the rollback equilibrium as a conditional statement:

1. if v > c, then the rollback equilibrium is (N, R if A; A if B, A if N);
2. if v < c, then the rollback equilibrium is (B, R if A; L if B, A if N).
But we can also say that these two conditional rollback equilibria tell a story about what will happen. More precisely, we can say that

1. if the value of the land is greater than the cost of the war, then country W will not burn the bridge, because country G will attack country W whether or not they burn the bridge, and country W will retreat when they have the option to do so;

2. if the value of the land is not greater than the cost of the war, then country G will attack if country W does not burn the bridge, knowing that W will retreat, and country G will leave country W alone if they burn the bridge, because they know that W has no option but to fight; thus country W burns the bridge because they know that country G will not attack them if they do so.

This is an example of what we call commitment in game theory. Before, country W's utility was always 0 because, no matter what the relationship between v and c was, it was always a better option for them to retreat. But if the cost of fighting is sufficiently high, then with the ability to make a commitment by burning the bridge they are able to make themselves better off (because they now receive a utility of v instead of 0) by forcing themselves to fight if attacked. So by losing power they actually make themselves better off. This is called the Paradox of Power, and there are many more examples like this.
Chapter 9
Information Games
9.1 Information Sets in Extensive Form Games
An information set means that the player who is moving does not know which decision node he is at within the information set. Consider the extensive form game in Figure 9.1. Player 1 moves first and then player 2 moves. However, player 2 has two information sets: I_1 and I_2. Both of these are represented by the dashed lines around the decision nodes that the information set contains. At I_1, player 2 knows that player 1 has moved either A or B, but not which one. And since he can distinguish between information sets, he knows that player 1 has not moved either C or D. Hence, he knows that he is not in information set I_2. If he is in information set I_1, he can only decide to move X or Y. He cannot condition his decision on whether player 1 has played A or B. Similarly, if he is in information set I_2, then he knows that player 1 has moved either C or D. He can only decide to move Z or W. But he knows that he is not in information set I_1 and hence knows that player 1 has not played either A or B. You might ask, how can we solve a game like this? Notice that if player 2 is in information set I_1 he has a strictly dominated strategy in playing Y: playing X always gives him a higher payoff when he is in information set I_1. Thus, Y is strictly dominated and he should never play it. Moreover, playing Z is strictly dominated by playing W when he is in information set I_2. So player 2's strategy should be: if ever in information set I_1 play X, and if ever in information set I_2 play W. Since the information sets are numbered in an orderly fashion, let's label this type of strategy as (X, W), where the action before the comma tells what he will do in I_1 and the action after tells what he will do in I_2. Knowing that player 2 will always play X when in I_1 and W when in I_2, player 1 can play his best strategy knowing what player 2's strategy is. So it can become like backwards induction when there are dominated strategies for the player with information sets. We can even transform this extensive form game to a strategic form as in Figure 9.2.

Player 1 moves A, B, C, or D. Player 2 then moves without observing player 1's exact choice:

  I_1 (after A or B): player 2 chooses X or Y
      A then X → 1,3    A then Y → 9,1
      B then X → 0,4    B then Y → 2,2

  I_2 (after C or D): player 2 chooses W or Z
      C then W → 9,5    C then Z → 3,4
      D then W → 0,9    D then Z → 7,2

Figure 9.1: An extensive form game with information sets.

Player 1 \ Player 2    (X, Z)    (X, W)    (Y, Z)    (Y, W)
A                       1,3       1,3       9,1       9,1
B                       0,4       0,4       2,2       2,2
C                       3,4       9,5       3,4       9,5
D                       7,2       0,9       7,2       0,9

Figure 9.2: The strategic form of the game with information sets.

It may seem a little weird that we are using just one plan of action for two nodes, especially when we have indoctrinated you to always have a plan of action at every decision node. But actually we just need to redefine strategies when there are information sets in the game.
A strategy for a player is a complete contingent plan of action for every one of the player's information sets.

Notice that this works for all the games that we have done before, because we can just think of each decision node as being a singleton information set.
9.2 Signaling Games
In the signaling game you saw in class, there are three players: the able students, the challenged students and the employers. The utility for a student with high ability is

    U_H(n) = 150000 − 6000n

and the utility of a student with challenged ability is

    U_C(n) = 150000 − 9000n

Suppose that we are considering the maximum amount that anyone can get paid, whether able or challenged. We want to find two cutoff points for n: one for which the able people are indifferent between taking hard classes and not taking them, and one for the challenged people. If player 1 chooses to take n = 2 hard classes and the employer pays him $150000, then the able person's utility is U_H(2) = 150000 − 6000 · 2 = 138000. So it is well worth it for the able student because he can make an extra $38000 by taking hard classes and getting paid $150000 instead of $100000, i.e. the extra benefit of $50000 outweighs the extra cost of $12000. In fact, it will always be worth it to him if

    150000 − 6000n ≥ 100000
    6000n ≤ 50000
    n ≤ 50/6 ≈ 8.33

So we see that n cannot be any greater than 8.33, or else it will not be worth it for the able students to take the hard classes. But the employers want to ensure that the able people take the hard classes and the challenged people do not, so that they can distinguish between the two. The challenged people will not take the n hard classes if

    100000 ≥ 150000 − 9000n
    9000n ≥ 50000
    n ≥ 50/9 ≈ 5.56

Thus, in order to calculate what values of n constitute a separating equilibrium, we find that n must lie in the interval [50/9, 50/6] ≈ [5.56, 8.33]. Since n has to be a natural number, the values that satisfy this are 6, 7 and 8.

Also, in order to calculate the pooling equilibrium, where everyone sends the same signal, the employer just pays the expected value (the weighted average) of what the two types of people are worth. In the example that Barry gave in class this would be

    0.2 × 150000 + 0.8 × 100000 = 110000

So the employer offers $110000 to both of them. But in your homework the numbers have changed to 0.4 and 0.6. There you should plug in the new numbers and redo the procedure.
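The cutoffs can be double-checked by brute force over candidate values of n. A sketch of my own, using the payoff numbers above:

```python
# Separating equilibrium check: able students (cost 6000 per hard class)
# should prefer taking n hard classes for $150000, while challenged students
# (cost 9000) should prefer $100000 with no hard classes.
def separating(n):
    able_signals = 150000 - 6000 * n >= 100000          # able prefer to signal
    challenged_abstains = 100000 >= 150000 - 9000 * n   # challenged prefer not to
    return able_signals and challenged_abstains

print([n for n in range(1, 13) if separating(n)])  # → [6, 7, 8]

# Pooling wage with 20% able and 80% challenged workers:
print(0.2 * 150000 + 0.8 * 100000)  # → 110000.0
```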
Chapter 10
Game Form Transformation
10.1 Transforming Extensive Form to Normal Form Games
There is a way to transform extensive form games into normal form. Consider our old extensive form game in Figure 10.1. You know how to write the strategies properly, but here, for player 2, I will just use a shorthand for simplicity: (L, L) is equivalent to (L if l, L if r). The strategies for player 1 are still exactly the same as before: l or r. The way to transform this extensive form game into a normal form is shown in Figure 10.2. Instead of actions, we use strategies for player 2. Notice that there are several outcomes repeated here. That's because different strategies for player 2 can generate the same path of play.
1
├─ l → 2
│      ├─ L → 1,0
│      └─ R → 0,4
└─ r → 2
       ├─ L → 4,3
       └─ R → 2,2

Figure 10.1: The no-name game.
As I have indicated in Figure 10.2, there are two pure strategy Nash equilibria of this game:

    r; (L, L)
    r; (R, L)
However, as you all know, there is only one subgame perfect Nash equilibrium (the backwards induction, or rollback, solution):

    r; (R, L)

You may notice that the subgame perfect Nash equilibrium is one of the regular Nash equilibria. This will always be the case. In fact, subgame perfect Nash equilibrium is a refinement of Nash equilibrium for extensive form games.
Player 1 \ Player 2    (L, L)    (L, R)    (R, L)    (R, R)
l                       1,0       1,0       0,4       0,4
r                       4,3       2,2       4,3       2,2

Figure 10.2: Normal form for the no-name game.
Chapter 11
Repeated Games
11.1 Introduction
Until now, we have learned about either simultaneous move games played in one period or sequential move games. Now we will turn our attention to simultaneous move games played repeatedly over an infinite time horizon. We will focus on the prisoner's dilemma.
11.2 Discounting
In repeated games, time is denoted by t. Players do not value payoffs the same in all periods. A person who discounts the future does not value the same thing tomorrow as he does today. For example, suppose that a player is in time period 0. If I am going to give him $1 in the initial period (t = 0), then he will not value that $1 the same as a $1 received in period 1. In particular, suppose that he has a discount factor δ with 0 < δ < 1. Then he is indifferent between receiving $δ today or $1 tomorrow. So if δ = 0.75 and I offer him the choice of $0.8 today or $1 tomorrow, then he would take the $0.8 today instead of waiting to get $1 tomorrow. But if I offer him $0.7 today or $1 tomorrow, he would take the $1 tomorrow. Now, in another example, suppose that there are five periods, t = 0, 1, ..., 4, and a person is to receive the amounts of money shown in Figure 11.1. This person's net present value is given by

    PV = 3 + 7δ + 2δ^2 + 11δ^3 + 0δ^4

We can calculate any net present value this way.

    t:       0    1    2    3    4
    payoff:  3    7    2   11    0

Figure 11.1: Net present value.
61
t
0 1 2 3 4
10 10 10 10
. . .

Figure 11.2: Net present discounted values stream of payoffs.
Or suppose that a person is going to receive $10 each period forever as in Figure 11.2.
Then he values the $10 he receives in period 0 as just $10. He values the $10 he receives
in period 1 as 10. In period 2, he values it 10
2
and so on and so forth. Then this will be
an innite sum of discounted payoffs. Calculus proves that this number converges to a
nite real valued number.
1
You should remember the following result:
THEOREM: If 0 < < 1, then 1 + +
2
+
3
+ =
1
1
Hence, the net present value of the future stream of $10 payoffs is

∑_{t=0}^{∞} 10δ^t = 10 + 10δ + 10δ^2 + ... = 10(1 + δ + δ^2 + ...) = 10 · 1/(1 − δ)

which we know converges by the theorem above. In fact, the infinite summation
∑_{t=0}^{∞} 10δ^t is called a geometric series.² Of course, you can calculate the infinite
stream of net present value payoffs for any constant a, since a(1 + δ + δ^2 + δ^3 + ...) = a · 1/(1 − δ).
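You can also verify the theorem numerically; truncating the series at 1,000 terms is more than enough for any 0 < δ < 1 you are likely to use:

```python
# Numerical check of: 1 + delta + delta**2 + ... = 1/(1 - delta).
delta = 0.75
partial = sum(delta**t for t in range(1000))  # truncated geometric series
closed_form = 1 / (1 - delta)
print(partial, closed_form)  # both approximately 4.0
```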
11.3 Infinitely Repeated Prisoner's Dilemma
In these types of games, each player moves simultaneously, but the same game is repeated
over an infinite time horizon. The game played each period is called a stage game. So, in any
period, a player cannot condition his action on what the other player will do that
period. However, a strategy for an infinitely repeated game can be conditioned on what
the other player did in the last period or any of the previous periods.
Amongst the many interesting games that could be played over an infinite time horizon,
game theorists generally focus on the prisoner's dilemma game. It is important to note
that, if this game is played for any finite number of periods, both players will defect in the
last period and thus will defect in all of the previous periods. But what happens when they play it
¹ This is calculus proper because we are working with limits of sequences.
² The ∞ symbol simply means infinity.
for an infinite time horizon? In this case, it might be optimal for both players to condition
their own strategies on what the other player has done in the previous periods. And there
are many possible strategies of this type that a player can use when playing this game,
but the most famous example is the grim-trigger strategy.
A player plays the grim-trigger strategy by using the following rule:
1. Play C in the first period;
2. Every period after that, if the other player played C in all previous periods,
then I will play C also in this period;
3. But if the other player played D in any of the previous periods, then I will
play D this period and in all future periods.
Notice that "in all previous periods" includes the most recent one. Hence, any time one of the
players deviates from playing C, the other player, if he is playing the grim-trigger
strategy, will automatically play D from the next period onward. You should also notice
that this is a proper strategy because it is a complete contingent plan of action.
11.3.1 One-Shot Deviation Principle
We want to analyze of it is rational for a player to deviate from the grim-trigger strategy
when playing an innitely repeated game. We can do this by using the one-shot deviation
principle logic. Suppose two players are playing the prisoners dilemma game over an
innite time horizon with discounting for both players. Is it rational for them to play the
grimtrigger strategy, which will result in the action prole (C, C) in each period? In order
to answer this, we must see if it is rational for one player to deviate from this strategy? So
lets use player 1, if he plays the grim trigger strategy he will receive a continuous payoff
streamof 3 each period. His net present discounted value in period 0 is 3 +3 +3
2
+
because the other player is also playing grimtrigger. But what if he deviates in one period
(the rst period for example), then he gets 4 for that period but 2 for the rest of the periods
because they both will play (D, D). Hence, he would rather play the grimtrigger strategy
if
3 + 3δ + 3δ^2 + 3δ^3 + ... ≥ 4 + 2δ + 2δ^2 + 2δ^3 + ...

3 · 1/(1 − δ) ≥ 4 + 2δ · 1/(1 − δ)

3 ≥ 4(1 − δ) + 2δ

3 ≥ 4 − 2δ

δ ≥ 1/2
Thus, if δ ≥ 1/2, then neither player wants to deviate from the grim-trigger strategy.
That implies that playing the grim-trigger strategy constitutes a Nash equilibrium, and
the equilibrium path of play is {(C, C), (C, C), ...}. If the condition does not hold, then
each player will want to deviate from the grim-trigger strategy. In that case, the path
{(D, D), (D, D), ...} of playing defect each period constitutes a Nash equilibrium.
              Player 2
               C     D
Player 1  C   3,3   1,4
          D   4,1   2,2

Figure 11.3: The prisoner's dilemma.
Example: Suppose that you are given the game in Figure 11.3 and told that δ = 0.6. Since
0.6 ≥ 1/2, you know from the analysis above that it is optimal for both players to play
grim trigger rather than deviate.
Example: Consider the game in Figure 11.4. We can use the same procedure as the one
above:

3 · 1/(1 − δ) ≥ 7 + 2δ · 1/(1 − δ)

δ ≥ 4/5

Thus, if δ ≥ 4/5, then it is optimal to play grim trigger. You can see that the higher the
payoff from deviating, the more each player has to care about the future.
              Player 2
               C     D
Player 1  C   3,3   1,7
          D   7,1   2,2

Figure 11.4: The prisoner's dilemma modified.
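Both thresholds follow one pattern. With a per-period cooperation payoff c, a one-shot deviation payoff d and a punishment payoff p, the same algebra as above gives δ ≥ (d − c)/(d − p). This general formula is my restatement of the derivation, not something stated in the text, but it reproduces both examples:

```python
def grim_trigger_threshold(c, d, p):
    """Minimum discount factor sustaining (C, C) under grim trigger:
    c = cooperation payoff, d = one-shot deviation payoff,
    p = punishment payoff, with d > c > p."""
    return (d - c) / (d - p)

print(grim_trigger_threshold(3, 4, 2))  # 0.5  (Figure 11.3)
print(grim_trigger_threshold(3, 7, 2))  # 0.8  (Figure 11.4)
```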
11.3.2 Tit for Tat
The grim-trigger strategy is not the only one that can give both players a strongly Pareto
efficient outcome in the repeated prisoner's dilemma game. Another strategy that both
can use is the tit-for-tat strategy.
A player plays tit for tat by using the following rule:
1. Play C in the first period;
2. Every period after that, if the other player played C in the previous period,
then I will play C also in this period;
3. But if the other player played D in the previous period, then I will play D this
period.
What happens when both players play the tit for tat strategy? They end up playing
(C, C) in all periods. But does this constitute a Nash equilibrium?
THEOREM: If 0 < δ < 1, then 1 + δ^2 + δ^4 + δ^6 + ... = 1/(1 − δ^2)

You should notice the slight change in the sequence: now all of the powers are even
numbers. We then have, for any real numbers a and b,

a + bδ + aδ^2 + bδ^3 + aδ^4 + ... = (aδ^0 + aδ^2 + aδ^4 + ...) + (bδ + bδ^3 + bδ^5 + ...)
                                  = a(1 + δ^2 + δ^4 + ...) + bδ(1 + δ^2 + δ^4 + ...)
                                  = a · 1/(1 − δ^2) + bδ · 1/(1 − δ^2)
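A quick numerical check of this alternating-payoff formula, with hypothetical values a = 3, b = 5 and δ = 0.6:

```python
# Check: a + b*d + a*d**2 + b*d**3 + ... = (a + b*d)/(1 - d**2).
a, b, delta = 3.0, 5.0, 0.6
series = sum((a if t % 2 == 0 else b) * delta**t for t in range(1000))
closed = (a + b * delta) / (1 - delta**2)
print(series, closed)  # both approximately 9.375
```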
Part II
Social Choice Theory
Chapter 12
Social Preferences and Voting Models
12.1 An Introduction to Social Choice Theory and Its Importance
What is social choice theory? It is the study of how democratic societies make collective
decisions. It is a vast area of research, and there are numerous examples that I could give
you in this study note. However, I will try to focus on a few that give you an intuition of
what social choice theory is all about. We are beginning the section that deals with purely
political science topics, and we will hear less econ lingo and more political science terms.
Before we begin, I want you to think about the following questions:
What does it mean for a decision making process to be democratic?
Does it matter how the agenda is set when making collective choices?
Is one voting system more democratic than any other?
We will attempt to answer these questions in the coming weeks. Right now, in order to
have the tools to do so, you will need to learn the structure of social choice theory. Michael
Chwe has a very good teaching style for this topic. I will replicate his method here given
the time constraints. But there are many good sources on this subject; among them are
Hinich and Munger (1997), Austen-Smith and Banks (2000), Austen-Smith and Banks
(2005), McCarty and Meirowitz (2007), Kreps (1990), Mueller (1997), Riker and Ordeshook
(1968), Riker (1986) and Varian (1992). I haven't written everything that I
wanted to yet, but I will try to get to it soon.
12.2 Individual and Social Preferences for Candidates
Social preferences can take on many different forms. But before we study those, we first
must define individual preferences. In past notes, I have stated individual preferences
with a utility function. Now, I will just state the preferences. For example, suppose that
the choice set, X, contains candidates x, y and z. Three citizens are voting in an election
that will select one of the candidates. Person 1 prefers x to y, y to z and x to z (we usually
just write it like xPy, yPz and xPz). Person 2 has preferences xPz, zPy and xPy; and
person 3 has preferences yPx, xPz and yPz. To simplify the visualization of this, we
will usually express individual preferences with a table. Each person's preferences are
represented by Table 12.1; higher candidates are more preferred by that person.
Person 1 2 3
x x y
y z x
z y z
Table 12.1: Social preferences with a Condorcet winner.
I will call the collection of all the preferences of each voter a preference profile. In social
choice theory, these profiles always satisfy three assumptions to simplify the analysis:
1. No indifference between any two candidates for each voter (the voter strongly prefers
one candidate or the other).
2. There is an odd number of voters.
3. Each voters preferences are complete across all candidates (each candidate can be
ranked by every voter).
We make these assumptions so that there will be no ties in the vote tallies for some
of the simple voting rules. Notice that the individual preferences in Table 12.1 satisfy
these assumptions. But in social choice theory we are concerned not just with individual
preferences but also with social preferences.
A social preference relation is an ordering between two candidates through some
individual preference aggregation rule.
When expressing the social preference between two candidates, we use the social
preference symbol, ≻, to say that one candidate is preferred to another under that
preference aggregation rule. The rules we usually use for social preference aggregation
are voting rules. For example, under majority voting, if x is socially preferred to y we
write x ≻_m y, where the m stands for majority. Or, if the voting system is Borda count,
we write x ≻_b y if x is preferred to y under the Borda count rule. If there is no subscript,
then you should assume that the social preference relation means majority rule, as it
will throughout these study notes.
12.3 Electoral Regimes
A democracy has many different options when it comes to voting rules. We will not
study every single one of them here, but this should give you a good idea of the variety.
It should also give you a good idea about how to answer the question "is one voting
system more democratic than another?"
12.3.1 Majority Rule
Majority rule simply means that we hold a pairwise contest between two of the candidates:
just pick any two candidates and have the whole voting population vote for one
or the other. For example, if we use the individual preferences of Table 12.1 and hold
a pairwise contest between x and y, then person 1 votes for x, person 2 votes for x and
person 3 votes for y. So x wins, and we write x ≻_m y (x is socially preferred to y by
majority rule). You should be able to figure out that y ≻_m z and x ≻_m z.
Hence, x beats any other candidate that it could possibly run against. This example is
from de Condorcet (1785). It motivates the following definition.

A Condorcet winner is a candidate that beats every other candidate in a pairwise
voting contest (where the decision is between only two candidates).
But majority rule does not always give us an obvious winner. Suppose
that we have a case like that in Table 12.2. This is called a Latin square, and you should
notice that no candidate occupies the same position twice. In this example, x ≻_m y,
y ≻_m z and z ≻_m x. Thus, no candidate beats all the others in pairwise contests. This
is called the Condorcet paradox and is one of the most famous examples in social choice
theory. The Condorcet paradox gives rise to a Condorcet cycle (sometimes called a top
cycle).
1 2 3
x y z
y z x
z x y
Table 12.2: Latin square.
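The pairwise contests can be tallied directly from a preference profile. A small sketch for the Latin square of Table 12.2 (the function name is mine):

```python
from itertools import combinations

# Each voter's ranking lists candidates from most to least preferred.
profile = [['x', 'y', 'z'], ['y', 'z', 'x'], ['z', 'x', 'y']]

def majority_prefers(a, b, profile):
    """True if a strict majority of voters ranks a above b."""
    wins = sum(r.index(a) < r.index(b) for r in profile)
    return wins > len(profile) / 2

for a, b in combinations('xyz', 2):
    winner = a if majority_prefers(a, b, profile) else b
    print(f'{a} vs {b}: majority prefers {winner}')
```

Running the loop reproduces the cycle: x beats y and y beats z, yet z beats x.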
12.3.2 Plurality Voting
Unlike majority voting, where we have a pairwise contest, the plurality winner is the
candidate who receives the most votes when each voter casts one vote for the candidate
of his choice. For example, take the social preferences represented in Table 12.3. I will
assume that each voter votes sincerely: each person votes for his favorite candidate.
If everyone does, then you can see that x would receive 4 votes, y would receive 3 and z
would receive 2. Hence, x is the plurality winner if everyone votes sincerely.

        1  2  3  4  5  6  7  8  9
        x  x  x  x  y  y  y  z  z
        y  y  y  y  z  z  z  y  y
        z  z  z  z  x  x  x  x  x

Table 12.3: Social preferences for plurality voting.
In plurality voting, every voter casts one vote for the candidate of his choice and
the candidate with the most votes wins.
This definition does not assume that a citizen will vote for his favorite candidate. We
will see later, with strategic voting, that this may not always be the case, even in real life.
In the example given in Table 12.3, we see that if each person votes sincerely, then x
receives 4 votes, y receives 3 votes and z receives 2 votes. Hence, x would win by plurality
vote: x ≻_p y and x ≻_p z. However, you should notice that by majority rule y ≻_m x and
z ≻_m x. Does this mean that plurality voting is more democratic than majority voting
or vice versa?
12.3.3 Borda Count
In this method of voting, each person states his preferences over all of the candidates,
and the candidates are awarded points. If there are n candidates, then a candidate
receives n − 1 points each time he is a person's most favorite (for being in the top row),
n − 2 points each time he is a person's second favorite (for being in the second row),
and so on, down to 0 points for being a person's least favorite choice.

Points  1  2  3  4  5  6  7  8  9
  2     x  x  x  x  y  y  y  z  z
  1     y  y  y  y  z  z  z  y  y
  0     z  z  z  z  x  x  x  x  x

Table 12.4: Borda count points given by preference ordering.
In the example given in Table 12.4 (which uses the same preferences as the plurality
example), we see that there are 3 candidates, n = 3. Hence, each candidate receives 2
points each time he is the most preferred, 1 point each time he is the second preferred and
0 points each time he is the last choice. Thus, x receives 2 · 4 + 1 · 0 + 0 · 5 = 8 points,
y receives 2 · 3 + 1 · 6 + 0 · 0 = 12 points and z receives 2 · 2 + 1 · 3 + 0 · 4 = 7 points.
So y wins because he receives the most points.
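The Borda computation can be sketched for the profile of Table 12.4:

```python
# Borda count: with n candidates, a candidate earns n-1 points per
# first-place ranking, n-2 per second place, ..., 0 for last place.
profile = (4 * [['x', 'y', 'z']] + 3 * [['y', 'z', 'x']]
           + 2 * [['z', 'y', 'x']])

def borda(profile):
    n = len(profile[0])
    scores = {c: 0 for c in profile[0]}
    for ranking in profile:
        for place, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - place
    return scores

print(borda(profile))  # {'x': 8, 'y': 12, 'z': 7} -- y wins
```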
12.3.4 Approval Voting
Each voter votes for as many candidates (up to all of them) as he chooses, and the candidate
with the most votes at the end wins. This is one of the most interesting voting
systems of all the ones we will study. Just because a voter has the right to vote for
all of the candidates, is it in his interest to do so? To answer this question, you can think
about the following result.

THEOREM: A voter who votes for all of the candidates will not have any influence
on the outcome of the election.
PROOF: Suppose that there are N voters and M candidates. Denote the number of votes
that candidate i has received before the last person, person N, votes as x_i. Without loss
of generality, assume that 0 ≤ x_1 ≤ x_2 ≤ ... ≤ x_M < N. Now the last person votes in the
election and votes for all of the candidates. Then 0 < x_1 + 1 ≤ x_2 + 1 ≤ ... ≤ x_M + 1 ≤ N,
meaning that the last person's votes did not change the order of the vote totals received
by the candidates. The logic for the last person works for any of the other voters.
Therefore, any person who votes for all of the candidates will not change the order of
the number of votes each candidate receives.
12.3.5 Single Transferable Vote
The single transferable vote system is used to fill a particular number of seats, where
each winner fulfills a given quota of votes. But suppose that we are looking to fill just one
seat and the quota that we require is a majority. We can still use this system by using the
following rule:
1. if any one candidate has a majority of the votes, then that candidate is the winner;
2. if no candidate has a majority, then eliminate the candidate with the fewest first
place votes and, on the ballots where he was ranked first, move the second place
candidates up to first place;
3. now tally the first place votes and, if no candidate has a majority, repeat the
process.
Let's do an example for only one seat. Consider the preference profile of Table 12.3. Each
voter ranks the candidates by his preference. Then the first place rankings are counted
to see if any candidate received a majority of the first place votes. In this case, no one
did (x received 4 first place votes but needs 5 to have a majority). So we drop the
candidate with the fewest first place votes; this happens to be z. His second place votes
are moved up to first place, as Table 12.5 shows. Now y has a majority of the first place
votes and wins the election.

        1  2  3  4  5  6  7  8  9
        x  x  x  x  y  y  y  y  y
        y  y  y  y  x  x  x  x  x

Table 12.5: The new rankings with z eliminated.
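The three-step rule can be sketched as follows (this assumes eliminations never tie, consistent with the assumptions on preference profiles above):

```python
from collections import Counter

def stv_winner(profile):
    """Single transferable vote for one seat with a majority quota."""
    profile = [list(r) for r in profile]  # work on copies of the ballots
    while True:
        firsts = Counter(r[0] for r in profile)
        top, votes = firsts.most_common(1)[0]
        if votes > len(profile) / 2:
            return top                    # majority reached
        loser = min(firsts, key=firsts.get)
        for r in profile:
            r.remove(loser)               # transfer the loser's votes

profile = (4 * [['x', 'y', 'z']] + 3 * [['y', 'z', 'x']]
           + 2 * [['z', 'y', 'x']])
print(stv_winner(profile))  # y
```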
12.4 Sincere and Strategic Voting
Is it best for a voter to vote for his favorite candidate in the election? Not always.
Sometimes it is better for a voter to vote for one of his less preferred choices
in order to manipulate the outcome of the election.

A voter votes sincerely if he votes for his favorite candidate regardless of anything
else. A voter votes strategically if he votes knowing how the other voters
will vote and with the eventual outcome in mind.
Let's take a real world example. Suppose that you are a Florida voter in the 2000 U.S.
presidential election. The candidates are Bush (b), Gore (g) and Nader (n) (of course there
were other, largely unknown, third party candidates, but I will use these three to simplify
the analysis). And suppose that your preferences are nPg, gPb and nPb. Should you vote
for your favorite candidate? You should also think about how the rest of Florida is voting,
and you know that the rest of Florida supports either Bush or Gore. Moreover, you know
that it is a very tight race in Florida, and thus your vote could really count. So when
you vote for Nader you are also not voting for Gore. This means that, in the very tight
race between Bush and Gore, you could have helped Gore beat Bush but decided to help
Nader instead. Thus, you end up with Bush as president, which is your least favorite
option. So it is best to vote for Gore instead of Nader to avoid having Bush as president.
12.5 Agenda Setting
Agenda setting is one of the strange yet interesting topics of social choice theory, because
it shows that a person in charge of the voting process can manipulate which outcome he
wants simply by changing the order of majority voting. But this only works when there
is a Condorcet cycle in the social preferences. If there is a Condorcet winner, then it does
not matter where you put that candidate in the voting process; he will win every time.
And remember that this is only done with majority voting.
1 2 3
a c b
b a c
c b a
d d d
Table 12.6: A random set of individual preferences.
Suppose that we have individual preferences like those in Table 12.6. You should be able
to figure out that the social preferences under majority rule are a ≻ b, b ≻ c, c ≻ a,
a ≻ d, b ≻ d and c ≻ d. Then, if the chairman of the decision making body sets
up the voting agenda as in Figure 12.1, this will induce a certain outcome (the blue
lines mean that particular option was chosen at that decision node). At the last
decision node, the voting body will always choose c over a because c ≻ a. Knowing that
this will be the case, voting between d and Not is like voting between d and c. The body
will always prefer c to d because c ≻ d, and hence they will choose Not. Choosing
between b and Not is then like choosing between b and c, because choosing Not leads
to c in the end. Hence, they choose b over Not because b ≻ c, and b is the outcome of the
voting process.
Now let's change the agenda, or voting process, altogether. I will change it to the one
represented in Figure 12.2. The social preferences are still the same as before; only the
agenda has changed. At the end, the people vote between d and c, and they choose
c because c ≻ d. Then they choose between b and Not, which is like choosing
between b and c. They choose b since b ≻ c. Then they choose between a and Not,
which is like choosing between a and b. They choose a because a ≻ b. Hence, with
the exact same social preferences, we can get a different outcome just by changing the
structure of the way the people vote. This result confounded many people, including Pliny
the Younger, whom Riker (1988) wrote about.
[Agenda tree: first a vote between b and Not; then between d and Not; finally between a and c.]
Figure 12.1: The first agenda.
[Agenda tree: first a vote between a and Not; then between b and Not; finally between d and c.]
Figure 12.2: The second agenda.
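Both agendas can be rolled back mechanically. Encoding an agenda as an ordered list, where the last two entries form the final contest, is my own device for this sketch:

```python
# Profile of Table 12.6: a beats b, b beats c, c beats a, all beat d.
profile = [['a', 'b', 'c', 'd'], ['c', 'a', 'b', 'd'], ['b', 'c', 'a', 'd']]

def maj(a, b):
    """Winner of a pairwise majority vote between a and b."""
    wins = sum(r.index(a) < r.index(b) for r in profile)
    return a if wins > len(profile) / 2 else b

def agenda_outcome(order):
    """Roll the agenda back from the final contest to the first vote."""
    outcome = order[-1]
    for candidate in reversed(order[:-1]):
        outcome = maj(candidate, outcome)
    return outcome

print(agenda_outcome(['b', 'd', 'a', 'c']))  # b  (first agenda)
print(agenda_outcome(['a', 'b', 'd', 'c']))  # a  (second agenda)
```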
12.6 Strong Pareto Efficiency of Candidates
How do we know whether a candidate is strongly Pareto efficient or not? Remember
that, under the definition of strong Pareto efficiency, a move must make everybody
strictly better off, so a candidate is strongly Pareto dominated if every voter strictly
prefers some other candidate. We can apply this method even to the choosing of
candidates. Suppose that we have the preferences in Table 12.7.
1 2 3
A B A
B C C
C A B
D D D
Table 12.7: A strongly Pareto dominated candidate.
Then you can easily see that candidate D is always outranked by all of the other
candidates; he is everyone's least favorite. So he is definitely not SPE. You might also
think that C is not SPE because he is always outranked by another candidate. But
notice that people disagree about who is better than C. Thus, A does not strongly
Pareto dominate C, because person 2 is worse off if we move from C to A. And B does not
strongly Pareto dominate C, because person 3 is worse off if we move from C to B. So,
since C is not strongly Pareto dominated by any other candidate, it is SPE. And of course
you know that A and B are SPE.
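This check can be automated for the profile of Table 12.7 (the helper names are mine):

```python
# q strongly Pareto dominates r if every voter ranks q strictly above r.
profile = [['A', 'B', 'C', 'D'], ['B', 'C', 'A', 'D'], ['A', 'C', 'B', 'D']]

def dominates(q, r):
    return all(v.index(q) < v.index(r) for v in profile)

candidates = profile[0]
spe = [c for c in candidates
       if not any(dominates(other, c) for other in candidates if other != c)]
print(spe)  # ['A', 'B', 'C'] -- only D is strongly Pareto dominated
```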
Chapter 13
Spatial Preference Models
13.1 Median Voter Models
13.1.1 Means and Medians
First, I want to say that you are not required to know calculus in PS 30. Nevertheless, I
feel that showing you the calculus foundation of some of these topics will help some of
you understand the intuition behind them. That being said, if you are deathly afraid of
calculus, then you should try to understand as much of the following as you can, and
then make sure that you understand the last paragraph with the definition.
What is a median in probability theory? You probably learned about it in middle
school. In order to find the median of a sequence of numbers, X = {x_1, x_2, ..., x_n}, you
were told to line them up in ascending order and pick the middle one if n is odd, or
take the average of the middle two if n is even. But how are you supposed to find the
median if the distribution is not over a discrete set like X = {x_1, x_2, ..., x_n} but over a
continuous set like X = [0, 1]? A probability distribution over this space is given by a
density function f(x) for x ∈ X. Since it gives probabilities, it must be the case that
f(x) ≥ 0 for all x ∈ X and

∫_{−∞}^{∞} f(x) dx = 1
as it is for discrete distributions. A median of a distribution is a value m such that

Pr(X ≤ m) ≥ 1/2  and  Pr(X ≥ m) ≥ 1/2

if X is discrete, and

∫_{−∞}^{m} f(x) dx = ∫_{m}^{∞} f(x) dx = 1/2

if X is continuous. Is this the same thing as a mean? No. The mean is defined as

∫_{−∞}^{∞} x f(x) dx

which you can tell is fundamentally different from the median. The case when they are
the same is when f(x) is symmetric.
If you did not understand most of what I wrote above, don't worry about it. What is
important is that you have a good understanding of what a median is for a continuous
distribution.

A median is a number, m, that divides the area under the curve of f(x) in half.

Most of the distributions that you will see in this class will be symmetric, in which
case it is easy to see where the middle of the distribution is; otherwise it should still be
easy to figure out what the median is. Many of the examples that we will do will use
a uniform distribution.
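The continuous-case condition can also be solved numerically, for example by bisection on the cumulative distribution function. A sketch, using the uniform distribution on [0, 10] from the Hotelling-Downs example:

```python
def median(cdf, lo, hi, tol=1e-9):
    """Find m with cdf(m) = 1/2 by bisection (cdf must be increasing)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) < 0.5 else (lo, mid)
    return (lo + hi) / 2

def uniform_cdf(x):
    return x / 10          # density f(x) = 1/10 on [0, 10]

print(round(median(uniform_cdf, 0, 10), 6))  # 5.0
```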
13.1.2 The Hotelling-Downs Model
Downs (1957) came up with a unique idea (originally constructed by Hotelling) about
electoral competition. The idea is that two or more candidates compete in an election
over a continuous policy space. Let this policy space be denoted by Θ. In general, we
might consider a multidimensional policy space, but here we will only consider
unidimensional policy spaces: Θ ⊆ R. There is an infinite number of voters, each with a
unique ideal policy: the point from which any movement to the left or to the right makes
that voter less happy. Voters are distributed over this policy space according to some
continuous distribution f(θ) for θ ∈ Θ. Each candidate states his policy position, the policy
that he will implement if elected, on the policy space. Each citizen votes for the candidate
whose position is closest to his own. If two or more candidates state their policy positions
at the same place, then the vote is split equally amongst them.
[Policy line from 0 to 10 with median M = 5, C_1 at 2, C_2 at 6 and distance d = 4 between them.]
Figure 13.1: The Hotelling-Downs model.
Consider the example given in Figure 13.1. Here we have Θ = [0, 10], and the voters are
distributed uniformly over this policy space: f(θ) = 1/10 for θ ∈ [0, 10]. The median of
this distribution is M = 5. Suppose that there are two candidates, C_1 and C_2. In this
case, C_1 states his policy position at 2 and C_2 states his policy position at 6. We know
that all of the people from 0 to 2 will vote for candidate C_1 and all of the people from 6
to 10 will vote for C_2. The tricky part is the people in between, from 2 to 6. The distance
between these two positions is d = |2 − 6| = 4, and half of the distance is d/2 = 2.
Therefore, the halfway point between 2 and 6 is 2 + d/2 = 4. So all of the people to the
left of 4 will vote for candidate C_1 and all of the people to the right of 4 will vote for C_2.
So C_2 will win if the candidates state these policy positions. But can C_1 improve his
position and avoid losing the election? The answer is yes. He can move to just a little bit
to the left of 6, where C_2 currently is, but to the right of 5. Then C_1 will get the majority
of the votes and win the election, and C_2 will lose. But then C_2 will move to just a little
bit to the left of C_1, but still to the right of 5, and receive the majority of the votes
needed to win. But then C_1 will move to the left of C_2 and to the right of 5 and win a
majority. They will continue this process until they both end up at the median, M. The
intuition you gained from this example should help you understand the following famous
result in social choice theory.
THEOREM: When the policy space Θ is one-dimensional and each voter has single
peaked policy preferences over Θ, there exists a unique point that all of the
candidates will state as their policy position: the median of the distribution
of voters.

This theorem has been widely applied throughout both economics and political science.
Single peakedness simply means that voters do not have competing ideal points.
Take, for example, the preferences of voter i over the policy space Θ represented
by Figure 13.2(a). Here there is a unique maximum of the utility function, which means
that voter i has a unique ideal point. However, the utility function represented in Figure
13.2(b) has two maximum points over the policy space, so these preferences do not have
a unique ideal point.
[Two utility curves u_i(θ) over θ: panel (a) shows a single peak; panel (b) shows two peaks.]
Figure 13.2: Single peaked and non-single peaked preferences.
Chapter 14
Political Power
14.1 Shapley-Shubik Power Index
Suppose that there are three parties in a legislature: A, B and C, with 5, 4 and 3
delegates respectively. So there are 12 delegates in total, and a majority in the legislature
is 7. We need to list all of the different possible orders of the parties, as is done in
Table 14.1; there are 6 in all. The table also lists the delegates in the order of the
parties. Then start with the first delegate on the left and count until you get to 7.
Whichever party you land on when you reach seven, all the parties up to that point are
needed to form a winning coalition and the parties after it are not. The delegates that
are needed in a winning coalition are in bold face. For example, in the first row of
Table 14.1, the seventh delegate is a B party member. So all of the delegates in party B
and those before them (in party A) are required to win a majority given that order. Thus,
B is the pivot. The pivot is the party that brings the coalition from non-winning to
winning status. The Shapley-Shubik power index of a party is the number of orders in
which it is the pivot divided by the number of different possible orders. Thus, the
Shapley-Shubik power index for A is 2/6 = 1/3. Similarly, the indices for B and C are
also 1/3.
Order Delegates Pivot
ABC aaaaabbbbccc B
ACB aaaaacccbbbb C
CAB cccaaaaabbbb A
CBA cccbbbbaaaaa B
BCA bbbbcccaaaaa C
BAC bbbbaaaaaccc A
Table 14.1: All the different types of possible coalitions.
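The counting procedure can be automated by brute force over all orderings of the parties (a sketch; quota = 7 matches the 12-delegate example):

```python
from itertools import permutations
from math import factorial

seats = {'A': 5, 'B': 4, 'C': 3}
quota = 7

# Count, for each party, the orderings in which its seats push the
# running total up to the quota (i.e., in which it is the pivot).
counts = {p: 0 for p in seats}
for order in permutations(seats):
    total = 0
    for party in order:
        total += seats[party]
        if total >= quota:      # this party is the pivot in this order
            counts[party] += 1
            break

index = {p: counts[p] / factorial(len(seats)) for p in seats}
print(counts)  # {'A': 2, 'B': 2, 'C': 2}
print(index)   # each party has power index 2/6 = 1/3
```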
Part III
Appendices
Appendix A
Practice Problems
A.1 Reversing the Incumbent-Challenger Game
Take Kathy's incumbent-challenger game and simply switch the order of the movers, so
that the challenger moves first and the incumbent moves second.
1. Draw this game.
2. Write out the rollback equilibrium.
A.2 A Standard Extensive Form Game
For the game in Figure A.1, you should do the following.
1. Write out all of the strategies for both players.
2. State the rollback equilibrium in the correct format. Write it out; don't just shade the
tree and then say, "see shading."
3. State the equilibrium path.
4. State which outcomes are Pareto efficient.
A.3 A Different Extensive Form Game
For the game in Figure A.2, you should do the following.
1. Write out all of the strategies for both players.
2. State the rollback equilibrium in the correct format. Write it out; don't just shade the
tree and then say, "see shading."
[Game tree: player 1 chooses Up or Down. After Up, player 2 chooses Left (payoffs 0,6) or Right (2,5). After Down, player 2 chooses Left (4,1) or Right (9,4).]
Figure A.1: A regular game.
3. State the equilibrium path.
4. State which outcomes are Pareto efficient.
[Game tree with moves U, D and A, B for player 1, moves E, F for player 2, and payoffs 8,3; 4,0; 3,2; 5,6.]
Figure A.2: A different kind of extensive form game.
A.4 A Little More Challenging Extensive Form Game
For the game in Figure A.3, you should do the following.
1. Write out all of the strategies for both players.
2. State the rollback equilibrium in the correct format. Write it out; don't just shade the
tree and then say, "see shading."
[Game tree with moves B, N and D, R for player 1, moves A, L and K, H for player 2, and payoffs 2,5; 8,3; 7,0; 9,10; 0,1.]
Figure A.3: A bit more challenging extensive form game.
3. State the equilibrium path.
4. State which outcomes are Pareto efficient.
A.5 The Budget Game
There are two players: Democrats and Republicans. They are deciding the budget for
a particular year. Assume that the Republicans hold the House and Senate and the
Democrats hold the Presidency. First, the Republicans decide whether they want big
budget cuts or small budget cuts. Then the Democrats choose whether to pass and sign
(accept) the Republicans' budget bill or veto (reject) it. The Republicans favor a budget
with big cuts and the Democrats favor one with small cuts. Assume that both parties
receive a utility of 5 if their favorite budget is passed and a utility of 2 if their least
favorite budget is passed. If the Democrats veto the bill, the federal government will
shut down. If the government shuts down, the parties are unsure about whom the public
will blame for the shutdown. In particular, both of them believe that with probability p
the public will blame the Republicans and with probability 1 − p it will blame the
Democrats. Assume that each party receives a utility of 1 for being blamed for the
shutdown and a utility of 3 if the government shuts down but they are not blamed.
1. Draw this game tree.
2. State the strategies of each player.
3. State the rollback equilibrium.
4. What is the probability of an ex-post mistake?
5. State the equilibrium path using game theoretic notation.
6. State the equilibrium path using the simplest words you can think of that adequately
describe the situation (this part is just for fun).
A.6 A Game with a Nature Node
For the game in Figure A.4, do the following.
1. Write out all of the strategies for both players.
2. Write out the rollback equilibrium.
3. What is the probability of an ex-post mistake?
4. State which outcomes are Pareto optimal.
[Game tree with a nature node: N moves G with probability 0.4 or W with probability 0.6, player 1 chooses A or B, player 2 chooses C or D, and the payoffs are 4,-5; -2,2; 5,3; 2,0; 6,-2; 3,7.]
Figure A.4: Nature node.
A.7 Another Standard Extensive Form Game
For the game in Figure A.5, you should do the following.
1. Transform the extensive form game into a normal form game.
2. Find any strongly dominated strategies for both players. If there are none for a
player, then say so.
3. Find any pure strategy Nash equilibria of the game.
4. State which pure strategy Nash equilibria are also rollback equilibria and which are
not.
[Game tree: player 1 chooses Up or Down. After Up, player 2 chooses Left (payoffs 0,6) or Right (2,5). After Down, player 2 chooses Left (4,1) or Right (9,4).]
Figure A.5: A regular game.
A.8 Another Different Extensive Form Game
For the game in Figure A.6, you should do the following.
1. Transform the extensive form game into a normal form game.
2. Find any strongly dominated strategies for both players. If there are none for a
player, then say so.
3. Find any pure strategy Nash equilibria of the game.
4. State which pure strategy Nash equilibria are also rollback equilibria and which are
not.
[Figure A.6 shows a game tree: player 1 chooses U or D. Choosing D ends the game with payoff (8, 3). After U, player 2 chooses E, giving (4, 0), or F; after F, player 1 chooses A, giving (5, 6), or B, giving (3, 2).]
Figure A.6: A different kind of extensive form game.
A.9 A Little More Challenging Extensive Form Game
For the game in Figure A.7, you should do the following.
1. Transform the extensive form game into a normal form game.
2. Find any strongly dominated strategies for both players. If there are none for a
player, then say so.
3. Find any pure strategy Nash equilibria of the game.
4. State which pure strategy Nash equilibria are also subgame perfect Nash equilibria.
A.10 A Normal Form Game
For the game in Figure A.8, do the following.
1. Iteratively eliminate strongly dominated strategies. (You can skip this because this
type of problem will probably not be on the final exam.)
2. Find any pure strategy Nash equilibrium.
A.11 A Different Normal Form Game
For the game in Figure A.9, do the following.
1. State which strategies are strongly dominated.
[Figure A.7 shows a game tree: player 1 chooses B or N. After B, player 2 chooses A, giving (2, 5), or L, giving (8, 3). After N, player 2 chooses H, giving (7, 0), or K; after K, player 1 chooses R, giving (0, 1), or D, giving (9, 10).]
Figure A.7: A bit more challenging extensive form game.
Pl. 1 \ Pl. 2     W      X      Y      Z
A               -3,0   3,-1   -2,1    0,4
B               -2,8    5,6   -1,9    1,5
C                4,3    1,2    3,0    2,2
D               -5,7    6,1   7,-2    1,2
Figure A.8: A normal form game.
2. Find any pure strategy Nash equilibrium.
3. Find any mixed strategy Nash equilibria.
4. What is the expected utility of player 1 if they both play the mixed strategy? What
is the expected utility of player 2 if they both play the mixed strategy?
A.12 Yet Another Normal Form Game
For the game in Figure A.10, do the following.
1. State which strategies are strongly dominated.
2. Find any pure strategy Nash equilibrium.
3. Find any mixed strategy Nash equilibria.
Player 1 \ Player 2     X      Y
A                      3,1   7,-2
B                      9,0    1,2
Figure A.9: A simple normal form game.
Player 1 \ Player 2     X      Y
A                      9,2   3,-2
B                      2,0    1,4
Figure A.10: Yet another normal form game.
A.13 Repeated Prisoner's Dilemma
For the game in Figure A.11, solve for the minimum threshold discount factor at which
both players want to cooperate.
Player 1 \ Player 2     C      D
C                      8,8    2,9
D                      9,2    3,3
Figure A.11: The prisoner's dilemma.
A.14 Asymmetric Repeated Prisoner's Dilemma
For the game in Figure A.12, solve for the minimum threshold discount factor for each
player.
A.15 A Game with Information Sets
For the game in Figure A.13, do the following.
1. State which outcomes are strongly Pareto efficient.
2. Write out all the strategies for each player.
3. Transform the extensive form game into a normal form game.
Player 1 \ Player 2     C      D
C                      6,2    2,9
D                      7,0    5,1
Figure A.12: The asymmetric prisoner's dilemma.
4. Find all of the pure strategy Nash equilibria from the normal form game.
[Figure A.13 shows a game tree: player 1 chooses A, B, or C. Player 2's nodes after A and B lie in a single information set, where player 2 chooses X or Y; after C, player 2 chooses W or Z. The payoffs are (1, 3) after (A, X), (9, 1) after (A, Y), (0, 4) after (B, X), (2, 2) after (B, Y), (9, 5) after (C, W), and (3, 4) after (C, Z).]
Figure A.13: An extensive form game with information sets.
Appendix B
Practice Problems Solutions
B.1 Reversing the Incumbent-Challenger Game
Since you reversed the order of the players, you must also reverse the order of the payoffs.
1. The game tree.
[The game tree: the challenger C moves first, choosing rf or n; the incumbent I then chooses RF or N. After rf, RF gives (−2, 5) and N gives (5, 0). After n, RF gives (0, 5) and N gives (0, 7). Payoffs are listed as (C, I).]
2. Rollback: (n; RF if rf, N if n).
B.2 A Standard Extensive Form Game
1. Strategies
(a) Player 1
i. Up
ii. Down
(b) Player 2
i. Left if Up, Left if Down
ii. Left if Up, Right if Down
iii. Right if Up, Left if Down
iv. Right if Up, Right if Down
2. Rollback: (Down; Left if Up, Right if Down).
3. Equilibrium path: player 1 chooses Down and then player 2 chooses Right.
4. The outcomes with payoffs (0, 6), (2, 5) and (9, 4) are Pareto efficient.
B.3 A Different Extensive Form Game
1. Strategies
(a) Player 1
i. U, A if F
ii. U, B if F
iii. D, A if F
iv. D, B if F
(b) Player 2
i. E
ii. F
2. Rollback: (D, A if F; F).
3. The equilibrium path: player 1 chooses D, player 2 chooses F and then player 1
chooses A.
4. The outcomes with payoffs (8, 3) and (5, 6) are Pareto optimal.
B.4 A Little More Challenging Extensive Form Game
1. Strategies
(a) Player 1
i. B, R if K
ii. B, D if K
iii. N, R if K
iv. N, D if K
(b) Player 2
i. A if B, H if N
ii. A if B, K if N
iii. L if B, H if N
iv. L if B, K if N
2. Rollback: (N, D if K; A if B, K if N).
3. Equilibrium path: player 1 chooses N, player 2 chooses K and then player 1 chooses
D.
4. The outcome with payoff (9, 10) is the only Pareto efficient outcome in the game.
B.5 The Budget Game
1. The game tree.
[The game tree: the Republicans choose S or B. After S, the Democrats choose A, giving (2, 5), or V, after which Nature chooses BR with probability p, giving (1, 3), or BD with probability 1 − p, giving (3, 1). After B, the Democrats choose A, giving (5, 2), or V, after which Nature chooses BR with probability p, giving (1, 3), or BD with probability 1 − p, giving (3, 1). Payoffs are listed as (Republicans, Democrats).]
2. Strategies
(a) Republicans
i. S
ii. B
(b) Democrats
Notation: R = Republican, D = Democrat, S = Small Budget Cuts, B = Big Budget Cuts, A = Accept, V = Veto, BR = Blame Republicans, and BD = Blame Democrats.
i. A if S, A if B
ii. A if S, V if B
iii. V if S, A if B
iv. V if S, V if B
3. The rollback equilibrium depends on the value of p. We know that the Democrats
will always choose A if S. But if the Republicans choose B, then the Democrats will
choose A if

2 > 3p + (1 − p),

which holds when p < 1/2. If p < 1/2, the Democrats' strategy is (A if S, A if B) and
thus the Republicans choose B. If p > 1/2, the Democrats' strategy is (A if S, V if B),
and the Republicans choose S if

2 > p + 3(1 − p),

which holds when p > 1/2. Thus, both choices are consistent with the intervals of p.
Therefore, the rollback equilibrium will have two cases.

Case 1 If p < 1/2, the rollback equilibrium is (B; A if S, A if B).

Case 2 If p > 1/2, the rollback equilibrium is (S; A if S, V if B).
4. This is somewhat complicated but still a very workable problem. The ex post mis-
take depends on p.

(a) If p < 1/2, then the Democrats make an ex post mistake when they choose A
after the Republicans choose B and Nature chooses BR, since vetoing would
then have given them a utility of 3 rather than 2. This mistake happens with
probability p. The Republicans can never make an ex post mistake given the
Democrats' strategy.

(b) If p > 1/2, then the Democrats can never make an ex post mistake because
choosing A after the Republicans choose S always gives a higher utility than
choosing V. The Republicans make an ex post mistake when they choose S,
the Democrats choose A, and Nature chooses BD, since a vetoed B would then
have given them a utility of 3 rather than 2. This ex post mistake happens
with probability 1 − p.
5. Equilibrium path in game theory notation.

(a) If p < 1/2, the Republicans choose B and then the Democrats choose A.

(b) If p > 1/2, the Republicans choose S and then the Democrats choose A.

6. The equilibrium path in plain words: if the Democrats are more likely than not to
be blamed for the government shutdown, then the Republicans pass a large budget
cut bill and the Democrats sign it; otherwise, the Republicans pass a small budget
cut bill and the Democrats sign it.
B.6 A Game with a Nature Node
1. Strategies
(a) Player 1
i. A
ii. B
(b) Player 2
i. C if G, C if W, C if B
ii. C if G, C if W, D if B
iii. C if G, D if W, C if B
iv. C if G, D if W, D if B
v. D if G, C if W, C if B
vi. D if G, C if W, D if B
vii. D if G, D if W, C if B
viii. D if G, D if W, D if B
2. We know that player 2's strategy will be (D if G, C if W, D if B). Then player 1 has
to take his expected utility for playing A based on this strategy. If he plays A, there
is a 0.4 probability that he will receive a utility of −2 and a 0.6 probability that he
will receive a utility of 5. Thus, his expected utility is

Eu1(A) = 0.4(−2) + 0.6(5) = 2.2.

His utility if he chooses B is 3. Thus, since 2.2 < 3, he chooses B. So the rollback
equilibrium of this game is (B; D if G, C if W, D if B).
3. Player 1 makes an ex post mistake in this game when he chooses B and Nature
chooses W, since player 2 would then have chosen C and player 1 would have
received 5 rather than 3. This ex post mistake happens with probability 0.6.
4. The outcomes with utilities (5, 3), (6, −2) and (3, 7) are Pareto optimal.
B.7 Another Standard Extensive Form Game
1. The transformed game
Pl. 1 \ Pl. 2   (Left,Left)  (Left,Right)  (Right,Left)  (Right,Right)
Up                  0,6          0,6           2,5            2,5
Down                4,1          9,4           4,1            9,4
2. Down strongly dominates Up for player 1. For player 2, (Left,Right) strongly domi-
nates (Right,Left).

3. (Down; (Left,Right)) and (Down; (Right,Right)) are the pure strategy Nash equilibria.

4. (Down; (Left,Right)) is the only Nash equilibrium that is also a rollback equilibrium.
Hence, (Down; (Right,Right)) is not a rollback equilibrium.
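Finding the pure strategy Nash equilibria in part 3 amounts to scanning every cell for profitable unilateral deviations, which is easy to automate for a matrix this small. A sketch (the dictionary encoding and the abbreviations LL, LR, RL, RR for player 2's four strategies are ours):

```python
# Normal form of Figure A.5; LL = (Left,Left), LR = (Left,Right), etc.
payoffs = {
    ("Up", "LL"): (0, 6), ("Up", "LR"): (0, 6),
    ("Up", "RL"): (2, 5), ("Up", "RR"): (2, 5),
    ("Down", "LL"): (4, 1), ("Down", "LR"): (9, 4),
    ("Down", "RL"): (4, 1), ("Down", "RR"): (9, 4),
}
rows = ["Up", "Down"]
cols = ["LL", "LR", "RL", "RR"]

def is_nash(s1, s2):
    """Neither player can gain by a unilateral deviation."""
    u1, u2 = payoffs[(s1, s2)]
    return (all(payoffs[(r, s2)][0] <= u1 for r in rows)
            and all(payoffs[(s1, c)][1] <= u2 for c in cols))

print([(r, c) for r in rows for c in cols if is_nash(r, c)])
# [('Down', 'LR'), ('Down', 'RR')]
```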
[Figure B.1 repeats the game tree of Figure A.5.]
Figure B.1: A regular game.
[Figure B.2 repeats the game tree of Figure A.6.]
Figure B.2: A different kind of extensive form game.
B.8 Another Different Extensive Form Game
1. The transformed game
Pl. 1 \ Pl. 2     F      E
(U, A)          5,6    4,0
(U, B)          3,2    4,0
(D, A)          8,3    8,3
(D, B)          8,3    8,3
2. (D, A) and (D, B) strongly dominate (U, A) and (U, B) for player 1. Player 2 has no
strongly dominated strategies.
3. The pure strategy Nash are ((D, A) ; F), ((D, A) ; E), ((D, B) ; F) and ((D, B) ; E).
4. ((D, A) ; F) is the only Nash that is also subgame perfect.
B.9 A Little More Challenging Extensive Form Game
1. The transformed game
Pl. 1 \ Pl. 2   (A, H)  (A, K)  (L, H)  (L, K)
(B, R)            2,5     2,5     8,3     8,3
(B, D)            2,5     2,5     8,3     8,3
(N, R)            7,0     0,1     7,0     0,1
(N, D)            7,0    9,10     7,0    9,10
2. Player 1 has no strongly dominated strategies. For player 2, (A, K) strongly domi-
nates (L, H).
3. The pure strategy Nash equilibria are ((N, D) ; (A, K)) and ((N, D) ; (L, K))
4. ((N, D) ; (A, K)) is the only Nash equilibrium that is also subgame perfect.
B.10 A Normal Form Game
1. We need to list the order of elimination.
(a) W strongly dominates X.
(b) B strongly dominates A.
(c) W strongly dominates Z.
(d) C strongly dominates B.
(e) W strongly dominates Y.
(f) C strongly dominates D.
2. From the theorem in the section notes, we know that the strategy pair (C, W) is the
only Nash equilibrium of the game. We can also find this out using the algorithm.
B.11 A Different Normal Form Game
1. None for either player.
2. None.
3. Let p be the probability that player 1 chooses A and let q be the probability that
player 2 chooses X. If we were to draw the matrix, it would look like this.

                q      1 − q
                X        Y
p      A       3,1     7,−2
1 − p  B       9,0      1,2
Then

3q + 7(1 − q) = 9q + 1(1 − q)
7 − 4q = 8q + 1
q = 1/2

and

1p + 0(1 − p) = −2p + 2(1 − p)
p = 2 − 4p
p = 2/5.
Hence, the formal way to express the MSNE is:

(a) Player 1 plays A with probability 2/5 and plays B with probability 3/5.

(b) Player 2 plays X with probability 1/2 and plays Y with probability 1/2.
4. If both players play the MSNE, then the expected utility for player 1 is

EU1(MSNE) = (2/5)[3(1/2) + 7(1/2)] + (3/5)[9(1/2) + 1(1/2)] = 5

and the expected utility for player 2 is

EU2(MSNE) = (1/2)[1(2/5) + 0(3/5)] + (1/2)[(−2)(2/5) + 2(3/5)] = 0.4.
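Both indifference conditions and the expected utilities above can be double-checked with exact rational arithmetic. A sketch using Python's fractions module (the payoff dictionary is ours):

```python
from fractions import Fraction

# Figure A.9 payoffs: U[(row, col)] = (u1, u2)
U = {("A", "X"): (3, 1), ("A", "Y"): (7, -2),
     ("B", "X"): (9, 0), ("B", "Y"): (1, 2)}

p = Fraction(2, 5)  # prob. player 1 plays A
q = Fraction(1, 2)  # prob. player 2 plays X

def eu1(row):  # player 1's expected utility from a pure row
    return q * U[(row, "X")][0] + (1 - q) * U[(row, "Y")][0]

def eu2(col):  # player 2's expected utility from a pure column
    return p * U[("A", col)][1] + (1 - p) * U[("B", col)][1]

print(eu1("A"), eu1("B"))  # both 5: player 1 is indifferent
print(eu2("X"), eu2("Y"))  # both 2/5: player 2 is indifferent
```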
B.12 Yet Another Normal Form Game
1. Strategy A strongly dominates strategy B for player 1. Player 2 has no strongly
dominated strategies.
2. The strategy pair (A, X) is a pure strategy Nash equilibrium.
3. Let p be the probability that player 1 plays A and q be the probability that player 2
plays X. Then

9q + 3(1 − q) = 2q + 1(1 − q)
6q + 3 = q + 1
q = −2/5,

which violates the definition of a probability, which requires that 0 ≤ q ≤ 1. Thus,
there does not exist a probability such that player 1 is indifferent between playing A
and B. You probably figured this out when you found that B is strongly dominated.
Hence, since player 1 will always play A, player 2 does not want to mix either and
will always play X. A MSNE does not exist.
B.13 Repeated Prisoner's Dilemma
We must solve the following inequality:

8(1/(1 − δ)) ≥ 9 + 3(δ/(1 − δ))
δ ≥ 1/6.

Thus, both players will cooperate if δ ≥ 1/6.
B.14 Asymmetric Repeated Prisoner's Dilemma
We must solve the following inequality for player 1:

6(1/(1 − δ1)) ≥ 7 + 5(δ1/(1 − δ1))
δ1 ≥ 1/2,

and then solve the following inequality for player 2:

2(1/(1 − δ2)) ≥ 9 + 1(δ2/(1 − δ2))
δ2 ≥ 7/8.

Thus, player 1 will cooperate if δ1 ≥ 1/2 and player 2 will cooperate if δ2 ≥ 7/8.
B.15 A Game With Information Sets
1. The outcomes with utilities (9, 1) and (9, 5) are the only strongly Pareto efficient
outcomes of the game.
2. Strategies
(a) Player 1
i. A
ii. B
iii. C
(b) Player 2
i. if A or B then X, if C then Z
ii. if A or B then X, if C then W
iii. if A or B then Y, if C then Z
iv. if A or B then Y, if C then W
3. Let (if A or B then X, if C then Z) = (X, Z). Then the transformed game is

Pl. 1 \ Pl. 2   (X, Z)  (X, W)  (Y, Z)  (Y, W)
A                 1,3     1,3     9,1     9,1
B                 0,4     0,4     2,2     2,2
C                 3,4     9,5     3,4     9,5
4. The two pure strategy Nash equilibria of this game are (C; (X, W)) and (C; (Y, W)).