
The Emergence of Cooperation among Egoists

Author(s): Robert Axelrod


Source: The American Political Science Review, Vol. 75, No. 2 (Jun., 1981), pp. 306-318
Published by: American Political Science Association
Stable URL: http://www.jstor.org/stable/1961366

The Emergence of Cooperation among Egoists
ROBERT AXELROD
University of Michigan

This article investigates the conditions under which cooperation will emerge in a world of egoists without central authority. This problem plays an important role in such diverse fields as political philosophy, international politics, and economic and social exchange. The problem is formalized as an iterated Prisoner's Dilemma with pairwise interaction among a population of individuals. Results from three approaches are reported: the tournament approach, the ecological approach, and the evolutionary approach. The evolutionary approach is the most general since all possible strategies can be taken into account. A series of theorems is presented which show: (1) the conditions under which no strategy can do any better than the population average if the others are using the reciprocal cooperation strategy of TIT FOR TAT, (2) the necessary and sufficient conditions for a strategy to be collectively stable, and (3) how cooperation can emerge from a small cluster of discriminating individuals even when everyone else is using a strategy of unconditional defection.

Under what conditions will cooperation emerge in a world of egoists without central authority? This question has played an important role in a variety of domains including political philosophy, international politics, and economic and social exchange. This article provides new results which show more completely than was previously possible the conditions under which cooperation will emerge. The results are more complete in two ways. First, all possible strategies are taken into account, not simply some arbitrarily selected subset. Second, not only are equilibrium conditions established, but also a mechanism is specified which can move a population from noncooperative to cooperative equilibrium.

The situation to be analyzed is the one in which narrow self-maximization behavior by each person leads to a poor outcome for all. This is the famous Prisoner's Dilemma game. Two individuals can each either cooperate or defect. No matter what the other does, defection yields a higher payoff than cooperation. But if both defect, both do worse than if both cooperated. Figure 1 shows the payoff matrix with sample utility numbers attached to the payoffs. If the other player cooperates, there is a choice between cooperation which yields R (the reward for mutual cooperation) or defection which yields T (the temptation to defect). By assumption, T > R, so it pays to defect if the other player cooperates. On the other hand, if the other player defects, there is a choice between cooperation which yields S (the sucker's payoff), or defection which yields P (the punishment for mutual defection). By assumption, P > S, so it pays to defect if the other player defects. Thus no matter what the other player does, it pays to defect. But if both defect, both get P rather than the R they could both have got if both had cooperated. But R is assumed to be greater than P. Hence the dilemma. Individual rationality leads to a worse outcome for both than is possible.

To insure that an even chance of exploitation or being exploited is not as good an outcome as mutual cooperation, a final inequality is added in the standard definition of the Prisoner's Dilemma. This is just R > (T+S)/2.

Thus two egoists playing the game once will both choose their dominant choice, defection, and get a payoff, P, which is worse for both than the R they could have got if they had both cooperated. If the game is played a known finite number of times, the players still have no incentive to cooperate.

* I would like to thank John Chamberlin, Michael Cohen, Bernard Grofman, William Hamilton, John Kingdon, Larry Mohr, John Padgett and Reinhard Selten for their help, and the Institute of Public Policy Studies for its financial support.

                   Cooperate        Defect
    Cooperate      R=3, R=3         S=0, T=5
    Defect         T=5, S=0         P=1, P=1

    T > R > P > S
    R > (S+T)/2

Source: Robert Axelrod, "Effective Choice in the Prisoner's Dilemma," Journal of Conflict Resolution 24 (1980): 3-25; "More Effective Choice in the Prisoner's Dilemma," Journal of Conflict Resolution 24 (1980): 379-403.

Note: The payoffs to the row chooser are listed first.

Figure 1. A Prisoner's Dilemma
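The logic of the payoff matrix in Figure 1 can be checked mechanically. The following minimal sketch (not part of the original article; the variable and dictionary names are only illustrative) encodes the sample payoffs and verifies both that defection dominates and that mutual defection is worse for both players than mutual cooperation:

    # A minimal sketch of the one-shot Prisoner's Dilemma with the sample payoffs
    # of Figure 1 (T=5, R=3, P=1, S=0). Names are illustrative only.

    T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff
    assert T > R > P > S and R > (S + T) / 2  # the two defining inequalities

    # Payoff to the row player for each (row move, column move) pair.
    PAYOFF = {
        ("C", "C"): R, ("C", "D"): S,
        ("D", "C"): T, ("D", "D"): P,
    }

    # Defection is the dominant choice: it does better than cooperation
    # against either move by the other player.
    for other in ("C", "D"):
        assert PAYOFF[("D", other)] > PAYOFF[("C", other)]

    # Yet mutual defection (P each) is worse for both than mutual cooperation (R each).
    assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
    print("Dominant move is D, but (D,D) pays", PAYOFF[("D", "D")],
          "while (C,C) pays", PAYOFF[("C", "C")])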

This is certainly true on the last move since there is no future to influence. On the next-to-last move they will also have no incentive to cooperate since they can anticipate mutual defection on the last move. This line of reasoning implies that the game will unravel all the way back to mutual defection on the first move of any sequence of plays which is of known finite length (Luce and Raiffa, 1957, pp. 94-102). This reasoning does not apply if the players will interact an indefinite number of times. With an indefinite number of interactions, cooperation can emerge. This article will explore the precise conditions necessary for this to happen.

The importance of this problem is indicated by a brief explanation of the role it has played in a variety of fields.

1. Political Philosophy. Hobbes regarded the state of nature as equivalent to what we now call a two-person Prisoner's Dilemma, and he built his justification for the state upon the purported impossibility of sustained cooperation in such a situation (Taylor, 1976, pp. 98-116). A demonstration that mutual cooperation could emerge among rational egoists playing the iterated Prisoner's Dilemma would provide a powerful argument that the role of the state should not be as universal as some have argued.

2. International Politics. Today nations interact without central control, and therefore the conclusions about the requirements for the emergence of cooperation have empirical relevance to many central issues of international politics. Examples include many varieties of the security dilemma (Jervis, 1978) such as arms competition and its obverse, disarmament (Rapoport, 1960); alliance competition (Snyder, 1971); and communal conflict in Cyprus (Lumsden, 1973). The selection of the American response to the Soviet invasion of Afghanistan in 1979 illustrates the problem of choosing an effective strategy in the context of a continuing relationship. Had the United States been perceived as continuing business as usual, the Soviet Union might have been encouraged to try other forms of noncooperative behavior later. On the other hand, any substantial lessening of U.S. cooperation risked some form of retaliation which could then set off counter-retaliation, setting up a pattern of mutual defection that could be difficult to get out of. Much of the domestic debate over foreign policy is over problems of just this type.

3. Economic and social exchange. Our everyday lives contain many exchanges whose terms are not enforced by any central authority. Even in economic exchanges, business ethics are maintained by the knowledge that future interactions are likely to be affected by the outcome of the current exchange.

4. International political economy. Multinational corporations can play off host governments to lessen their tax burdens in the absence of coordinated fiscal policies between the affected governments. Thus the commodity exporting country and the commodity importing country are in an iterated Prisoner's Dilemma with each other, whether they fully appreciate it or not (Laver, 1977).

In the literatures of these areas, there has been a convergence on the nature of the problem to be analyzed. All agree that the two-person Prisoner's Dilemma captures an important part of the strategic interaction. All agree that what makes the emergence of cooperation possible is the possibility that interaction will continue. The tools of the analysis have been surprisingly similar, with game theory serving to structure the enterprise.

As a paradigm case of the emergence of cooperation, consider the development of the norms of a legislative body, such as the United States Senate. Each senator has an incentive to appear effective for his or her constituents even at the expense of conflicting with other senators who are trying to appear effective for their constituents. But this is hardly a zero-sum game since there are many opportunities for mutually rewarding activities between two senators. One of the consequences is that an elaborate set of norms, or folkways, have emerged in the Senate. Among the most important of these is the norm of reciprocity, a folkway which involves helping out a colleague and getting repaid in kind. It includes vote trading, but it extends to so many types of mutually rewarding behavior that "it is not an exaggeration to say that reciprocity is a way of life in the Senate" (Matthews, 1960, p. 100; see also Mayhew, 1974).

Washington was not always like this. Early observers saw the members of the Washington community as quite unscrupulous, unreliable, and characterized by "falsehood, deceit, treachery" (Smith, 1906, p. 190). But by now the practice of reciprocity is well established. Even the significant changes in the Senate over the last two decades toward more decentralization, more openness, and more equal distribution of power have come without abating the folkway of reciprocity (Ornstein, Peabody and Rhode, 1977). I will show that we do not need to assume that senators are more honest, more generous, or more public-spirited than in earlier years to explain how cooperation based on reciprocity has emerged and proven stable. The emergence of cooperation can be explained as a consequence of senators pursuing their own interests.

The approach taken here is to investigate how individuals pursuing their own interests will act, and then see what effects this will have for the system as a whole.


Put another way, the approach is to make some assumptions about micro-motives, and then deduce consequences for macro-behavior (Schelling, 1978). Thinking about the paradigm case of a legislature is a convenience, but the same style of reasoning can apply to the emergence of cooperation between individuals in many other political settings, or even to relations between nations. While investigating the conditions which foster the emergence of cooperation, one should bear in mind that cooperation is not always socially desirable. There are times when public policy is best served by the prevention of cooperation - as in the need for regulatory action to prevent collusion between oligopolistic business enterprises.

The basic situation I will analyze involves pairwise interactions.[1] I assume that the player can recognize another player and remember how the two of them have interacted so far. This allows the history of the particular interaction to be taken into account by a player's strategy.

[1] A single player may be interacting with many others, but the player is interacting with them one at a time. The situations which involve more than pairwise interaction can be modeled with the more complex n-person Prisoner's Dilemma (Olson, 1965; G. Hardin, 1968; R. Hardin, 1971; Schelling, 1973). The principal application is to the provision of collective goods. It is possible that the results from pairwise interactions will help suggest how to undertake a deeper analysis of the n-person case as well, but that must wait. For a parallel treatment of the two-person and n-person cases, see Taylor (1976, pp. 29-62).

A variety of ways to resolve the dilemma of the Prisoner's Dilemma have been developed. Each involves allowing some additional activity which alters the strategic interaction in such a way as to fundamentally change the nature of the problem. The original problem remains, however, because there are many situations in which these remedies are not available. I wish to consider the problem in its fundamental form.

1. There is no mechanism available to the players to make enforceable threats or commitments (Schelling, 1960). Since the players cannot make commitments, they must take into account all possible strategies which might be used by the other player, and they have all possible strategies available to themselves.

2. There is no way to be sure what the other player will do on a given move. This eliminates the possibility of metagame analysis (Howard, 1971) which allows such options as "make the same choice as the other player is about to make." It also eliminates the possibility of reliable reputations such as might be based on watching the other player interact with third parties.

3. There is no way to change the other player's utilities. The utilities already include whatever consideration each player has for the interests of the other (Taylor, 1976, pp. 69-83).

Under these conditions, words not backed by actions are so cheap as to be meaningless. The players can communicate with each other only through the sequence of their own behavior. This is the problem of the iterated Prisoner's Dilemma in its fundamental form.

Two things remain to be specified: how the payoff of a particular move relates to the payoff in a whole sequence, and the precise meaning of a strategy. A natural way to aggregate payoffs over time is to assume that later payoffs are worth less than earlier ones, and that this relationship is expressed as a constant discount per move (Shubik, 1959, 1970). Thus the next payoff is worth only a fraction, w, of the same payoff this move. A whole string of mutual defection would then have a "present value" of P + wP + w^2P + w^3P + ... = P/(1-w). The discount parameter, w, can be given either of two interpretations. The standard economic interpretation is that later consumption is not valued as much as earlier consumption. An alternative interpretation is that future moves may not actually occur, since the interaction between a pair of players has only a certain probability of continuing for another move. In either interpretation, or a combination of the two, w is strictly between zero and one. The smaller w is, the less important later moves are relative to earlier ones.

For a concrete example, suppose one player is following the policy of always defecting, and the other player is following the policy of TIT FOR TAT. TIT FOR TAT is the policy of cooperating on the first move and then doing whatever the other player did on the previous move. This means that TIT FOR TAT will defect once for each defection by the other player. When the other player is using TIT FOR TAT, a player who always defects will get T on the first move, and P on all the subsequent moves. The payoff to someone using ALL D when playing with someone using TIT FOR TAT is thus:

    V(ALL D|TFT) = T + wP + w^2P + w^3P + ...
                 = T + wP(1 + w + w^2 + ...)
                 = T + wP/(1-w).

Both ALL D and TIT FOR TAT are strategies. In general, a strategy (or decision rule) is a function from the history of the game so far into a probability of cooperation on the next move. Strategies can be stochastic, as in the example of a rule which is entirely random with equal probabilities of cooperation and defection on each move. A strategy can also be quite sophisticated in its use of the pattern of outcomes in the game so far to determine what to do next.
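The closed form V(ALL D|TFT) = T + wP/(1-w) can be confirmed by summing the discounted payoff stream directly. The short sketch below is illustrative only; the truncation length and the particular value of w are arbitrary choices, not assumptions from the article:

    # A small check (not from the article) that the discounted payoff stream of
    # ALL D against TIT FOR TAT matches the closed form T + wP/(1-w).
    # Payoffs are the sample values of Figure 1; w is an arbitrary choice here.

    T, R, P, S = 5, 3, 1, 0
    w = 0.9          # discount parameter, strictly between 0 and 1
    N = 10_000       # truncation length; the tail is negligible for w = 0.9

    # Against TIT FOR TAT, an unconditional defector gets T on move 1 and P afterwards.
    stream = [T] + [P] * (N - 1)
    present_value = sum(payoff * w**i for i, payoff in enumerate(stream))

    closed_form = T + w * P / (1 - w)
    print(present_value, closed_form)          # both about 14.0
    assert abs(present_value - closed_form) < 1e-6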

It may, for example, use Bayesian techniques to estimate the parameters of some model it might have of the other player's rule. Or it may be some complex combination of other strategies. But a strategy must rely solely on the information available through this one sequence of interactions with the particular player.

The first question one is tempted to ask is, "What is the best strategy?" This is a good question, but unfortunately there is no best rule independent of the environment which it might have to face. The reason is that what works best when the other player is unconditionally cooperative will not in general work well when the other player is conditionally cooperative, and vice versa. To prove this, I will introduce the concept of a nice strategy, namely, one which will never be the first to defect.

THEOREM 1. If the discount parameter, w, is sufficiently high, there is no best strategy independent of the strategy used by the other player.

PROOF. Suppose A is a strategy which is best regardless of the strategy used by the other player. This means that for any strategies A' and B, V(A|B) ≥ V(A'|B). Consider separately the cases where A is nice and A is not nice. If A is nice, let A' = ALL D, and let B = ALL C. Then V(A|B) = R/(1-w), which is less than V(A'|B) = T/(1-w). On the other hand, if A is not nice, let A' = ALL C, and let B be the strategy of cooperating until the other player defects and then always defecting. Eventually A will be the first to defect, say, on move n. The value of n is irrelevant for the comparison to follow, so assume that n = 1. To give A the maximum advantage, assume that A always defects after its first defection. Then V(A|B) = T + wP/(1-w). But V(A'|B) = R/(1-w) = R + wR/(1-w). Thus V(A|B) < V(A'|B) whenever w > (T-R)/(T-P). Thus the immediate advantage gained by the defection of A will eventually be more than compensated for by the long-term disadvantage of B's unending defection, assuming that w is sufficiently large. Thus, if w is sufficiently large, there is no one best strategy.

In the paradigm case of a legislature, this theorem says that if there is a large enough chance that a member of Congress will interact again with another member of Congress, then there is no one best strategy to use independently of the strategy being used by the other person. It would be best to be cooperative with someone who will reciprocate that cooperation in the future, but not with someone whose future behavior will not be very much affected by this interaction (see, for example, Hinckley, 1972). The very possibility of achieving stable mutual cooperation depends upon there being a good chance of a continuing interaction, as measured by the magnitude of w. Empirically, the chance of two members of Congress having a continuing interaction has increased dramatically as the biennial turnover rates in Congress have fallen from about 40 percent in the first 40 years of the Republic to about 20 percent or less in recent years (Young, 1966, pp. 87-90; Jones, 1977, p. 254; Patterson, 1978, pp. 143-44). That the increasing institutionalization of Congress has had its effects on the development of congressional norms has been widely accepted (Polsby, 1968, esp. n. 68). We now see how the diminished turnover rate (which is one aspect of institutionalization) can allow the development of reciprocity (which is one important part of congressional folkways).

But saying that a continuing chance of interaction is necessary for the development of cooperation is not the same as saying that it is sufficient. The demonstration that there is not a single best strategy still leaves open the question of what patterns of behavior can be expected to emerge when there actually is a sufficiently high probability of continuing interaction between two people.

The Tournament Approach

Just because there is no single best decision rule does not mean analysis is hopeless. For example, progress can be made on the question of which strategy does best in an environment of players who are also using strategies designed to do well. To explore this question, I conducted a tournament of strategies submitted by game theorists in economics, psychology, sociology, political science and mathematics (Axelrod, 1980a). Announcing the payoff matrix shown in Figure 1, and a game length of 200 moves, I ran the 14 entries and RANDOM against each other in a round robin tournament. The result was that the highest average score was attained by the simplest of all the strategies submitted, TIT FOR TAT.

I then circulated the report of the results and solicited entries for a second round. This time I received 62 entries from six countries.[2]

[2] In the second round, the length of the games was uncertain, with an expected median length of 200 moves. This was achieved by setting the probability that a given move would not be the last one at w = .99654. As in the first round, each pair was matched in five games. See Axelrod (1980b) for a complete description.
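A toy version of such a round robin tournament is sketched below. The four rules and the fixed 200-move game length stand in for the actual fourteen submitted entries plus RANDOM; only the scoring mechanics are meant to mirror the procedure described above:

    # A toy round-robin tournament in the spirit of the one described above.
    # These rules are stand-ins for the actual submitted entries.
    import random

    T, R, P, S = 5, 3, 1, 0
    PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
              ("D", "C"): (T, S), ("D", "D"): (P, P)}

    def tit_for_tat(own, other):   return "C" if not other else other[-1]
    def all_d(own, other):         return "D"
    def all_c(own, other):         return "C"
    def random_rule(own, other):   return random.choice("CD")

    def play(rule1, rule2, moves=200):
        """Return the total (undiscounted) scores of a single game."""
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(moves):
            m1, m2 = rule1(h1, h2), rule2(h2, h1)
            p1, p2 = PAYOFF[(m1, m2)]
            h1.append(m1); h2.append(m2); s1 += p1; s2 += p2
        return s1, s2

    entries = {"TIT FOR TAT": tit_for_tat, "ALL D": all_d,
               "ALL C": all_c, "RANDOM": random_rule}
    totals = {name: 0 for name in entries}
    for name1, rule1 in entries.items():          # round robin, including self-play
        for name2, rule2 in entries.items():
            s1, _ = play(rule1, rule2)
            totals[name1] += s1

    for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name:12s} {score}")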

Most of the contestants were computer hobbyists, but there were also professors of evolutionary biology, physics, and computer science, as well as the five disciplines represented in the first round. TIT FOR TAT was again submitted by the winner of the first round, Anatol Rapoport of the Institute for Advanced Study (Vienna). And it won again.

An analysis of the 3,000,000 choices which were made in the second round shows that TIT FOR TAT was a very robust rule because it was nice, provocable into a retaliation by a defection of the other, and yet forgiving after it took its one retaliation (Axelrod, 1980b).

The Ecological Approach

To see if TIT FOR TAT would do well in a whole series of simulated tournaments, I calculated what would happen if each of the strategies in the second round were submitted to a hypothetical next round in proportion to its success in the previous round. This process was then repeated to generate the time path of the distribution of strategies. The results showed that as the less-successful rules were displaced, TIT FOR TAT continued to do well with the rules which initially scored near the top. In the long run, TIT FOR TAT displaced all the other rules and went to what biologists call fixation (Axelrod, 1980b).

This is an ecological approach because it takes as given the varieties which are present and investigates how they do over time when interacting with each other. It provides further evidence of the robust nature of the success of TIT FOR TAT.

The Evolutionary Approach

A much more general approach would be to allow all possible decision rules to be considered, and to ask what are the characteristics of the decision rules which are stable in the long run. An evolutionary approach recently introduced by biologists offers a key concept which makes such an analysis tractable (Maynard Smith, 1974, 1978). This approach imagines the existence of a whole population of individuals employing a certain strategy, B, and a single mutant individual employing another strategy, A. Strategy A is said to invade strategy B if V(A|B) > V(B|B), where V(A|B) is the expected payoff an A gets when playing a B, and V(B|B) is the expected payoff a B gets when playing another B. Since the B's are interacting virtually entirely with other B's, the concept of invasion is equivalent to the single mutant individual being able to do better than the population average. This leads directly to the key concept of the evolutionary approach. A strategy is collectively stable if no strategy can invade it.[3]

[3] Those familiar with the concepts of game theory will recognize this as a strategy being in Nash equilibrium with itself. My definitions of invasion and collective stability are slightly different from Maynard Smith's (1974) definitions of invasion and evolutionary stability. His definition of invasion allows V(A|B) = V(B|B) provided that V(A|A) > V(B|A). I have used the new definitions to simplify the proofs and to highlight the difference between the effect of a single mutant and the effect of a small number of mutants. Any rule which is evolutionarily stable is also collectively stable. For a nice rule, the definitions are equivalent. All theorems in the text remain true if "evolutionary stability" is substituted for "collective stability," with the exception of Theorem 3, where the characterization is necessary but no longer sufficient.

The biological motivation for this approach is based on the interpretation of the payoffs in terms of fitness (survival and fecundity). All mutations are possible, and if any could invade a given population it would have had the chance to do so. Thus only a collectively stable strategy is expected to be able to maintain itself in the long-run equilibrium as the strategy used by all.[4] Collectively stable strategies are important because they are the only ones which an entire population can maintain in the long run if mutations are introduced one at a time.

[4] For the application of the results of this article to biological contexts, see Axelrod and Hamilton (1981). For the development in biology of the concepts of reciprocity and stable strategy, see Hamilton (1964), Trivers (1971), Maynard Smith (1974), and Dawkins (1976).

The political motivation for this approach is based on the assumption that all strategies are possible, and that if there were a strategy which would benefit an individual, someone is sure to try it. Thus only a collectively stable strategy can maintain itself as the strategy used by all - provided that the individuals who are trying out novel strategies do not interact too much with one another.[5] As we shall see later, if they do interact in clusters, then new and very important developments become possible.

[5] Collective stability can also be interpreted in terms of a commitment by one player, rather than the stability of a whole population. Suppose player Y is committed to using strategy B. Then player X can do no better than use this same strategy B if and only if strategy B is collectively stable.
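The invasion test V(A|B) > V(B|B) can be made concrete with closed-form discounted payoffs for a few simple rules. The sketch below is illustrative only (the value function covers just three named rules and uses the Figure 1 payoffs); it shows that ALL C is always invadable by ALL D, while whether ALL D can invade TIT FOR TAT depends on w, which is exactly the question taken up in Theorem 2 below:

    # A sketch (not from the article) of the invasion test V(A|B) > V(B|B), using
    # closed-form discounted payoffs for ALL D, ALL C, and TIT FOR TAT.

    T, R, P, S = 5, 3, 1, 0

    def value(a, b, w):
        """Expected discounted payoff V(a|b) for a few named rules."""
        if a == "ALL D" and b == "ALL D":  return P / (1 - w)
        if a == "ALL D" and b == "ALL C":  return T / (1 - w)
        if a == "ALL C" and b == "ALL D":  return S / (1 - w)
        if a == "ALL C" and b == "ALL C":  return R / (1 - w)
        if a == "ALL D" and b == "TFT":    return T + w * P / (1 - w)  # exploit once, then P forever
        if a == "TFT"   and b == "ALL D":  return S + w * P / (1 - w)
        if a == "TFT"   and b == "TFT":    return R / (1 - w)
        if a == "TFT"   and b == "ALL C":  return R / (1 - w)
        if a == "ALL C" and b == "TFT":    return R / (1 - w)
        raise ValueError("unknown pair")

    def invades(a, b, w):
        return value(a, b, w) > value(b, b, w)

    for w in (0.3, 0.9):
        print(f"w = {w}: ALL D invades ALL C: {invades('ALL D', 'ALL C', w)}, "
              f"ALL D invades TFT: {invades('ALL D', 'TFT', w)}")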

A difficulty in the use of this concept of collective stability is that it can be very hard actually to determine which strategies have it and which do not. In most biological applications, including both the Prisoner's Dilemma and other types of interactions, this difficulty has been dealt with in one of two ways. One method has been to restrict the analysis to situations where the strategies take some particularly simple form such as in one-parameter models of sex-ratios (Hamilton, 1967). The other method has been to restrict the strategies themselves to a relatively narrow set so that some illustrative results could be attained (Maynard Smith and Price, 1973; Maynard Smith, 1978).

The difficulty of dealing with all possible strategies was also faced by Michael Taylor (1976), a political scientist who sought a deeper understanding of the issues raised by Hobbes and Hume concerning whether people in a state of nature would be expected to cooperate with each other. He too employed the method of using a narrow set of strategies to attain some illustrative results. Taylor restricted himself to the investigation of four particular strategies (including ALL D, ALL C, TIT FOR TAT, and the rule "cooperate until the other defects, and then always defect"), and one set of strategies which retaliates in progressively increasing numbers of defections for each defection by the other. He successfully developed the equilibrium conditions when these are the only rules which are possible.[6]

[6] For the comparable equilibrium conditions when all possible strategies are allowed, see below, Theorems 2 through 6. For related results on the potential stability of cooperative behavior, see Luce and Raiffa (1957, p. 102), Kurz (1977) and Hirshleifer (1978).

Before running the computer tournament I believed that it was impossible to make very much progress if all possible strategies were permitted to enter the analysis. However, once I had attained sufficient experience with a wide variety of specific decision rules, and with the analysis of what happened when these rules interacted, I was able to formulate and answer some important questions about what would happen if all possible strategies were taken into account.

The remainder of this article will be devoted to answering the following specific questions about the emergence of cooperation in the iterated Prisoner's Dilemma:

1. Under what conditions is TIT FOR TAT collectively stable?

2. What are the necessary and sufficient conditions for any strategy to be collectively stable?

3. If virtually everyone is following a strategy of unconditional defection, when can cooperation emerge from a small cluster of newcomers who introduce cooperation based on reciprocity?

TIT FOR TAT as a Collectively Stable Strategy

TIT FOR TAT cooperates on the first move, and then does whatever the other player did on the previous move. This means that any rule which starts off with a defection will get T, the highest possible payoff, on the first move when playing TIT FOR TAT. Consequently, TIT FOR TAT can only avoid being invadable by such a rule if the game is likely to last long enough for the retaliation to counteract the temptation to defect. In fact, no rule can invade TIT FOR TAT if the discount parameter, w, is sufficiently large. This is the heart of the formal result contained in the following theorem. Readers who wish to skip the proofs can do so without loss of continuity.

THEOREM 2. TIT FOR TAT is a collectively stable strategy if and only if w ≥ max((T-R)/(T-P), (T-R)/(R-S)). An alternative formulation of the same result is that TIT FOR TAT is a collectively stable strategy if and only if it is invadable neither by ALL D nor the strategy which alternates defection and cooperation.

PROOF. First we prove that the two formulations of the theorem are equivalent, and then we prove both implications of the second formulation. To say that ALL D cannot invade TIT FOR TAT means that V(ALL D|TFT) ≤ V(TFT|TFT). As shown earlier, V(ALL D|TFT) = T + wP/(1-w). Since TFT always cooperates with its twin, V(TFT|TFT) = R + wR + w^2R + ... = R/(1-w). Thus ALL D cannot invade TIT FOR TAT when T + wP/(1-w) ≤ R/(1-w), or T(1-w) + wP ≤ R, or T - R ≤ w(T-P), or w ≥ (T-R)/(T-P). Similarly, to say that alternation of D and C cannot invade TIT FOR TAT means that (T + wS)/(1-w^2) ≤ R/(1-w), or w ≥ (T-R)/(R-S). Thus w ≥ (T-R)/(T-P) and w ≥ (T-R)/(R-S) is equivalent to saying that TIT FOR TAT is invadable by neither ALL D nor the strategy which alternates defection and cooperation. This shows that the two formulations are equivalent.

Now we prove both of the implications of the second formulation. One implication is established by the simple observation that if TIT FOR TAT is a collectively stable strategy, then no rule can invade, and hence neither can the two specified rules. The other implication to be proved is that if neither ALL D nor Alternation of D and C can invade TIT FOR TAT, then no strategy can.

TIT FOR TAT has only two states, depending on what the other player did the previous move (on the first move it assumes, in effect, that the other player has just cooperated). Thus if A is interacting with TIT FOR TAT, the best which any strategy, A, can do after choosing C is to choose C or D. Similarly, the best A can do after choosing D is to choose C or D. This leaves four possibilities for the best A can do with TIT FOR TAT: repeated sequences of CC, CD, DC, or DD. The first does the same as TIT FOR TAT does with another TIT FOR TAT. The second cannot do better than both the first and the third. This implies that if the third and fourth possibilities cannot invade TIT FOR TAT, then no strategy can. These two are equivalent, respectively, to Alternation of D and C, and ALL D. Thus if neither of these two can invade TIT FOR TAT, no rule can, and TIT FOR TAT is a collectively stable strategy.

The significance of this theorem is that it demonstrates that if everyone in a population is cooperating with everyone else because each is using the TIT FOR TAT strategy, no one can do better using any other strategy provided the discount parameter is high enough. For example, using the numerical values of the payoff parameters given in Figure 1, TIT FOR TAT is uninvadable when the discount parameter, w, is greater than 2/3. If w falls below this critical value, and everyone else is using TIT FOR TAT, it will pay to defect on alternative moves. For w less than 1/2, ALL D can also invade.

One specific implication is that if the other player is unlikely to be around much longer because of apparent weakness, then the perceived value of w falls and the reciprocity of TIT FOR TAT is no longer stable. We have Caesar's explanation of why Pompey's allies stopped cooperating with him. "They regarded his [Pompey's] prospects as hopeless and acted according to the common rule by which a man's friends become his enemies in adversity" (trans. by Warner, 1960, p. 328). Another example is the business institution of the factor who buys a client's accounts receivable. This is done at a very substantial discount when the firm is in trouble because

    once a manufacturer begins to go under, even his best customers begin refusing payment for merchandise, claiming defects in quality, failure to meet specifications, tardy delivery, or what-have-you. The great enforcer of morality in commerce is the continuing relationship, the belief that one will have to do business again with this customer, or this supplier, and when a failing company loses this automatic enforcer, not even a strong-arm factor is likely to find a substitute (Mayer, 1974, p. 280).

Similarly, any member of Congress who is perceived as likely to be defeated in the next election may have some difficulty doing legislative business with colleagues on the usual basis of trust and good credit.[7]

[7] A countervailing consideration is that a legislator in electoral trouble may receive help from friendly colleagues who wish to increase the chances of reelection of someone who has proven in the past to be cooperative, trustworthy, and effective. Two current examples are Morris Udall and Thomas Foley. (I wish to thank an anonymous reviewer for this point.)

There are many other examples of the importance of long-term interaction for the stability of cooperation in the iterated Prisoner's Dilemma. It is easy to maintain the norms of reciprocity in a stable small town or ethnic neighborhood. Conversely, a visiting professor is likely to receive poor treatment by other faculty members compared to the way these same people treat their regular colleagues.

Another consequence of the previous theorem is that if one wants to prevent rather than promote cooperation, one should keep the same individuals from interacting too regularly with each other. Consider the practice of the government selecting two aerospace companies for competitive development contracts. Since firms specialize to some degree in either air force or in navy planes, there is a tendency for firms with the same specialties to be frequently paired in the final competition (Art, 1968). To make tacit collusion between companies more difficult, the government should seek methods of compensating for the specialization. Pairs of companies which shared a specialization would then expect to interact less often in the final competitions. This would cause the later interactions between them to be worth relatively less than before, reducing the value of w. If w is sufficiently low, reciprocal cooperation in the form of tacit collusion ceases to be a stable policy.

Knowing when TIT FOR TAT cannot be invaded is valuable, but it is only a part of the story. Other strategies may be, and in fact are, also collectively stable. This suggests the question of what a strategy has to do to be collectively stable. In other words, what policies, if adopted by everyone, will prevent any one individual from benefiting by a departure from the common strategy?
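The numerical claims above for the Figure 1 payoffs, a critical discount parameter of 2/3 for TIT FOR TAT and a further threshold of 1/2 below which even ALL D invades, can be verified directly. This is an illustrative sketch, not code from the original analysis:

    # Checking the Theorem 2 condition w >= max((T-R)/(T-P), (T-R)/(R-S))
    # with the sample payoffs of Figure 1 (illustrative only).
    from fractions import Fraction

    T, R, P, S = 5, 3, 1, 0

    critical_w = max(Fraction(T - R, T - P),      # keeps ALL D from invading (1/2 here)
                     Fraction(T - R, R - S))      # keeps alternating D,C from invading (2/3 here)
    print("TIT FOR TAT is collectively stable iff w >=", critical_w)   # 2/3

    for w in (Fraction(3, 4), Fraction(3, 5), Fraction(2, 5)):
        tft_stable = w >= critical_w
        all_d_invades = T + w * P / (1 - w) > R / (1 - w)   # ALL D beats a TIT FOR TAT native
        print(f"w = {w}: TFT collectively stable: {tft_stable}; ALL D invades: {all_d_invades}")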

The Characterization of Collectively Stable Strategies

The characterization of all collectively stable strategies is based on the idea that invasion can be prevented if the rule can make the potential invader worse off than if it had just followed the common strategy. Rule B can prevent invasion by rule A if B can be sure that no matter what A does later, B will hold A's total score low enough. This leads to the following useful definition: B has a secure position over A on move n if no matter what A does from move n onwards, V(A|B) ≤ V(B|B), assuming that B defects from move n onwards. Let V_n(A|B) represent A's discounted cumulative score in the moves before move n. Then another way of saying that B has a secure position over A on move n is that

    V_n(A|B) + w^(n-1)P/(1-w) ≤ V(B|B),

since the best A can do from move n onwards if B defects is get P each time. Moving the second term to the right side of the inequality gives the helpful result that B has a secure position over A on move n if V_n(A|B) is small enough, namely if

    V_n(A|B) ≤ V(B|B) - w^(n-1)P/(1-w).     (1)

The theorem which follows embodies the advice that if you want to employ a collectively stable strategy, you should only cooperate when you can afford an exploitation by the other side and still retain your secure position.

THEOREM 3. THE CHARACTERIZATION THEOREM. B is a collectively stable strategy if and only if B defects on move n whenever the other player's cumulative score so far is too great, specifically when V_n(A|B) > V(B|B) - w^(n-1)(T + wP/(1-w)).

PROOF. First it will be shown that a strategy B which defects as required will always have a secure position over any A, and therefore will have V(A|B) ≤ V(B|B), which in turn makes B a collectively stable strategy. The proof works by induction. For B to have a secure position on move 1 means that V(A|ALL D) ≤ V(ALL D|ALL D), according to the definition of secure position applied to n = 1. Since this is true for all A, B has a secure position on move 1. If B has a secure position over A on move n, it has a secure position on move n+1. This is shown in two parts. First, if B defects on move n, A gets at most P, so

    V_(n+1)(A|B) ≤ V_n(A|B) + w^(n-1)P.

Using Equation (1) gives:

    V_(n+1)(A|B) ≤ V(B|B) - w^(n-1)P/(1-w) + w^(n-1)P
    V_(n+1)(A|B) ≤ V(B|B) - w^n P/(1-w).

Second, B will only cooperate on move n when

    V_n(A|B) ≤ V(B|B) - w^(n-1)(T + wP/(1-w)).

Since A can get at most T on move n, we have

    V_(n+1)(A|B) ≤ V(B|B) - w^(n-1)(T + wP/(1-w)) + w^(n-1)T
    V_(n+1)(A|B) ≤ V(B|B) - w^n P/(1-w).

Therefore, B always has a secure position over A, and consequently B is a collectively stable strategy.

The second part of the proof operates by contradiction. Suppose that B is a collectively stable strategy and there is an A and an n such that B does not defect on move n when

    V_n(A|B) > V(B|B) - w^(n-1)(T + wP/(1-w)),

i.e., when

    V_n(A|B) + w^(n-1)(T + wP/(1-w)) > V(B|B).     (2)

Define A' as the same as A on the first n-1 moves, and D thereafter. A' gets T on move n (since B cooperated then), and at least P thereafter. So,

    V(A'|B) ≥ V_n(A|B) + w^(n-1)(T + wP/(1-w)).

Combined with (2) this gives V(A'|B) > V(B|B). Hence A' invades B, contrary to the assumption that B is a collectively stable strategy. Therefore, if B is a collectively stable strategy, it must defect when required.

Figure 2 illustrates the use of this theorem. The dotted line shows the value which must not be exceeded by any A if B is to be a collectively stable strategy. This value is just V(B|B), the expected payoff attained by a player using B when virtually all the other players are using B as well. The solid curve represents the critical value of A's cumulative payoff so far. The theorem simply says that B is a collectively stable strategy if and only if it defects whenever the other player's cumulative value so far in the game is above this line.
By doing so, B is able to prevent the other player from eventually getting a total expected value of more than rule B gets when playing another rule B.

The characterization theorem is "policy-relevant" in the abstract sense that it specifies what a strategy, B, has to do at any point in time as a function of the previous history of the interaction in order for B to be a collectively stable strategy.[8] It is a complete characterization because this requirement is both a necessary and a sufficient condition for strategy B to be collectively stable.

[8] To be precise, V(B|B) must also be specified in advance. For example, if B is never the first to defect, V(B|B) = R/(1-w).

Two additional consequences about collectively stable strategies can be seen from Figure 2. First, as long as the other player has not accumulated too great a score, a strategy has the flexibility to either cooperate or defect and still be collectively stable. This flexibility explains why there are typically many strategies which are collectively stable. The second consequence is that a nice rule (one which will never defect first) has the most flexibility since it has the highest possible score when playing an identical rule. Put another way, nice rules can afford to be more generous than other rules with potential invaders because nice rules do so well with each other.

The flexibility of a nice rule is not unlimited, however, as shown by the following theorem. In fact, a nice rule must be provoked by the very first defection of the other player, i.e., on some later move the rule must have a finite chance of retaliating with a defection of its own.

THEOREM 4. For a nice strategy to be collectively stable, it must be provoked by the first defection of the other player.

PROOF. If a nice strategy were not provoked by a defection on move n, then it would not be collectively stable because it could be invaded by a rule which defected only on move n.

Besides provocability, there is another requirement for a nice rule to be collectively stable. This requirement is that the discount parameter, w, be sufficiently large. This is a generalization of the second theorem, which showed that for TIT FOR TAT to be collectively stable, w has to be large enough. The idea extends beyond just nice rules to any rule which might be the first to cooperate.

[The figure plots the other player's cumulative score V_n(A|B) against the move number n (time). A dotted horizontal line marks V(B|B); a solid curve marks the critical value V(B|B) - w^(n-1)(T + wP/(1-w)), which rises toward the dotted line as n increases. Above the solid curve B must defect; below it B may defect or cooperate.]

Source: Drawn by the author.

Figure 2. Characterization of a Collectively Stable Strategy
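The critical curve of Figure 2 is easy to tabulate. Following footnote 8, the sketch below assumes B is a nice rule, so V(B|B) = R/(1-w); the payoffs are those of Figure 1 and the choice w = 0.9 is arbitrary, so the numbers are illustrative only:

    # A sketch of the threshold curve in Figure 2 for a nice rule B.

    T, R, P, S = 5, 3, 1, 0
    w = 0.9

    V_BB = R / (1 - w)                       # value of B playing its twin (nice rule)

    def must_defect_threshold(n):
        """Critical cumulative score of the other player before move n (Theorem 3)."""
        return V_BB - w**(n - 1) * (T + w * P / (1 - w))

    # B may still cooperate on move n only while the other player's cumulative
    # discounted score V_n(A|B) lies at or below this line; above it, B must defect.
    for n in (1, 5, 10, 20, 40):
        print(f"move {n:2d}: threshold = {must_defect_threshold(n):6.2f}  (V(B|B) = {V_BB:.1f})")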

THEOREM 5. Any rule, B, which may be the first to cooperate is collectively stable only when w is sufficiently large.

PROOF. If B cooperates on the first move, V(ALL D|B) ≥ T + wP/(1-w). But for any B, R/(1-w) ≥ V(B|B), since R is the best B can do with another B by the assumptions that R > P and R > (S+T)/2. Therefore V(ALL D|B) > V(B|B) is so whenever T + wP/(1-w) > R/(1-w). This implies that ALL D invades a B which cooperates on the first move whenever w < (T-R)/(T-P). If B has a positive chance of cooperating on the first move, then the gain of V(ALL D|B) over V_1(B|B) can only be nullified if w is sufficiently large. Likewise, if B will not be the first to cooperate until move n, V_n(ALL D|B) = V_n(B|B) and the gain of V_(n+1)(ALL D|B) over V_(n+1)(B|B) can only be nullified if w is sufficiently large.

There is one strategy which is always collectively stable, that is, regardless of the value of w or the payoff parameters T, R, P, and S. This is ALL D, the rule which defects no matter what.

THEOREM 6. ALL D is always collectively stable.

PROOF. ALL D is always collectively stable because it always defects and hence it defects whenever required by the condition of the characterization theorem.

This is an important theorem because of its implications for the evolution of cooperation. If we imagine a system starting with individuals who cannot be enticed to cooperate, the collective stability of ALL D implies that no single individual can hope to do any better than just to go along and be uncooperative as well. A world of "meanies" can resist invasion by anyone using any other strategy - provided that the newcomers arrive one at a time.

The problem, of course, is that a single newcomer in such a mean world has no one who will reciprocate any cooperation. If the newcomers arrive in small clusters, however, they will have a chance to thrive. The next section shows how this can happen.

The Implications of Clustering

To consider arrival in clusters rather than singly, we need to broaden the idea of "invasion" to include invasion by a cluster.[9] As before, we will suppose that strategy B is being used by virtually everyone. But now suppose that a small group of individuals using strategy A arrives and interacts with both the other A's and the native B's. To be specific, suppose that the proportion of the interactions by someone using strategy A with another individual using strategy A is p. Assuming that the A's are rare relative to the B's, virtually all the interactions of B's are with other B's. Then the average score of someone using A is pV(A|A) + (1-p)V(A|B), and the average score of someone using B is V(B|B). Therefore, a p-cluster of A invades B if pV(A|A) + (1-p)V(A|B) > V(B|B), where p is the proportion of the interactions by a player using strategy A with another such player. Solving for p, this means that invasion is possible if the newcomers interact enough with each other, namely when

    p > [V(B|B) - V(A|B)] / [V(A|A) - V(A|B)].     (3)

[9] For related concepts from biology, see Wilson (1979) and Axelrod and Hamilton (1981).

Notice that this assumes that pairing in the interactions is not random. With random pairing, an A would rarely meet another A. Instead, the clustering concept treats the case in which the A's are a trivial part of the environment of the B's, but a nontrivial part of the environment of the other A's.

The striking thing is just how easy invasion of ALL D by clusters can be. Specifically, the value of p which is required for invasion by TIT FOR TAT of a world of ALL D's is surprisingly low. For example, suppose the payoff values are those of Figure 1, and that w = .9, which corresponds to a 10-percent chance two interacting players will never meet again. Let A be TIT FOR TAT and B be ALL D. Then V(B|B) = P/(1-w) = 10; V(A|B) = S + wP/(1-w) = 9; and V(A|A) = R/(1-w) = 30. Plugging these numbers into Equation (3) shows that a p-cluster of TIT FOR TAT invades ALL D when p > 1/21. Thus if the newcomers using TIT FOR TAT have any more than about 5 percent of their interactions with others using TIT FOR TAT, they can thrive in a world in which everyone else refuses ever to cooperate.

In this way, a world of meanies can be invaded by a cluster of TIT FOR TAT - and rather easily at that.
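Equation (3) and the 1/21 figure quoted above can be reproduced with a few lines of arithmetic. The sketch is illustrative only:

    # A numerical check of Equation (3): a p-cluster of TIT FOR TAT invading a
    # population of ALL D with the Figure 1 payoffs and w = 0.9.

    T, R, P, S = 5, 3, 1, 0
    w = 0.9

    V_BB = P / (1 - w)              # ALL D against ALL D:        10
    V_AB = S + w * P / (1 - w)      # TIT FOR TAT against ALL D:   9
    V_AA = R / (1 - w)              # TIT FOR TAT against its twin: 30

    # Equation (3): the cluster invades when p exceeds this proportion of
    # within-cluster interactions.
    p_threshold = (V_BB - V_AB) / (V_AA - V_AB)
    print("p threshold =", p_threshold)        # 1/21, about 0.048

    p = 0.05                                    # 5 percent of interactions within the cluster
    average_A = p * V_AA + (1 - p) * V_AB
    print(average_A, ">", V_BB, "->", average_A > V_BB)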

To illustrate this point, suppose a business school teacher taught a class to give cooperative behavior a chance, and to reciprocate cooperation from other firms. If the students did, and if they did not disperse too widely (so that a sufficient proportion of their interactions were with others from the same class), then the students would find that their lessons paid off.

When the interactions are expected to be of longer duration (or the time discount factor is not as great), then even less clustering is necessary. For example, if the median game length is 200 moves (corresponding to w = .99654) and the payoff parameters are as given in Figure 1, even one interaction out of a thousand with a like-minded follower of TIT FOR TAT is enough for the strategy to invade a world of ALL D's. Even when the median game length is only two moves (w = .5), anything over a fifth of the interactions by the TIT FOR TAT players with like-minded types is sufficient for invasion to succeed and cooperation to emerge.

The next result shows which strategies are the most efficient at invading ALL D with the least amount of clustering. These are the strategies which are best able to discriminate between themselves and ALL D. A strategy is maximally discriminating if it will eventually cooperate even if the other has never cooperated yet, and once it cooperates it will never cooperate again with ALL D but will always cooperate with another player using the same strategy.

THEOREM 7. The strategies which can invade ALL D in a cluster with the smallest value of p are those which are maximally discriminating, such as TIT FOR TAT.

PROOF. To be able to invade ALL D, a rule must have a positive chance of cooperating first. Stochastic cooperation is not as good as deterministic cooperation with another player using the same rule since stochastic cooperation yields equal probability of S and T, and (S+T)/2 < R in the Prisoner's Dilemma. Therefore, a strategy which can invade with the smallest p must cooperate first on some move, n, even if the other player has never cooperated yet. Employing Equation (3) shows that the rules which invade B = ALL D with the lowest value of p are those which have the lowest value of p*, where p* = [V(B|B) - V(A|B)] / [V(A|A) - V(A|B)]. The value of p* is minimized when V(A|A) and V(A|B) are maximized (subject to the constraint that A cooperates for the first time on move n) since V(A|A) > V(B|B) > V(A|B). V(A|A) and V(A|B) are maximized subject to this constraint if and only if A is a maximally discriminating rule. (Incidentally, it does not matter for the minimal value of p just when A starts to cooperate.) TIT FOR TAT is such a strategy because it always cooperates for n = 1, it cooperates only once with ALL D, and it always cooperates with another TIT FOR TAT.

The final theorem demonstrates that nice rules (those which never defect first) are actually better able than other rules to protect themselves from invasion by a cluster.

THEOREM 8. If a nice strategy cannot be invaded by a single individual, it cannot be invaded by any cluster of individuals either.

PROOF. For a cluster of rule A to invade a population of rule B, there must be a p < 1 such that pV(A|A) + (1-p)V(A|B) > V(B|B). But if B is nice, then V(A|A) ≤ V(B|B). This is so because V(B|B) = R/(1-w), which is the largest value attainable when the other player is using the same strategy. It is the largest value since R > (S+T)/2. Since V(A|A) ≤ V(B|B), A can invade as a cluster only if V(A|B) > V(B|B). But that is equivalent to A invading as an individual.

This shows that nice rules do not have the structural weakness displayed in ALL D. ALL D can withstand invasion by any strategy, as long as the players using other strategies come one at a time. But if they come in clusters (even in rather small clusters), ALL D can be invaded. With nice rules, the situation is different. If a nice rule can resist invasion by other rules coming one at a time, then it can resist invasion by clusters, no matter how large. So nice rules can protect themselves in a way that ALL D cannot.[10]

[10] This property is possessed by population mixes of nice rules as well. If no single individual can invade a population of nice rules, no cluster can either.

In the illustrative case of the Senate, Theorem 8 demonstrates that once cooperation based on reciprocity has become established, it can remain stable even if a cluster of newcomers does not respect this senatorial folkway. And Theorem 6 has shown that without clustering (or some comparable mechanism) the original pattern of mutual "treachery" could not have been overcome. Perhaps these critical early clusters were based on the boardinghouse arrangements in the capital during the Jeffersonian era (Young, 1966). Or perhaps the state delegations and state party delegations were more critical (Bogue and Marlaire, 1975).
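The earlier observation that longer expected interactions require less clustering can also be tabulated from Equation (3). The sketch below is illustrative only (the value w = 99/100 is an arbitrary extra data point); it recovers the one-fifth threshold for w = 1/2 and the 1/21 threshold for w = 9/10 quoted above:

    # How much clustering TIT FOR TAT needs to invade ALL D at different
    # discount parameters, using Equation (3) with the Figure 1 payoffs.
    from fractions import Fraction

    T, R, P, S = 5, 3, 1, 0

    def cluster_threshold(w):
        """Minimum proportion p of within-cluster interactions needed for invasion."""
        v_bb = Fraction(P, 1) / (1 - w)          # ALL D vs ALL D
        v_ab = S + w * P / (1 - w)               # TIT FOR TAT vs ALL D
        v_aa = Fraction(R, 1) / (1 - w)          # TIT FOR TAT vs TIT FOR TAT
        return (v_bb - v_ab) / (v_aa - v_ab)

    for w in (Fraction(1, 2), Fraction(9, 10), Fraction(99, 100)):
        print(f"w = {w}: p must exceed {cluster_threshold(w)}")
    # The longer the expected interaction (the larger w), the less clustering is required.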

But now that the pattern of reciprocity is established, Theorems 2 and 5 show that it is collectively stable, as long as the biennial turnover rate is not too great.

Thus cooperation can emerge even in a world of unconditional defection. The development cannot take place if it is tried only by scattered individuals who have no chance to interact with each other. But cooperation can emerge from small clusters of discriminating individuals, as long as these individuals have even a small proportion of their interactions with each other. Moreover, if nice strategies (those which are never the first to defect) eventually come to be adopted by virtually everyone, then those individuals can afford to be generous in dealing with any others. The population of nice rules can also protect themselves against clusters of individuals using any other strategy just as well as they can protect themselves against single individuals. But for a nice strategy to be stable in the collective sense, it must be provocable. So mutual cooperation can emerge in a world of egoists without central control, by starting with a cluster of individuals who rely on reciprocity.

References

Art, Robert J. (1968). The TFX Decision: McNamara and the Military. Boston: Little, Brown.

Axelrod, Robert (1980a). "Effective Choice in the Prisoner's Dilemma." Journal of Conflict Resolution 24: 3-25.

Axelrod, Robert (1980b). "More Effective Choice in the Prisoner's Dilemma." Journal of Conflict Resolution 24: 379-403.

Axelrod, Robert, and William D. Hamilton (1981). "The Evolution of Cooperation." Science 211: 1390-96.

Bogue, Allan G., and Mark Paul Marlaire (1975). "Of Mess and Men: The Boardinghouse and Congressional Voting, 1821-1842." American Journal of Political Science 19: 207-30.

Dawkins, Richard (1976). The Selfish Gene. New York: Oxford University Press.

Hamilton, William D. (1964). "The Genetical Theory of Social Behavior (I and II)." Journal of Theoretical Biology 7: 1-16; 17-32.

Hamilton, William D. (1967). "Extraordinary Sex Ratios." Science 156: 477-88.

Hardin, Garrett (1968). "The Tragedy of the Commons." Science 162: 1243-48.

Hardin, Russell (1971). "Collective Action as an Agreeable n-Prisoner's Dilemma." Behavioral Science 16: 472-81.

Hinckley, Barbara (1972). "Coalitions in Congress: Size and Ideological Distance." Midwest Journal of Political Science 26: 197-207.

Hirshleifer, J. (1978). "Natural Economy versus Political Economy." Journal of Social and Biological Structures 1: 319-37.

Howard, Nigel (1971). Paradoxes of Rationality: Theory of Metagames and Political Behavior. Cambridge, Mass.: MIT Press.

Jervis, Robert (1978). "Cooperation Under the Security Dilemma." World Politics 30: 167-214.

Jones, Charles O. (1977). "Will Reform Change Congress?" In Lawrence C. Dodd and Bruce I. Oppenheimer (eds.), Congress Reconsidered. New York: Praeger.

Kurz, Mordecai (1977). "Altruistic Equilibrium." In Bela Balassa and Richard Nelson (eds.), Economic Progress, Private Values, and Public Policy. Amsterdam: North Holland.

Laver, Michael (1977). "Intergovernmental Policy on Multinational Corporations, A Simple Model of Tax Bargaining." European Journal of Political Research 5: 363-80.

Luce, R. Duncan, and Howard Raiffa (1957). Games and Decisions. New York: Wiley.

Lumsden, Malvern (1973). "The Cyprus Conflict as a Prisoner's Dilemma." Journal of Conflict Resolution 17: 7-32.

Matthews, Donald R. (1960). U.S. Senators and Their World. Chapel Hill: University of North Carolina Press.

Mayer, Martin (1974). The Bankers. New York: Ballantine Books.

Mayhew, David R. (1974). Congress: The Electoral Connection. New Haven, Conn.: Yale University Press.

Maynard Smith, John (1974). "The Theory of Games and the Evolution of Animal Conflict." Journal of Theoretical Biology 47: 209-21.

Maynard Smith, John (1978). "The Evolution of Behavior." Scientific American 239: 176-92.

Maynard Smith, John, and G. R. Price (1973). "The Logic of Animal Conflict." Nature 246: 15-18.

Olson, Mancur, Jr. (1965). The Logic of Collective Action. Cambridge, Mass.: Harvard University Press.

Ornstein, Norman, Robert L. Peabody, and David W. Rhode (1977). "The Changing Senate: From the 1950s to the 1970s." In Lawrence C. Dodd and Bruce I. Oppenheimer (eds.), Congress Reconsidered. New York: Praeger.

Patterson, Samuel (1978). "The Semi-Sovereign Congress." In Anthony King (ed.), The New American Political System. Washington, D.C.: American Enterprise Institute.

Polsby, Nelson (1968). "The Institutionalization of the U.S. House of Representatives." American Political Science Review 62: 144-68.

Rapoport, Anatol (1960). Fights, Games, and Debates. Ann Arbor: University of Michigan Press.

Schelling, Thomas C. (1960). The Strategy of Conflict. Cambridge, Mass.: Harvard University Press.

Schelling, Thomas C. (1973). "Hockey Helmets, Concealed Weapons, and Daylight Savings: A Study of Binary Choices with Externalities." Journal of Conflict Resolution 17: 381-428.

Schelling, Thomas C. (1978). "Micromotives and Macrobehavior." In Thomas Schelling (ed.), Micromotives and Macrobehavior. New York: Norton, pp. 9-43.

Shubik, Martin (1959). Strategy and Market Structure. New York: Wiley.

Shubik, Martin (1970). "Game Theory, Behavior, and the Paradox of the Prisoner's Dilemma: Three Solutions." Journal of Conflict Resolution 14: 181-94.

Smith, Margaret Bayard (1906). The First Forty Years of Washington Society. New York: Scribner's.


Snyder, Glenn H. (1971). "'Prisoner's Dilemma' and 'Chicken' Models in International Politics." International Studies Quarterly 15: 66-103.

Taylor, Michael (1976). Anarchy and Cooperation. New York: Wiley.

Trivers, Robert L. (1971). "The Evolution of Reciprocal Altruism." Quarterly Review of Biology 46: 35-57.

Warner, Rex, trans. (1960). War Commentaries of Caesar. New York: New American Library.

Wilson, David Sloan (1979). Natural Selection of Populations and Communities. Menlo Park, Calif.: Benjamin/Cummings.

Young, James Sterling (1966). The Washington Community, 1800-1828. New York: Harcourt, Brace & World.

