
Individual Choice Behavior

This is a large, sprawling literature, in economics and psychology, much of which is devoted to testing the predictions of different theories of rational behavior, and to documenting deviations from those predictions. The experiments are mostly simple, often just carefully crafted questions about hypothetical choices. They have given rise to some vigorous debates: on experimental methodology, on the interpretation of the observed choice behaviors, and on their implications for economics. One of the chief methodological debates, going back at least to the 1931 experiment of Thurstone and a long 1942 critique of it by Allen Wallis and Milton Friedman, concerns the use of hypothetical versus real choices. We'll talk about some series of experiments in which phenomena initially observed in hypothetical choices were reproduced in experimental environments in which subjects made real choices. Many of these experiments address questions that arise out of the mathematical structure of expected utility theory and its near relations, so these experiments will also give us a chance to look at the interaction between theory and experiment. Often experiments start as tests of the predictions of a theory of rational behavior, and then move on to become investigations of unpredicted regularities.

These experiments, the theories they test, and the theories developed to account for them will also give us an interesting window on how science is done: how new evidence is incorporated, or ignored. Part of what we will see is that there is some tension between two points of view about what we use theories for, what kinds of theories we seek, and how we test them. On the one hand, we could be looking for theories that are true descriptions of behavior. In this view, we can test theories by trying to falsify them, and when we falsify them we should reject them. On the other hand, economists are often looking for theories that will serve as models, i.e. as good approximations of observed behavior. Simple falsification may not cause us to reject such theories, since, of course, approximations are not true theories. But in that case we have to think about how we will deal with falsifying data, to help us decide which models are useful, for what, under what circumstances, and when we can find better approximations. (Think, for example, of the approximation that the earth is round.)

What do we mean by rationality? There are lots of formulations, involving assumptions of different strength, and these assumptions have shaped the experiments designed to test them. Varieties of rationality:
- Goal-oriented, maximizing behavior
- Ordinal utility maximization
- Expected utility maximization (and its important special case, expected value maximization)
- Subjective expected utility maximization (as well as some of its component parts, such as Bayesian belief formation)

(Note that some of these theories are nested, so if simple falsification were enough, we could e.g. reject ordinal utility maximization, and all the stronger theories would fall with it. We'll see that isn't how the history of this discussion looks.)

We'll also consider whether there are some robust phenomena that, while they may be consistent with rationality broadly defined, violate some of the simplifying assumptions that we sometimes take for granted.

Let's do a simple in-class experiment. Half of you have just received a windfall gift of a classy, stylish, desirable HBS pencil. It is yours to keep, or sell. Please examine it closely, taking note of its sleek lines and prestigious lettering. Think of yourself taking notes with such an instrument. In what follows, you will be referred to as owners. Half of you did not receive a pencil. You will be referred to as non-owners. Will each owner now please pass his/her pencil to a neighboring non-owner, so that the non-owners can also fully examine the pencil. Because I randomly chose who would become an owner, there may exist some possible gains from trade. In order to assess this, I want to elicit from each owner the minimum price at which he/she would be willing to sell the pencil, and from each non-owner the maximum price she/he would be willing to pay to buy it. Of course, eliciting these prices is a problem that itself presents some challenges to experimental design.

Becker, DeGroot, and Marschak (1964) designed a procedure that gives utility maximizers the incentive to reveal their reservation price. Each owner of the object will be asked to name the amount of money for which he would be willing to sell it. Each non-owner will be asked to name the amount of money for which he would be willing to buy it. Next a random price will be determined. (We will throw a 20-sided die, to determine a price between $0.05 and $1.00.) Each buyer-seller pair will then transact (the buyer will buy the object from the seller at the random price) if and only if the random price is higher than the seller's demand and lower than the buyer's offer. Note that the transaction takes place at the random price, not at the stated willingness to pay or willingness to accept.

It is a dominant strategy for a utility maximizer to state his true reservation price, since if he overstates or understates it he will miss some desirable selling opportunities or be forced to enter into some undesirable transactions. Are there any questions?

Would everyone please write down a price.

(Note that not only is using the BDM elicitation procedure a part of the design; so is explaining it: some theories of rationality could be taken to imply that the explanation is unnecessary.) And of course, if subjects aren't utility maximizers, they may not even have unique reservation prices, so the BDM technique may have unpredictable effects. But many modern economic theories are based on the assumption that agents are expected utility maximizers, so the predictions of a theory frequently can only be known if the utility functions of the subjects are elicited in an incentive-compatible way. As a result, this technique has found wide application even in experiments whose primary purpose is not to estimate utility functions.
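The dominant-strategy argument can also be checked by brute force. Below is a minimal sketch for a seller, assuming risk neutrality and a hypothetical reservation value of $0.40; the price grid mimics the 20-sided die.

```python
# A minimal sketch of the BDM mechanism from the seller's side, under an
# assumed risk-neutral utility. The grid of prices mimics the 20-sided die
# ($0.05 to $1.00 in 5-cent steps); the reservation value 0.40 is hypothetical.

PRICES = [round(0.05 * k, 2) for k in range(1, 21)]

def seller_payoff(stated, value, price):
    # The seller sells at the random price iff it exceeds her stated demand;
    # otherwise she keeps the object, worth `value` to her.
    return price if price > stated else value

def expected_payoff(stated, value):
    return sum(seller_payoff(stated, value, p) for p in PRICES) / len(PRICES)

value = 0.40
# Truth-telling is (weakly) optimal: no other stated price does better.
assert all(expected_payoff(value, value) >= expected_payoff(s, value)
           for s in PRICES)
```

Overstating (say, naming $0.60) forgoes sales at prices between $0.40 and $0.60 that exceed the seller's value, so it does strictly worse on this grid; understating is symmetric, forcing sales below value.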

Owner prices (WTA)

Non-Owner prices (WTP)

A version of this experiment was reported in Kahneman, Daniel, Jack L. Knetsch, and Richard H. Thaler (1990), "Experimental Tests of the Endowment Effect and the Coase Theorem," Journal of Political Economy, 98, 6, 1325-1348.

(Why do the results say something about the Coase Theorem?)

Incidentally, this experiment (with real choices, controlled incentives, etc.) arose from the observation that, in hypothetical questionnaires, there was often a big gap between reported WTPs and WTAs, e.g. in contingent valuation studies, with questions like these: How much would you pay to eliminate some risk that presently gives you a .001 chance of sudden death over the next five years? How much would you have to be paid to accept an additional .001 chance of sudden death over the next five years?

Putting prices on unfamiliar things turns out to be a difficult task, and the results are sensitive to how you're asked.

Ariely, D., Loewenstein, G., and Prelec, D. (2003), "Coherent Arbitrariness: Stable Demand Curves Without Stable Preferences," Quarterly Journal of Economics, 118, 73-106. Experiment 3: annoying sounds of 100, 300, and 600 seconds. At the beginning of the experiment, subjects were asked for the last 3 digits of their social security number. Subjects were then asked whether, hypothetically, they would listen again to the sound they had just experienced (for 300 seconds) if they were paid the money amount generated from their social security number. In the main part of the experiment, subjects had three opportunities to listen to sounds in exchange for payment. The three durations were ordered in either an increasing set (100 seconds, 300 seconds, 600 seconds) or a decreasing set (600 seconds, 300 seconds, 100 seconds). In each trial, after they indicated their WTA, subjects were shown both their own price and a random price. If the price set by the subject was higher than the random price, subjects continued directly to the next trial. If the price set by the subject was lower than the random price, subjects listened to the sound and received the money associated with it (the randomly drawn price), and then continued to the next trial. This process repeated itself three times, once for each of the three durations.



Maybe the coherence has to do with the fact that money is a familiar, scalar quantity. But they show that similar results can be achieved by using ounces of unpleasant liquid (composed of equal parts Gatorade and vinegar) instead of money, and having people choose between the liquid and the unpleasant sound. Depending on the anchor (1 or 3 minutes), people have different trade-offs between these two unpleasant things. (For each drink size: what is the highest number of seconds of unpleasant noise such that you prefer the unpleasant noise?)


Let's take a step back and consider some general formulations of what constitutes rational behavior.

Two closely related formulations of simple rationality:


1. Goal-oriented, maximizing behavior, in which choices can be represented by preferences: each individual i's choices on sets of alternatives can be represented by a preference relation Ri (≥) (with strict component Pi and indifference relation Ii) such that individual i's choice from a set A can always be represented by [ Ci(A) = {x in A | x Ri y for all y in A} ].

2. Ordinal utility maximization: individual i's choices can be represented by a real-valued utility function ui [ Ci(A) = {x in A | ui(x) ≥ ui(y) for all y in A} ].


Not every binary relation on a set A can be represented by a utility function [u(x) ≥ u(y) iff xRy]. Two necessary conditions for a preference R to be representable by a utility function u are that it be:

1. complete: for each x,y in A either xRy or yRx (since either u(x) ≥ u(y) or u(y) ≥ u(x));
2. transitive: xRyRz implies xRz (since u(x) ≥ u(y) ≥ u(z) implies u(x) ≥ u(z)).

These are sometimes also viewed as prescriptive rationality conditions. Completeness can be viewed either as a formality (if we're willing to define xIy whenever neither xPy nor yPx) or, more interestingly, as a kind of integrated-personality condition (you shouldn't fall apart when presented with choices between x and y). Transitivity can be viewed as an elementary rationality condition insofar as it avoids being the victim of a money pump among choices in a cycle xPyPzPx.

Theorem: If A is finite or countable, then completeness and transitivity are sufficient as well as necessary conditions for a preference to be representable by a utility function.


Some experiments concerned with the rational preference model.

An intransitivity demo: you like your tea with two spoons of sugar; you are presented with 101 cups of tea, ordered from zero to two spoons of sugar, in increments too small to taste the difference. So you are indifferent between any two adjacent cups. Transitivity implies you should be indifferent between any two cups, but you are not.

If we were testing utility theory to see if it were precisely true, we could stop here (although simple modifications to take account of just noticeable differences would quickly suggest themselves). But it's less clear how we should think about this kind of evidence if we are considering utility theory to be some kind of approximation. (Maybe we could improve the approximation by incorporating just noticeable differences, e.g. by modeling choices via preferences and utilities such that xPy iff u(x) > u(y) + jnd, where jnd is a number to be determined empirically. But maybe this would just make the theory cumbersome, and not matter much for most of the choices we're interested in.)
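The jnd repair mentioned above is easy to sketch. Under it, strict preference stays transitive but indifference does not, which is exactly the cups-of-tea pattern (the numerical values below are illustrative, not estimates).

```python
# Sketch of the "just noticeable difference" model: x P y iff u(x) > u(y) + jnd.
# One increment of sugar is assumed to be below the noticeable threshold.

JND = 0.02

def prefers(ux, uy, jnd=JND):
    return ux > uy + jnd

def indifferent(ux, uy, jnd=JND):
    return not prefers(ux, uy, jnd) and not prefers(uy, ux, jnd)

cups = [k * 0.01 for k in range(101)]  # utility proxies for cups 0..100

# Adjacent cups are indistinguishable...
assert all(indifferent(cups[i], cups[i + 1]) for i in range(100))
# ...but indifference fails to chain: cup 100 is clearly preferred to cup 0.
assert prefers(cups[100], cups[0])
```

Note that strict preference defined this way is still transitive (if u(x) > u(y) + jnd and u(y) > u(z) + jnd, then u(x) > u(z) + jnd); only the indifference relation loses transitivity.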


Intransitivity can arise in other ways also, more related to cognition than to perception (K. May, Econometrica, 1954) [note the date]: The subjects were 62 college students. The alternatives were three hypothetical marriage partners, x, y, and z. In intelligence they ranked xyz, in looks yzx, in wealth zxy. Subjects were confronted at different times with pairs labelled with randomly chosen letters. On each occasion x was described as very intelligent, plain looking, and well off; y as intelligent, very good looking, and poor; z as fairly intelligent, good looking, and rich. During the experiment proper, the subjects were never confronted with all three alternatives at once. [Repeated choice showed consistency.] Results: xyz: 21; xyzx: 17; yzx: 12; yxz: 7; zyx: 4; xzy: 1; zxy: 0; xzyx: 0. Note that the intransitive cycle xyzx chosen by 17 subjects is a Condorcet majority-rule cycle: each alternative beats the next in two out of three dimensions.
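May's cycle can be reproduced mechanically by majority vote over the three attributes, using the rankings given in the text:

```python
# Compare each pair of "partners" attribute by attribute and take the majority.
# Rankings follow May's description: intelligence x>y>z, looks y>z>x, wealth z>x>y.

rankings = {
    "intelligence": ["x", "y", "z"],
    "looks":        ["y", "z", "x"],
    "wealth":       ["z", "x", "y"],
}

def majority_prefers(a, b):
    # a beats b if a ranks above b on a majority (2 of 3) of the attributes
    wins = sum(r.index(a) < r.index(b) for r in rankings.values())
    return wins >= 2

# Each alternative beats the next in the cycle x > y > z > x:
assert majority_prefers("x", "y")
assert majority_prefers("y", "z")
assert majority_prefers("z", "x")
```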

So preferences aren't always (even) transitive. Let's (nevertheless) consider the stronger assumptions made about preferences over risky alternatives.


Note that, for lotteries involving monetary or other quantifiable payoffs, a special case of a utility maximizing individual is a risk neutral utility maximizer, who maximizes expected value.
Perhaps one of the first economics experiments was the hypothetical problem proposed by Nicholas Bernoulli and discussed by his cousin Daniel Bernoulli in 1738: the St. Petersburg paradox. How much would you pay for a gamble which pays you 2^(n-1) ducats for the nth head in a sequence of coin tosses, ending at the first tail? (E.g. if the sequence is HHHT, you get 1+2+4; if it is HHHHT, you get 1+2+4+8; etc.) The expected value of this bet is Σ (1/2)^n · 2^(n-1) = Σ 1/2, which is infinite. But most of us wouldn't pay more than, say, 20 ducats.

The Bernoullis' solution to this problem was to propose a form of expected utility theory, with diminishing marginal utility for increasingly large sums of money, so that the series Σ (1/2)^n u(2^(n-1)) converges (as it does, e.g., for u = log).
Bernoulli, Daniel [1738], "Specimen Theoriae Novae de Mensura Sortis," Commentarii Academiae Scientiarum Imperialis Petropolitanae, 5, pp. 175-192. English translation in Econometrica, 22, 1954, pp. 23-36.
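Both series can be checked numerically. A sketch of the truncated sums, using u = log for the Bernoulli case (the truncation lengths below are arbitrary):

```python
import math

# Truncated sums for the St. Petersburg gamble. Term n pairs the probability
# (1/2)**n of seeing an nth head with its marginal payoff 2**(n-1) ducats, so
# each expected-value term equals exactly 1/2 and the series diverges; with
# log utility the same series converges (to ln 2).

def expected_value(n_terms):
    return sum((0.5 ** n) * 2 ** (n - 1) for n in range(1, n_terms + 1))

def expected_log_utility(n_terms):
    return sum((0.5 ** n) * math.log(2 ** (n - 1)) for n in range(1, n_terms + 1))

assert expected_value(100) == 50.0   # grows linearly in the number of terms
assert abs(expected_log_utility(100) - math.log(2)) < 1e-9
```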


But the St. Petersburg paradox didn't cause expected value to be supplanted as economists' canonical model of individual choice behavior over lotteries. In fact, we find sustained formal attention to expected utility theory only picking up in earnest after von Neumann and Morgenstern's 1944 axiomatization. (And expected value maximization is still a wonderful theory of choice: only an approximation, to be sure, but perhaps the first approximation each of us is interested in when we start to analyze the returns on lotteries.)


Expected utility functions:

Consider preferences defined not only on some set A of riskless alternatives, but also over lotteries involving probability distributions over elements of A - i.e. consider preferences which allow an individual to evaluate not only alternatives a,b in A, but also lotteries [pa;(1-p)b] which give a probability p of receiving a, and (1-p) of receiving b.

For a set A of alternatives, consider the mixture set M containing A to be the (smallest) set of lotteries such that, for all a,b in M and p,q in [0,1]:

[pa;(1-p)b] is in M
[pa;(1-p)b] = [(1-p)b;pa]
a = [1a;0b]
[p[qa;(1-q)b];(1-p)b] = [pqa;(1-pq)b]

(Don't let the = sign fool you; the last line is a behavioral assumption about compound lotteries!)


It isn't hard to find violations of the compound lottery assumption:

Problem 5. Consider the following two-stage game. In the first stage, there is a 75% chance to end the game without winning anything and a 25% chance to move into the second stage. If you reach the second stage you have a choice between
C. A sure win of $30 [74%]
D. An 80% chance to win $45 [26%]

Your choice must be made before the game starts, that is, before the outcome of the first-stage game is known. Please indicate the option you prefer.

Problem 6. Which of the following options do you prefer?
E. A 25% chance to win $30 [42%]
F. A 20% chance to win $45 [58%]
[Source: Tversky and Kahneman, 1981]

The compound lottery assumption means we will be treating problems 5 and 6 as the same lottery.


If a preference relation R is defined on M (with strict preference P and indifference I), then an expected utility function u: M → R is a function u such that
(i) u(a) ≥ u(b) if and only if aRb; and
(ii) u[pa;(1-p)b] = pu(a) + (1-p)u(b).

Conditions on the preferences:
1. Independence: For all alternatives x,y,z in A and probabilities p:
If xPy then [px;(1-p)z] P [py;(1-p)z].
If xIy then [px;(1-p)z] I [py;(1-p)z].

2. Archimedian condition: If xPyPz then there exists a p such that yI[px;(1-p)z].

(Many people's intuition is that the independence condition is natural, and probably describes their behavior pretty well, but that the Archimedian condition might be hard to take: e.g. suppose x = finding a $20 bill as you leave class, y = leaving class today as you expected, z = getting boiled in oil. But this intuition may be wrong on both counts. For example, would you cross a street to pick up $20?)


The Allais paradox (Allais, Econometrica, 1953)

Choice 1: choose one of the two lotteries P or Q, where Q gives you $500,000 for certain, and P is the lottery [.89$500,000; .10$2,500,000; .01$0]

Choice 2: choose one of the two lotteries R or S, where R is the lottery [.11$500,000; .89$0] and S is the lottery [.10$2,500,000; .90$0].


Allais observed that many people prefer Q to P in choice 1, and S to R in choice 2. However, it is easy to confirm that no expected utility maximizer can simultaneously hold both opinions. That is, preferring Q to P implies u($500,000) > .89u($500,000) + .10u($2,500,000) + .01u($0), which reduces to .11u($500,000) > .10u($2,500,000) + .01u($0), while preferring S to R implies the reverse inequality.
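The algebra can be checked mechanically. The sketch below encodes the two preference statements and confirms, for a few illustrative utility functions, that they cannot both hold: they are exactly opposite inequalities in the same quantity, .11u($500,000) − .10u($2,500,000) − .01u($0).

```python
import math

# Preferring Q to P and preferring S to R are mutually exclusive for any
# expected utility maximizer; here we verify this on some example utilities.

def prefers_Q_over_P(u):
    return u(500_000) > 0.89 * u(500_000) + 0.10 * u(2_500_000) + 0.01 * u(0)

def prefers_S_over_R(u):
    return 0.10 * u(2_500_000) + 0.90 * u(0) > 0.11 * u(500_000) + 0.89 * u(0)

candidates = [
    lambda w: w,                # risk neutral
    lambda w: math.sqrt(w),     # concave
    lambda w: math.log(1 + w),  # very concave
    lambda w: w ** 2,           # convex
]

# No candidate can exhibit the common (Q, S) pattern that Allais observed:
assert not any(prefers_Q_over_P(u) and prefers_S_over_R(u) for u in candidates)
```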


Some of the modifications of utility theory designed to account for this kind of phenomenon focus on the fact that it can be viewed as a violation of the independence condition which we assumed was satisfied by an individual's preferences. That is, if we define the lottery T = [(10/11)$2,500,000; (1/11)$0], then (keeping the compound lottery assumption in mind) Q = $500,000 = [.11Q; .89Q],

P = [.89$500,000; .10$2,500,000; .01$0] = [.11T; .89Q],

R = [.11$500,000; .89$0] = [.11Q; .89$0],

S = [.10$2,500,000; .90$0] = [.11T; .89$0].

So if an individual whose preferences obey the independence condition prefers Q to P then he prefers Q to T, which implies he must prefer R to S.


Theorem (vN-M, circa 1944): If R is a complete and transitive preference on M obeying the Independence and Archimedian conditions, then an expected utility function exists that represents R.

Note that to get an expected utility function, and not just an ordinal utility function, we need stronger assumptions about the consistency of preferences: in addition to completeness and transitivity, we need the Independence and Archimedian conditions. All these conditions apply not just on the abstract set of riskless alternatives A, but on the whole mixture set M generated by A. (So even if A is finite, the preferences have to be consistent on an infinite set.) (When we deal with subjective probabilities, we move into the Bayesian domain of subjective expected utility theory, and require further consistency conditions.)

An expected utility function can now be constructed by picking arbitrary alternatives xPy and setting u(x) = 1, u(y) = 0. For an alternative z, we can now scale u(z) to preserve expected utility: e.g. if xPzPy, u(z) = p such that zI[px;(1-p)y] (such a p exists by the Archimedian condition).
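The constructive step can be illustrated concretely. For a decision maker whose preferences happen to be expected-value preferences over money (an assumption made purely for illustration), the indifference probability is available in closed form:

```python
# Fix u(x)=1 at the best prize and u(y)=0 at the worst, and scale any z in
# between by the probability p making z indifferent to the lottery [p x; (1-p) y].

x, y = 100.0, 0.0  # arbitrary prizes with x P y

def indifference_prob(z):
    # For expected-value preferences, z ~ [p x; (1-p) y] exactly when
    # z = p*x + (1-p)*y, so p = (z - y) / (x - y).
    return (z - y) / (x - y)

u = indifference_prob  # the constructed utility function
assert u(x) == 1.0 and u(y) == 0.0
assert u(25.0) == 0.25  # the scale is pinned down by the two endpoints
```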


A big theoretical/experimental literature has grown out of investigating the consequences of relaxing the independence assumption, and getting generalized utility theories (with additional personal parameters) which have expected utility as a special case. (Machina et al. for the theory; see the Camerer chapter in the Handbook for some of the experiments.) Another, largely separate literature has grown out of the hypothesis that what drives choices like the Allais paradox has more to do with how probabilities are perceived. In particular, experiments in this literature are aimed at exploring whether probability perception is nonlinear, with discontinuities at 0 and 1 (a "certainty effect"). This is the direction taken by Tversky and Kahneman, e.g. in prospect theory.

Daniel Ellsberg, back in 1961, posed a fundamental challenge to the Bayesian formulation of subjective expected utility theory. Ellsberg's paradox: There are two urns containing a large number of red and black balls. Urn A is known to have 50% red balls and 50% black balls. Urn B has red and black balls in unknown proportions. You will win $100 if you draw the color ball of your choice from an urn. From which urn would you rather choose a ball?


Kahneman and Tversky developed a big class of demonstrations of a variety of systematic violations of expected utility theory. The examples below were collected in Thaler, Richard, "The Psychology of Choice and the Assumptions of Economics," in A.E. Roth, editor, Laboratory Experimentation in Economics: Six Points of View, Cambridge University Press, 1987.


Problem 4. Which of the following options do you prefer?
A. A sure win of $30 [78%]
B. An 80% chance to win $45 [22%]

Problem 5. Consider the following two-stage game. In the first stage, there is a 75% chance to end the game without winning anything and a 25% chance to move into the second stage. If you reach the second stage you have a choice between
C. A sure win of $30 [74%]
D. An 80% chance to win $45 [26%]

Your choice must be made before the game starts, that is, before the outcome of the first-stage game is known. Please indicate the option you prefer.

Problem 6. Which of the following options do you prefer?
E. A 25% chance to win $30 [42%]
F. A 20% chance to win $45 [58%]
[Source: Tversky and Kahneman, 1981]

We might have expected subjects to treat problems 5 and 6 as equivalent, but they come much closer to treating problem 5 as equivalent to problem 4. (So this might be a pseudo-certainty effect in problem 5, or perhaps a compound lottery effect, or both.)


Problem 7. Imagine that you face the following pair of concurrent decisions. First examine both decisions; then indicate the options you prefer:
Decision (i). Choose between
A. A sure gain of $240 [84%]
B. 25% chance to gain $1,000 and 75% chance to lose nothing [16%]
Decision (ii). Choose between
C. A sure loss of $750 [13%]
D. 75% chance to lose $1,000 and 25% chance to lose nothing [87%]
[Source: Tversky and Kahneman, 1981]

Problem 8. Choose between
E. 25% chance to win $240 and 75% chance to lose $760
F. 25% chance to win $250 and 75% chance to lose $750
[Source: Tversky and Kahneman, 1981]
But E = A&D and F = B&C.
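The claim that E = A&D and F = B&C is just probability arithmetic, and can be checked mechanically (the dict-of-outcomes representation below is ours, for illustration):

```python
# Combine the concurrent decisions by adding payoffs and multiplying
# probabilities. Each lottery is a dict mapping net payoff -> probability.

def combine(lot1, lot2):
    out = {}
    for x1, p1 in lot1.items():
        for x2, p2 in lot2.items():
            out[x1 + x2] = out.get(x1 + x2, 0.0) + p1 * p2
    return out

A = {240: 1.0}
B = {1000: 0.25, 0: 0.75}
C = {-750: 1.0}
D = {-1000: 0.75, 0: 0.25}

assert combine(A, D) == {-760: 0.75, 240: 0.25}   # exactly lottery E
assert combine(B, C) == {250: 0.25, -750: 0.75}   # exactly lottery F
```

Note that F dominates E outcome by outcome, yet the modal pattern in Problem 7 (A and D) adds up to E.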




mental accounting

Problem 9. Imagine that you are about to purchase a jacket for ($125)[$15] and a calculator for ($15)[$125]. The calculator salesman informs you that the calculator you wish to buy is on sale for ($10)[$120] at the other branch of the store, a 20-minute drive away. Would you make the trip to the other store? [Source: Tversky and Kahneman, 1981]

Problem 10. Imagine that you have decided to see a play, admission to which is $10 per ticket. As you enter the theater you discover that you have lost a $10 bill. Would you still pay $10 for the ticket to the play?
Yes: 88% No: 12%

Problem 11. Imagine that you have decided to see a play and paid the admission price of $10 per ticket. As you enter the theater you discover that you have lost your ticket. The seat was not marked and the ticket cannot be recovered. Would you pay $10 for another ticket?
Yes: 46% No: 54%
[Source: Tversky & Kahneman, 1981]


sunk costs

Problem 12. You have tickets to a basketball game in a city 60 miles from your home. On the day of the game there is a major snow storm, and the roads are very bad. Holding constant the value you place on going to the game, are you more likely to go to the game (1) if you paid $20 each for the tickets or (2) if you got the tickets for free? [Source: Thaler, 1980]

This has been replicated fairly cleanly in an experiment (Arkes and Blumer, 1985) in which season ticket holders to a campus theater group were randomly divided into two groups, one of which was given a refund on part of the price of the tickets. This group attended the first half of the season less regularly than the control group, which received no refund.


The relationship between time of payment and consumption is further explored in Gourville, J.T. and Soman, D. (1998), "Payment Depreciation: The Behavioral Effects of Temporally Separating Payments from Consumption," Journal of Consumer Research, 25 (September), 160-174. Gourville and Soman look at participation rates of health club members as a function of when their twice-yearly dues come due. The fact that participation is highest in the month following billing supports the general contention that consumption of services is in part a function of when they were paid for. (This is also a phenomenon first explored through hypothetical questions.)

(See also Della Vigna and Malmendier)


One attempt to summarize a number of observed or hypothesized regularities was Kahneman and Tversky's Prospect Theory (1979, Econometrica). (See also the updated version: Tversky, Amos, and Daniel Kahneman, "Advances in Prospect Theory: Cumulative Representation of Uncertainty," Journal of Risk and Uncertainty, 5 (1992), 297-323.)

Prospect Theory posits both a nonlinear value function that scales different monetary payoffs, and a nonlinear weighting function that scales different probabilities.


Prospect Theory


Evaluate lotteries at Σ π(p)·v(x) instead of Σ p·u(x), where π is the probability weighting function and v the value function.
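As a concrete sketch of that evaluation rule, here are the functional forms and median parameter estimates reported in Tversky and Kahneman (1992); the one-outcome prospect function below is an illustration, not their full cumulative model.

```python
# Value function v(x) = x**0.88 for gains, -2.25 * (-x)**0.88 for losses, and
# inverse-S probability weight w(p) = p**g / (p**g + (1-p)**g)**(1/g) with
# g = 0.61 (the Tversky-Kahneman 1992 median estimate for gains).

ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def v(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def w(p, g=GAMMA):
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

def prospect(p, x):
    # value of a simple prospect "(x with probability p, else nothing)"
    return w(p) * v(x)

# Small probabilities are overweighted, large ones underweighted:
assert w(0.01) > 0.01 and w(0.99) < 0.99
# Losses loom larger than gains:
assert abs(v(-100)) > v(100)
```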


There is also a large literature on intertemporal choices. Which would you prefer: $10 today or $15 in 2 weeks? And which would you prefer: $10 in 50 weeks or $15 in 52 weeks?
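These paired questions are the classic signature of present bias. A beta-delta ("quasi-hyperbolic") sketch, with purely illustrative parameters (beta = 0.6, weekly delta = 0.99), reproduces the reversal:

```python
# Present value under quasi-hyperbolic discounting: x today is worth x;
# x at week t > 0 is worth beta * delta**t * x. Parameters are illustrative.

BETA, DELTA = 0.6, 0.99

def pv(x, t):
    return x if t == 0 else BETA * DELTA ** t * x

# Today, immediacy wins: $10 now beats $15 in 2 weeks...
assert pv(10, 0) > pv(15, 2)
# ...but viewed from a distance the ranking flips:
# $15 in 52 weeks beats $10 in 50 weeks.
assert pv(15, 52) > pv(10, 50)
```

An exponential discounter applies the same delta to every period, so shifting both options 50 weeks into the future could never flip the ranking.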

Laibson and Rabin are two of the names associated with the burgeoning literature on modeling time preferences as hyperbolic rather than exponential, i.e. as U = u_0 + β Σ δ^t u_t (summing over discounted future utilities received at times t = 1 to infinity) instead of the more conventional (stationary over time) exponential formulation U = u_0 + Σ δ^t u_t. A good deal of thoughtful work has gone into drawing out the differences to be expected between rational and irrational hyperbolic discounters, a distinction based on whether they correctly anticipate their future preferences.

So far, we've seen some evidence that choices may not always be resolvable into nicely behaved preferences over alternatives. But we can also ask whether choices can be represented by preferences at all.


Choosing cancer treatments (McNeil, Pauker, and Tversky, 1988):

Survival frame (% alive)     After treatment   After 1 year   After 5 years
Radiation                    100               77             22
Surgery                      90                68             34

Mortality frame (% dead)
Radiation                    0                 23             78
Surgery                      10                32             66

Percentage choosing each (American MDs and students):
Survival frame: Radiation 16, Surgery 84
Mortality frame: Radiation 50, Surgery 50


Presentation effects present a different kind of challenge to theories of choice based on preferences, since they suggest that choices between two alternatives may depend on how the decision is presented or framed (and not merely on the properties of the alternatives). That is, they suggest that there may not necessarily be any underlying preferences that are tapped when we ask a question or demand a choice. Instead, sensitivity of choices to how they are framed can be interpreted as suggesting that different frames elicit different psychological choice processes, and these may result in different choices.

Preferences over other complex domains (not just gains and losses); e.g. preferences for fairness. Problem 13. You are lying on the beach on a hot day. All you have to drink is ice water. For the past hour you have been thinking about how much you would enjoy a nice cold bottle of your favorite brand of beer. A companion gets up to make a phone call and offers to bring back a beer from the only nearby place where beer is sold (a fancy resort hotel)[a small, rundown grocery store]. He says that the beer may be expensive and so asks how much you are willing to pay for it. He says that he will buy the beer if it costs as much as or less than the price you state, but if it costs more than the price you state he will not buy it. You trust your friend and there is no possibility of bargaining with (the bartender)[the store owner]. [Source: Thaler, 1985]


So, starting not long after von Neumann and Morgenstern's axiomatization of expected utility, and the theoretical literature it gave rise to, there has been a long sequence of challenges to its descriptive power. We've discussed, e.g.:

Allais, Maurice [1953], "Le Comportement de l'Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l'École Américaine," Econometrica, 21, pp. 503-546.
May, Kenneth O. [1954], "Intransitivity, Utility, and the Aggregation of Preference Patterns," Econometrica, 22, pp. 1-13.
Ellsberg, Daniel [1961], "Risk, Ambiguity and the Savage Axioms," Quarterly Journal of Economics, 75, pp. 643-669.
Kahneman, D., and A. Tversky (1979), "Prospect Theory: An Analysis of Decision under Risk," Econometrica, 47, pp. 263-291.

This brings us to the homework: let's talk about Rabin, 2000.


Rabin considers utility functions defined on total wealth, u(w). That is, the set of alternatives A he considers is something like alternative values of the present value of your lifetime income. He notices that if u is concave and has considerable curvature over all small intervals, concavity may imply that u is bounded. And even if u has considerable curvature only for small intervals in some finite range [0,w], concavity (diminishing marginal returns) puts big limits on how fast u can grow.



p. 229


How should we think about all this?


Possible Reactions to Rabin:

The usual suspects:

Markets may provide very different environments

But some markets protect individuals more than others, e.g. securities markets versus housing markets or labor markets (which may discipline individual errors in some directions but not in others).

Results may be artifacts of presentation: some presentations may be more natural than others, and people may be more prone to various behaviors in some presentations than in others. But marketers (e.g. of appliance insurance, etc.) will have an incentive to look for frames that encourage people to make bad decisions.

Hypothetical vs. real payoffs
Learning versus inexperienced play


How Efficient Are Information Markets? Evidence from an Online Exchange Paul Tetlock October 2003


Palacios-Huerta and Serrano question the data behind the assumption that extreme aversion to small risks is a reasonable summary of behavior.
Palacios-Huerta, Ignacio, Roberto Serrano, and Oscar Volij, "Rejecting Small Gambles Under Expected Utility," working paper, Brown University, August 2002.

They point out that many estimates of risk aversion, even on small gambles, are much more moderate than those Rabin supposes. And no experiment has been done to specifically look at whether big changes in wealth alter people's risk aversion for small gambles. (It would be fun to be a subject in the increasing-wealth condition of a within-subjects design, but it might be cheaper to do a between-subjects design and ask people about their net worth, or future income expectations.) One of the things we learn from the work of Kahneman and Tversky and others is that there are surely experiments that can be done that would support Rabin's estimates of risk aversion for low stakes, and also others, perhaps using different response modes, that would elicit much more risk-neutral behavior.

Another line of reaction to Rabin's paper has to do with the scope of the consistency requirements. Rabin looks at u(w) defined on all levels of wealth, and wants all decisions to be consistent with one another. Notice that, while this isn't an unreasonable thing to want, it is nowhere an assumption in the axiomatization of expected utility. His argument doesn't address the idea that utility theory might be a good approximation for local use, i.e. for comparing lots of small decisions, or lots of big ones.


Implications for experimenters? Rabin and Thaler make an empirical prediction about the literature of which their paper is a part. Looking at Allais, 1953; May, 1954; Ellsberg, 1961; Kahneman and Tversky, 1979; Rabin, 2000; and Rabin and Thaler, 2001, they predict that the subsequence 1954, 1961, 1979, 2000, 2001 has converged. If they are right, and they have indeed written the last paper on utility theory, which comes to be regarded as an ex-hypothesis, then of course experimenters will find themselves testing very different kinds of theories, and experiments will have to control for the hypotheses that become plausible to economists. But of course their argument seems also to imply that e.g. Allais' paper should have been the last on utility theory. So their prediction may be wrong. Even if their prediction about the general implications of Rabin's work is wrong, his observations may still have implications for experimenters.


One obvious, and perhaps uncontroversial, implication is that we need to be careful about what conclusions we draw from an experiment. We will need to be very careful if we're tempted to estimate individuals' risk aversion for large sums based on their choices for small sums. (We'll later talk about other domains on which the scale of payoffs raises questions about the possible generality, or lack of it, of experimental results: in this connection recall the debate about hypothetical versus real payoffs.)

Rabin, and Rabin and Thaler, also propose another implication of their work for experimenters: experimental designs should no longer use binary lottery games, or other tools to control for unobserved risk aversion. My guess is that, as long as economists continue to use expected utility to formulate predictions, experiments that want to test those predictions that depend sensitively on agents' utility functions will have to either try to control them or measure them.


As prospect theory has become better known, it has also started to attract the kind of critical attention from experimenters that utility theory has attracted. Let's look quickly at two of these. Harbaugh, Krause, and Vesterlund (2002), "Prospect Theory in Choice and Pricing Tasks," working paper. HK&V report that the predictions of cumulative prospect theory are sensitive to the way the questions are asked.
Tversky, Amos, and Daniel Kahneman, "Advances in Prospect Theory: Cumulative Representation of Uncertainty," Journal of Risk and Uncertainty, 5 (1992), 297-323.

Distinctive fourfold pattern summarized by CPT:
(1) risk-seeking over low-probability gains
(2) risk-aversion over low-probability losses
(3) risk-aversion over high-probability gains
(4) risk-seeking over high-probability losses

Reflection of risk attitude: low and high probability, loss and gain.


Gambles examined in HK&Vs study:

Table 1: The Six Prospects

Prospect   Prob.   Payoff   Expected Value   FFP Prediction
   1        .1     +$20          $2            Seeking
   2        .4     +$20          $8            Neutral
   3        .8     +$20         $16            Averse
   4        .1     -$20         -$2            Averse
   5        .4     -$20         -$8            Neutral
   6        .8     -$20        -$16            Seeking
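The four corner predictions in Table 1 can be reproduced directly from CPT's functional forms. The sketch below uses Tversky and Kahneman's (1992) median parameter estimates (value function exponent 0.88, loss aversion 2.25, weighting exponents 0.61 for gains and 0.69 for losses) and compares each prospect with its expected value; the p = .4 prospects sit near the weighting function's crossover, so this sketch only checks the low- and high-probability cases.

```python
# Fourfold pattern implied by cumulative prospect theory, using the
# functional forms and median parameters of Tversky & Kahneman (1992):
# v(x) = x**0.88 for gains, -2.25 * (-x)**0.88 for losses, and
# w(p) = p**g / (p**g + (1 - p)**g)**(1/g), g = 0.61 (gains), 0.69 (losses).

ALPHA, LAMBDA, GAMMA_GAIN, GAMMA_LOSS = 0.88, 2.25, 0.61, 0.69

def value(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p, g):
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

def cpt(p, x):
    """CPT value of a two-outcome prospect: x with probability p, else 0."""
    g = GAMMA_GAIN if x >= 0 else GAMMA_LOSS
    return weight(p, g) * value(x)

def attitude(p, x):
    """'seeking' if the prospect is preferred to its expected value for sure."""
    return "seeking" if cpt(p, x) > value(p * x) else "averse"

# Prospects 1, 3, 4, and 6 from the table above.
results = {
    (0.1, +20): attitude(0.1, +20),  # low-probability gain
    (0.8, +20): attitude(0.8, +20),  # high-probability gain
    (0.1, -20): attitude(0.1, -20),  # low-probability loss
    (0.8, -20): attitude(0.8, -20),  # high-probability loss
}
print(results)
```

With these parameters the four corners come out seeking, averse, averse, and seeking respectively, matching the FFP column of the table.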


Experimental Procedure:
Probabilities were presented both graphically (by spinner) and numerically.

Elicitation: (1) Choice-based procedure (Harbaugh et al., 2000): subjects chose between each gamble and its expected value.


(2) Price-based procedure: subjects report their maximum willingness to pay
o to play a gamble over gains
o to avoid playing a gamble over losses
A BDM procedure then determines whether subjects get the risky prospect, or instead pay the randomly determined price to play the gamble (gains) or to avoid the gamble (losses).
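A minimal sketch of the BDM (Becker-DeGroot-Marschak) mechanism in the loss frame may make the procedure concrete: the subject reports a maximum willingness to pay to avoid the gamble, a price is drawn at random, and the subject pays the drawn price (not the report) whenever it is at or below the report, otherwise the gamble is played out. The function names and the price grid here are illustrative assumptions, not HK&V's implementation.

```python
# Sketch of a BDM round for avoiding a gamble over losses. Because the
# price paid never depends on the reported number, truthfully reporting
# one's certainty equivalent is optimal.

import random

def bdm_avoid_loss(reported_wtp, gamble, price_grid, rng=random):
    """Return the subject's payoff for one BDM round in the loss frame.

    gamble: list of (probability, payoff) pairs, payoffs <= 0.
    """
    price = rng.choice(price_grid)
    if price <= reported_wtp:
        return -price  # pay the random price and avoid the gamble
    # otherwise the gamble is played out
    draw, cum = rng.random(), 0.0
    for p, x in gamble:
        cum += p
        if draw < cum:
            return x
    return gamble[-1][1]

# Example: 50% chance of losing $20, report of $10, prices $0.00-$20.00.
payoff = bdm_avoid_loss(
    reported_wtp=10.0,
    gamble=[(0.5, -20.0), (0.5, 0.0)],
    price_grid=[i * 0.5 for i in range(41)],
)
print(payoff)
```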

Participants: 96 college students
o 64 used the choice method first and the price method second (choice-subjects)
o 32 used the price method first and the choice method second (price-subjects)


[Figure: an example of the price task. "How much would you pay to avoid playing this game?" A spinner offers a 50% chance of -$20 and a 50% chance of -$0; the alternative is no spin and a sure -$10.]
[Figure: risk attitudes of price-subjects in the price task, round 1 decisions (N=32).]

[Figure: risk attitudes of choice-subjects in the choice task (N=64).]

So HK&V find they can reverse prospect theory's fourfold pattern of risk attitudes for high and low probabilities and gains and losses.


Ralph Hertwig, Greg Barron, Elke U. Weber, and Ido Erev, "Decisions From Experience and the Effect of Rare Events," Psychological Science, forthcoming, look at choices over gambles in three conditions, which they call:

Description: The Description condition is the condition used by Kahneman and Tversky. The subjects were presented with a description of the problems (as described above) and were asked to state which gamble they preferred in each problem.

Feedback: In the Feedback condition, the participants did not see the description of the relevant gambles. Rather, they were presented with two unmarked keys and were told that in each trial of the experiment they could select one of the two keys. Each selection led to a draw from the key's payoff distribution (a play of the relevant gamble).

Sampling: In the Sampling condition the participants were told that their goal was to select once between two gambles. They were not presented with a description of the gambles, but were allowed to sample the relevant payoff distributions as many times as they wished. Thus, like the Feedback condition, they had to make decisions from experience, but like the Description condition, they had to make a single choice.


HBW&E also find that CPT's overweighting of small probabilities and underweighting of large probabilities occurs only in the Description condition.
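Part of the intuition for why experience can produce the opposite of overweighting is purely statistical: a rare event of probability p fails to appear at all in a sample of n independent draws with probability (1 - p)^n, so many subjects who sample only a little never see it. The sample sizes below are illustrative, not HBW&E's data.

```python
# Chance that a rare outcome never shows up in a small sample of draws.

def prob_rare_event_unseen(p, n):
    """Probability that an outcome of probability p never occurs in n i.i.d. draws."""
    return (1 - p) ** n

# For a 10% event, even 20 draws miss it entirely about 12% of the time,
# and when it does appear its sample frequency is often below 10%.
for n in (5, 10, 20):
    print(n, round(prob_rare_event_unseen(0.1, n), 3))
```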

Problem 1: Choose between:

Option   Outcome and likelihood                 Description   Feedback   Sampling
H        4 with probability 0.8; 0 otherwise        35%          65%        88%
L        3 for sure

Problem 2: Choose between:

Option   Outcome and likelihood
H        4 with probability 0.2; 0 otherwise
L        3 with probability 0.25; 0 otherwise





So, there are systematic departures from simple models of rational choice. But it is hard to find general descriptive models. The same tools used to show that e.g. utility theory isn't a general description seem to work well on prospect theory too.

What are the implications of all this:

For economists generally? For experimental economists?

I started to think self-consciously about these science-in-the-large issues when I was asked to reply to Amos Tversky's question: What accounts for economists' "reluctance to depart from the rational model, despite considerable contradictory evidence"?


I tried to give a serious answer to this, under the heading "Individual Rationality as a Useful Approximation."

Tversky, Amos, "Rational Theory and Constructive Choice," in The Rational Foundations of Economic Behavior, K. Arrow, E. Colombatto, M. Perlman, and C. Schmidt, editors, Macmillan, 1996.

Roth, A.E., "Individual Rationality as a Useful Approximation: Comments on Tversky's 'Rational Theory and Constructive Choice'," in The Rational Foundations of Economic Behavior, K. Arrow, E. Colombatto, M. Perlman, and C. Schmidt, editors, Macmillan, 1996, 198-202.

That the question might need a serious answer is suggested by the fact that the discussion took place in the 1990s, despite the fact that some of the most famous evidence against the expected utility hypothesis (e.g. Allais, 1953), was collected around the same time that the expected utility hypothesis was formalized (von Neumann & Morgenstern, 1944).

Different levels of explanation may be good for different things: blood chemistry, mental models, rational behavior.


Risk-neutral economic man: never buys insurance, but would be willing to pay any finite amount to participate in the Petersburg paradox (if only he could be convinced that the resources exist to pay off any possible winnings).

Expected-utility-maximizing man: buys insurance, but ignores sunk costs, and is immune to framing effects.

Almost-rational economic man, e.g. prospect-theory man: has malleable reference points and probability perceptions, but still has preferences; comfortable with non-utility Allais choices, but doesn't do preference reversals.

Psychological man (Tversky, 1996): doesn't have preferences, has mental processes. Different frames and contexts, and different choice procedures, elicit different processes. So he may sometimes exhibit preference reversals, because choosing and pricing elicit different mental procedures.


Neurobiological man doesn't (even) have a fixed collection of mental processes, in the sense of psychological man. He has biological and chemical processes which influence his behavior. Different blood chemistry leads to different mental processes; e.g. depending on the level of lithium (or Valium or Prozac) in his blood, he makes different decisions (on both routine matters and matters of great consequence--even life and death). An understanding of how chemistry interacts with mental processes has proved to be very useful, for instance in treating depression. One can then pose the neurobiologist's question: What accounts for the [psychologist's] "reluctance to abandon the [psychological] model, despite considerable contrary evidence"?


Assessing the usefulness of an approximation

Since we know that approximations aren't precisely true, it is easy not to be impressed by evidence that they are not (even when this evidence is in sufficient quantity that you can see that it isn't hard to find). The problem is that most of the evidence doesn't directly address the question of how useful the assumption of rationality is as an approximation on (each of) the domains to which it is applied. But economists haven't been very good at formalizing the idea of a useful approximation in a way that makes it precise. And truer models don't always make more useful approximations: it depends on the use you want to make of them. Before economics was an experimental science we could get away with being unclear about the difference between useful approximations and precisely true theories, because a model that could be shown to be false on noisy field data probably wasn't a very good approximation. But experimental economics marks the end of this as a harmless confusion, because a model that isn't precisely true can now be shown to be false.


Yuji Ijiri and Herbert A. Simon, American Economic Review, 54, 2, part 1, March 1964, 77-89:

"Suppose we wish to test Galileo's law of the inclined plane--that the distance, s(t), travelled by a ball rolling down the plane increases with the square of the time: s(t) = kt^2, where k is a constant. We perform a large series of careful observations, obtaining a set of [s,t] pairs from which we estimate k. To decide whether we have confirmed or refuted Galileo's law, we test whether the observed deviations of the observations from the fitted curve could have arisen by chance. Suppose that the statistical test rejects the hypothesis. Then we may conclude either (a) that Galileo's law is substantially incorrect or (b) that it is substantially correct, but only as a first approximation. We know, in fact, that Galileo's law does ignore variables that may be important under various circumstances: irregularities in the ball or the plane, rolling friction, air resistance, possible electrical or magnetic fields if the ball is metal, variations in the gravitational field--and so on, ad infinitum. The enormous progress that physics has made in three centuries may be partly attributed to its willingness to ignore for a time discrepancies from theories that are in some sense substantially correct." (emphasis added)

No one has ever formalized the criteria for ignoring discrepancies of this kind.
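Ijiri and Simon's point can be illustrated numerically. In the sketch below, the data are generated from a model with a small friction-like correction, s(t) = k*t^2 - c*t^3 (the cubic term and all numbers are illustrative assumptions), and then fit by Galileo's law s = k*t^2. The fitted law explains essentially all of the variance, yet the residuals are systematic rather than random, which is exactly what a large set of precise observations would let a statistical test detect.

```python
# Fit Galileo's law s = k*t**2 to data that actually follow
# s = k*t**2 - c*t**3, with a small illustrative friction coefficient c.

k_true, c_true = 2.0, 0.01
ts = [0.1 * i for i in range(1, 101)]          # times 0.1 .. 10.0
ss = [k_true * t**2 - c_true * t**3 for t in ts]

# Least-squares estimate of k for the one-parameter model s = k*t**2.
k_hat = sum(s * t**2 for s, t in zip(ss, ts)) / sum(t**4 for t in ts)

residuals = [s - k_hat * t**2 for s, t in zip(ss, ts)]
mean_s = sum(ss) / len(ss)
ss_tot = sum((s - mean_s) ** 2 for s in ss)
ss_res = sum(r * r for r in residuals)
r_squared = 1 - ss_res / ss_tot

# r_squared is extremely close to, but strictly below, 1: "substantially
# correct, but only as a first approximation."
print(k_hat, r_squared)
```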


Sometimes discussions between psychologists and economists, and even between experimental economists and theorists, are at cross purposes because of a failure to distinguish whether we believe a theory is precisely true, or a useful approximation on some appropriately specified domain. (Useful approximations only become more useful as we become aware of what domains they are useful on, and when they might lead us dangerously astray.)

When I'm trying to particularly annoy my psychologist friends and colleagues, I tell them that this difference in approach is why the President doesn't have a Council of Psychological Advisors.


Studying industries like insurance required economists to move beyond expected-value maximization (in a way that the Petersburg paradox never did). Insurance presented phenomena that simply couldn't be studied without some notion of risk aversion. Perhaps industries like advertising, and marketing in general, will be the impetus for economists to look at models of choice that do not posit fairly fixed preferences. [There has been a lot of semi-formal experimenting in the mail-order industry.]

Non-preference models of collective economic behavior may help: e.g. learning models, or models of boundedly rational play, which show that some of the collective phenomena we want to explain (e.g. equilibrium phenomena) may not depend critically on highly rational behavior. These are things to keep an eye out for in the classes which follow, as we talk about behavior in markets and other strategic environments.


Homework: Next week, Muriel Niederle will be in town. Please read her paper: Gneezy, Uri, Muriel Niederle, and Aldo Rustichini, "Performance in Competitive Environments: Gender Differences," Quarterly Journal of Economics, CXVIII, August 2003, 1049-1074, at http://www.stanford.edu/~niederle/Gender.pdf, and come prepared with comments, criticisms, attacks, and ideas for follow-up experiments that might further illuminate the issues raised in the paper and the hypotheses it discusses. It's also time to start thinking, at least broadly, about what kind of experiment you would like to design. It would be a good idea to form groups of 3 or 4 people who have some common interests and would like to design an experiment together.