
Decision Theory

Description
We have all had to make important decisions while uncertain about factors relevant to those decisions. In today's class, we study situations in which decisions are made in an uncertain environment.

Decision Theory Outline


Decision Making Under Uncertainty
Decision Criteria
Decision Trees
EVPI
Bayes' Theorem

Types of Decision-Making
If future states of nature are known: decision-making under certainty.
If future states of nature are uncertain: decision-making under uncertainty.
Decision theory focuses on uncertainty.

Steps in Decision Theory


1. Identify the alternatives open to the decision maker.
2. Identify future events (states of nature) not under the control of the decision maker.
3. Determine payoffs and construct a payoff table.
4. Select a criterion for evaluating alternative decisions.

Decision Criteria
No knowledge of future probabilities:
- Criterion of Pessimism (e.g., MaxiMin)
- Criterion of Optimism (e.g., MaxiMax)
- Criterion of Regret (e.g., MiniMax Regret)
Future probabilities estimated or known:
- Expected Value
- Expected Opportunity Loss

Decision Theory Example


Kn Video Productions needs to decide whether to produce a pilot for a TV show or to sell the idea to another production firm.

PAYOFF TABLE
States of Nature:        Reject   1 Year   2 Years
Probability                  ?        ?         ?
Produce Pilot             -100       50       150
Sell to Competitor         100      100       100

Decision Criteria
Dominated Actions Definition: An action ai is dominated by an action ai' if for every state sj in S, rij ≤ ri'j, and for at least one state sj', rij' < ri'j'.

The Maximin Criterion For each action, determine the worst outcome (smallest reward). The maximin criterion chooses the action with the best worst outcome.

Definition: The maximin criterion chooses the action ai with the largest value of minj rij, where the minimum is taken over all states sj in S.

The Maximax Criterion For each action, determine the best outcome (largest reward). The maximax criterion chooses the action with the best best outcome. Definition: The maximax criterion chooses the action ai with the largest value of maxj rij, where the maximum is taken over all states sj in S.

Minimax Regret

The minimax regret criterion (developed by L. J. Savage) uses the concept of opportunity cost to arrive at a decision. For each possible state of the world sj, find an action i*(j) that maximizes rij; i*(j) is the best possible action to choose if the state of the world is actually sj. For any action ai and state sj, the opportunity loss or regret for ai in sj is ri*(j),j - rij. The minimax regret criterion chooses the action whose maximum regret is smallest.

The Expected Value Criterion chooses the action that yields the largest expected reward.

Criterion of Pessimism
What is the worst that can happen?

Choose alternative with best worst-case.


MaxiMin or MiniMax
PAYOFF TABLE
States of Nature:        Reject   1 Year   2 Years
Probability                  ?        ?         ?
Produce Pilot             -100       50       150
Sell to Competitor         100      100       100

MaxiMin (worst case):  Produce Pilot = -100,  Sell to Competitor = 100

MaxiMin strategy = Sell to Competitor

Criterion of Optimism
What is the best that can happen?

Choose alternative with best possible outcome.


MaxiMax or MiniMin
PAYOFF TABLE
States of Nature:        Reject   1 Year   2 Years
Probability                  ?        ?         ?
Produce Pilot             -100       50       150
Sell to Competitor         100      100       100

MaxiMax (best case):  Produce Pilot = 150,  Sell to Competitor = 100

MaxiMax strategy = Produce Pilot

Criterion of Regret
What is the opportunity loss (regret) for each possible decision? Choose the alternative with the minimum maximum regret: MiniMax Regret.

PAYOFF TABLE
States of Nature:        Reject   1 Year   2 Years
Produce Pilot             -100       50       150
Sell to Competitor         100      100       100

REGRET TABLE
States of Nature:        Reject   1 Year   2 Years
Produce Pilot              200       50         0
Sell to Competitor           0        0        50

Maximum regret:  Produce Pilot = 200,  Sell to Competitor = 50

MiniMax Regret strategy = Sell to Competitor
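The three no-probability criteria above can be sketched in a few lines of Python; this is an illustrative sketch using the Kn Video payoff table, not part of the original slides:

```python
# Minimal sketch of maximin, maximax, and minimax regret.
# Rows are actions; columns are states (Reject, 1 Year, 2 Years).
payoffs = {
    "Produce Pilot":      [-100, 50, 150],
    "Sell to Competitor": [100, 100, 100],
}

def maximin(table):
    # Best worst-case: maximize the row minimum.
    return max(table, key=lambda a: min(table[a]))

def maximax(table):
    # Best best-case: maximize the row maximum.
    return max(table, key=lambda a: max(table[a]))

def minimax_regret(table):
    # Regret = column best minus the payoff; minimize the worst regret.
    n = len(next(iter(table.values())))
    col_best = [max(row[j] for row in table.values()) for j in range(n)]
    regret = {a: [col_best[j] - r for j, r in enumerate(row)]
              for a, row in table.items()}
    return min(regret, key=lambda a: max(regret[a]))

print(maximin(payoffs))         # Sell to Competitor
print(maximax(payoffs))         # Produce Pilot
print(minimax_regret(payoffs))  # Sell to Competitor
```

The helper functions reproduce the slide results: the pessimist sells, the optimist produces, and the regret-minimizer sells.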

A Real Estate Example


Bansal's Real Estate Co. (BRE) needs to decide how large a development to construct in the face of uncertain demand for resort properties. BRE wishes to maximize profit.

Profit (Rs. crores):       Demand for Resorts
                        Low   Medium   High
Small Development         4        4      4
Medium Development        1        6      6
Large Development        -3        3      9

Example: Expected value criterion


Further investigation by Kn Video Productions indicates that the probabilities of success for its proposed pilot are as shown:

PAYOFF TABLE
States of Nature:        Reject   1 Year   2 Years
Probability                 0.2      0.5       0.3
Produce Pilot              -100       50       150
Sell to Competitor          100      100       100

Expected Value Criterion


Multiply each payoff by its probability of occurrence, then add the products to obtain the expected value.

PAYOFF TABLE
States of Nature:        Reject   1 Year   2 Years
Probability                 0.2      0.5       0.3
Produce Pilot              -100       50       150
Sell to Competitor          100      100       100

Expected values:
Produce Pilot:       0.2(-100) + 0.5(50) + 0.3(150) = -20 + 25 + 45 = 50
Sell to Competitor:  0.2(100) + 0.5(100) + 0.3(100) = 20 + 50 + 30 = 100

Max EV strategy = Sell to Competitor

Expected Opportunity Loss


Calculate the regret table, multiply each regret by its probability, and add the products to obtain the expected opportunity loss.

REGRET TABLE
States of Nature:        Reject   1 Year   2 Years
Probability                 0.2      0.5       0.3
Produce Pilot               200       50         0
Sell to Competitor            0        0        50

Expected opportunity loss:
Produce Pilot:       0.2(200) + 0.5(50) + 0.3(0) = 40 + 25 + 0 = 65
Sell to Competitor:  0.2(0) + 0.5(0) + 0.3(50) = 0 + 0 + 15 = 15

Min EOL strategy = Sell to Competitor
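The expected value and expected opportunity loss calculations above can be checked with a short Python sketch (illustrative only, not from the slides):

```python
# Expected value (EV) and expected opportunity loss (EOL) for the
# Kn Video example, with state probabilities .2, .5, .3.
probs = [0.2, 0.5, 0.3]
payoffs = {
    "Produce Pilot":      [-100, 50, 150],
    "Sell to Competitor": [100, 100, 100],
}

# EV: probability-weighted sum of each row.
ev = {a: sum(p * r for p, r in zip(probs, row)) for a, row in payoffs.items()}

# EOL: probability-weighted regret (column best minus payoff).
col_best = [max(row[j] for row in payoffs.values()) for j in range(3)]
eol = {a: sum(p * (col_best[j] - row[j]) for j, p in enumerate(probs))
       for a, row in payoffs.items()}

print({a: round(v, 1) for a, v in ev.items()})   # Produce 50, Sell 100
print({a: round(v, 1) for a, v in eol.items()})  # Produce 65, Sell 15
```

Note that the action with the largest EV (Sell) is also the action with the smallest EOL; the two criteria always agree.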

Exercise
Suppose that Pizza King and Fast Pizza must determine the price they will charge for each pizza sold. Pizza King believes that Fast Pizza's price is a random variable D having the following mass function: P(D = Rs. 16) = .25, P(D = Rs. 18) = .5, P(D = Rs. 20) = .25. If Pizza King charges a price of p1 and Fast Pizza charges a price of p2, Pizza King will sell 100 + 25(p2 - p1) pizzas. It costs Pizza King Rs. 14 to make a pizza. Pizza King is considering charging Rs. 15, Rs. 16, Rs. 17, Rs. 18, or Rs. 19 for a pizza. Use each decision criterion discussed today to determine the price that Pizza King should charge.

Decision Trees

Decision Trees
Often, people must make a series of decisions at different points in time; decision trees can then be used to determine optimal decisions. A decision tree enables a decision maker to decompose a large, complex decision problem into several smaller problems.

An event fork is drawn when outside forces determine which of several random events will occur. Each branch of an event fork represents a possible outcome, and the number on each branch represents the probability that the event will occur. A branch of a decision tree is a terminal branch if no forks emanate from it.

Graphical Decision Analysis


Squares represent decision nodes

Circles represent chance nodes


Branches represent alternative paths
[Decision tree diagram: a decision node (square) branches to alternatives d1 and d2; each leads to a chance node (circle) that branches on states s1 and s2, ending in Payoffs 1 through 4.]

Decision Tree Example


From the payoff and regret tables for Kn Video Productions:
PAYOFF TABLE
States of Nature:        Reject   1 Year   2 Years
Probability                 0.2      0.5       0.3
Produce Pilot              -100       50       150
Sell to Competitor          100      100       100

REGRET TABLE
States of Nature:        Reject   1 Year   2 Years
Probability                 0.2      0.5       0.3
Produce Pilot               200       50         0
Sell to Competitor            0        0        50

Draw a decision tree

Computing Joint and Marginal Probabilities


The probability of a joint event, A and B:

P(A and B) = (number of outcomes satisfying A and B) / (total number of elementary outcomes)

Computing a marginal (or simple) probability:

P(A) = P(A and B1) + P(A and B2) + ... + P(A and Bk)

where B1, B2, ..., Bk are k mutually exclusive and collectively exhaustive events.

Joint Probability Example


P(Red and Ace) = (number of cards that are red and ace) / (total number of cards) = 2/52

             Color
Type       Red   Black   Total
Ace          2       2       4
Non-Ace     24      24      48
Total       26      26      52

Marginal Probability Example


P(Ace) = P(Ace and Red) + P(Ace and Black) = 2/52 + 2/52 = 4/52

             Color
Type       Red   Black   Total
Ace          2       2       4
Non-Ace     24      24      48
Total       26      26      52

Joint Probabilities Using Contingency Table


            Event
Event      B1              B2              Total
A1         P(A1 and B1)    P(A1 and B2)    P(A1)
A2         P(A2 and B1)    P(A2 and B2)    P(A2)
Total      P(B1)           P(B2)           1

Joint probabilities appear in the body of the table; marginal (simple) probabilities appear in the margins.

General Addition Rule


General Addition Rule:
P(A or B) = P(A) + P(B) - P(A and B)

If A and B are mutually exclusive, then P(A and B) = 0, so the rule simplifies to:

P(A or B) = P(A) + P(B)

General Addition Rule Example


P(Red or Ace) = P(Red) + P(Ace) - P(Red and Ace)
             = 26/52 + 4/52 - 2/52 = 28/52

Don't count the two red aces twice!

             Color
Type       Red   Black   Total
Ace          2       2       4
Non-Ace     24      24      48
Total       26      26      52
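The card-table arithmetic can be verified with exact fractions; a small illustrative Python sketch:

```python
# General addition rule on the card table:
# P(Red or Ace) = P(Red) + P(Ace) - P(Red and Ace).
from fractions import Fraction

p_red = Fraction(26, 52)
p_ace = Fraction(4, 52)
p_red_and_ace = Fraction(2, 52)   # the two red aces

p_red_or_ace = p_red + p_ace - p_red_and_ace
print(p_red_or_ace)  # 7/13, i.e. 28/52
```

Using `Fraction` avoids floating-point noise and makes the "don't double-count" subtraction explicit.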

Computing Conditional Probabilities


A conditional probability is the probability of one event, given that another event has occurred:

P(A | B) = P(A and B) / P(B)   (the conditional probability of A given that B has occurred)
P(B | A) = P(A and B) / P(A)   (the conditional probability of B given that A has occurred)

where P(A and B) = joint probability of A and B, P(A) = marginal probability of A, and P(B) = marginal probability of B.

Conditional Probability Example


Of the cars on a used car lot, 70% have air conditioning (AC) and 40% have a CD player (CD). 20% of the cars have both.

What is the probability that a car has a CD player, given that it has AC? That is, we want to find P(CD | AC).

Conditional Probability Example


(continued)

Of the cars on a used car lot, 70% have air conditioning (AC) and 40% have a CD player (CD). 20% of the cars have both.

           CD    No CD   Total
AC         .2      .5      .7
No AC      .2      .1      .3
Total      .4      .6     1.0

P(CD | AC) = P(CD and AC) / P(AC) = .2 / .7 = .2857

Conditional Probability Example


(continued)

Given AC, we only consider the top row (70% of the cars). The 20% of all cars that have both AC and CD make up .2/.7, or about 28.57%, of that row.

           CD    No CD   Total
AC         .2      .5      .7
No AC      .2      .1      .3
Total      .4      .6     1.0

P(CD | AC) = P(CD and AC) / P(AC) = .2 / .7 = .2857

Using Decision Trees


Given AC or no AC:

All Cars:
  AC (.7):
    CD (.2/.7):     P(AC and CD) = .2
    No CD (.5/.7):  P(AC and No CD) = .5
  No AC (.3):
    CD (.2/.3):     P(No AC and CD) = .2
    No CD (.1/.3):  P(No AC and No CD) = .1

Using Decision Trees


(continued)

Given CD or no CD:

All Cars:
  CD (.4):
    AC (.2/.4):     P(CD and AC) = .2
    No AC (.2/.4):  P(CD and No AC) = .2
  No CD (.6):
    AC (.5/.6):     P(No CD and AC) = .5
    No AC (.1/.6):  P(No CD and No AC) = .1

Statistical Independence
Two events are independent if and only if:

P(A | B) = P(A)

Events A and B are independent when the probability of one event is not affected by the other event.

Multiplication Rules
Multiplication rule for two events A and B:

P(A and B) = P(A | B) P(B)

Note: If A and B are independent, then P(A | B) = P(A), and the multiplication rule simplifies to:

P(A and B) = P(A) P(B)

Marginal Probability
Marginal probability for event A:

P(A) = P(A | B1) P(B1) + P(A | B2) P(B2) + ... + P(A | Bk) P(Bk)

where B1, B2, ..., Bk are k mutually exclusive and collectively exhaustive events.

Question
A company has 2 machines that produce widgets. An older machine produces 23% defective widgets, while the new machine produces only 8% defective widgets. In addition, the new machine produces 3 times as many widgets as the older machine does. What is the probability that a randomly chosen widget produced by the company is defective?

Answer (Marginal Probability)

P(D) = P(D | M1) P(M1) + P(D | M2) P(M2) = 0.23 x (1/4) + 0.08 x (3/4) = 0.1175

Bayes Theorem
P(Bi | A) = P(A | Bi) P(Bi) / [ P(A | B1) P(B1) + P(A | B2) P(B2) + ... + P(A | Bk) P(Bk) ]

where:

Bi = ith event of k mutually exclusive and collectively exhaustive events
A = new event that might impact P(Bi)

Question
A company has 2 machines that produce widgets. An older machine produces 23% defective widgets, while the new machine produces only 8% defective widgets. In addition, the new machine produces 3 times as many widgets as the older machine does. Given that a randomly chosen widget was tested and found to be defective, what is the probability it was produced by the new machine?

Answer
P(M2 | D) = P(D | M2) P(M2) / [ P(D | M1) P(M1) + P(D | M2) P(M2) ] = (0.08 x 3/4) / 0.1175 = 0.511
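The widget calculation, marginal probability followed by Bayes' Theorem, can be reproduced in a few lines; the variable names below are illustrative:

```python
# Total probability for P(defective), then Bayes' Theorem for
# P(new machine | defective), for the two-machine widget example.
p_m1, p_m2 = 1/4, 3/4          # old machine makes 1/4 of output, new makes 3/4
p_d_m1, p_d_m2 = 0.23, 0.08    # defect rates of old and new machines

p_d = p_d_m1 * p_m1 + p_d_m2 * p_m2   # marginal probability of a defect
p_m2_given_d = p_d_m2 * p_m2 / p_d    # posterior for the new machine

print(round(p_d, 4), round(p_m2_given_d, 3))  # 0.1175 0.511
```

Even though the new machine has the lower defect rate, it produces most of the output, so a random defective widget still comes from it about half the time.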

Bayes Theorem Example


A drilling company has estimated a 40% chance of striking oil for their new well.

A detailed test has been scheduled for more information. Historically, 60% of successful wells have had detailed tests, and 20% of unsuccessful wells have had detailed tests.

Given that this well has been scheduled for a detailed test, what is the probability that the well will be successful?

Bayes Theorem Example


(continued)

Let S = successful well, U = unsuccessful well.

P(S) = .4, P(U) = .6  (prior probabilities)

Define the detailed test event as D. Conditional probabilities: P(D|S) = .6, P(D|U) = .2.

Goal is to find P(S|D).

Bayes Theorem Example


Apply Bayes Theorem:
(continued)

P(S | D) = P(D | S) P(S) / [ P(D | S) P(S) + P(D | U) P(U) ] = (.6)(.4) / [ (.6)(.4) + (.2)(.6) ] = .24 / .36 = .667
So the revised probability of success, given that this well has been scheduled for a detailed test, is .667

Bayes Theorem Example


(continued)

Given the detailed test, the revised probability of a successful well has risen to .667 from the original estimate of .4.

Event             Prior Prob.   Conditional Prob.   Joint Prob.    Revised Prob.
S (successful)        .4              .6            .4*.6 = .24    .24/.36 = .667
U (unsuccessful)      .6              .2            .6*.2 = .12    .12/.36 = .333
                                                    Sum = .36
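The prior-to-posterior revision in the table above can be reproduced as a short sketch (illustrative; the dictionary keys are our own):

```python
# Prior-to-posterior revision for the drilling example:
# joint = prior x conditional; posterior = joint / marginal.
priors = {"S": 0.4, "U": 0.6}     # successful / unsuccessful well
cond = {"S": 0.6, "U": 0.2}       # P(detailed test | state)

joint = {k: priors[k] * cond[k] for k in priors}    # S: .24, U: .12
p_d = sum(joint.values())                           # marginal: .36
posterior = {k: joint[k] / p_d for k in joint}      # S: .667, U: .333

print(round(posterior["S"], 3))  # 0.667
```

The three lines mirror the three columns of the revision table: joint, sum, and divide.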

Bayes Theorem and Posterior Probabilities


Knowledge of sample or survey information can be used to revise the probability estimates for the states of nature. Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities. With knowledge of conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' Theorem. The outcomes of this analysis are called posterior probabilities or branch probabilities for decision trees.

Example: Burger Prince


Burger Prince Restaurant is contemplating opening a new restaurant on Main Street. It has three different models, each with a different seating capacity. Burger Prince estimates that the average number of customers per hour will be 80, 100, or 120. The payoff table for the three models is as follows:

           Average Number of Customers Per Hour
           s1 = 80    s2 = 100    s3 = 120
Model A    $10,000    $15,000     $14,000
Model B    $ 8,000    $18,000     $12,000
Model C    $ 6,000    $16,000     $21,000

Example: Burger Prince


Expected Value Approach

Calculate the expected value for each decision. The decision tree on the next slide can assist in this calculation. Here d1, d2, d3 represent the decision alternatives of models A, B, C, and s1, s2, s3 represent the states of nature of 80, 100, and 120.

Example: Burger Prince


Decision Tree (payoffs at the ends of the branches):

Node 1 (decision):
  d1 (Model A) -> node 2:  s1 (.4) -> 10,000;  s2 (.2) -> 15,000;  s3 (.4) -> 14,000
  d2 (Model B) -> node 3:  s1 (.4) -> 8,000;   s2 (.2) -> 18,000;  s3 (.4) -> 12,000
  d3 (Model C) -> node 4:  s1 (.4) -> 6,000;   s2 (.2) -> 16,000;  s3 (.4) -> 21,000

Example: Burger Prince

Expected Value for Each Decision

Model A (d1, node 2): EMV = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
Model B (d2, node 3): EMV = .4(8,000) + .2(18,000) + .4(12,000) = $11,600
Model C (d3, node 4): EMV = .4(6,000) + .2(16,000) + .4(21,000) = $14,000

Choose the model with the largest EV: Model C.

Expected Value of Perfect Information


Frequently, information is available which can improve the probability estimates for the states of nature. The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur. The EVPI provides an upper bound on the expected value of any sample or survey information.

Expected Value of Perfect Information


EVPI Calculation

Step 1: Determine the optimal return corresponding to each state of nature.
Step 2: Compute the expected value of these optimal returns.
Step 3: Subtract the EV of the optimal decision from the amount determined in step (2).

Example: Burger Prince


Expected Value of Perfect Information

Calculate the expected value of the optimum payoff for each state of nature and subtract the EV of the optimal decision:

EVPI = .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000
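The EVPI calculation can be checked directly from the payoff table; a minimal Python sketch (names are illustrative):

```python
# Burger Prince EVPI: expected value with perfect information minus
# the best expected value obtainable without it.
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10000, 15000, 14000],
    "Model B": [8000, 18000, 12000],
    "Model C": [6000, 16000, 21000],
}

# EV of each decision; the best is Model C at 14,000.
ev = {m: sum(p * r for p, r in zip(probs, row)) for m, row in payoffs.items()}
best_ev = max(ev.values())

# With perfect information we would take the best payoff in each state.
ev_pi = sum(p * max(row[j] for row in payoffs.values())
            for j, p in enumerate(probs))

evpi = ev_pi - best_ev
print(round(evpi))  # 2000
```

Here `ev_pi` weights the column maxima (10,000; 18,000; 21,000) by the state probabilities, matching the slide's $16,000, so EVPI = $2,000.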

Risk Analysis
Risk analysis helps the decision maker recognize the difference between the expected value of a decision alternative and the payoff that might actually occur.

The risk profile for a decision alternative shows the possible payoffs for the decision alternative along with their associated probabilities.

Example: Burger Prince


Risk Profile for the Model C Decision Alternative

[Bar chart: payoff in $ thousands (5 to 25) on the horizontal axis, probability (.10 to .50) on the vertical axis; the probabilities .4, .2, and .4 are plotted against the Model C payoffs of $6,000, $16,000, and $21,000.]

Sensitivity Analysis
Sensitivity analysis can be used to determine how changes to the following inputs affect the recommended decision alternative:
- probabilities for the states of nature
- values of the payoffs
If a small change in the value of one of the inputs causes a change in the recommended decision alternative, extra effort and care should be taken in estimating the input value.

Computing Branch Probabilities


Branch (Posterior) Probabilities Calculation

Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator. This gives the joint probabilities for the states and indicator.
Step 2: Sum these joint probabilities over all states. This gives the marginal probability for the indicator.
Step 3: For each state, divide its joint probability by the marginal probability for the indicator. This gives the posterior probability distribution.

Expected Value of Sample Information


The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.

Expected Value of Sample Information


EVSI Calculation

Step 1: Determine the optimal decision and its expected return for the possible outcomes of the sample or survey, using the posterior probabilities for the states of nature.
Step 2: Compute the expected value of these optimal returns.
Step 3: Subtract the EV of the optimal decision obtained without using the sample information from the amount determined in step (2).

Efficiency of Sample Information


Efficiency of sample information is the ratio of EVSI to EVPI. As the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.

Sample Information

Example: Burger Prince

Burger Prince must decide whether or not to purchase a marketing survey from Stanton Marketing for $1,000. The results of the survey are "favorable" or "unfavorable". The conditional probabilities are: P(favorable | 80 customers per hour) = .2 P(favorable | 100 customers per hour) = .5 P(favorable | 120 customers per hour) = .9 Should Burger Prince have the survey performed by Stanton Marketing?

Example: Burger Prince


Influence Diagram (legend: squares = decision nodes, circles = chance nodes, diamonds = consequence nodes)

[Diagram: the Market Survey decision leads to the Market Survey Results chance node; the survey results inform the Restaurant Size decision; Restaurant Size and the Avg. Number of Customers Per Hour chance node together determine Profit.]

Example: Burger Prince


Posterior Probabilities

Favorable survey result:

State   Prior   Conditional   Joint   Posterior
 80      .4         .2         .08       .148
100      .2         .5         .10       .185
120      .4         .9         .36       .667
                   Total:      .54      1.000

P(favorable) = .54

Example: Burger Prince


Posterior Probabilities

Unfavorable survey result:

State   Prior   Conditional   Joint   Posterior
 80      .4         .8         .32       .696
100      .2         .5         .10       .217
120      .4         .1         .04       .087
                   Total:      .46      1.000

P(unfavorable) = .46
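The three-step branch-probability calculation can be written out as a short sketch (illustrative Python, not from the slides):

```python
# Posterior (branch) probabilities for the Burger Prince survey:
# Step 1: joint = prior x conditional; Step 2: sum for the marginal;
# Step 3: divide each joint by the marginal.
priors = [0.4, 0.2, 0.4]        # states: 80, 100, 120 customers/hr
cond_fav = [0.2, 0.5, 0.9]      # P(favorable | state)

joint = [p * c for p, c in zip(priors, cond_fav)]   # .08, .10, .36
p_fav = sum(joint)                                  # .54
posterior = [j / p_fav for j in joint]              # .148, .185, .667

print(round(p_fav, 2), [round(x, 3) for x in posterior])
```

Swapping in the unfavorable conditionals (.8, .5, .1) reproduces the second table (.696, .217, .087) the same way.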

Example: Burger Prince


Decision Tree (top half):

Node 1 -> I1 favorable (.54) -> node 2 (decision):
  d1 -> node 4:  s1 (.148) -> $10,000;  s2 (.185) -> $15,000;  s3 (.667) -> $14,000
  d2 -> node 5:  s1 (.148) -> $8,000;   s2 (.185) -> $18,000;  s3 (.667) -> $12,000
  d3 -> node 6:  s1 (.148) -> $6,000;   s2 (.185) -> $16,000;  s3 (.667) -> $21,000

Example: Burger Prince


Decision Tree (bottom half):

Node 1 -> I2 unfavorable (.46) -> node 3 (decision):
  d1 -> node 7:  s1 (.696) -> $10,000;  s2 (.217) -> $15,000;  s3 (.087) -> $14,000
  d2 -> node 8:  s1 (.696) -> $8,000;   s2 (.217) -> $18,000;  s3 (.087) -> $12,000
  d3 -> node 9:  s1 (.696) -> $6,000;   s2 (.217) -> $16,000;  s3 (.087) -> $21,000

Example: Burger Prince


Favorable survey (node 2, probability .54):
  d1 (node 4): EMV = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
  d2 (node 5): EMV = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
  d3 (node 6): EMV = .148(6,000) + .185(16,000) + .667(21,000) = $17,855  -> choose d3: $17,855

Unfavorable survey (node 3, probability .46):
  d1 (node 7): EMV = .696(10,000) + .217(15,000) + .087(14,000) = $11,433  -> choose d1: $11,433
  d2 (node 8): EMV = .696(8,000) + .217(18,000) + .087(12,000) = $10,518
  d3 (node 9): EMV = .696(6,000) + .217(16,000) + .087(21,000) = $9,475

Example: Burger Prince


Expected Value of Sample Information

If the outcome of the survey is "favorable," choose Model C; if it is unfavorable, choose Model A.

EVSI = .54($17,855) + .46($11,433) - $14,000 = $900.88

Since this is less than the $1,000 cost of the survey, the survey should not be purchased.
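A quick check of the EVSI arithmetic, using the optimal EMVs computed earlier (illustrative sketch):

```python
# EVSI: weight the best EMV under each survey outcome by that outcome's
# probability, then subtract the best EMV without the survey.
p_fav, p_unfav = 0.54, 0.46
emv_fav = 17855        # favorable  -> Model C
emv_unfav = 11433      # unfavorable -> Model A
emv_no_info = 14000    # best EMV without the survey (Model C)

evsi = p_fav * emv_fav + p_unfav * emv_unfav - emv_no_info
print(round(evsi, 2))  # 900.88
```

Since $900.88 is below the survey's $1,000 cost, the numbers confirm the slide's conclusion not to purchase it.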

Example: Burger Prince


Efficiency of Sample Information

The efficiency of the survey: EVSI/EVPI = ($900.88)/($2000) = .4504

Example: L. S. Clothiers
A proposed shopping center will provide strong competition for downtown businesses like L. S. Clothiers. If the shopping center is built, the owner of L. S. Clothiers feels it would be best to relocate. The shopping center cannot be built unless a zoning change is approved by the town council. The planning board must first make a recommendation, for or against the zoning change, to the council.

Let:
A1 = town council approves the zoning change
A2 = town council disapproves the change

Prior Probabilities (using subjective judgment): P(A1) = .7, P(A2) = .3

Example: L. S. Clothiers
New Information

The planning board has recommended against the zoning change. Let B denote the event of a negative recommendation by the planning board. Given that B has occurred, should L. S. Clothiers revise the probabilities that the town council will approve or disapprove the zoning change? Conditional Probabilities Past history with the planning board and the town council indicates the following: P(B|A1) = .2 P(B|A2) = .9

Example: L. S. Clothiers
Tree Diagram:

P(A1) = .7:  P(B|A1) = .2  -> P(A1 and B) = .14
             P(Bc|A1) = .8 -> P(A1 and Bc) = .56
P(A2) = .3:  P(B|A2) = .9  -> P(A2 and B) = .27
             P(Bc|A2) = .1 -> P(A2 and Bc) = .03

Posterior Probabilities

Example: L. S. Clothiers

Given the planning board's recommendation not to approve the zoning change, we revise the prior probabilities as follows:

P(A1|B) = P(A1) P(B|A1) / [ P(A1) P(B|A1) + P(A2) P(B|A2) ] = (.7)(.2) / [ (.7)(.2) + (.3)(.9) ] = .14/.41 = .34

Conclusion: The planning board's recommendation is good news for L. S. Clothiers. The posterior probability of the town council approving the zoning change is .34, versus a prior probability of .70.

QUESTION
A customer has approached a bank for a Rs. 50,000 one-year loan at 12% interest. If the bank does not approve the loan, the Rs. 50,000 will be invested in bonds that earn a 6% annual return. Without further information, the bank feels that there is a 4% chance that the customer will totally default on the loan; if the customer totally defaults, the bank loses Rs. 50,000. At a cost of Rs. 5,000, the bank can thoroughly investigate the customer's credit record and supply a favorable or unfavorable recommendation. Past experience indicates that:

P(favourable recommendation | customer does not default) = 77/96
P(favourable recommendation | customer defaults) = 1/4

How should the bank maximize its expected profits? Also find EVSI and EVPI.

Value of Information
If we had better or perfect information about future states of nature, what would that information be worth? How much would we be willing to pay for better information?

EVPI Example
Toby's Ski-Cat Service is considering buying another snow cat for the coming winter ski season. Large-capacity and medium-capacity ski-cats are available. The expected net payoffs (thousands of dollars) for various outcomes are shown here:

                       Snowfall
                  Light   Medium   Heavy
Probability         30%      30%     40%      EV
Large Capacity      -90       20      70       7
Medium Capacity     -15       10      35    12.5
No Purchase           0        0       0       0

Perfect Information for Example


With perfect information, the decision maker would pick the best action for each snowfall state (payoffs in thousands of dollars):

Snowfall   Prob   Perfect Outcome   Weighted Outcome
Light       30%          0                  0
Medium      30%         20                  6
Heavy       40%         70                 28
                               Total:      34

Perfect information:             34
Best EV with current info:     12.5
Difference:                    21.5

Expected Value of Perfect Information (EVPI) = 21.5

EVPI and EOL Compared


REGRET TABLE (thousands of dollars)
                       Snowfall
                  Light   Medium   Heavy
Probability         30%      30%     40%     EOL
Large Capacity       90        0       0      27
Medium Capacity      15       10      35    21.5
No Purchase           0       20      70      34

Note: EVPI = minimum EOL = 21.5
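The EVPI-equals-minimum-EOL identity can be verified numerically for this example (illustrative Python sketch):

```python
# Toby's Ski-Cat: EVPI computed via the minimum expected opportunity loss.
probs = [0.3, 0.3, 0.4]            # light, medium, heavy snowfall
payoffs = {
    "Large":  [-90, 20, 70],
    "Medium": [-15, 10, 35],
    "None":   [0, 0, 0],
}

# Regret = column best minus payoff; EOL = probability-weighted regret.
col_best = [max(row[j] for row in payoffs.values()) for j in range(3)]
eol = {a: sum(p * (col_best[j] - row[j]) for j, p in enumerate(probs))
       for a, row in payoffs.items()}

print({a: round(v, 2) for a, v in eol.items()})  # Large 27, Medium 21.5, None 34
print(round(min(eol.values()), 2))               # minimum EOL = EVPI = 21.5
```

The minimum EOL (21.5 for the medium snow cat) matches the EVPI found earlier as 34 minus 12.5.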

Decision Making with Experimentation

Experimentation
Perfect information is usually unattainable, but we can improve our knowledge with experimentation:
- Market research
- Product testing
- Material sampling
Experimentation provides better (but usually not perfect) information.

Example with Experimentation


Suppose that it is known that if September is unusually cold (I), then there is an increased chance that snowfall will be heavy. An experiment, then, is to observe the weather in September before deciding which snow cat to purchase:

P( I | s1 ) = 0.05
P( I | s2 ) = 0.20
P( I | s3 ) = 0.30

where s1 = light snow, s2 = medium snow, s3 = heavy snow, and I = cold September (the indicator).

But we want (and need) P( sj | I )!

Bayes Theorem
Bayes' Theorem turns prior probabilities and likelihood estimates into posterior probabilities:

P(sj | Ik) = P(Ik | sj) P(sj) / P(Ik),  where  P(Ik) = sum over j of P(Ik | sj) P(sj)

Bayes Theorem for Example


States of Nature     P(sj)   P(I|sj)   P(I and sj)   P(sj|I)
Light snow (s1)       0.3      0.05        0.015       0.077
Medium snow (s2)      0.3      0.20        0.060       0.308
Heavy snow (s3)       0.4      0.30        0.120       0.615
                              P(I) =       0.195       1.000

But what if September is not cold (I')?

States of Nature     P(sj)   P(I'|sj)   P(I' and sj)   P(sj|I')
Light snow (s1)       0.3      0.95         0.285        0.354
Medium snow (s2)      0.3      0.80         0.240        0.298
Heavy snow (s3)       0.4      0.70         0.280        0.348
                              P(I') =       0.805        1.000

Note that P(I) + P(I') = 0.195 + 0.805 = 1.
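The two Bayes tables above can be reproduced with one small helper (illustrative sketch; the function name is our own):

```python
# Posterior snowfall probabilities after observing September weather,
# for both a cold September (I) and a warm one (I').
priors = [0.3, 0.3, 0.4]          # light, medium, heavy snowfall
p_I_given_s = [0.05, 0.20, 0.30]  # P(cold September | snowfall state)

def posterior(priors, likelihoods):
    # Bayes' Theorem: joint = prior x likelihood; divide by the marginal.
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return total, [j / total for j in joint]

p_I, post_cold = posterior(priors, p_I_given_s)
p_not_I, post_warm = posterior(priors, [1 - l for l in p_I_given_s])

print(round(p_I, 3), [round(x, 3) for x in post_cold])      # 0.195 [0.077, 0.308, 0.615]
print(round(p_not_I, 3), [round(x, 3) for x in post_warm])  # 0.805 [0.354, 0.298, 0.348]
```

Because I and I' are complementary, the warm-September likelihoods are just one minus the cold-September ones, and the two marginals sum to 1.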

Revised Example Decisions


Cold September:

PAYOFF TABLE (thousands of dollars)
                       Snowfall
                  Light   Medium   Heavy
Probability        7.7%    30.8%   61.5%       EV
Large Capacity      -90       20      70    42.28
Medium Capacity     -15       10      35     23.5
No Purchase           0        0       0        0

Buy the large snow cat.

Warm September:

PAYOFF TABLE (thousands of dollars)
                       Snowfall
                  Light   Medium   Heavy
Probability       35.4%    29.8%   34.8%       EV
Large Capacity      -90       20      70    -1.54
Medium Capacity     -15       10      35      9.9
No Purchase           0        0       0        0

Buy the medium snow cat.

Manufacturing Example
Gorman Manufacturing has the following payoff table (and derived regret table) for a make-vs-buy decision:

PAYOFF TABLE
Demand:                   Low (s1)   Med (s2)   High (s3)      EV
Probability                  35.0%      35.0%       30.0%
Make component (d1)            -20         40         100   37.00
Purchase component (d2)         10         45          70   40.25

REGRET TABLE
Demand:                   Low (s1)   Med (s2)   High (s3)     EOL
Make component (d1)             30          5           0   12.25
Purchase component (d2)          0          0          30    9.00

What is the expected value of perfect information?
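Gorman's EV, EOL, and EVPI figures can be checked in a few lines (illustrative sketch):

```python
# Gorman make-vs-buy: EV of each decision and EVPI via the minimum EOL.
probs = [0.35, 0.35, 0.30]
payoffs = {
    "Make (d1)":     [-20, 40, 100],
    "Purchase (d2)": [10, 45, 70],
}

ev = {a: sum(p * r for p, r in zip(probs, row)) for a, row in payoffs.items()}

col_best = [max(row[j] for row in payoffs.values()) for j in range(3)]
eol = {a: sum(p * (col_best[j] - row[j]) for j, p in enumerate(probs))
       for a, row in payoffs.items()}

print({a: round(v, 2) for a, v in ev.items()})   # Make 37.0, Purchase 40.25
print(round(min(eol.values()), 2))               # EVPI = 9.0
```

Purchasing has both the higher EV and the lower EOL, so the expected value of perfect information is its EOL of 9.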

Mfg Example Continued


Gorman decides to conduct a test-market survey to better understand the potential demand for the new product. Based on past experience, the market research firm employed says that the survey will indicate either high demand (I1) or low demand (I2), with the following conditional probabilities:

High demand indication:   P( I1 | s1 ) = 0.10,  P( I1 | s2 ) = 0.40,  P( I1 | s3 ) = 0.60
Low demand indication:    P( I2 | s1 ) = 0.90,  P( I2 | s2 ) = 0.60,  P( I2 | s3 ) = 0.40

Utility Theory

Marginal Utility for Money


Linear marginal utility for money: risk neutral.
Decreasing marginal utility for money: risk averse.
Increasing marginal utility for money: risk seeking.

Graphical Marginal Utility


[Graph: utility versus money. The risk-averse curve is concave, the risk-neutral curve is a straight line, and the risk-seeking curve is convex.]

Complex Marginal Utility


[Graph: a utility curve that is risk seeking (convex) for small sums of money and risk averse (concave) for large sums.]