
Decision Making and Negotiation

MM5009
Summary Assignment

Fania Anindita Rizka


NIM: 29115695

MASTER OF BUSINESS ADMINISTRATION STUDY PROGRAM


SEKOLAH BISNIS DAN MANAJEMEN
INSTITUT TEKNOLOGI BANDUNG
2016

Decision trees and influence diagrams


Constructing a decision tree
You may recall from earlier chapters that two symbols are used in decision trees. A square is used
to represent a decision node and, because each branch emanating from this node presents an
option, the decision maker can choose which branch to follow. A circle, on the other hand, is
used to represent a chance node. The branches which stem from this sort of node represent the
possible outcomes of a given course of action and the branch which is followed will be
determined, not by the decision maker, but by circumstances which lie beyond his or her control.
The branches emanating from a circle are therefore labeled with probabilities which represent the
decision maker's estimate of the probability that a particular branch will be followed. Obviously,
it is not sensible to attach probabilities to the branches which stem from a square.
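The two node types can be captured in a small data structure. The following sketch is illustrative only (the class names and the example venture are our own, not from the text); it enforces the rule that probabilities appear on a chance node's branches, summing to 1, and never on a decision node's branches.

```python
# Illustrative sketch: squares (decision nodes) hold options the decision
# maker chooses between; circles (chance nodes) hold outcomes weighted by
# probabilities that must sum to 1.

class DecisionNode:
    def __init__(self, options):
        # options: dict mapping an option label to a child node or a payoff
        self.options = options

class ChanceNode:
    def __init__(self, outcomes):
        # outcomes: list of (probability, child node or payoff) pairs
        if abs(sum(p for p, _ in outcomes) - 1.0) > 1e-9:
            raise ValueError("branch probabilities at a chance node must sum to 1")
        self.outcomes = outcomes

# A hypothetical venture: success (payoff 100) with probability 0.6, else -30
venture = ChanceNode([(0.6, 100), (0.4, -30)])
root = DecisionNode({"develop": venture, "abandon": 0})
```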
Determining the optimal policy
It can be seen that our decision tree consists of a set of policies. A policy is a plan of action
stating which option is to be chosen at each decision node that might be reached under that
policy. We will now show how the decision tree can be used to identify the optimal policy. For
simplicity, we will assume that the engineer considers that monetary return is the only attribute
which is relevant to the decision, and we will also assume that, because the company is involved
in a large number of projects, it is neutral to the risk involved in this development and therefore
the expected monetary value (EMV) criterion is appropriate. The technique for determining the
optimal policy in a decision tree is known as the rollback method. To apply this method, we
analyze the tree from right to left by considering the later decisions first.
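The rollback method can be sketched as a short recursion: working from right to left, a chance node is replaced by its probability-weighted average and a decision node by its best branch. The tree and payoffs below are illustrative, not the chapter's example.

```python
# Trees as nested tuples: ("decision", {label: subtree}),
# ("chance", [(probability, subtree), ...]), or a bare number (terminal payoff).

def rollback(node):
    """Return the expected monetary value of a (sub)tree, right to left."""
    if isinstance(node, (int, float)):   # terminal payoff
        return node
    kind, body = node
    if kind == "chance":                 # EMV: probability-weighted average
        return sum(p * rollback(child) for p, child in body)
    if kind == "decision":               # choose the option with the best EMV
        return max(rollback(child) for child in body.values())
    raise ValueError(f"unknown node type: {kind}")

# Illustrative development decision
tree = ("decision", {
    "develop": ("chance", [(0.75, 540_000), (0.25, -120_000)]),
    "abandon": 0,
})
print(rollback(tree))  # 375000.0
```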
Decision trees and utility
The procedure for analyzing the tree when utilities are involved is exactly the same as that which
we used for the EMV criterion. Figure 6.5 shows the decision tree, with the utilities replacing the
monetary values. After applying the rollback method it can be seen that now the optimum policy
is to develop the electric-powered design and, if it fails, to abandon the project. Note, however,
that the closeness of the expected utilities suggests that sensitivity analysis should be applied to
the tree before a firm decision is made.
Decision trees involving continuous probability distributions
In the decision problem we considered above there were only two possible outcomes for each
course of action, namely success and failure. However, in some problems the number of possible

outcomes may be very large or even infinite. For example, we might approximate a market share
distribution with just three outcomes: high, medium and low. A number of methods for making
this sort of approximation have been suggested, and we will discuss the Extended Pearson-Tukey
(EP-T) approximation here. This was proposed by Keefer and Bodily,4 who found it to be a very
good approximation to a wide range of continuous distributions. The method is based on earlier
work by Pearson and Tukey5 and requires three estimates to be made by the decision maker:
(i) The value in the distribution which has a 95% chance of being exceeded. This value is
allocated a probability of 0.185.
(ii) The value in the distribution which has a 50% chance of being exceeded. This value is
allocated a probability of 0.63.
(iii) The value in the distribution which has only a 5% chance of being exceeded. This value is
also allocated a probability of 0.185.
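The three-point approximation then reduces an expected-value calculation to a weighted sum of the three estimates. The market-share percentiles below are illustrative assumptions.

```python
def ept_expected_value(p95, p50, p5):
    """Extended Pearson-Tukey three-point approximation.
    p95: value with a 95% chance of being exceeded (weight 0.185)
    p50: value with a 50% chance of being exceeded (weight 0.63)
    p5:  value with only a 5% chance of being exceeded (weight 0.185)"""
    return 0.185 * p95 + 0.63 * p50 + 0.185 * p5

# Illustrative market-share judgments (percent): 10 has a 95% chance of being
# exceeded, 25 a 50% chance, and 45 only a 5% chance.
print(round(ept_expected_value(10, 25, 45), 3))  # 25.925
```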
Practical applications of decision trees
Ulvila6 used the technique to help the US Postal Service decide whether to continue with
the nine-digit zip code for business users. The analysis was designed to compare the monetary
returns which might result from the use of various types of automatic sorting equipment either
with or without the code.
The EP-T approximation was used to represent probability distributions of the extent to which
the code would be used, and the savings which would result from the various options in the tree.
The author reported that the approach helped the decision makers to think creatively about the
problem and to generate options.
Cohan et al.7 used decision trees to analyze decisions relating to the use of fire in forest
management. Nevertheless, if a decision is made to postpone a fire, this itself will result in costs
caused by the delay in meeting the foresters' objectives. The authors reported that the use of
decision analysis helped forest managers to understand the implications and relative importance
of the key uncertainties involved in the decisions. Moreover, it provided clear and concise
documentation of the rationale underlying important decisions. They argued that this can be
invaluable in stimulating discussion and communication within a forest management
organization. As we have seen, decision trees are the major analytical structures underlying
the application of decision analysis to problems involving uncertainty.
Assessment of decision structure
Consider the following real-life decision problem that we would like you to attempt to represent in the form of a decision tree.

It is really a matter of the decision analyst's judgment as to whether the elicited tree is a fair
representation of the decision maker's decision problem. Once a structure is agreed then the
computation of expected utility is fairly straightforward. Structuring is therefore a major problem
in decision analysis, for if the structuring is wrong then it is a necessary consequence that
assessments of utilities and probabilities may be inappropriate and the expected utility
computations may be invalid.

Stages 1 and 2 of the decision process are iterative: the structure of the decision problem emerges
from discussions between the decision maker and the analyst. Once a structure for the decision
representation has been agreed and probabilities and utilities are elicited (stage 3) the expected
utility for the various acts under consideration can be computed (stage 4) and the act which has
the maximal expected utility is chosen (stage 5).

What determines the decision analyst's provisional representation of the decision problem?
Generally, it will be based upon past experience with similar classes of decision problems and, to
a significant extent, intuition.
Many decision makers report that they feel the process of problem representation is perhaps
more important than the subsequent computations. However, some studies have illustrated that
the decision maker's estimates, judgments and choices are affected by the way knowledge is
elicited.
This research has direct relevance for the decision analyst's attempts at structuring. In one study,
Fischhoff et al.17 investigated estimation of failure probabilities in decision problem
representations called fault trees. These fault trees are essentially similar to decision trees, with
the exception that only events, rather than both acts and events, are represented.
However, focusing subjects' attention on what was missing only partially improved their
awareness. Fischhoff labeled this insensitivity to the incompleteness of the fault tree 'out of
sight, out of mind'. The finding was confirmed with technical experts and garage mechanics.
Another finding from the study was that the perceived importance of a particular sub-event or
branch of the fault tree was increased by presenting it in pieces (i.e. as two separate branches).
The implications of this result are far-reaching. Decision trees constructed early in the
analyst/decision maker interaction may be incomplete representations of the decision problem
facing the decision maker.
Eliciting decision tree representations
What methods have been developed to help elicit decision tree representations from decision
makers? One major method, much favored by some decision analysts, is that of influence
diagrams,18 which are designed to summarize the dependencies that are seen to exist among
events and acts within a decision.
The advantage of starting with influence diagrams is that their graphic representation is more
appealing to the intuition of decision makers who may be unfamiliar with decision technologies.
Decision trees, because of their strict temporal ordering of acts and events, need to be completely
respecified when additional acts and events are inserted into preliminary representations.

We shall illustrate the applicability of influence diagrams through a worked example. First,
however, we will present the basic concepts and representations underlying the approach.
As with the decision tree, event nodes are represented by circles and decision nodes by squares.
Arrowed lines between nodes indicate the influence of one node on another.
One step-by-step procedure for turning an influence diagram into a decision tree is as follows:
(1) Identify a node with no arrows pointing into it (since there can be no loops, at least one such
node must exist).
(2) If there is a choice between a decision node and an event node, choose the decision node.
(3) Place the node at the beginning of the tree and remove the node from the influence diagram.
(4) For the now-reduced diagram, choose another node with no arrows pointing into it. If there is
a choice a decision node should be chosen.
(5) Place this node next in the tree and remove it from the influence diagram.
(6) Repeat the above procedure until all the nodes have been removed from the influence diagram.
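The six steps above can be sketched as a small routine that repeatedly strips source nodes from the diagram, preferring decision nodes when there is a choice. The node names and arrows below are illustrative, not from the text.

```python
def diagram_to_ordering(nodes, arrows):
    """nodes: dict name -> 'decision' or 'event'.
    arrows: set of (source, destination) influence pairs.
    Returns the left-to-right ordering of nodes for the decision tree."""
    arrows = set(arrows)          # work on a copy of the diagram's arrows
    remaining = dict(nodes)
    ordering = []
    while remaining:
        # step 1/4: nodes with no arrows pointing into them
        sources = [n for n in remaining
                   if not any(dst == n for _, dst in arrows)]
        if not sources:
            raise ValueError("influence diagram contains a loop")
        # step 2: prefer a decision node when there is a choice
        sources.sort(key=lambda n: remaining[n] != "decision")
        chosen = sources[0]
        ordering.append(chosen)   # step 3/5: place it next in the tree
        del remaining[chosen]
        arrows = {(a, b) for a, b in arrows if a != chosen}
    return ordering               # step 6: repeat until the diagram is empty

nodes = {"test?": "decision", "result": "event",
         "develop?": "decision", "success": "event"}
arrows = {("test?", "result"), ("result", "develop?"), ("develop?", "success")}
print(diagram_to_ordering(nodes, arrows))
# ['test?', 'result', 'develop?', 'success']
```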

Very complex decision trees can be represented as one-page influence diagrams. However, the
use of influence diagrams to construct decision trees where subsequent events and acts depend
on the initial decision (i.e. where the resulting decision tree is asymmetric) is more problematic.
In these instances, the influence diagram approach to decision tree structuring can be used as a
guide only.

Bayes' theorem
In Bayes' theorem an initial probability estimate is known as a prior probability. When Bayes'
theorem is used to modify a prior probability in the light of new information, the result is known
as a posterior probability.

The steps in the process which we have just applied are summarized below:
(1) Construct a tree with branches representing all the possible events which can occur and write
the prior probabilities for these events on the branches.
(2) Extend the tree by attaching to each branch a new branch which represents the new
information which you have obtained. On each branch write the conditional probability of
obtaining this information given the circumstance represented by the preceding branch.

(3) Obtain the joint probabilities by multiplying each prior probability by the conditional
probability which follows it on the tree.
(4) Sum the joint probabilities.
(5) Divide the appropriate joint probability by the sum of the joint
probabilities to obtain the required posterior probability.
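The five steps can be sketched as a short function; the prior and conditional probabilities below are illustrative, not from the chapter.

```python
def posterior(priors, likelihoods):
    """priors: dict event -> prior probability (steps 1-2 set these up).
    likelihoods: dict event -> P(new information | event)."""
    # step 3: joint probability = prior x conditional probability
    joints = {e: priors[e] * likelihoods[e] for e in priors}
    total = sum(joints.values())                       # step 4: sum the joints
    return {e: j / total for e, j in joints.items()}   # step 5: normalize

# Illustrative: prior P(high sales) = 0.4, P(low) = 0.6; a survey indication
# is observed with P(indication | high) = 0.8 and P(indication | low) = 0.3.
post = posterior({"high": 0.4, "low": 0.6}, {"high": 0.8, "low": 0.3})
print({e: round(p, 2) for e, p in post.items()})  # {'high': 0.64, 'low': 0.36}
```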
The effect of new information on the revision of probability judgments
It is interesting to explore the relative influence which prior probabilities and new information
have on the resulting posterior probabilities.
Applying Bayes' theorem to a decision problem
Assessing the value of new information
New information can remove or reduce the uncertainty involved in a decision and thereby
increase the expected payoff. However, in many circumstances it may be expensive to obtain
information since it might involve, for example, the use of scientific tests, the engagement of the
services of a consultant or the need to carry out a market research survey.
If this is the case, then the question arises as to whether it is worth obtaining the information in
the first place or, if there are several potential sources of information, which one is to be
preferred (sometimes the process of determining whether it is worth obtaining new information
is referred to as preposterior analysis). To show how this question can be answered, we will first
consider the case where the information is perfectly reliable (i.e. it is certain to give a correct
indication) and then look at the much more common situation where the reliability of the
information is imperfect.
The expected value of perfect information
In many decision situations it is not possible to obtain perfectly reliable information, but
nevertheless the concept of the expected value of perfect information (EVPI) can still be useful.
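The EVPI calculation can be sketched as follows: it is the expected payoff achievable if the true state were always known before acting, less the best expected payoff using the prior probabilities alone. The states, acts and payoffs below are illustrative assumptions.

```python
def evpi(priors, payoffs):
    """priors: dict state -> probability.
    payoffs: dict act -> dict state -> payoff."""
    # best expected payoff choosing a single act under the priors
    best_prior = max(
        sum(priors[s] * pay[s] for s in priors) for pay in payoffs.values())
    # with perfect information, the best act is chosen for each state
    with_perfect = sum(
        priors[s] * max(pay[s] for pay in payoffs.values()) for s in priors)
    return with_perfect - best_prior

priors = {"good weather": 0.7, "bad weather": 0.3}
payoffs = {"plant potatoes": {"good weather": 90, "bad weather": -20},
           "plant wheat":    {"good weather": 40, "bad weather": 30}}
print(round(evpi(priors, payoffs), 2))  # 15.0
```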
The expected value of imperfect information
A summary of the main stages in the above analysis is given below:
(1) Determine the course of action which would be chosen using only the prior probabilities and
record the expected payoff of this course of action;
(2) Identify the possible indications which the new information can give;

(3) For each indication:


(a) Determine the probability that this indication will occur;
(b) Use Bayes' theorem to revise the probabilities in the light of
this indication;
(c) Determine the best course of action in the light of this indication
(i.e. using the posterior probabilities) and the expected payoff of
this course of action;
(4) Multiply the probability of each indication occurring by the expected
payoff of the course of action which should be taken if that indication occurs and sum the
resulting products. This will give the expected payoff with imperfect information;
(5) The expected value of the imperfect information is equal to the expected payoff with
imperfect information (derived in stage 4) less the expected payoff of the course of action which
would be selected using the prior probabilities (which was derived in stage 1).
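The five stages can be sketched end-to-end for a small two-state, two-act problem; all probabilities and payoffs below are illustrative assumptions, not the text's example.

```python
def evii(priors, payoffs, likelihoods):
    """Expected value of imperfect information.
    priors: dict state -> prior probability.
    payoffs: dict act -> dict state -> payoff.
    likelihoods: dict indication -> dict state -> P(indication | state)."""
    def emv(pay, probs):
        return sum(probs[s] * pay[s] for s in probs)
    # stage 1: best expected payoff using the prior probabilities only
    prior_best = max(emv(pay, priors) for pay in payoffs.values())
    payoff_with_info = 0.0
    for lik in likelihoods.values():                      # stages 2-3
        joints = {s: priors[s] * lik[s] for s in priors}  # prior x conditional
        p_ind = sum(joints.values())                      # 3(a): P(indication)
        post = {s: j / p_ind for s, j in joints.items()}  # 3(b): Bayes' theorem
        # stage 4: weight the best posterior payoff by P(indication)
        payoff_with_info += p_ind * max(emv(pay, post) for pay in payoffs.values())
    return payoff_with_info - prior_best                  # stage 5

priors = {"good": 0.7, "bad": 0.3}
payoffs = {"potatoes": {"good": 90, "bad": -20},
           "wheat":    {"good": 40, "bad": 30}}
likelihoods = {"favorable":   {"good": 0.9, "bad": 0.2},
               "unfavorable": {"good": 0.1, "bad": 0.8}}
print(round(evii(priors, payoffs, likelihoods), 2))  # 8.5
```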
There is an alternative way of looking at the value of information. New information can be
regarded as being of no value if you would still make the same decision regardless of what the
information told you. If the farm manager were to go ahead and plant a crop of potatoes
whatever the test indicated then there would clearly be no point in buying the test. Information
has value if some of its indications would cause you to take a different decision than the one you
would take without the benefit of the information.
Practical considerations
We will now outline a number of examples of the application of the methods we have just
discussed and consider some of the practical problems involved. Clearly, it is easier to identify
the expected value of perfect as opposed to imperfect information, and we recommend that, in
general, calculating the EVPI should be the first step in any information-evaluation exercise. The
EVPI can act as a useful screen, since some sources of information may prove to be too
expensive, even if they were to offer perfectly reliable data, which is unlikely.
As we saw in the previous section, assessing the expected value of imperfect information
requires the decision maker to judge how reliable the information will be in order to obtain the
conditional probabilities for the Bayes' theorem calculations. In some circumstances this
assessment can be made on the basis of statistical theory.

The analysis acted as an initial screening process, leading to the early rejection of a number of
projects and enabling the corporation to focus only on those which had a good potential for
generating economic gains. Because the managerial and other inputs to the model were 'soft',
Schell stressed the importance of sensitivity analysis. This involved the calculation of the
expected value of perfect information and also the variation of the inputs into the model.
In this chapter we have discussed the role that new information can play in revising the
judgments of a decision maker. We argued that Bayes' theorem shows the decision maker how
his or her judgments should be modified in the light of new information, and we showed that this
revision will depend both upon the vagueness of the prior judgment and the reliability of the
new information. Of course, receiving information is often a sequential process. Your prior
probability will reflect the information you have received up to the point in time when you
make your initial probability assessment. As each new instalment of information arrives, you
may continue to revise your probability. The posterior probability you had at the end of last week
may be treated as the prior probability this week, and be revised in the light of this week's
information.
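This sequential revision, in which each posterior serves as the next prior, can be sketched as follows (the states, observations and probabilities are illustrative):

```python
def update(prior, likelihoods, observation):
    """prior: dict state -> probability.
    likelihoods: dict state -> dict observation -> P(observation | state)."""
    joints = {s: prior[s] * likelihoods[s][observation] for s in prior}
    total = sum(joints.values())
    return {s: j / total for s, j in joints.items()}

likelihoods = {"high demand": {"good week": 0.7, "bad week": 0.3},
               "low demand":  {"good week": 0.2, "bad week": 0.8}}
belief = {"high demand": 0.5, "low demand": 0.5}
for week in ["good week", "good week", "bad week"]:
    belief = update(belief, likelihoods, week)  # posterior becomes next prior
print({s: round(p, 3) for s, p in belief.items()})
# {'high demand': 0.821, 'low demand': 0.179}
```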
