
Markov Models: Overview

Gerald F. Kominski, Ph.D.
Professor, Department of Health Services

Markov Models: Why Are They Necessary?

Conventional decision analysis models assume:
- Chance events
- A limited time horizon
- Events that do not recur

What happens if we have a problem with:
- An extended time horizon, say, over a lifetime
- Events that can recur throughout a lifetime

Decision Tree for Atrial Fibrillation

State-Transition Diagram for Atrial Fibrillation


[State-transition diagram: Well, PostStroke, and Dead, with transition
probabilities p11 = 0.7 (Well to Well), p12 = 0.2 (Well to PostStroke),
p13 = 0.1 (Well to Dead), p22 = 0.9 (PostStroke to PostStroke),
p23 = 0.1 (PostStroke to Dead), and p33 = 1.0 (Dead to Dead)]

The probabilities for all paths out of a state must sum to 1.0.
Death is known as an absorbing state, because individuals who enter that
state cannot transition out of it.

Transition Probabilities

                           State of Next Cycle
State of Current Cycle     Well    PostStroke    Dead
Well                       0.7     0.2           0.1
PostStroke                 0.0     0.9           0.1
Dead                       0.0     0.0           1.0

Transition probabilities that remain constant over time are characteristic
of stationary Markov models, aka Markov chains.
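As a minimal illustration (our own sketch in Python/numpy, not part of the
lecture), the matrix above can be written out and its properties checked
directly:

```python
import numpy as np

# Transition matrix for the atrial fibrillation example:
# rows = state in the current cycle, columns = state in the next cycle,
# ordered (Well, PostStroke, Dead).
P = np.array([
    [0.7, 0.2, 0.1],   # Well
    [0.0, 0.9, 0.1],   # PostStroke
    [0.0, 0.0, 1.0],   # Dead
])

# The probabilities for all paths out of a state must sum to 1.0.
assert np.allclose(P.sum(axis=1), 1.0)

# An absorbing state's only positive entry is on the diagonal.
print([P[i, i] == 1.0 for i in range(3)])   # [False, False, True] -> Dead
```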

Markov Model Definitions

Any process evolving over time with uncertainty is a stochastic process,
and models based on such processes are stochastic or probabilistic models.

If the process is stochastic and the behavior of the model in one time
period (i.e., cycle) depends only on the current state, not on earlier
cycles, the process is Markovian.
- The process has a lack of memory
- Even processes where the previous state does matter can be made
  Markovian through the definition of temporary states known as tunnel
  states, as diagrammed below

Tunnel States

[State-transition diagram with tunnel states: Well leads to PostStroke 1,
which leads to PostStroke 2, then PostStroke 3, then the permanent
PostStroke state; Dead is reachable from the other states]
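A sketch of how tunnel states enlarge the transition matrix. The
probabilities below are illustrative placeholders, not values from the
lecture; the point is that each tunnel state can only advance to the next
one or to Dead, so "time since stroke" is encoded in the state itself:

```python
import numpy as np

# Illustrative tunnel-state expansion of the stroke model.
states = ["Well", "PostStroke1", "PostStroke2", "PostStroke3",
          "PostStroke", "Dead"]
P = np.array([
    # Well  PS1   PS2   PS3   PS    Dead
    [0.70, 0.20, 0.00, 0.00, 0.00, 0.10],  # Well
    [0.00, 0.00, 0.80, 0.00, 0.00, 0.20],  # PostStroke1: on to PS2 or death
    [0.00, 0.00, 0.00, 0.85, 0.00, 0.15],  # PostStroke2
    [0.00, 0.00, 0.00, 0.00, 0.90, 0.10],  # PostStroke3
    [0.00, 0.00, 0.00, 0.00, 0.95, 0.05],  # PostStroke: long-run survivors
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],  # Dead (absorbing)
])
assert np.allclose(P.sum(axis=1), 1.0)
```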

Markov Chains: Coke vs. Pepsi Example


Given that a person's last cola purchase was Coke, there is a 90% chance
that his next cola purchase will also be Coke.
If a person's last cola purchase was Pepsi, there is an 80% chance that
his next cola purchase will also be Pepsi.

Transition matrix:

$$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$$

[Transition diagram: Coke to Coke 0.9, Coke to Pepsi 0.1, Pepsi to Pepsi
0.8, Pepsi to Coke 0.2]

Powers of transition matrices


Coke vs. Pepsi Example (cont.)
Given that a person is currently a Pepsi purchaser, what is
the probability that he will purchase Coke two purchases
from now?
$$\Pr[\text{Pepsi} \to ? \to \text{Coke}] =
\Pr[\text{Pepsi} \to \text{Coke} \to \text{Coke}] +
\Pr[\text{Pepsi} \to \text{Pepsi} \to \text{Coke}] =
0.2 \times 0.9 + 0.8 \times 0.2 = 0.34$$

$$P^2 = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}
\begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix} =
\begin{pmatrix} 0.83 & 0.17 \\ 0.34 & 0.66 \end{pmatrix}$$

P^2 is the transition matrix after two time periods.

Powers of transition matrices


Coke vs. Pepsi Example (cont.)
Given that a person is currently a Coke purchaser, what
is the probability that he will purchase Pepsi three
purchases from now?

$$P^3 = P \, P^2 = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}
\begin{pmatrix} 0.83 & 0.17 \\ 0.34 & 0.66 \end{pmatrix} =
\begin{pmatrix} 0.781 & 0.219 \\ 0.438 & 0.562 \end{pmatrix}$$

The answer is the (Coke, Pepsi) entry of P^3: 0.219.
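Both results can be checked with a few lines of numpy (our sketch, not the
lecture's):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # rows/columns ordered (Coke, Pepsi)

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

# Pepsi -> Coke in two purchases: row 1 (Pepsi), column 0 (Coke)
print(P2[1, 0])   # 0.34
# Coke -> Pepsi in three purchases: row 0 (Coke), column 1 (Pepsi)
print(P3[0, 1])   # 0.219
```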

Distribution Row Vector


A distribution row vector d for an N-state Markov chain is an
N-dimensional row vector having as its components, one for each state,
the probabilities that an object in the system is in each of the
respective states.

Example (cont.): Suppose 60% of all people now drink Coke, and 40% drink
Pepsi. Then the distribution vector is (0.6, 0.4).

Let d(k) denote the distribution vector for a Markov chain after k time
periods. Thus, d(0) represents the initial distribution. Then

$$d(k) = d(0)\,P^k$$

Distribution Row Vector


Example (cont.):
Suppose 60% of all people now drink Coke, and 40% drink Pepsi.
What fraction of people will be drinking Coke three weeks from now?

$$d(0) = (0.6,\ 0.4), \qquad
P = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}, \qquad
P^3 = \begin{pmatrix} 0.781 & 0.219 \\ 0.438 & 0.562 \end{pmatrix}$$

$$d(3) = d(0)\,P^3 \ \Rightarrow\
\Pr[X_3 = \text{Coke}] = 0.6 \times 0.781 + 0.4 \times 0.438 = 0.6438$$
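The same calculation as a short numpy sketch:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
d0 = np.array([0.6, 0.4])                 # initial distribution (Coke, Pepsi)

d3 = d0 @ np.linalg.matrix_power(P, 3)    # d(3) = d(0) P^3
print(d3[0])                              # 0.6438 = Pr[X3 = Coke]
```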

Regular Markov Chains


A Markov chain is regular if some power of the transition matrix contains
only positive elements.
If the matrix itself contains only positive elements, then the power is
one, and the matrix is automatically regular.
Transition matrices that are regular always have an eigenvalue of unity.
They also have a limiting distribution vector x(∞), where the ith
component of x(∞) represents the probability that an object is in state i
after a large number of time periods have elapsed.

Limiting Distribution
Coke vs. Pepsi Example (cont.)

$$\left( \tfrac{2}{3},\ \tfrac{1}{3} \right)
\begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix} =
\left( \tfrac{2}{3},\ \tfrac{1}{3} \right)$$

so (2/3, 1/3) is the stationary distribution.

[Plot: Pr[X_i = Coke] by week i, converging to the limit 2/3; transition
diagram repeated with Coke to Coke 0.9, Coke to Pepsi 0.1, Pepsi to Pepsi
0.8, Pepsi to Coke 0.2]

Regular Markov Chains


Definition: A nonzero vector x is a left eigenvector for a matrix A if
there exists a scalar λ such that xA = λx.
(Left and right eigenvectors are usually different; they are the same
only for special types of matrices.)
The limiting distribution x(∞) is a left eigenvector of the transition
matrix corresponding to the eigenvalue of unity, and having the sum of
its components equal to one.
Examples on the board.
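A sketch of the same computation in numpy (rather than on the board): left
eigenvectors of P are right eigenvectors of P transposed, and normalizing
the one for eigenvalue 1 gives the limiting distribution:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Left eigenvectors of P are right eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)
# Pick the eigenvector whose eigenvalue is (numerically) 1.
idx = np.argmin(np.abs(eigvals - 1.0))
x = np.real(eigvecs[:, idx])
x = x / x.sum()          # normalize so the components sum to one
print(x)                 # [0.6667 0.3333] -> the limiting distribution (2/3, 1/3)
```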

Defining a Markov Model

- Define the initial states
- Determine the cycle length
- Consider possible transitions among states
- Determine transition probabilities
- Determine utilities and costs (if cost-effectiveness analysis) for each
  state
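One way these five ingredients might be collected in code (a sketch only:
the state names match the stroke example, but the utility and cost values
here are invented placeholders, not from the lecture):

```python
import numpy as np

# Illustrative container for a Markov model definition.
model = {
    "states": ["Well", "PostStroke", "Dead"],
    "initial_counts": np.array([10_000, 0, 0]),    # everyone starts Well
    "cycle_length_years": 1.0,
    "transition_matrix": np.array([[0.7, 0.2, 0.1],
                                   [0.0, 0.9, 0.1],
                                   [0.0, 0.0, 1.0]]),
    "utilities": np.array([1.0, 0.7, 0.0]),   # QALY weight per state (illustrative)
    "costs": np.array([0.0, 5_000.0, 0.0]),   # cost per cycle per state (illustrative)
}
```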

Evaluating Markov Models: Cohort Simulation

Cycle   Well     PostStroke   Dead     Sum of Years Lived   Survival
0       10,000   0            0
1       7,000    2,000        1,000    9,000                0.9000
2       4,900    3,200        1,900    8,100                0.8100
3       3,430    3,860        2,710    7,290                0.7290
4       2,401    4,160        3,439    6,561                0.6561
5       1,681    4,224        4,095    5,905                0.5905
6       1,176    4,138        4,686    5,314                0.5314
7       824      3,959        5,217    4,783                0.4783
...
93                            9,999    1                    0.0001
94                            10,000   0                    0.0000

The data in the last column is used to produce a survival curve, aka a
Markov trace.
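The table can be reproduced by repeatedly multiplying the cohort counts by
the transition matrix; a short sketch:

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])            # (Well, PostStroke, Dead)
counts = np.array([10_000.0, 0.0, 0.0])    # cycle 0: the whole cohort is Well

trace = []                                 # survival by cycle: the Markov trace
for cycle in range(1, 95):
    counts = counts @ P                    # redistribute the cohort one cycle
    alive = counts[0] + counts[1]          # sum of years lived in this cycle
    trace.append(alive / 10_000)
    if cycle <= 7:
        print(cycle, counts.round(0), round(alive), round(alive / 10_000, 4))
```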

Estimating Markov Models: Monte Carlo Simulation

Instead of processing an entire cohort and applying probabilities to the
cohort, simulate a large number (e.g., 10,000) of cases proceeding
individually through the transition matrix
- Monte Carlo simulation
- TreeAge will do this for you quickly, without programming

The advantage of this approach is that it provides estimates of variation
around the mean.
Monte Carlo simulation is most valuable because it permits efficient
modeling of complex prior history through individual-level variables
- Such variables are known as tracker variables
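A minimal sketch of such a microsimulation (ours, not TreeAge's), with a
simple tracker variable recording each simulated person's stroke history:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.7, 0.2, 0.1],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
WELL, POSTSTROKE, DEAD = 0, 1, 2

years_lived = []
for _ in range(10_000):                      # one simulated case at a time
    state, years, strokes = WELL, 0, 0       # 'strokes' is a tracker variable
    while state != DEAD:
        new_state = rng.choice(3, p=P[state])
        if state == WELL and new_state == POSTSTROKE:
            strokes += 1                     # record this person's history
        state = new_state
        if state != DEAD:
            years += 1
    years_lived.append(years)

# Unlike the cohort simulation, this yields variation around the mean.
print(np.mean(years_lived), np.std(years_lived))
```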

Example of a 5-State Markov Model

Source: Kominski GF, Varon SF, Morisky DE, Malotte CK, Ebin VJ, Coly A,
Chiao C. Costs and cost-effectiveness of adolescent compliance with
treatment for latent tuberculosis infection: results from a randomized
trial. Journal of Adolescent Health 2007;40(1):61-68.

Key Assumptions of the Markov Model

Variable                                Value (Range)                      Reference
Efficacy of IPT                         0.85 (0.75-0.98)                   19
Cost of treating active TB              $22,500 ($17,000-$30,000)          17
Cost of IPT                             Varies by study group and whether  Current study
                                        6-month IPT is completed
TB cases per 100,000                    250 (120-560)                      20
TB case fatality rate                   0.0045-0.16 (varies with age)      17
All-cause mortality rate per 100,000    19-15,476 (varies with age)        National Center for Health
                                                                           Statistics, 1999 mortality tables
Hepatotoxicity of IPT                   0.0008 (age<35, started IPT);      21
                                        0.0012 (age<35, completed IPT)
Hepatitis fatality rate                 0.002                              21
Cost of treating IPT-induced hepatitis  $11,250 ($8,500-$15,000)           Author's assumption
QALY Healthy                            1.00 (0.95-1.00)                   Author's assumption
QALY Positive Skin Test, but            0.90 (0.80-0.95)                   Author's assumption
  Incomplete IPT
QALY Active TB                          0.50 (0.20-0.90)                   Harvard Center for Risk Analysis
QALY IPT-induced hepatitis              0.75 (0.75-0.90)                   Harvard Center for Risk Analysis
Discount rate                           0.03 (0.00-0.07)                   Panel on Cost-Effectiveness
