
Discrete Markov Chains

A H Chowdhury, PhD
Professor
Dept. of EEE, BUET
Andrey Andreyevich Markov
14 June 1856 – 20 July 1922

• A Russian mathematician best known for his work on stochastic processes
• A primary subject of his research later became known as Markov chains and Markov processes
• When teachers of St. Petersburg University were ordered to monitor their students in connection with the student riots of 1908, Markov refused to accept this decree and to be an "agent of the governance"
• Markov was removed from further teaching duties at St. Petersburg University
• After the February Revolution in 1917, Markov resumed teaching activities and lectured on probability theory and the calculus of differences until his death in 1922
Introduction

• A Markov chain is a stochastic process with Markov property
• Markov chain refers to sequence of random variables such a process moves through, with Markov property defining serial dependence only between adjacent periods (as in a "chain")
• Can be used for describing systems that follow a chain of linked events, where what happens next depends only on current state of system
Introduction contd.

• A stochastic process is a collection of random variables, representing evolution of some system of random values over time
• Probabilistic counterpart to a deterministic process
• A deterministic process can only evolve in one way (e.g., solutions of an ordinary differential equation)
• In a stochastic or random process there is some indeterminacy → even if initial condition (or starting point) is known, there are several directions in which process may evolve
Introduction contd.

• A system with components having repair time which is not instantaneous can be represented using Markov approach
• Markov approach can be applied to random behaviour of systems that vary discretely or continuously with respect to time and space (a stochastic process)
• Not all stochastic processes can be modelled using basic Markov approach
Introduction contd.
Markov property
• Behaviour of system must be characterized by a lack of memory
– Future states of system independent of all past states except immediately preceding one
– Future random behaviour of system only depends on where it is at present, not on where it has been or how it arrived at its present position
• Process must be stationary (homogeneous)
– System behaviour must be same at all points of time → probability of making a transition from one given state to another is same (stationary) at all times in past and future → constant hazard rate
• Lack of memory and being stationary → constant hazard rate → i.e. Poisson and exponential distributions
• If process non-stationary, i.e. probability of making a transition is a function of time or number of discrete steps → non-Markovian
Introduction contd.

• In general case of Markov models, both time and space may either be discrete or continuous
• In particular case of system reliability evaluation, space normally represented only as a discrete function
– since this represents discrete and identifiable states in which system and its components can reside
– whereas time may either be discrete or continuous
• Discrete case known as a Markov chain
• Continuous case known as a Markov process

General Modelling Concepts

Probabilities of remaining in or leaving a particular state in a finite time

• Consider a simple system with two system states → 1 and 2
• Probabilities assumed to be constant for all times into future
• This is a discrete Markov chain → i) system stationary, and ii) movement between states occurs in discrete steps


General Modelling Concepts contd.

• Assume system initially in state 1, consider the first time interval
• System can remain in state 1 with a probability of ½ or it can move into state 2 with a probability of ½
• Sum of these probabilities must be unity, i.e., system must either remain in state being considered or move out of state
• This principle applies equally to all systems irrespective of degree of complexity or ways of moving out of a given state
– Sum of probabilities of remaining in or moving out of a state must be unity
General Modelling Concepts contd.

[Tree diagram: branch probabilities after 4 time intervals]


General Modelling Concepts contd.
• Probability of following any one branch of tree evaluated by multiplying appropriate probabilities of each step of branch
• Probability of residing in a particular state of system after a certain number of time intervals evaluated by summating branch probabilities that lead to that state
– e.g. probability of residing in state 1 after 2 time intervals → (½ × ½) + (½ × ¼) = 3/8
• If all these probabilities are summated they add up to unity
• Probability of residing in state 1 after 4 time intervals
– Summate branch probabilities leading to state 1 → 43/128
• Similarly probability of residing in state 2 → 85/128
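The branch summation can be checked with a short Python sketch (a minimal illustration, assuming the transition probabilities of this example: ½ and ½ out of state 1, ¼ and ¾ out of state 2, as quoted later for matrix P):

    from fractions import Fraction
    from itertools import product

    # One-step transition probabilities of the two-state example
    P = {(1, 1): Fraction(1, 2), (1, 2): Fraction(1, 2),
         (2, 1): Fraction(1, 4), (2, 2): Fraction(3, 4)}

    def state_probabilities(start, steps):
        # Enumerate every branch of the tree, multiply the step
        # probabilities along it, and add the result to its end state
        totals = {1: Fraction(0), 2: Fraction(0)}
        for path in product((1, 2), repeat=steps):
            prob, state = Fraction(1), start
            for nxt in path:
                prob *= P[(state, nxt)]
                state = nxt
            totals[state] += prob
        return totals

    print(state_probabilities(1, 2))   # {1: Fraction(3, 8), 2: Fraction(5, 8)}
    print(state_probabilities(1, 4))   # {1: Fraction(43, 128), 2: Fraction(85, 128)}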


General Modelling Concepts contd.

System transient behaviour

• As number of time intervals increased, values of state probabilities tend to a constant or limiting value
– These limiting values known as limiting-state or time-independent values of state probabilities
• This is characteristic of most systems which satisfy conditions of Markov approach
General Modelling Concepts contd.

• Initial conditions → state of system at step 0 or zero time
• Initial conditions are generally known → problem centres around evaluating system reliability in the future
• Transient behaviour is very dependent on initial conditions

[Figure: system transient behaviour]


General Modelling Concepts contd.

• Previous example → system started in state 1
• If system started in state 2 rather than state 1, similar transient behaviour obtains
– Limiting values of state probabilities totally independent of initial conditions and both will tend to same limiting-state values
• A system or process for which limiting values of state probabilities independent of initial conditions is called ergodic
General Modelling Concepts contd.

• For ergodicity → it is essential that every state of a system can be reached from all other states of system either directly or indirectly through intermediate states
• If this is not possible and a particular state or states, once entered, cannot be left → system not ergodic and relevant states known as absorbing states
General Modelling Concepts contd.

For any ergodic system

• Limiting or steady-state probabilities for states independent of initial conditions
• Rate of convergence to limiting-state value
– can be dependent on initial conditions
– very dependent on probabilities of making transitions between states of system
Stochastic Transitional Probability Matrix

• Transition probabilities of the system can be represented by matrix P
– Pij = probability of making a transition to state j after a time interval given that it was in state i at the beginning of time interval
• For the first time interval → P11 = ½, P12 = ½, P21 = ¼, P22 = ¾
• Stochastic transitional probability matrix → represents transitional probabilities of the stochastic process
Stochastic Transitional Probability Matrix contd.

• Definition of Pij indicates
– Row position of matrix is the state from which transition occurs
– Column position of matrix is the state to which transition occurs

[Figure: general form of the matrix for an n-state system]

• Summation of probabilities in each row must be unity for an n-state system
– Since row i represents complete and exhaustive ways in which system can behave in a particular time interval given that it is in state i at the beginning of that time interval
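A minimal numpy sketch of this matrix for the two-state example, including the row-sum check described above:

    import numpy as np

    # Rows = state the system is in at the start of the interval,
    # columns = state it occupies at the end of the interval
    P = np.array([[1/2, 1/2],
                  [1/4, 3/4]])

    # Summation of probabilities in each row must be unity
    assert np.allclose(P.sum(axis=1), 1.0)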
Time Dependent Probability Evaluation

• Stochastic transitional probability matrix for first time interval → P, as given above
• Multiplying matrix P by itself gives P²
• First element of row 1 of P² (3/8) → probability of being in state 1 after two time intervals given that it started in state 1
• Second element of row 1 of P² (5/8) → probability of being in state 2 after two time intervals given that it started in state 1
Time Dependent Probability Evaluation contd.

• Values in row 1 are identical to state probabilities in Table 8.1 evaluated after two time intervals given that system commenced in state 1
• Similarly for row 2 if the system had started in state 2, etc.
• Elements of P² give all the state probabilities of system after two time intervals, both those when starting in state 1 and those when starting in state 2
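As an illustrative sketch, np.linalg.matrix_power reproduces the figures quoted above for this two-state example:

    import numpy as np

    P = np.array([[1/2, 1/2],
                  [1/4, 3/4]])

    P2 = np.linalg.matrix_power(P, 2)
    print(P2[0])   # [0.375 0.625]   -> 3/8 and 5/8, starting in state 1
    print(P2[1])   # [0.3125 0.6875] -> starting in state 2

    P4 = np.linalg.matrix_power(P, 4)
    print(P4[0])   # [0.3359375 0.6640625] -> 43/128 and 85/128 after 4 intervals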
Time Dependent Probability Evaluation contd.
• Probability of residing in any state can be evaluated provided it is known for certain in which state the system started
– i.e., probability of starting in a particular state is unity and probability of starting in all others is zero
– Frequently this is the case in practice because, at time zero, deterministic state of system known
• If initial conditions not known deterministically → premultiply Pⁿ by initial probability vector P(0)
• P(0) represents probability of being in each of the system states at the start
• Values of probability contained in P(0) must summate to unity
Time Dependent Probability Evaluation contd.

Case 1: system starts in state 1

• Initial probability vector → P(0) = [1 0]
• Probability vector representing state probabilities after two time intervals → P(0) P² = [3/8 5/8]
Time Dependent Probability Evaluation contd.

Case 2: system equally likely to start in state 1 or state 2

• Initial probability vector → P(0) = [½ ½]
• Probability vector representing state probabilities after two time intervals → P(0) P² = [11/32 21/32]

• This principle can again be extended to give → P(n) = P(0) Pⁿ
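A short sketch of the premultiplication for both cases, using the P² computed earlier:

    import numpy as np

    P = np.array([[1/2, 1/2],
                  [1/4, 3/4]])
    P2 = np.linalg.matrix_power(P, 2)

    p0_case1 = np.array([1.0, 0.0])   # certain start in state 1
    p0_case2 = np.array([0.5, 0.5])   # equally likely start in state 1 or 2

    print(p0_case1 @ P2)   # [0.375 0.625]     -> 3/8, 5/8
    print(p0_case2 @ P2)   # [0.34375 0.65625] -> 11/32, 21/32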


Time Dependent Probability Evaluation contd.

• State probabilities at any time interval → multiply stochastic transitional probability matrix by itself relevant number of times
• Transient behaviour → continue the process sequentially
• Limiting values of state probabilities → continue multiplication process sufficient number of times
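A minimal sketch of that multiplication process, repeating until the probability vector stops changing; for this example the probabilities settle near 1/3 and 2/3:

    import numpy as np

    P = np.array([[1/2, 1/2],
                  [1/4, 3/4]])

    p = np.array([1.0, 0.0])          # start in state 1
    for n in range(1, 100):           # one multiplication per time interval
        p_next = p @ P
        if np.allclose(p_next, p, atol=1e-12):
            break                     # limiting values reached
        p = p_next

    print(n, p)                       # roughly [0.3333 0.6667] after about 20 intervals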
Limiting State Probability Evaluation

• Limiting values of state probabilities of an ergodic system can be evaluated using matrix multiplication technique
– Sensible to use this technique if transient behaviour is also required
– If only limiting state probabilities required, matrix multiplication can be tedious and time-consuming
• Efficient alternative method exists


Limiting State Probability Evaluation contd.

Principle

• Once limiting state probabilities have been reached by matrix multiplication, any further multiplication by stochastic transitional probability matrix does not change values of limiting state probabilities
• i.e., if α represents limiting probability vector and P is stochastic transitional probability matrix, then α P = α
α → limiting probability vector
P → stochastic transitional probability matrix
Limiting State Probability Evaluation contd.

• Let P1, P2 → limiting probabilities of being in states 1 and 2 respectively, then
P1 = ½ P1 + ¼ P2
P2 = ½ P1 + ¾ P2
Identical equations (each reduces to P2 = 2 P1)
• Two unknowns, P1, P2 → a third equation needed → P1 + P2 = 1
• With systems of any size, one of the equations will always be redundant and therefore any one of the equations must be replaced by Σ Pi = 1 (here P1 + P2 = 1)
Limiting State Probability Evaluation contd.

• Using P1 + P2 = 1 and ½ P1 − ¼ P2 = 0
• Expressed in matrix form → A X = b, with A = [[1, 1], [½, −¼]], X = [P1, P2]ᵀ, b = [1, 0]ᵀ
• Solution for X given by: X = A⁻¹ b
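A sketch of that solution for the two-state example, with one balance equation replaced by the normalising condition:

    import numpy as np

    A = np.array([[1.0,  1.0 ],    # P1 + P2 = 1
                  [0.5, -0.25]])   # ½ P1 - ¼ P2 = 0  (balance equation)
    b = np.array([1.0, 0.0])

    print(np.linalg.solve(A, b))   # [0.33333333 0.66666667] -> P1 = 1/3, P2 = 2/3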


Absorbing States

• Absorbing states → states which, once entered, cannot be left until the system starts a new mission
• Can readily be identified in terms of mission-oriented systems as the catastrophic failure event states into which probability of entering must be minimized to ensure safe operation of mission
– In such cases, one requirement of reliability analysis is to evaluate average number of time intervals in which system resides in one of the non-absorbing states
– i.e. for how many time intervals does the system operate on average before it enters one of the absorbing states
Absorbing States contd.

• Principle behind such a system can also be applied to repairable systems to evaluate average number of time intervals system will operate satisfactorily before entering an undesirable state or states
– In this case the states may not be real absorbing states because they can be left following a repair action
• Principle of absorbing states can be used to deduce average number of time intervals by defining them as absorbing states
Absorbing States contd.

• In the two state system, if system starts in state 1, probability of continuing to reside in this state without ever entering state 2 becomes progressively smaller as number of time intervals increases
– i.e., provided the number of time intervals is allowed to become great enough, the system must eventually enter state 2
– Mathematically this is because lim (n→∞) P11ⁿ = 0, since P11 < 1


Absorbing States contd.

• If P is stochastic transitional probability matrix of the system, a truncated matrix Q can be created by deleting row(s) and column(s) associated with absorbing state(s)
– truncation creates a matrix Q having one element only → [P11], if state 2 is defined as absorbing state
• Truncated matrix Q represents transient set of states and it is necessary to evaluate expected number of time intervals for which system remains in one of the states represented in this matrix
Absorbing States contd.
• Mathematical expectation → E(x) = Σ xi Pi  (Eq. 8.12)
• This principle also applies to multi-probability elements represented by matrix Q
– If N is the expected number of time intervals,
N = I + Q + Q² + Q³ + …  (Eq. 8.13), which sums to N = [I − Q]⁻¹
– I is the identity (or unit) matrix → represents probability of all possible initial conditions, i.e., unity in row 1 represents contribution to expectation of the system starting in state 1, unity in row 2 represents contribution of system starting in state 2, and so on
– Each of the unity digits in Eq. 8.13 represents one further time interval, i.e., these are equivalent to xi in Eq. 8.12
– First time interval occurs with probability I, second with probability Q, third with probability Q², and so on
– nth time interval being the one that enters the absorbing state
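For the two-state example with state 2 defined as the absorbing state, Q reduces to the single element P11 = ½, and a minimal sketch of the evaluation is:

    import numpy as np

    Q = np.array([[0.5]])                # truncated matrix: only P11 remains
    N = np.linalg.inv(np.eye(1) - Q)     # N = [I - Q]^-1

    print(N)   # [[2.]] -> on average 2 time intervals in state 1 before absorption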
Application of Discrete Markov Techniques
Example 8.1
Evaluate (a) limiting state probabilities associated with each state and (b) average number of time intervals spent in each state if state 3 is an absorbing state.
Solution
(a) Stochastic transitional probability matrix →
Let the limiting state probabilities → P1, P2, P3

Then,

One of the equations may be replaced by P1 + P2 + P3 = 1, so
