http://www.cs.tut.fi/kurssit/ELT-53606/
Network analysis and dimensioning I D.Moltchanov, TUT, 2013
OUTLINE:
• Definition and basic notions;
• Characterization of stochastic processes;
• Classifications of stochastic processes;
• Ergodic processes;
• Markov processes;
• Discrete-time homogeneous Markov chains;
• Continuous-time homogeneous Markov chains;
• Classification of states;
• Ergodic chains and stationary distribution.
1. Definition and basic notions
A stochastic process is a family of random variables S(t) indexed by a parameter t from an index set T:
S = {S(t), t ∈ T }. (1)
Examples of processes:
• packet arrival process to an IP router;
• call arrival process to a telephone exchange, etc.
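As a small illustration (not part of the original notes; the arrival rate and the use of exponential interarrival times are assumptions made only for this example), the counting process N(t) of packet arrivals can be simulated as follows:

import random

def simulate_arrivals(rate, horizon, seed=1):
    """Arrival instants on [0, horizon] with exponential interarrival times."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)   # next interarrival time
        if t > horizon:
            break
        arrivals.append(t)
    return arrivals

# N(t), the number of packets arrived by time t, is a discrete-space,
# continuous-time stochastic process.
arrivals = simulate_arrivals(rate=2.0, horizon=10.0)
print(len(arrivals), "packet arrivals in 10 time units")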
If T is countable, the process is a discrete-time process, e.g.:
T = ℕ = {0, 1, . . . }, T = ℤ = {. . . , −2, −1, 0, 1, 2, . . . }. (2)
If T is not countable, the process is a continuous-time process, e.g. T is an interval of the real line.
If E is countable, the process is a discrete-space process, e.g.:
E = ℕ = {0, 1, . . . }, E = ℤ = {. . . , −2, −1, 0, 1, 2, . . . }. (4)
If E is not countable, the process is a continuous-space process, e.g. E is an interval of the real line.
[Figure: sample realization of a stochastic process; the vertical axis is the state space E, the horizontal axis is the index set T (time t).]
2. Characterization of stochastic processes
By induction, the process is characterized by the family of its n-dimensional CDFs:
F_S(x_1, . . . , x_n; t_1, . . . , t_n) = Pr{S(t_1) ≤ x_1, . . . , S(t_n) ≤ x_n}.
2.1. Mean
Mean of the stochastic process {S(t), t ∈ T }:
• a non-probabilistic (deterministic) function m_s(t);
• for every t ∈ T it equals the mean of the corresponding section S(t):
m_s(t) = ∫_{−∞}^{∞} x s(t, x) dx, (9)
where s(t, x) is the one-dimensional PDF of the process.
[Figure: sample realizations S(t) over 0 ≤ t ≤ 10, illustrating the mean function m_s(t).]
2.2. Variance
Variance of the stochastic process {S(t), t ∈ T }:
• a non-probabilistic (deterministic) function D_s(t);
• for every t ∈ T it equals the variance of the corresponding section S(t):
D_s(t) = ∫_{−∞}^{∞} (x − m_s(t))² s(t, x) dx. (10)
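The mean and variance of a section can be approximated by averaging over many independent realizations. A minimal sketch (the toy process S(t) = A·t + noise and all parameter values are assumptions for illustration only, not from the notes):

import random

def sample_path(n_points, seed):
    """One realization of a toy process S(t) = A*t + noise, with A ~ U(0, 1)."""
    rng = random.Random(seed)
    a = rng.random()
    return [a * t + rng.gauss(0.0, 0.5) for t in range(n_points)]

# Ensemble of independent realizations.
paths = [sample_path(n_points=11, seed=s) for s in range(1000)]

# Section at time t: the random variable S(t) across realizations.
for t in (0, 5, 10):
    section = [p[t] for p in paths]
    m = sum(section) / len(section)                         # estimate of m_s(t)
    d = sum((x - m) ** 2 for x in section) / len(section)   # estimate of D_s(t)
    print(f"t={t}: mean ~ {m:.2f}, variance ~ {d:.2f}")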
3. Classifications
3.1. Based on stationarity
General classification:
• stationary processes:
– at least the mean and the ACF do not depend on time (wide-sense stationarity).
• non-stationary processes:
– the mean, the ACF, or both depend on time;
– in practice, their characteristics may change over time.
Note: stationarity is an advantageous property:
• the correlation theory of stationary processes is well developed;
• but we are limited to the first two moments (K(t_i, t_i) = D(t_i)) and to linear dependence!
Notes on non-stationary processes:
• very hard to deal with;
• most processes observed in networks appear to be somewhat non-stationary (a crude empirical check is sketched below)!
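As a crude empirical check (an illustrative sketch, not part of the notes; the two toy paths and the window sizes are arbitrary assumptions), sample means over disjoint time windows should agree for a stationary path and drift apart for a non-stationary one:

import random

def window_mean(path, lo, hi):
    """Sample mean over the segment path[lo:hi]."""
    seg = path[lo:hi]
    return sum(seg) / len(seg)

random.seed(0)
# Stationary toy path: i.i.d. Gaussian noise.
stat = [random.gauss(0.0, 1.0) for _ in range(10_000)]
# Non-stationary toy path: the same noise plus a slow trend in the mean.
nonstat = [x + 0.0005 * t for t, x in enumerate(stat)]

for name, path in (("stationary", stat), ("non-stationary", nonstat)):
    early = window_mean(path, 0, 2_000)
    late = window_mean(path, 8_000, 10_000)
    print(f"{name}: early-window mean {early:+.2f}, late-window mean {late:+.2f}")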
4. Ergodic processes
For an ergodic process, time averages computed over a single (long) realization converge to the corresponding ensemble averages. A commonly used sufficient condition for ergodicity with respect to the mean is that the ACF decays to zero:
lim_{τ→∞} K_s(τ) = 0. (17)
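For intuition only (a toy example, not from the notes; the AR(1) recursion and its coefficient are assumptions), the time average of one long realization of a process with vanishing ACF settles near the ensemble mean:

import random

random.seed(1)
# Toy ergodic process: a first-order autoregressive sequence
#   S(t) = 0.5*S(t-1) + noise, whose ACF decays geometrically to zero.
n = 100_000
s, path = 0.0, []
for _ in range(n):
    s = 0.5 * s + random.gauss(0.0, 1.0)
    path.append(s)

time_average = sum(path) / n
print("time average over one long realization:", round(time_average, 3))
print("ensemble (theoretical) mean:", 0.0)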
5. Markov processes
Historical facts:
• A.A. Markov was a Russian mathematician (1856–1922):
– first results in 1906 (discrete-space Markov processes);
– extension of the law of large numbers to dependent events;
– later contributors: Kolmogorov, Khinchine, Fokker, Planck, Uhlenbeck, Wiener, Feller, ...
• note: classic queuing theory is mostly based on Markovian processes!
Assume the following:
• we are given a discrete-time, discrete-space process {S(t), t ∈ T };
• the stochastic process {S(t), t ∈ T } is called a Markov process if for all n it satisfies:
Pr{S(t_{n+1}) = s_{n+1} | S(t_n) = s_n, . . . , S(t_1) = s_1} = Pr{S(t_{n+1}) = s_{n+1} | S(t_n) = s_n},
i.e. the future depends on the past only through the current state.
We may use a matrix for the one-step transition probabilities from i to j between times n and (n + 1):

         | p_11(n)  p_12(n)  p_13(n)  · · ·  p_1M(n) |
         | p_21(n)  p_22(n)  p_23(n)  · · ·  p_2M(n) |
P(n) =   | p_31(n)  p_32(n)  p_33(n)  · · ·  p_3M(n) | ,   with Σ_{i=1}^{M} p_ji(n) = 1, ∀j, ∀n. (22)
         |   ...      ...      ...    · · ·    ...   |
         | p_M1(n)  p_M2(n)  p_M3(n)  · · ·  p_MM(n) |
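A minimal simulation sketch for a homogeneous chain (the 3-state matrix, the function names, and the step count are illustrative assumptions, not from the notes); each row of P holds the transition probabilities out of one state and sums to one:

import random

# Illustrative 3-state transition matrix; each row sums to one.
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.0, 0.5, 0.5],
]

def next_state(state, P, rng):
    """Draw the next state using the row of P for the current state."""
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P) - 1  # guard against rounding

rng = random.Random(42)
state, visits = 0, [0, 0, 0]
for _ in range(100_000):
    state = next_state(state, P, rng)
    visits[state] += 1

print("empirical state frequencies:", [v / 100_000 for v in visits])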
Definitions:
• a Markov chain whose transition probabilities depend on time is non-homogeneous;
• a Markov chain whose transition probabilities do not depend on time is homogeneous.
[Figures: one-step transition probabilities p_ij between states in successive time slots; and a single state F with self-transition probability p and exit probability 1 − p.]
• the Markov chain jumps to another state in the next slot, given that it is currently in state F: probability 1 − p;
• the Markov chain stays in state F for m time units, given that it is currently in state F: probability p^m;
• the Markov chain stays in state F for m time units and then exits from F: probability p^m(1 − p).
Important note:
• the latter gives a geometric distribution, which is memoryless in nature (see the simulation sketch below).
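A quick simulation check of this geometric sojourn time (the value p = 0.8 and the helper function are illustrative assumptions, not from the notes):

import random

p = 0.8            # illustrative self-transition probability of state F
rng = random.Random(7)

def sojourn(rng, p):
    """Number of extra slots spent in F before leaving, given we are in F now."""
    m = 0
    while rng.random() < p:   # stay with probability p each slot
        m += 1
    return m

samples = [sojourn(rng, p) for _ in range(100_000)]
for m in range(4):
    emp = samples.count(m) / len(samples)
    print(f"P(stay {m} slots then exit): empirical {emp:.3f}, "
          f"theoretical {p**m * (1 - p):.3f}")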
[Figure: sample path of a continuous-time chain; the state is observed at times t0 ≤ s ≤ t.]
Parameter λ:
• λ_i is called the transition rate out of state i;
• λ_i is non-negative;
• if λ_i = 0, the process always stays in state i;
• if 0 < λ_i < ∞, the probability that the process changes its state within a small interval ∆t is approximately:
λ_i ∆t. (32)
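A sketch of one step of a continuous-time chain under such rates (the rate values, the embedded-chain jump probabilities, and the helper name are assumptions for illustration): the sojourn in state i is exponential with rate λ_i, after which the chain jumps.

import random

# Illustrative transition rates out of each state (lambda_i) and
# jump probabilities of the embedded chain; not taken from the notes.
rates = [1.0, 2.5, 0.5]
jump = [
    [0.0, 0.6, 0.4],
    [0.5, 0.0, 0.5],
    [0.9, 0.1, 0.0],
]

def ctmc_step(state, rng):
    """Hold an Exp(lambda_i) time in `state`, then jump via the embedded chain."""
    hold = rng.expovariate(rates[state])
    u, acc = rng.random(), 0.0
    for j, p in enumerate(jump[state]):
        acc += p
        if u < acc:
            return j, hold
    return len(jump[state]) - 1, hold  # guard against rounding

rng = random.Random(3)
state, clock = 0, 0.0
for _ in range(5):
    state, hold = ctmc_step(state, rng)
    clock += hold
    print(f"t = {clock:.2f}: jumped to state {state}")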
[Figure: state transition rate diagram over states 1, 2, 3, . . . , M with transition rates λ_ij between them.]
• this matrix of transition rates is also called the infinitesimal generator of the continuous-time Markov chain.
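A small numerical sketch (the 3-state generator values are illustrative assumptions, not from the notes): the off-diagonal entries of the generator Q are the rates λ_ij, the diagonal makes each row sum to zero, and over a short interval ∆t the transition probabilities behave as P(∆t) ≈ I + Q∆t.

# Illustrative generator for a 3-state chain: off-diagonal entries are
# transition rates lambda_ij, diagonal entries make each row sum to zero.
Q = [
    [-3.0, 2.0, 1.0],
    [0.5, -1.5, 1.0],
    [1.0, 1.0, -2.0],
]

dt = 0.01
# Short-interval transition probabilities: P(dt) ~ I + Q*dt.
P_dt = [[(1.0 if i == j else 0.0) + Q[i][j] * dt for j in range(3)] for i in range(3)]

for row in P_dt:
    print([round(x, 3) for x in row], "row sum:", round(sum(row), 6))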
8. Classification of states
The states of a Markov chain are classified as:
• recurrent:
– assume that the process leaves a certain state;
– the state is recurrent if the process returns to it after some time with probability 1.
• transient:
– assume that the process leaves a certain state;
– the state is transient if the process may never return to it (returns with probability less than 1).
• absorbing:
– assume that the process enters a certain state;
– the state is absorbing if the process cannot visit any other state afterwards (a simple check is sketched after this list).
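As a rough illustration (the example matrix is an assumption, not from the notes), an absorbing state shows up as a row of the transition matrix with p_ii = 1, and a simulated path eventually gets trapped there:

import random

# Illustrative chain: states 0 and 1 are transient, state 2 is absorbing.
P = [
    [0.5, 0.4, 0.1],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
]

absorbing = [i for i in range(len(P)) if P[i][i] == 1.0]
print("absorbing states:", absorbing)

rng = random.Random(5)
state = 0
for step in range(100):
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            state = j
            break
print("state after 100 steps (absorbed):", state)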
f_j = Σ_{n=1}^{∞} f_j(n),   E[f_j] = Σ_{n=1}^{∞} n f_j(n), (37)
where:
• f_j(n): probability that the process first returns to state j exactly n steps after leaving it;
• f_j: probability that the process returns to state j some time after leaving it;
• E[f_j]: mean number of steps needed to return to state j after leaving it:
– it is also called the mean recurrence time of state j.
Definition: A Markov chain is called ergodic if all its states are ergodic.
Definition: A Markov chain is called irreducible if every state can be reached from every other state. In an irreducible chain all states are of the same type, i.e. either:
• all states are transient, or;
• all states are recurrent null, or;
• all states are ergodic, i.e. recurrent positive and aperiodic;
• if one state is periodic, then all states are periodic with the same period.
Note: an aperiodic irreducible Markov chain with a finite number of states is always ergodic.
Important notes:
• a Markov chain is simply a class of stochastic processes;
• for a Markov chain, ergodicity means that stationary state probabilities exist!
• the steady-state probabilities are independent of the initial state probabilities;
• one can find them from the mean recurrence times as follows:
p_j = 1 / E[f_j], ∀j. (40)
In matrix form, the stationary probability vector p = (p_1, . . . , p_M) satisfies:
pP = p,   Σ_j p_j = 1.
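A minimal numerical sketch (the matrix and the use of power iteration are illustrative choices, not the notes' prescribed method): repeatedly applying p ← pP from any initial distribution converges to the stationary vector for an ergodic chain, which also illustrates that the result does not depend on the initial state.

# Illustrative transition matrix (rows sum to one).
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.0, 0.5, 0.5],
]

# Power iteration: repeatedly apply p <- pP starting from any distribution;
# for an ergodic chain this converges to the stationary distribution.
p = [1.0, 0.0, 0.0]
for _ in range(1000):
    p = [sum(p[i] * P[i][j] for i in range(3)) for j in range(3)]

print("stationary distribution:", [round(x, 4) for x in p])

Starting from a different initial vector gives the same limit, matching the note above on independence from the initial state probabilities.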