
Random processes

Chapter 4 Random process

4.1 Random process


Random process

Random process, stochastic process


The infinite set $\{X_t,\ t \in T\}$ of random variables is called a random process, where the index set $T$ is an infinite set.

In other words, a random vector with an infinite number of elements is called a random process.

Discrete time process, continuous time process


A random process is said to be discrete time if the index set is a countably infinite set.
When the index set is an uncountable set, the random process is called a continuous
time random process.


Discrete (alphabet) process, continuous (alphabet) process


A random process is called a discrete alphabet, discrete amplitude, or discrete state
process if all finite length random vectors drawn from the random process are discrete
random vectors. A process is called a continuous alphabet, continuous amplitude, or
continuous state process if all finite length random vectors drawn from the random
process are continuous random vectors.

A random process $\{X(t)\}$ maps each element $\omega$ of the sample space to a time function $X(\omega, t)$.

A random process $\{X(t)\}$ is the collection of time functions $X(\omega, t)$, called sample functions. This collection is called an ensemble. At a fixed $t$, the value $X(t)$ of the time function is a random variable.


A random process and sample functions


Since a random process is a collection of random variables, with $X(t)$ denoting the random variable at time $t$, the statistical characteristics of the random process can be considered via the cdf and pdf of $X(t)$.
For example, the first-order cdf, first-order pdf, second-order cdf, and $n$th-order cdf of the random process $\{X(t)\}$ are
$$F_{X(t)}(x) = \Pr\{X(t) \le x\},$$
$$f_{X(t)}(x) = \frac{dF_{X(t)}(x)}{dx},$$
$$F_{X(t_1),X(t_2)}(x_1, x_2) = \Pr\{X(t_1) \le x_1,\ X(t_2) \le x_2\},$$
and
$$F_{X(t_1),\ldots,X(t_n)}(x_1, \ldots, x_n) = \Pr\{X(t_1) \le x_1, \ldots, X(t_n) \le x_n\},$$
respectively.


Mean function
The mean function $m_X(t)$ of a random process $\{X(t)\}$ is defined by
$$m_X(t) = E\{X(t)\} = \int_{-\infty}^{\infty} x\, f_{X(t)}(x)\, dx.$$

Autocorrelation function
The autocorrelation function $R_X(t_1, t_2)$ of a random process $\{X(t)\}$ is defined by
$$R_X(t_1, t_2) = E\{X(t_1)\, X^*(t_2)\}.$$


Known signal
An extreme example of a random process is a known or deterministic signal. When $X(t) = s(t)$ is a known signal, we have
$$m(t) = E\{s(t)\} = s(t),$$
$$R(t_1, t_2) = E\{s(t_1)\, s^*(t_2)\} = s(t_1)\, s^*(t_2).$$

Consider the random process $\{X(t)\}$ with mean $E\{X(t)\} = 3$ and autocorrelation function $R(t_1, t_2) = 9 + 4\exp(-0.2|t_1 - t_2|)$. If $Z = X(5)$ and $W = X(8)$, we can easily obtain $E\{Z\} = E\{X(5)\} = 3$, $E\{W\} = 3$, $E\{Z^2\} = R(5,5) = 13$, $E\{W^2\} = R(8,8) = 13$, $\mathrm{Var}\{Z\} = 13 - 3^2 = 4$, $\mathrm{Var}\{W\} = 13 - 3^2 = 4$, and $E\{ZW\} = R(5,8) = 9 + 4e^{-0.6} \approx 11.195$. In other words, the random variables $Z$ and $W$ have variance $\sigma^2 = 4$ and covariance $\mathrm{Cov}(5,8) = 4e^{-0.6} \approx 2.195$.
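As a quick check of the numbers above, here is a minimal Python sketch; it does not simulate the process, it only evaluates the given mean and autocorrelation functions:

```python
import numpy as np

m = 3.0                                                  # constant mean function
R = lambda t1, t2: 9 + 4 * np.exp(-0.2 * abs(t1 - t2))   # given autocorrelation

EZ2 = R(5, 5)            # E{Z^2} = 13
var_Z = EZ2 - m**2       # Var{Z} = 4
EZW = R(5, 8)            # E{ZW} = 9 + 4e^{-0.6} ~ 11.195
cov_ZW = EZW - m * m     # Cov(Z, W) = 4e^{-0.6} ~ 2.195
print(EZ2, var_Z, EZW, cov_ZW)
```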


The autocorrelation function of $X(t) = A e^{j\omega t}$, defined with a random variable $A$, can be obtained as
$$R_X(t_1, t_2) = E\{A e^{j\omega t_1}\, A^* e^{-j\omega t_2}\} = e^{j\omega(t_1 - t_2)}\, E\{|A|^2\}.$$

Autocovariance function
The autocovariance function $K_X(t_1, t_2)$ of a random process $\{X(t)\}$ is defined by
$$K_X(t_1, t_2) = E\{[X(t_1) - m_X(t_1)][X(t_2) - m_X(t_2)]^*\}.$$

In general, the autocovariance and autocorrelation functions are functions of $t_1$ and $t_2$.
The autocovariance function can be expressed in terms of the autocorrelation and mean functions as
$$K_X(t_1, t_2) = R_X(t_1, t_2) - m_X(t_1)\, m_X^*(t_2).$$
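Since the mean and autocorrelation functions are ensemble averages, they can be estimated by averaging over many sample functions. A minimal numpy sketch; the particular process below is an arbitrary illustrative one, not one from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_time = 20000, 50            # ensemble size, number of time points

# illustrative process: X(w, t) = A(w) cos(0.3 t) + noise
A = rng.normal(2.0, 1.0, size=(n_samples, 1))
t = np.arange(n_time)
X = A * np.cos(0.3 * t) + rng.normal(0.0, 0.5, size=(n_samples, n_time))

m_hat = X.mean(axis=0)                   # estimate of m_X(t), one value per t
R_hat = X.T @ X / n_samples              # estimate of R_X(t1, t2), n_time x n_time
K_hat = R_hat - np.outer(m_hat, m_hat)   # estimated autocovariance K_X(t1, t2)
```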


Uncorrelated random process


A random process $\{X_t\}$ is said to be uncorrelated if $R_X(t, s) = E\{X_t\}E\{X_s^*\}$ or $K_X(t, s) = 0$ for $t \ne s$.

If a random process $\{X(t)\}$ is uncorrelated, the autocorrelation and autocovariance functions are
$$R_X(t, s) = E\{X_t X_s^*\} = \begin{cases} E\{|X_t|^2\}, & t = s, \\ E\{X_t\}E\{X_s^*\}, & t \ne s, \end{cases}$$
and
$$K_X(t, s) = \begin{cases} \sigma_{X_t}^2, & t = s, \\ 0, & t \ne s, \end{cases}$$
respectively.


Correlation coefficient function


The correlation coefficient function $\rho_X(t_1, t_2)$ of a random process $\{X(t)\}$ is defined by
$$\rho_X(t_1, t_2) = \frac{K_X(t_1, t_2)}{\sqrt{K_X(t_1, t_1)}\,\sqrt{K_X(t_2, t_2)}} = \frac{K_X(t_1, t_2)}{\sigma(t_1)\,\sigma(t_2)},$$
where $\sigma(t_i)$ is the standard deviation of $X(t_i)$.

We can show that
$$|\rho_X(t_1, t_2)| \le 1, \qquad \rho_X(t, t) = 1.$$


Crosscorrelation function
The crosscorrelation function $R_{XY}(t_1, t_2)$ of random processes $\{X(t)\}$ and $\{Y(t)\}$ is defined by
$$R_{XY}(t_1, t_2) = E\{X(t_1)\, Y^*(t_2)\}.$$

The autocorrelation and crosscorrelation functions satisfy
$$R_X(t, t) = E\{X(t)\, X^*(t)\} = \sigma_X^2(t) + |E\{X(t)\}|^2 \ge 0,$$
$$R_X(t_1, t_2) = R_X^*(t_2, t_1),$$
$$R_{XY}(t_1, t_2) = R_{YX}^*(t_2, t_1).$$

The autocorrelation function $R_X(t_1, t_2)$ is positive semi-definite. In other words,
$$\sum_i \sum_j a_i a_j^*\, R(t_i, t_j) \ge 0$$
for any constants $\{a_k\}$.


Crosscovariance function
The crosscovariance function $K_{XY}(t_1, t_2)$ of random processes $\{X(t)\}$ and $\{Y(t)\}$ is defined by
$$K_{XY}(t_1, t_2) = E\{[X(t_1) - m_X(t_1)][Y(t_2) - m_Y(t_2)]^*\} = R_{XY}(t_1, t_2) - m_X(t_1)\, m_Y^*(t_2).$$

Two random processes which are uncorrelated

The random processes $\{X(t)\}$ and $\{Y(t)\}$ are said to be uncorrelated if $R_{XY}(t_1, t_2) = E\{X(t_1)\}E\{Y^*(t_2)\}$ or $K_{XY}(t_1, t_2) = 0$ for all $t_1$ and $t_2$.

Orthogonality
The two random processes $\{X(t)\}$ and $\{Y(t)\}$ are said to be orthogonal if $R_{XY}(t_1, t_2) = 0$ for all $t_1$ and $t_2$.


4.2 Properties of random processes


Stationary process and independent process


A random process is said to be stationary if its probabilistic properties do not change under time shifts.

Stationary process
A random process $\{X(t)\}$ is stationary, strict-sense stationary (s.s.s.), or strongly stationary if the joint cdf of $\{X(t_1), X(t_2), \ldots, X(t_n)\}$ is the same as the joint cdf of $\{X(t_1 + \tau), X(t_2 + \tau), \ldots, X(t_n + \tau)\}$ for all $n$, $\tau$, $t_1, t_2, \ldots, t_n$.


Wide-sense stationary (w.s.s.) process

A random process $\{X(t)\}$ is w.s.s., weakly stationary, or second-order stationary if (1) the mean function is constant and (2) the autocorrelation function $R_X(t, s)$ depends only on $t - s$, not on $t$ and $s$ individually.

The mean function $m_X$ and the autocorrelation function $R_X$ of a w.s.s. process $\{X(t)\}$ are thus
$$m_X(t) = m$$
and
$$R_X(t_1, t_2) = R(t_1 - t_2),$$
respectively.
In other words, the autocorrelation function $R_X(t_1, t_2)$ of a w.s.s. process $\{X(t)\}$ is a function of $\tau = t_1 - t_2$. For all $t$ and $\tau$, we have
$$R_X(\tau) = E\{X(t + \tau)\, X^*(t)\}.$$
When $\tau = 0$, $R_X(0) = E\{|X(t)|^2\}$.


Consider two sequences of uncorrelated random variables $A_0, A_1, \ldots, A_m$ and $B_0, B_1, \ldots, B_m$ having mean zero and variances $\sigma_k^2$ (that is, $\mathrm{Var}\{A_k\} = \mathrm{Var}\{B_k\} = \sigma_k^2$). Assume that the two sequences are uncorrelated with each other. Let $\omega_0, \omega_1, \ldots, \omega_m$ be distinct frequencies in $[0, 2\pi)$, and let
$$X_n = \sum_{k=0}^{m}\{A_k \cos n\omega_k + B_k \sin n\omega_k\}, \qquad n = 0, 1, 2, \ldots.$$
Then we can obtain
$$E\{X_n X_{n+l}\} = \sum_{k=0}^{m} \sigma_k^2 \cos l\omega_k, \qquad E\{X_n\} = 0.$$
Thus, $\{X_n\}$ is w.s.s..
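A small Monte Carlo sketch of this example; the frequencies and variances below are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
omega = np.array([0.4, 1.1, 2.3])          # distinct frequencies in [0, 2*pi)
sigma = np.array([1.0, 0.5, 2.0])          # standard deviations of A_k and B_k
trials, N, l = 100000, 40, 3

A = rng.normal(0.0, sigma, size=(trials, 3))
B = rng.normal(0.0, sigma, size=(trials, 3))
n = np.arange(N)
# X[trial, n] = sum_k A_k cos(n w_k) + B_k sin(n w_k)
X = A @ np.cos(np.outer(omega, n)) + B @ np.sin(np.outer(omega, n))

est = np.mean(X[:, 5] * X[:, 5 + l])               # E{X_n X_{n+l}} at n = 5
theory = np.sum(sigma**2 * np.cos(l * omega))      # sum_k sigma_k^2 cos(l w_k)
print(est, theory)
```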


Properties of the autocorrelation function $R_X(\tau)$ of a real stationary process $\{X(t)\}$

$R_X(-\tau) = R_X(\tau)$: $R_X(\tau)$ is an even function.
$|R_X(\tau)| \le R_X(0)$: $R_X(\tau)$ is maximum at the origin.
If $R_X(\tau)$ is continuous at $\tau = 0$, then it is also continuous at every value of $\tau$.
If there is a constant $T > 0$ such that $R_X(0) = R_X(T)$, then $R_X(\tau)$ is periodic.


Independent random process

A random process is said to be independent if the joint cdf satisfies
$$F_{X_{t_1}, X_{t_2}, \ldots, X_{t_n}}(x) = \prod_{i=1}^{n} F_{X_{t_i}}(x_i)$$
for all $n$ and $t_1, t_2, \ldots, t_n$, $x_1, x_2, \ldots, x_n$.

Independent and identically distributed (i.i.d.) process

A random process is said to be i.i.d. if the joint cdf satisfies
$$F_{X_{t_1}, X_{t_2}, \ldots, X_{t_n}}(x) = \prod_{i=1}^{n} F_X(x_i)$$
for all $n$ and $t_1, t_2, \ldots, t_n$, $x_1, x_2, \ldots, x_n$.

The i.i.d. process is sometimes called a memoryless process or a white noise. The i.i.d. process is the simplest process, and yet it is the most random process in the sense that past outputs carry no information about the future.

Bernoulli process
An i.i.d. random process with two possible values is called a Bernoulli process. For example, consider the random process $\{X_n\}$ defined by
$$X_n = \begin{cases} 1, & \text{if the $n$th result is head}, \\ 0, & \text{if the $n$th result is tail}, \end{cases}$$
when we toss a coin infinitely. The random process $\{X_n\}$ is a discrete-time discrete-amplitude random process. The success (head) and failure (tail) probabilities are
$$P\{X_n = 1\} = p$$
and
$$P\{X_n = 0\} = 1 - p,$$
respectively.


We can easily obtain
$$m_X(n) = E\{X_n\} = p$$
and
$$K_X(n_1, n_2) = E\{X_{n_1} X_{n_2}\} - m_X(n_1)\, m_X(n_2) = \begin{cases} p(1-p), & n_1 = n_2, \\ 0, & n_1 \ne n_2. \end{cases}$$

The mean function $m_X(n)$ of a Bernoulli process is not a function of time but a constant. The autocovariance $K_X(n_1, n_2)$ depends not on $n_1$ and $n_2$ individually, but only on the difference $n_1 - n_2$.
Clearly, the Bernoulli process is w.s.s..
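A quick empirical check of these moments; the value of p below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
p, trials, N = 0.3, 100000, 20
X = (rng.random((trials, N)) < p).astype(float)   # Bernoulli process sample functions

print(X.mean())                                   # close to p
K = np.cov(X[:, 4], X[:, 9])[0, 1]                # K_X(4, 9), close to 0
var = X[:, 4].var()                               # K_X(4, 4), close to p(1-p)
print(K, var, p * (1 - p))
```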


Two random processes independent of each other

The random processes $\{X(t)\}$ and $\{Y(t)\}$ are said to be independent of each other if the random vector $(X_{t_1}, X_{t_2}, \ldots, X_{t_k})$ is independent of the random vector $(Y_{s_1}, Y_{s_2}, \ldots, Y_{s_l})$ for all $k$, $l$, and $t_1, t_2, \ldots, t_k$, $s_1, s_2, \ldots, s_l$.


Normal process, Gaussian process

A random process $\{X_t\}$ is said to be normal if $(X_{t_1}, X_{t_2}, \ldots, X_{t_k})$ is a $k$-dimensional normal random vector for all $k$ and $t_1, t_2, \ldots, t_k$.

A stationary process is always w.s.s., but the converse is not always true. On the other hand, a w.s.s. normal process is s.s.s.. This result can be obtained from the pdf
$$f_{\boldsymbol{X}}(\boldsymbol{x}) = \frac{1}{(2\pi)^{n/2}|\boldsymbol{K}_X|^{1/2}} \exp\left\{-\frac{1}{2}(\boldsymbol{x} - \boldsymbol{m}_X)^T \boldsymbol{K}_X^{-1}(\boldsymbol{x} - \boldsymbol{m}_X)\right\}$$
of a jointly normal random vector.


Jointly w.s.s. processes

Two random processes are said to be jointly w.s.s. if (1) the mean functions are constants and (2) the autocorrelation functions and the crosscorrelation function are all functions only of time differences.

If two random processes $\{X(t)\}$ and $\{Y(t)\}$ are jointly w.s.s., then $\{X(t)\}$ and $\{Y(t)\}$ are both w.s.s.. The crosscorrelation function of $\{X(t)\}$ and $\{Y(t)\}$ is
$$R_{XY}(t + \tau, t) = R_{XY}(\tau) = E\{X(t + \tau)\, Y^*(t)\} = \left[E\{Y(t)\, X^*(t + \tau)\}\right]^* = R_{YX}^*(-\tau).$$


The crosscorrelation function $R_{XY}$ of two jointly w.s.s. random processes has the following properties:

1. $R_{YX}(\tau) = R_{XY}^*(-\tau)$.
2. $|R_{XY}(\tau)| \le \sqrt{R_{XX}(0) R_{YY}(0)} \le \frac{1}{2}\{R_{XX}(0) + R_{YY}(0)\}$.
3. $R_{XY}(\tau)$ is not always maximum at the origin.

Linear transformation and jointly w.s.s. processes

Two processes $\{X(t)\}$ and $\{Y(t)\}$ are jointly w.s.s. if the linear combination $Z(t) = aX(t) + bY(t)$ is w.s.s. for all $a$ and $b$; the converse is also true.


Moving average (MA) process

Let $a_1, a_2, \ldots, a_l$ be a sequence of real numbers and $W_0, W_1, W_2, \ldots$ be a sequence of uncorrelated random variables with mean $E\{W_n\} = m$ and variance $\mathrm{Var}\{W_n\} = \sigma^2$. Then the following process $\{X_n\}$ is called a moving average process:
$$X_n = a_1 W_n + a_2 W_{n-1} + \cdots + a_l W_{n-l+1} = \sum_{i=1}^{l} a_i W_{n-i+1}.$$

The mean and variance of $X_n$ are
$$E\{X_n\} = (a_1 + a_2 + \cdots + a_l)\, m,$$
$$\mathrm{Var}\{X_n\} = (a_1^2 + a_2^2 + \cdots + a_l^2)\, \sigma^2.$$


Since $E\{\tilde{X}_n^2\} = \sigma^2$ when $\tilde{X}_n = W_n - m$, $\{X_n\}$ is w.s.s. from
$$\mathrm{Cov}(X_n, X_{n+k}) = E\left\{\left(X_n - m\sum_{i=1}^{l} a_i\right)\left(X_{n+k} - m\sum_{i=1}^{l} a_i\right)\right\} = E\left\{\sum_{i=1}^{l}\sum_{j=1}^{l} a_i \tilde{X}_{n-i+1}\, a_j \tilde{X}_{n+k-j+1}\right\}$$
$$= \begin{cases} E\left\{a_l a_{l-k}\tilde{X}_{n+k-l+1}^2 + a_{l-1}a_{l-k-1}\tilde{X}_{n+k-l+2}^2 + \cdots + a_{k+1}a_1\tilde{X}_n^2\right\}, & k \le l-1, \\ 0, & k \ge l, \end{cases}$$
$$= \begin{cases} (a_l a_{l-k} + \cdots + a_{k+1}a_1)\, \sigma^2, & k \le l-1, \\ 0, & k \ge l. \end{cases}$$


Autoregressive (AR) process

Let the variance of an uncorrelated zero-mean random sequence $Z_0, Z_1, \ldots$ be
$$\mathrm{Var}\{Z_n\} = \begin{cases} \dfrac{\sigma^2}{1 - r^2}, & n = 0, \\ \sigma^2, & n \ge 1, \end{cases}$$
where $r^2 < 1$. Then the random process $\{X_n\}$ defined by
$$X_0 = Z_0, \qquad X_n = r X_{n-1} + Z_n, \quad n \ge 1,$$
is called the first order autoregressive process. We can obtain
$$X_n = r(r X_{n-2} + Z_{n-1}) + Z_n = r^2 X_{n-2} + r Z_{n-1} + Z_n = \cdots = \sum_{i=0}^{n} r^{n-i} Z_i.$$


Thus the autocovariance function of $\{X_n\}$ is
$$\mathrm{Cov}(X_n, X_{n+m}) = \mathrm{Cov}\left(\sum_{i=0}^{n} r^{n-i} Z_i,\ \sum_{i=0}^{n+m} r^{n+m-i} Z_i\right) = \sum_{i=0}^{n} r^{n-i}\, r^{n+m-i}\, \mathrm{Cov}(Z_i, Z_i)$$
$$= \sigma^2\left(\frac{r^{2n+m}}{1 - r^2} + \sum_{i=1}^{n} r^{2(n-i)+m}\right) = \frac{\sigma^2 r^{m}}{1 - r^2}.$$
Now, from the result above and the fact that the mean of $\{X_n\}$ is $E\{X_n\} = 0$, it follows that $\{X_n, n \ge 0\}$ is w.s.s..
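A quick simulation sketch of this first order AR construction, checking that the empirical covariance matches $\sigma^2 r^m/(1 - r^2)$; the values of $r$ and $\sigma$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
r, sigma, trials, N = 0.7, 1.0, 100000, 30

Z = rng.normal(0, sigma, size=(trials, N))
Z[:, 0] = rng.normal(0, sigma / np.sqrt(1 - r**2), size=trials)  # Var{Z_0} = sigma^2/(1-r^2)

X = np.empty_like(Z)
X[:, 0] = Z[:, 0]
for n in range(1, N):                      # X_n = r X_{n-1} + Z_n
    X[:, n] = r * X[:, n - 1] + Z[:, n]

m_lag = 4
emp = np.mean(X[:, 10] * X[:, 10 + m_lag])         # covariance, since the mean is 0
theory = sigma**2 * r**m_lag / (1 - r**2)
print(emp, theory)
```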


Square-law detector
Let $Y(t) = X^2(t)$ where $\{X(t)\}$ is a Gaussian random process with mean $0$ and autocorrelation $R_X(\tau)$. Then the expectation of $Y(t)$ is $E\{Y(t)\} = E\{X^2(t)\} = R_X(0)$. Since $X(t+\tau)$ and $X(t)$ are jointly Gaussian with mean $0$, the autocorrelation of $Y(t)$ can be found as
$$R_Y(t, t+\tau) = E\{X^2(t)\, X^2(t+\tau)\} = E\{X^2(t+\tau)\}E\{X^2(t)\} + 2E^2\{X(t+\tau)X(t)\} = R_X^2(0) + 2R_X^2(\tau).$$

Thus $E\{Y^2(t)\} = R_Y(0) = 3R_X^2(0)$ and $\sigma_Y^2 = 3R_X^2(0) - R_X^2(0) = 2R_X^2(0)$. Clearly, $\{Y(t)\}$ is w.s.s..


Limiter
Let $\{Y(t)\} = \{g(X(t))\}$ be a random process which is defined by a random process $\{X(t)\}$ and a limiter
$$g(x) = \begin{cases} 1, & x > 0, \\ -1, & x < 0. \end{cases}$$
Then we can easily obtain $P\{Y(t) = 1\} = P\{X(t) > 0\} = 1 - F_X(0)$ and $P\{Y(t) = -1\} = P\{X(t) < 0\} = F_X(0)$. Thus the mean and autocorrelation of $\{Y(t)\}$ are
$$E\{Y(t)\} = 1 \cdot P\{Y(t) = 1\} + (-1) \cdot P\{Y(t) = -1\} = 1 - 2F_X(0),$$
$$R_Y(\tau) = E\{Y(t)Y(t+\tau)\} = P\{Y(t)Y(t+\tau) = 1\} - P\{Y(t)Y(t+\tau) = -1\} = P\{X(t)X(t+\tau) > 0\} - P\{X(t)X(t+\tau) < 0\}.$$
Now, if $\{X(t)\}$ is a stationary Gaussian random process with mean $0$, $X(t+\tau)$ and $X(t)$ are jointly Gaussian with mean $0$, variance $R_X(0)$, and correlation coefficient $R_X(\tau)/R_X(0)$. Clearly, $F_X(0) = 1/2$.


We have (refer to (3.100), p. 156, Random Processes, Park, Song, Nam, 2004)
$$P\{X(t)X(t+\tau) < 0\} = P\left\{\frac{X(t)}{X(t+\tau)} < 0\right\} = F_Z(0) = \frac{1}{2} + \frac{1}{\pi}\tan^{-1}\frac{-r}{\sqrt{1 - r^2}} = \frac{1}{2} - \frac{1}{\pi}\sin^{-1} r = \frac{1}{2} - \frac{1}{\pi}\sin^{-1}\frac{R_X(\tau)}{R_X(0)},$$
$$P\{X(t)X(t+\tau) > 0\} = 1 - P\{X(t)X(t+\tau) < 0\} = \frac{1}{2} + \frac{1}{\pi}\sin^{-1}\frac{R_X(\tau)}{R_X(0)}.$$
Thus the autocorrelation of the limiter output is
$$R_Y(\tau) = \frac{2}{\pi}\sin^{-1}\frac{R_X(\tau)}{R_X(0)},$$
from which we have $E\{Y^2(t)\} = R_Y(0) = 1$ and $\sigma_Y^2 = 1 - \{1 - 2F_X(0)\}^2 = 1$.
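This arcsine relation is easy to check by simulation; a minimal sketch that draws jointly Gaussian pairs with an arbitrary correlation coefficient and passes them through the limiter:

```python
import numpy as np

rng = np.random.default_rng(4)
rho, trials = 0.6, 500000           # correlation coefficient R_X(tau)/R_X(0)

# jointly Gaussian pair with zero mean, unit variance, correlation rho
x1 = rng.standard_normal(trials)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(trials)

y1, y2 = np.sign(x1), np.sign(x2)   # limiter outputs
emp = np.mean(y1 * y2)              # empirical R_Y(tau)
theory = 2 / np.pi * np.arcsin(rho)
print(emp, theory)
```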


Power of a random process

Power spectrum
The Fourier transform of the autocorrelation function of a w.s.s. random process is called the power spectrum, power spectral density, or spectral density. It is usually assumed that the mean is zero.

When the autocorrelation is $R_X(\tau)$, the power spectrum is
$$S_X(\omega) = \mathcal{F}\{R_X\} = \begin{cases} \displaystyle\sum_{k} R_X(k)\, e^{-j\omega k}, & \text{discrete time}, \\[2mm] \displaystyle\int_{-\infty}^{\infty} R_X(\tau)\, e^{-j\omega\tau}\, d\tau, & \text{continuous time}. \end{cases}$$

If the mean is not zero, the power spectral density is defined by the Fourier transform of the autocovariance instead of the autocorrelation.


White noise, white process

Suppose that a discrete time random process $\{X_n\}$ is uncorrelated so that $R_X(k) = \sigma^2 \delta_k$. Then we have
$$S_X(\omega) = \sum_{k} \sigma^2 \delta_k\, e^{-j\omega k} = \sigma^2.$$
Such a process is called a white noise or white process. If $\{X_n\}$ is Gaussian in addition, it is called a white Gaussian noise.


Telegraph signal

Consider Poisson points with parameter $\lambda$. Let $N(t)$ be the number of points in the interval $(0, t]$, and consider the continuous-time random process $X(t) = (-1)^{N(t)}$ with $X(0) = 1$. Here, $W_i$ is the time between adjacent Poisson points. Assuming $\tau > 0$, the autocorrelation of $X(t)$ is
$$R_X(\tau) = E\{X(t+\tau)X(t)\}$$
$$= 1 \cdot P\{\text{the number of points in the interval } (t, t+\tau] \text{ is even}\} + (-1) \cdot P\{\text{the number of points in the interval } (t, t+\tau] \text{ is odd}\}$$
$$= e^{-\lambda\tau}\left\{1 + \frac{(\lambda\tau)^2}{2!} + \cdots\right\} - e^{-\lambda\tau}\left\{\lambda\tau + \frac{(\lambda\tau)^3}{3!} + \cdots\right\}$$
$$= e^{-\lambda\tau}\cosh\lambda\tau - e^{-\lambda\tau}\sinh\lambda\tau = e^{-2\lambda\tau}.$$


We can obtain a similar result when $\tau < 0$. Combining the two results, we have $R_X(\tau) = e^{-2\lambda|\tau|}$. Consider a random variable $A$ which is independent of $X(t)$ and takes on $1$ or $-1$ with equal probability. Let $Y(t) = AX(t)$. We then have
$$E\{Y(t)\} = E\{A\}E\{X(t)\} = 0,$$
$$E\{Y(t_1)Y(t_2)\} = E\{A^2\}E\{X(t_1)X(t_2)\} = E\{A^2\}R_X(t_1 - t_2) = e^{-2\lambda|t_1 - t_2|}$$
since $E\{A\} = 0$ and $E\{A^2\} = 1$. Thus $\{Y(t)\}$ is w.s.s., and the power spectral density of $\{Y(t)\}$ is
$$S_Y(\omega) = \mathcal{F}\{e^{-2\lambda|\tau|}\} = \frac{4\lambda}{\omega^2 + 4\lambda^2}.$$


Band limited noise, colored noise

When $W > 0$, let us consider a random process with the power spectral density
$$S_X(\omega) = \begin{cases} 1, & \omega \in [-W, W], \\ 0, & \text{otherwise}. \end{cases}$$

Such a process is called a colored noise. The autocorrelation of a colored noise is thus
$$R_X(\tau) = \mathcal{F}^{-1}\{S_X(\omega)\} = \frac{\sin(W\tau)}{\pi\tau}.$$


The power spectral density is not less than $0$. That is,
$$S_X(\omega) \ge 0.$$

Cross power spectral density

The cross power spectral density $S_{XY}(\omega)$ of jointly w.s.s. processes $\{X(t)\}$ and $\{Y(t)\}$ is
$$S_{XY}(\omega) = \int_{-\infty}^{\infty} R_{XY}(\tau)\, e^{-j\omega\tau}\, d\tau.$$

Thus $S_{XY}(\omega) = S_{YX}^*(\omega)$, and the inverse Fourier transform of $S_{XY}(\omega)$ is
$$R_{XY}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}(\omega)\, e^{j\omega\tau}\, d\omega.$$


Time delay process

Consider a w.s.s. process $\{X(t)\}$ of which the power spectral density is $S_X(\omega)$. Letting $Y(t) = X(t - d)$, we have
$$R_Y(t, s) = E\{Y(t)Y^*(s)\} = E\{X(t-d)X^*(s-d)\} = R_X(t - s).$$

Thus the process $\{Y(t)\}$ is w.s.s. and the power spectral density $S_Y(\omega)$ equals $S_X(\omega)$. In addition, the crosscorrelation and cross power spectral density of $\{X(t)\}$ and $\{Y(t)\}$ are
$$R_{XY}(t, s) = E\{X(t)Y^*(s)\} = E\{X(t)X^*(s-d)\} = R_X(t - s + d) = R_X(\tau + d)$$
and
$$S_{XY}(\omega) = \mathcal{F}\{R_X(\tau + d)\} = \int_{-\infty}^{\infty} R_X(\tau + d)\, e^{-j\omega\tau}\, d\tau = \int_{-\infty}^{\infty} R_X(u)\, e^{-j\omega u} e^{j\omega d}\, du = S_X(\omega)\, e^{j\omega d}.$$
That is, $\{X(t)\}$ and $\{Y(t)\}$ are jointly w.s.s..



Random process and linear systems

If the input random process is two-sided and w.s.s., the output of a linear time invariant (LTI) filter is also w.s.s..

If the input random process is one-sided and w.s.s., however, the output of an LTI filter is not w.s.s. in general.

Let $h(t)$ be the impulse response of an LTI system and let $H(\omega) = \mathcal{F}\{h(t)\}$ be the transfer function. Then the crosscorrelation $R_{XY}(t_1, t_2)$ of the input random process $\{X(t)\}$ and output random process $\{Y(t)\}$, and the autocorrelation $R_Y(t_1, t_2)$ of the output, are
$$R_{XY}(t_1, t_2) = E\{X(t_1)Y^*(t_2)\} = E\left\{X(t_1)\int X^*(t_2 - \beta)\, h^*(\beta)\, d\beta\right\} = \int E\{X(t_1)X^*(t_2 - \beta)\}\, h^*(\beta)\, d\beta = \int R_X(t_1, t_2 - \beta)\, h^*(\beta)\, d\beta$$
and
$$R_Y(t_1, t_2) = E\{Y(t_1)Y^*(t_2)\} = E\left\{\int X(t_1 - \alpha)\, h(\alpha)\, d\alpha \int X^*(t_2 - \beta)\, h^*(\beta)\, d\beta\right\} = \iint R_X(t_1 - \alpha, t_2 - \beta)\, h(\alpha)\, h^*(\beta)\, d\alpha\, d\beta = \int R_{XY}(t_1 - \alpha, t_2)\, h(\alpha)\, d\alpha,$$
respectively.

If the input and output are jointly w.s.s., we can obtain
$$R_{XY}(\tau) = \int R_X(\tau + \beta)\, h^*(\beta)\, d\beta = R_X(\tau) * h^*(-\tau),$$
$$R_Y(\tau) = R_{XY}(\tau) * h(\tau),$$
since $R_X(t_1, t_2 - \beta) = R_X(t_1 - t_2 + \beta) = R_X(\tau + \beta)$ and $R_{XY}(t_1 - \alpha, t_2) = R_{XY}(t_1 - t_2 - \alpha) = R_{XY}(\tau - \alpha)$.
The cross power spectral density and power spectral density of the output are
$$S_{XY}(\omega) = S_X(\omega)\, H^*(\omega),$$
$$S_Y(\omega) = S_{XY}(\omega)\, H(\omega),$$
respectively.


We can express the autocorrelation and power spectral density of the output in terms of those of the input. Specifically, we have
$$R_Y(\tau) = R_X(\tau) * \tilde{h}(\tau)$$
and
$$S_Y(\omega) = S_X(\omega)\, |H(\omega)|^2,$$
where $\tilde{h}(t)$ is called the deterministic autocorrelation of $h(t)$ and is defined by
$$\tilde{h}(t) = \mathcal{F}^{-1}(|H(\omega)|^2) = h(t) * h^*(-t) = \int h(t + \beta)\, h^*(\beta)\, d\beta.$$

Let $S_Y(\omega)$ be the power spectral density of the output process $\{Y_t\}$. Then we can obtain
$$R_Y(\tau) = \mathcal{F}^{-1}\{S_Y(\omega)\} = \mathcal{F}^{-1}\{|H(\omega)|^2 S_X(\omega)\}.$$
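A discrete time sketch of $S_Y(\omega) = |H(\omega)|^2 S_X(\omega)$: white noise ($S_X(\omega) = \sigma^2$) passed through an FIR filter, with the output spectrum estimated by averaged periodograms. The filter taps are arbitrary, and circular filtering on the FFT grid is used so the comparison is exact in expectation:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 1.0
h = np.array([1.0, 0.5, -0.25])                 # arbitrary FIR impulse response
trials, N = 2000, 256
H = np.fft.fft(h, N)                            # transfer function on the FFT grid

S_Y_est = np.zeros(N)
for _ in range(trials):
    x = rng.normal(0, sigma, N)                 # white input, S_X(w) = sigma^2
    y = np.real(np.fft.ifft(np.fft.fft(x) * H)) # (circular) LTI filtering
    S_Y_est += np.abs(np.fft.fft(y))**2 / N     # periodogram of the output
S_Y_est /= trials

S_Y_theory = np.abs(H)**2 * sigma**2            # S_Y(w) = |H(w)|^2 S_X(w)
print(np.round(S_Y_est[:4], 2), np.round(S_Y_theory[:4], 2))
```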


Coherence function
A measure of the degree to which two w.s.s. processes are related by an LTI transformation is the coherence function $\Gamma(\omega)$, defined by
$$\Gamma_{XY}(\omega) = \frac{S_{XY}(\omega)}{[S_X(\omega)\, S_Y(\omega)]^{1/2}}.$$

Here, $|\Gamma(\omega)| = 1$ if and only if $\{X(t)\}$ and $\{Y(t)\}$ are linearly related, that is, $Y(t) = X(t) * h(t)$. Note the similarity between the coherence function $\Gamma(\omega)$ and the correlation coefficient $\rho$, a measure of the degree to which two random variables are linearly related. The coherence function exhibits the property
$$|\Gamma(\omega)| \le 1,$$
similar to the correlation coefficient between two random variables.


Ergodic theorem*

If there exists a random variable $\bar{X}$ such that
$$\frac{1}{n}\sum_{i=0}^{n-1} X_i \;\longrightarrow\; \bar{X} \quad (n \to \infty) \qquad \text{(discrete time random process)},$$
$$\frac{1}{T}\int_0^T X(t)\, dt \;\longrightarrow\; \bar{X} \quad (T \to \infty) \qquad \text{(continuous time random process)},$$
then $\{X_n, n \in I\}$ is said to satisfy an ergodic theorem.

When a process satisfies an ergodic theorem, the sample mean converges to something, which may be different from the expectation. In some cases, a random process with time-varying mean satisfies an ergodic theorem, as shown in the example below.


Suppose that nature at the beginning of time randomly selects one of two coins
with equal probability, one having bias p and the other having bias q. After the
coin is selected it is flipped once per second forever. The output random process
is a one-zero sequence depending on the face of a coin. Clearly, the time average
will converge: it will converge to p if the first coin was selected and to q if the
second coin was selected. That is, the time average will converge to a random
variable. In particular, it will not converge to the expected value p/2 + q/2.

If $\lim_{n\to\infty} E\{(Y_n - Y)^2\} = 0$, then $Y_n$, $n = 1, 2, \ldots$ is said to converge to $Y$ in mean square, which is denoted as
$$\operatorname*{l.i.m.}_{n\to\infty} Y_n = Y,$$
where l.i.m. denotes limit in the mean.


Mean ergodic theorem

Let $\{X_n\}$ be an uncorrelated discrete time random process with finite mean $E\{X_n\} = m$ and finite variance $\sigma_{X_n}^2 = \sigma_X^2$. Then the sample mean $S_n = \frac{1}{n}\sum_{i=0}^{n-1} X_i$ converges to the expected value $E\{X_n\} = m$ in mean square. That is,
$$\operatorname*{l.i.m.}_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1} X_i = m.$$

A sufficient condition for a w.s.s. discrete time random process $\{X_n, n \in I\}$ to satisfy a mean ergodic theorem is $K_X(0) < \infty$ and $\lim_{n\to\infty} K_X(n) = 0$.
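A small sketch of the mean ergodic theorem for an uncorrelated (here i.i.d.) sequence: the mean-square error of the sample mean shrinks like $\sigma^2/n$. The distribution and parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
m, sigma, trials = 2.0, 3.0, 20000

for n in (10, 100, 1000):
    X = rng.normal(m, sigma, size=(trials, n))   # uncorrelated process samples
    S_n = X.mean(axis=1)                         # sample mean over n time points
    mse = np.mean((S_n - m)**2)                  # estimate of E{(S_n - m)^2}
    print(n, mse, sigma**2 / n)                  # mse ~ sigma^2 / n -> 0
```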


Let $\{X_n\}$ be a discrete time random process with mean $E\{X_n\}$ and autocovariance function $K_X(i, j)$. The process need not be stationary in any sense. A necessary and sufficient condition for
$$\operatorname*{l.i.m.}_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1} X_i = m$$
is
$$\lim_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1} E\{X_i\} = m$$
and
$$\lim_{n\to\infty} \frac{1}{n^2}\sum_{i=0}^{n-1}\sum_{k=0}^{n-1} K_X(i, k) = 0.$$

That is, the sample mean converges in mean square if and only if the process is asymptotically uncorrelated and the averages of its means converge.


Mean square ergodic theorem

Let $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$, where $\{X_n, n \ge 1\}$ is a second order stationary process with mean $m$ and autocovariance $K(i) = \mathrm{Cov}(X_n, X_{n+i})$. Then $\lim_{n\to\infty} E\{(\bar{X}_n - m)^2\} = 0$ if and only if $\lim_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1} K(i) = 0$.

Let $K(i)$ be the autocovariance of $\{X_n\}$, a second order stationary Gaussian process with mean $0$. If
$$\lim_{T\to\infty} \frac{1}{T}\sum_{i=1}^{T} K^2(i) = 0,$$
then
$$\lim_{T\to\infty} E\{|\hat{K}_T(i) - K(i)|^2\} = 0$$
for $i = 1, 2, \ldots$, where $\hat{K}_T(i) = \frac{1}{T}\sum_{l=1}^{T} X_l X_{l+i}$ is the sample autocovariance.


Ergodicity*

Shift operator
An operator $T$ for which $T\omega = \{x_{t+1}, t \in I\}$, where $\omega = \{x_t, t \in I\}$ is an infinite sequence in the probability space $(R^I, \mathcal{B}(R)^I, P)$, is called a shift operator.

Stationary process
A discrete time random process with process distribution $P$ is stationary if $P(T^{-1}F) = P(F)$ for any element $F$ in $\mathcal{B}(R)^I$.


Invariant event
An event $F$ is said to be invariant with respect to the shift operator $T$ if and only if $T^{-1}F = F$.

Ergodicity, ergodic process

A random process is said to be ergodic if $P(F) = 0$ or $P(F) = 1$ for any invariant event $F$.

Consider a two-sided process with distribution
$$P(\ldots, x_{-1} = 1, x_0 = 0, x_1 = 1, x_2 = 0, \ldots) = p,$$
$$P(\ldots, x_{-1} = 0, x_0 = 1, x_1 = 0, x_2 = 1, \ldots) = 1 - p.$$
Clearly, $F = \{\text{sequences of alternating 0 and 1}\}$ is an invariant event, and has probability $P(F) = 1$. Any other invariant event - for example, the all-1 sequence - that does not include $F$ has probability $0$. Thus the random process is ergodic. Ergodicity has nothing to do with stationarity or convergence of sample averages.


4.3 Process with i.s.i.


Process with i.s.i.

Process with independent increments

A random process $\{Y_t, t \in I\}$ is said to have independent increments or to be an independent increment process if for all choices of $k = 1, 2, \ldots$ and all choices of ordered sample times $\{t_0, t_1, \ldots, t_k\}$, the $k$ increments $Y_{t_i} - Y_{t_{i-1}}$, $i = 1, 2, \ldots, k$, are independent random variables.

Process with stationary increments

When the increments $\{Y_t - Y_s\}$ are stationary, the random process $\{Y_t\}$ is called a stationary increment random process.

Process with i.s.i.

A random process is called an independent and stationary increment (i.s.i.) process if its increments are independent and stationary.


A discrete time random process is an i.s.i. process if and only if it can be represented as the sum of i.i.d. random variables.

Mean and autocovariance of i.s.i. process

The mean and autocovariance of a discrete time i.s.i. process are
$$E\{Y_t\} = t\,E\{Y_1\}, \quad t \ge 0$$
and
$$K_Y(t, s) = \sigma_{Y_1}^2 \min(t, s), \quad t, s \ge 0.$$

Clearly, an i.s.i. process itself is not stationary.


Let the process $\{X_t, t \in T\}$ be an i.s.i. process. If
$$m_0 = E\{X_0\}, \qquad m_1 = E\{X_1\} - m_0,$$
$$\sigma_0^2 = E\{(X_0 - m_0)^2\}, \qquad \sigma_1^2 = \mathrm{Var}\{X_1\} - \sigma_0^2,$$
we have
$$E\{X_t\} = m_0 + m_1 t,$$
$$\mathrm{Var}\{X_t\} = \sigma_0^2 + \sigma_1^2 t.$$


Point process and counting process

A sequence $T_1 \le T_2 \le T_3 \le \cdots$ of ordered random variables is called a point process. For example, the set of times defined by Poisson points is a point process. A counting process $Y(t)$ can be defined as the number of points in the interval $[0, t)$. We have, with $T_0 = 0$,
$$Y(t) = i, \qquad T_i \le t < T_{i+1}, \quad i = 0, 1, \ldots.$$


A counting process

A process constructed by summing the outputs of a Bernoulli process

Let $\{X_n, n = 1, 2, \ldots\}$ be a Bernoulli process with parameter $p$. Define the random process $\{Y_n, n = 1, 2, \ldots\}$ as
$$Y_0 = 0,$$
$$Y_n = \sum_{i=1}^{n} X_i = Y_{n-1} + X_n, \quad n = 1, 2, \ldots.$$
Since the random variable $Y_n$ represents the number of 1s in $\{X_1, X_2, \ldots, X_n\}$, we have
$$Y_n = Y_{n-1} \quad \text{or} \quad Y_n = Y_{n-1} + 1, \quad n = 2, 3, \ldots.$$
In general, a discrete time process satisfying this relation is called a counting process since it is nondecreasing, and when it jumps, it is always with an increment of 1.


Properties of the random process $\{Y_n\}$

$$E\{Y_n\} = np, \qquad \mathrm{Var}\{Y_n\} = np(1-p), \qquad K_Y(k, j) = p(1-p)\min(k, j)$$

Marginal pmf for $Y_n$

$$p_{Y_n}(y) = \Pr\{\text{there are exactly } y \text{ ones in } X_1, X_2, \ldots, X_n\} = \binom{n}{y} p^y (1-p)^{n-y}, \quad y = 0, 1, \ldots, n.$$

Since the marginal pmf is binomial, the process $\{Y_n\}$ is called a binomial counting process.

The process $\{Y_n\}$ is not stationary since the marginal pmf depends on the time index $n$.


Random walk process

One dimensional random walk, random walk

Consider the modified Bernoulli process for which the event failure is represented by $-1$ instead of $0$:
$$Z_n = \begin{cases} +1, & \text{for success in the $n$th trial}, \\ -1, & \text{for failure in the $n$th trial}. \end{cases}$$

Let $p = \Pr\{Z_n = 1\}$, and consider the sum
$$W_n = \sum_{i=1}^{n} Z_i$$
of the variables $Z_n$. The process $\{W_n\}$ is referred to as a one dimensional random walk or random walk.


Since
$$Z_i = 2X_i - 1,$$
it follows that the random walk process $\{W_n\}$ is related to the binomial counting process $\{Y_n\}$ by
$$W_n = 2Y_n - n.$$
Using the results on the mean and autocovariance functions of the binomial counting process and the linearity of expectation, we have
$$m_W(n) = (2p - 1)n,$$
$$K_W(n_1, n_2) = 4p(1-p)\min(n_1, n_2),$$
$$\sigma_{W_n}^2 = 4p(1-p)n.$$
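A quick simulation check of these random-walk moments; the value of p is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
p, trials, n = 0.6, 100000, 50

Z = np.where(rng.random((trials, n)) < p, 1, -1)   # +1 with prob p, -1 otherwise
W = Z.cumsum(axis=1)                               # W_k for k = 1..n

print(W[:, -1].mean(), (2 * p - 1) * n)            # m_W(n) = (2p-1)n
print(W[:, -1].var(), 4 * p * (1 - p) * n)         # Var{W_n} = 4p(1-p)n
k, j = 20, 35
print(np.cov(W[:, k - 1], W[:, j - 1])[0, 1], 4 * p * (1 - p) * min(k, j))
```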


4.4 Discrete time process with i.s.i.


Discrete time process with i.s.i.

Discrete time discrete alphabet process $\{Y_n\}$ with i.s.i.

As mentioned before, $\{Y_n\}$ can be defined by the sum of i.i.d. random variables $\{X_i\}$.

Let us consider the following conditional pmf:
$$p_{Y_n|Y^{n-1}}(y_n|y^{n-1}) = p_{Y_n|Y^{n-1}}(y_n|y_{n-1}, \ldots, y_1) = \Pr(Y_n = y_n \mid Y^{n-1} = y^{n-1}),$$
where $Y^n = (Y_n, Y_{n-1}, \ldots, Y_1)$ and $y^n = (y_n, y_{n-1}, \ldots, y_1)$.

The conditioning event $\{Y_i = y_i, i = 1, 2, \ldots, n-1\}$ above is the same as the event $\{X_1 = y_1, X_i = y_i - y_{i-1}, i = 2, \ldots, n-1\}$. In addition, under the conditioning event, we have $Y_n = y_n$ if and only if $X_n = y_n - y_{n-1}$.


Assuming $y_0 = 0$,
$$p_{Y_n|Y^{n-1}}(y_n|y^{n-1}) = \Pr(Y_n = y_n \mid X_1 = y_1, X_i = y_i - y_{i-1}, i = 2, 3, \ldots, n-1)$$
$$= \Pr(X_n = y_n - y_{n-1} \mid X_i = y_i - y_{i-1}, i = 1, 2, \ldots, n-1)$$
$$= p_{X_n|X^{n-1}}(y_n - y_{n-1} \mid y_{n-1} - y_{n-2}, \ldots, y_2 - y_1, y_1),$$
where $X^{n-1} = (X_{n-1}, X_{n-2}, \ldots, X_1)$.
If $\{X_n\}$ are i.i.d.,
$$p_{Y_n|Y^{n-1}}(y_n|y^{n-1}) = p_X(y_n - y_{n-1})$$
since $X_n$ is independent of $X_k$ for $k < n$.
Thus the joint pmf is
$$p_{Y^n}(y^n) = p_{Y_n|Y^{n-1}}(y_n|y^{n-1})\, p_{Y^{n-1}}(y^{n-1}) = \cdots = p_{Y_1}(y_1)\prod_{i=2}^{n} p_{Y_i|Y_{i-1},\ldots,Y_1}(y_i|y_{i-1},\ldots,y_1) = \prod_{i=1}^{n} p_X(y_i - y_{i-1}).$$


Applying the result above to the binomial counting process, we obtain
$$p_{Y^n}(y^n) = \prod_{i=1}^{n} p^{(y_i - y_{i-1})}(1-p)^{1-(y_i - y_{i-1})},$$
where $y_i - y_{i-1} = 0$ or $1$ for $i = 1, 2, \ldots, n$ and $y_0 = 0$.

Properties of processes with i.s.i.

We can express the conditional pmf of $Y_n$ given $Y_{n-1}$ as follows:
$$p_{Y_n|Y_{n-1}}(y_n|y_{n-1}) = \Pr(Y_n = y_n \mid Y_{n-1} = y_{n-1}) = \Pr(X_n = y_n - y_{n-1} \mid Y_{n-1} = y_{n-1}).$$
The conditioning event $\{Y_{n-1} = y_{n-1}\}$ depends only on $X_k$ for $k < n$, and $X_n$ is independent of $X_k$ for $k < n$. Thus, this conditioning event does not affect $X_n$. Consequently,
$$p_{Y_n|Y_{n-1}}(y_n|y_{n-1}) = p_X(y_n - y_{n-1}).$$


Discrete time i.s.i. processes (such as the binomial counting process and the discrete random walk) have the following property:
$$p_{Y_n|Y^{n-1}}(y_n|y^{n-1}) = p_{Y_n|Y_{n-1}}(y_n|y_{n-1}),$$
$$\Pr\{Y_n = y_n \mid Y^{n-1} = y^{n-1}\} = \Pr\{Y_n = y_n \mid Y_{n-1} = y_{n-1}\}.$$
Roughly speaking, given the most recent past sample (or the current sample), the other past samples do not affect the probability of what happens next.

A discrete time discrete alphabet random process with this property is called a Markov process. Thus all i.s.i. processes are Markov processes.


Gambler's ruin problem

A person wants to buy a new car whose price is $N$ won. The person has $k$ ($0 < k < N$) won, and he intends to earn the difference from gambling. The game this person is going to play is as follows: if a toss of a coin results in a head, he will earn 1 won, and if it results in a tail, he will lose 1 won. Let $p$ represent the probability of heads, and $q = 1 - p$. Assuming the man continues to play the game until he earns enough money for a new car or loses all the money he has, what is the probability that the man loses all the money he has?

Let $A_k$ be the event that the man loses all the money when he has started with $k$ won and $B$ be the event that the man wins a game. Then,
$$P(A_k) = P(A_k|B)P(B) + P(A_k|B^c)P(B^c).$$
Since the game will start again with $k + 1$ won if a toss of a coin results in a head and $k - 1$ won if a toss of a coin results in a tail, it is easy to see that
$$P(A_k|B) = P(A_{k+1}),$$
$$P(A_k|B^c) = P(A_{k-1}).$$


Let $p_k = P(A_k)$; then $p_0 = 1$, $p_N = 0$, and
$$p_k = p\,p_{k+1} + q\,p_{k-1}, \quad 1 \le k \le N - 1.$$
Assuming $p_k = \lambda^k$, we get from the equation above
$$p\lambda^2 - \lambda + q = 0,$$
which gives $\lambda_1 = 1$ and $\lambda_2 = q/p$.

If $p \ne 0.5$, $p_k = A_1\lambda_1^k + A_2\lambda_2^k$. Thus using the boundary conditions $p_0 = 1$ and $p_N = 0$, we can find
$$p_k = \frac{(q/p)^k - (q/p)^N}{1 - (q/p)^N}.$$

If $p = 0.5$, we have $p_k = (A_1 + A_2 k)\lambda_1^k$ since $q/p = 1$. Thus using the boundary conditions $p_0 = 1$ and $p_N = 0$, we can find
$$p_k = 1 - \frac{k}{N}.$$
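A minimal Monte Carlo sketch comparing the closed-form ruin probability with simulated games; the values of p, N, and the starting capital are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(8)
p, N, k0, trials = 0.45, 10, 4, 20000
q = 1 - p

ruined = 0
for _ in range(trials):
    k = k0
    while 0 < k < N:
        k += 1 if rng.random() < p else -1   # win or lose 1 won per game
    ruined += (k == 0)

closed_form = ((q / p)**k0 - (q / p)**N) / (1 - (q / p)**N) if p != 0.5 else 1 - k0 / N
print(ruined / trials, closed_form)
```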


Discrete time Wiener process

Discrete time Wiener process

Let $\{X_n\}$ be an i.i.d. zero-mean Gaussian process with variance $\sigma^2$. The discrete time Wiener process $\{Y_n\}$ is defined by
$$Y_0 = 0,$$
$$Y_n = \sum_{i=1}^{n} X_i = Y_{n-1} + X_n, \quad n = 1, 2, \ldots.$$

The discrete time Wiener process is also called the discrete time diffusion process or discrete time Brownian motion.
Since the discrete time Wiener process is formed as sums of an i.i.d. process, it has i.s.i.. Thus we have $E\{Y_n\} = 0$ and $K_Y(k, j) = \sigma^2\min(k, j)$.
The Wiener process is a first-order autoregressive process.
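A short sketch of the discrete time Wiener process, checking $K_Y(k, j) = \sigma^2\min(k, j)$ empirically (parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
sigma, trials, n = 1.5, 100000, 40

X = rng.normal(0, sigma, size=(trials, n))   # i.i.d. zero-mean Gaussian increments
Y = X.cumsum(axis=1)                         # Y_k = X_1 + ... + X_k

k, j = 12, 30
emp = np.mean(Y[:, k - 1] * Y[:, j - 1])     # K_Y(k, j), since the mean is 0
print(emp, sigma**2 * min(k, j))
```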


The discrete time Wiener process is a Gaussian process with mean function $m(t) = 0$ and autocovariance function $K_X(t, s) = \sigma^2\min(t, s)$.
Since the discrete time Wiener process is an i.s.i. process, we have
$$f_{Y_n|Y_{n-1}}(y_n|y_{n-1}) = f_X(y_n - y_{n-1}),$$
$$f_{Y_n|Y^{n-1}}(y_n|y^{n-1}) = f_{Y_n|Y_{n-1}}(y_n|y_{n-1}) = f_X(y_n - y_{n-1}).$$
As in the discrete alphabet case, a process with this property is called a Markov process.

Markov process
A discrete time random process $\{Y_n\}$ is called a first order Markov process if it satisfies
$$\Pr\{Y_n \le y_n \mid y_{n-1}, y_{n-2}, \ldots\} = \Pr\{Y_n \le y_n \mid y_{n-1}\}$$
for all $n$, $y_n, y_{n-1}, y_{n-2}, \ldots$.


4.5 Continuous time i.s.i. process


Continuous time i.s.i. process

Continuous time i.s.i. process

When we deal with a continuous time process with i.s.i., we need to consider a more general collection of sample times than in the case of a discrete time process. In the continuous time case, we assume that we are given the cdf of the increments as
$$F_{Y_t - Y_s}(y) = F_{Y_{|t-s|} - Y_0}(y) = F_{Y_{|t-s|}}(y), \quad t > s.$$


The joint probability functions of a continuous time process

Define the random variables $\{X_n\}$ by
$$X_n = Y_{t_n} - Y_{t_{n-1}}.$$
Then $\{X_n\}$ are independent and
$$Y_{t_n} = \sum_{i=1}^{n} X_i,$$
$$\Pr\{Y_{t_n} \le y_n \mid Y_{t_{n-1}} = y_{n-1}, Y_{t_{n-2}} = y_{n-2}, \ldots\} = F_{X_n}(y_n - y_{n-1}) = F_{Y_{t_n} - Y_{t_{n-1}}}(y_n - y_{n-1}).$$

As in the case of discrete time processes, these can be used to find the joint pmf or pdf as
$$p_{Y_{t_1},\ldots,Y_{t_n}}(y_1, \ldots, y_n) = \prod_{i=1}^{n} p_{Y_{t_i} - Y_{t_{i-1}}}(y_i - y_{i-1}),$$
$$f_{Y_{t_1},\ldots,Y_{t_n}}(y_1, \ldots, y_n) = \prod_{i=1}^{n} f_{Y_{t_i} - Y_{t_{i-1}}}(y_i - y_{i-1}).$$


If a process $\{Y_t\}$ has i.s.i., and the cdf, pdf, or pmf of the increment $Y_t - Y_0$ is given, the process can be completely described as shown above.

As in the discrete time case, a continuous time random process $\{Y_t\}$ is called a Markov process if we have
$$\Pr\{Y_{t_n} \le y_n \mid Y_{t_{n-1}} = y_{n-1}, Y_{t_{n-2}} = y_{n-2}, \ldots\} = \Pr\{Y_{t_n} \le y_n \mid Y_{t_{n-1}} = y_{n-1}\},$$
$$f_{Y_{t_n}|Y_{t_{n-1}},\ldots,Y_{t_1}}(y_n|y_{n-1},\ldots,y_1) = f_{Y_{t_n}|Y_{t_{n-1}}}(y_n|y_{n-1}),$$
or
$$p_{Y_{t_n}|Y_{t_{n-1}},\ldots,Y_{t_1}}(y_n|y_{n-1},\ldots,y_1) = p_{Y_{t_n}|Y_{t_{n-1}}}(y_n|y_{n-1})$$
for all $n$, $y_n, y_{n-1}, \ldots$, and $t_1 \le t_2 \le \cdots \le t_n$.
A continuous time i.s.i. process is a Markov process.


Wiener process

Wiener process
A process is called a Wiener process if it satisfies the following:
The initial position is zero. That is, $W(0) = 0$.
The mean is zero. That is, $E\{W(t)\} = 0$, $t \ge 0$.
The increments of $W(t)$ are independent, stationary, and Gaussian.

The Wiener process is a continuous time i.s.i. process.

The increments of the Wiener process are Gaussian random variables with zero mean.

The first order pdf of the Wiener process is
$$f_{W_t}(x) = \frac{1}{\sqrt{2\pi\sigma^2 t}}\exp\left(-\frac{x^2}{2\sigma^2 t}\right).$$


Wiener process, Brownian motion

The Wiener process is the limit of the random-walk process.

Properties of the Wiener process

The distribution of $X_{t_2} - X_{t_1}$, $t_2 > t_1$, depends only on $t_2 - t_1$, not on $t_1$ and $t_2$ individually.


Let us show that the Wiener process is Gaussian.

From the definition of the Wiener process, the random variables $W(t_1)$, $W(t_2) - W(t_1)$, $W(t_3) - W(t_2)$, $\ldots$, $W(t_k) - W(t_{k-1})$ are independent Gaussian random variables. Thus the random variables $W(t_1), W(t_2), W(t_3), \ldots, W(t_k)$ can be obtained from the following linear transformation of $W(t_1)$ and the increments:
$$W(t_1) = W(t_1),$$
$$W(t_2) = W(t_1) + \{W(t_2) - W(t_1)\},$$
$$W(t_3) = W(t_1) + \{W(t_2) - W(t_1)\} + \{W(t_3) - W(t_2)\},$$
$$\vdots$$
$$W(t_k) = W(t_1) + \{W(t_2) - W(t_1)\} + \cdots + \{W(t_k) - W(t_{k-1})\}.$$
Since $(W(t_1), W(t_2), W(t_3), \ldots, W(t_k))$ is jointly Gaussian, $\{W(t)\}$ is Gaussian.


Poisson counting process

Poisson counting process
A continuous time counting process $\{N_t, t \ge 0\}$ with the following properties is called the Poisson counting process.

$N_0 = 0$.
The process has i.s.i.. Hence, the increments over non-overlapping time intervals are independent random variables.
In a very small time interval, the probability of an increment of 1 is proportional to the length of the interval, and the probability of an increment larger than 1 is negligible. Thus, $\Pr\{N_{t+\Delta t} - N_t = 1\} = \lambda\Delta t + o(\Delta t)$, $\Pr\{N_{t+\Delta t} - N_t \ge 2\} = o(\Delta t)$, and $\Pr\{N_{t+\Delta t} - N_t = 0\} = 1 - \lambda\Delta t + o(\Delta t)$, where $\lambda$ is a proportionality constant.


The Poisson counting process is a continuous time discrete alphabet i.s.i. process.

We have obtained the Wiener process as the limit of a discrete time discrete amplitude random-walk process. Similarly, the Poisson counting process can be derived as the limit of a binomial counting process using the Poisson approximation.


The probability mass function $p_{N_t - N_0}(k) = p_{N_t}(k)$ of the increment $N_t - N_0$ between the starting time $0$ and $t > 0$
Let us use the notation
$$p(k, t) = p_{N_t - N_0}(k), \quad t > 0.$$
Using the independence of increments and the third property of the Poisson counting process, we have
$$p(k, t + \Delta t) = \sum_{n=0}^{k} \Pr\{N_t = n\}\Pr\{N_{t+\Delta t} - N_t = k - n \mid N_t = n\}$$
$$= \sum_{n=0}^{k} \Pr(N_t = n)\Pr(N_{t+\Delta t} - N_t = k - n)$$
$$\approx p(k, t)(1 - \lambda\Delta t) + p(k - 1, t)\lambda\Delta t,$$
which yields
$$\frac{p(k, t + \Delta t) - p(k, t)}{\Delta t} = \lambda p(k - 1, t) - \lambda p(k, t).$$

When $\Delta t \to 0$, the equation above becomes the following differential equation
$$\frac{d}{dt}p(k, t) + \lambda p(k, t) = \lambda p(k - 1, t), \quad t > 0,$$
where the initial conditions are
$$p(k, 0) = \begin{cases} 0, & k \ne 0, \\ 1, & k = 0, \end{cases}$$
since $\Pr\{N_0 = 0\} = 1$. Solving the differential equation gives
$$p_{N_t}(k) = p(k, t) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}, \quad k = 0, 1, 2, \ldots, \quad t \ge 0.$$

The pmf for $k$ jumps in an arbitrary interval $(s, t)$, $t \ge s$, is
$$p_{N_t - N_s}(k) = \frac{(\lambda(t-s))^k e^{-\lambda(t-s)}}{k!}, \quad k = 0, 1, \ldots, \quad t \ge s.$$


Martingale

Martingale property
An independent increment process $\{X_t\}$ with zero mean satisfies
$$E\{X(t_n) - X(t_{n-1}) \mid X(t_1), X(t_2), \ldots, X(t_{n-1})\} = 0$$
for all $t_1 < t_2 < \cdots < t_n$ and integer $n \ge 2$. This property can be rewritten as
$$E\{X(t_n) \mid X(t_1), X(t_2), \ldots, X(t_{n-1})\} = X(t_{n-1}),$$
which is called the martingale property.

Martingale
A process $\{X_n, n \ge 0\}$ with the following properties is called a martingale process.
$E\{|X_n|\} < \infty$.
$E\{X_{n+1} \mid X_0, \ldots, X_n\} = X_n$.


4.6 Compound process*


Discrete time compound process*

Discrete time compound process

Let $\{N_k, k = 0, 1, \ldots\}$ be a discrete time counting process such as the binomial counting process, and let $\{X_k, k = 0, 1, \ldots\}$ be an i.i.d. random process. Assume that the two processes are independent of each other. Define the random process $\{Y_k, k = 0, 1, \ldots\}$ by
$$Y_0 = 0,$$
$$Y_k = \sum_{i=1}^{N_k} X_i, \quad k = 1, 2, \ldots,$$
where we assume $Y_k = 0$ if $N_k = 0$. The process $\{Y_k\}$ is called a discrete time compound process. The process is also referred to as a doubly stochastic process because of the two sources of randomness, $\{N_k\}$ and $\{X_k\}$.


The expectation of $Y_k$ can be obtained using conditional expectation as
$$E\{Y_k\} = E\left\{\sum_{i=1}^{N_k} X_i\right\} = E\{E\{Y_k \mid N_k\}\} = \sum_{n} p_{N_k}(n)\, E\{Y_k \mid N_k = n\} = \sum_{n} p_{N_k}(n)\, n\, E\{X\} = E\{N_k\}E\{X\}.$$

Let $X_1, X_2, \ldots$ be an i.i.d. sequence of random variables and $G_X$ be their common moment generating function. Let the random variable $N$ be independent of the $X_i$ and $G_N$ be the moment generating function of $N$. Then the moment generating function of $S = \sum_{i=1}^{N} X_i$ is
$$G_S(t) = G_N(G_X(t)).$$
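A small sketch checking $E\{Y_k\} = E\{N_k\}E\{X\}$ for a compound sum whose counting part is a binomial counting process and whose summands are i.i.d.; the distributions and parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(10)
p, k, mu_X, trials = 0.4, 25, 1.7, 50000

N_k = rng.binomial(k, p, size=trials)            # binomial counting process at time k
# Y_k = sum of N_k i.i.d. exponential(mean mu_X) variables, Y_k = 0 when N_k = 0
Y_k = np.array([rng.exponential(mu_X, n).sum() if n > 0 else 0.0 for n in N_k])

print(Y_k.mean(), k * p * mu_X)                  # E{Y_k} = E{N_k} E{X}
```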

Continuous time compound process*

When a continuous time counting process $\{N_t, t \ge 0\}$ and an i.i.d. process $\{X_i, i = 1, 2, \ldots\}$ are independent of each other, the process
$$Y_t = Y(t) = \sum_{i=1}^{N_t} X_i$$
is called a continuous time compound process. Here, we put $Y(t) = 0$ when $N_t = 0$.

We have
$$E\{Y_t\} = E\{N_t\}E\{X\},$$
$$M_{Y_t}(u) = E\{M_X^{N_t}(u)\}.$$


A compound process is continuous and differentiable except at the jumps, where a new random variable is added.

If $\{N_t\}$ is a Poisson counting process, we have
$$E\{Y_t\} = \lambda t\, E\{X\},$$
$$M_{Y_t}(u) = \sum_{k=0}^{\infty} M_X^k(u)\,\frac{(\lambda t)^k e^{-\lambda t}}{k!} = e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(\lambda t\, M_X(u))^k}{k!} = e^{-\lambda t(1 - M_X(u))}.$$
