
School of Information Science

Source and Channel Models


2009 2-2 Course

- Information Theory -

Tetsuo Asano and Tad Matsumoto

Email: {t-asano, matumoto}@jaist.ac.jp


Japan Advanced Institute of Science and Technology
Asahidai 1-1, Nomi, Ishikawa 923-1292, Japan
http://www.jaist.ac.jp/~t-asano

School of Information Science

Objectives of this Chapter


[Block diagram] Information Source → Encoder → Transmitter → Receiver → Decoder → Destination, with Noise entering between transmitter and receiver.

General Description of Communication System

Requirements for the source model: CCC

- Convenience : for development of source coding techniques suitable for the source.
- Consistency : for performance analysis and comparison of compression techniques.
- Completeness: for the analysis of the limitation of source compression.

Requirements for the channel model: CCC

- Convenience : for development of channel coding techniques suitable for the channel.
- Consistency : for performance analysis and comparison of coding techniques.
- Completeness: for the analysis of the limitation of channel capability.

School of Information Science

Assumptions in this Chapter


We don't touch this part!!
  Analog waveform → Sample & Hold → Quantization → Source Encoder (A/D)
We only look at this part!!
  → Channel Encoder

Source information is expressed as a train of binary or non-binary finite-alphabet symbols.

We skip the transmitter and receiver!!!!

Channel Encoder → [Binary/Non-binary finite alphabet, Noise = Error source] → Channel Decoder

The transmitter and receiver are ignored (the channel output is directly connected to the channel decoder).
Both the channel encoder output and the noise take the form of binary or non-binary finite-alphabet symbols.

Note: These assumptions are to be eliminated later.

School of Information Science

Outline

1. Source Model
- Memoryless Source
- Markov Chain
- Source with Memory

2. Channel Model
- Memoryless Channel
- Channel with Memory
- Wireless Channels

School of Information Science

Source Model (1)


Statistical Properties of Source:

Information source → Output sequence = Message: X0, X1, ..., Xi, ...,  Xi ∈ A

Finite Alphabet: A = {a1, a2, a3, ..., aq}, where q is the alphabet size.
Each output Xi is a random variable.
Let the joint probability of X0 = x0, X1 = x1, ..., Xn-1 = xn-1 be denoted as

$$P_{X_0, X_1, \ldots, X_{n-1}}(x_0, x_1, \ldots, x_{n-1})$$

or simply $P(x_0, x_1, \ldots, x_{n-1})$ if there is no risk of confusion.

Reinforcement of probability theory:

- Marginalization:

$$P(x_i, x_j) = \sum_{x_k,\; k \neq i,j} P(x_0, x_1, \ldots, x_i, \ldots, x_j, \ldots, x_{n-1})$$

School of Information Science

Source Model (2)


Reinforcement of probability theory (Continued):

- Conditioning:

$$P(x_0, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{j-1}, x_{j+1}, \ldots, x_{n-1} \mid x_i, x_j) = \frac{P(x_0, x_1, \ldots, x_{n-1})}{P(x_i, x_j)}$$

Exercise
(1) Calculate P(x0 = 0), given the joint probabilities P(x0, x1) shown in the table:
(2) Calculate P(x0 = 0 | x1 = 1).

            x1 = 0    x1 = 1
  x0 = 0     0.26      0.26
  x0 = 1     0.27      0.21

Note: A detector that makes decisions based only on P(x0) or P(x1) is called a single-user detector; a detector that makes decisions based on the joint probability P(x0, x1) is called a multi-user detector.
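A minimal Python sketch of the exercise computation (the assignment of the four table values to cells, with rows indexed by x0 and columns by x1, is an assumed reading of the table):

```python
import numpy as np

# Joint probability table P(x0, x1); rows: x0 in {0, 1}, columns: x1 in {0, 1}.
P = np.array([[0.26, 0.26],
              [0.27, 0.21]])

# (1) Marginalization: P(x0=0) = sum over x1 of P(x0=0, x1)
p_x0_0 = P[0, :].sum()              # -> 0.52

# (2) Conditioning: P(x0=0 | x1=1) = P(x0=0, x1=1) / P(x1=1)
p_x1_1 = P[:, 1].sum()              # -> 0.47
p_cond = P[0, 1] / p_x1_1           # -> ~0.553

print(f"P(x0=0)      = {p_x0_0:.3f}")
print(f"P(x0=0|x1=1) = {p_cond:.3f}")
```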

School of Information Science

Source Model (3)


Definition 3.1.1: Memoryless Source
The source is memoryless, if

$$P_{X_0, X_1, \ldots, X_{n-1}}(x_0, x_1, \ldots, x_{n-1}) = \prod_{i=0}^{n-1} P_X(x_i)$$

Definition 3.1.2: Stationary Source
The source is stationary, if the joint probability is independent of time, i.e., for all i,

$$P_{X_0, X_1, \ldots, X_{n-1}}(x_0, x_1, \ldots, x_{n-1}) = P_{X_{0+i}, X_{1+i}, \ldots, X_{n-1+i}}(x_0, x_1, \ldots, x_{n-1})$$

Definition 3.1.3: Memoryless Stationary Source
The source is memoryless stationary, if, for all i,

$$P_{X_0, X_1, \ldots, X_{n-1}}(x_0, x_1, \ldots, x_{n-1}) = P_{X_{0+i}, X_{1+i}, \ldots, X_{n-1+i}}(x_0, x_1, \ldots, x_{n-1}) = \prod_{k=0}^{n-1} P_X(x_k)$$
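As a quick numerical illustration of Definition 3.1.1, the following sketch draws an i.i.d. binary sequence and checks that the empirical joint probability of adjacent symbols factors into the product of the marginals (p1 = 0.3 is an assumed value):

```python
import numpy as np

rng = np.random.default_rng(0)
p1 = 0.3                            # P(X = 1), an assumed value
n = 200_000

x = (rng.random(n) < p1).astype(int)   # i.i.d. (memoryless stationary) source

# Empirical joint P(X_i = a, X_{i+1} = b) vs. product of marginals P(a) P(b)
for a in (0, 1):
    for b in (0, 1):
        joint = np.mean((x[:-1] == a) & (x[1:] == b))
        prod = np.mean(x == a) * np.mean(x == b)
        print(f"P({a},{b}) = {joint:.4f}   P({a})P({b}) = {prod:.4f}")
```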

School of Information Science

Source Model (4)


Now, consider a memoryless stationary source with n = 1. Obviously,

$$P_{X_0}(x) = P_{X_i}(x)$$

The source is ergodic, if, for an arbitrary function f(·),

$$\lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} f(x_i) = \sum_{x \in A} f(x)\, p_X(x)$$

Namely, if the time average and the ensemble average of f(·) are the same, the source is ergodic.

Example: Non-ergodic source

A switch selects between a constant X = 0 source and a constant X = 1 source. The switch position is random, but it is thrown only once, so every output sequence is all 0s or all 1s. Hence

$$\lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} f(x_i) \neq \sum_{x \in A} f(x)\, p_X(x)$$
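A sketch of this switch source (the switch probability of 1/2 is an assumption): every realization is constant, so the time average of f(x) = x is always 0 or 1 and never equals the ensemble average:

```python
import numpy as np

rng = np.random.default_rng(1)

def switch_source(n, rng):
    # The switch is thrown exactly once, before the sequence starts,
    # so the whole output is all 0s or all 1s.
    bit = int(rng.random() < 0.5)   # switch position (probability 1/2 assumed)
    return np.full(n, bit)

# Time averages of f(x) = x over several realizations are always 0.0 or 1.0,
# never the ensemble average E[X] = 0.5 -> the source is not ergodic.
for _ in range(5):
    x = switch_source(10_000, rng)
    print("time average of f(x) = x:", x.mean())
```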

School of Information Science

Source Model (5)


Definition 3.1.4: Source with Memory
The source is with memory, if

$$P_{X_0, X_1, \ldots, X_{n-1}}(x_0, x_1, \ldots, x_{n-1}) \neq \prod_{i=0}^{n-1} P_X(x_i)$$

Definition 3.1.5: Markov Model
The source is an m-th order Markov source, if for n > m,

$$P(x_i \mid x_{i-1}, x_{i-2}, \ldots, x_{i-n}) = P(x_i \mid x_{i-1}, x_{i-2}, \ldots, x_{i-m})$$

i.e., the source output at time index i is determined conditioned only upon its m previous output symbols.
The value m is sometimes called the memory length of the source.

School of Information Science

Source Model (6)


Example: Markov Source

A memoryless binary source Yi, with P(Yi = 1) = p and P(Yi = 0) = 1 - p, drives a unit-delay feedback loop:

$$X_i = X_{i-1} \oplus Y_i \quad (\text{modulo-2 addition})$$

The transition probabilities follow directly:

$$P(X_i = 1 \mid X_{i-1} = 1) = \frac{P(Y_i = 0,\, X_{i-1} = 1)}{P(X_{i-1} = 1)} = 1 - p$$

$$P(X_i = 0 \mid X_{i-1} = 0) = \frac{P(Y_i = 0,\, X_{i-1} = 0)}{P(X_{i-1} = 0)} = 1 - p$$

$$P(X_i = 0 \mid X_{i-1} = 1) = \frac{P(Y_i = 1,\, X_{i-1} = 1)}{P(X_{i-1} = 1)} = p$$

$$P(X_i = 1 \mid X_{i-1} = 0) = \frac{P(Y_i = 1,\, X_{i-1} = 0)}{P(X_{i-1} = 0)} = p$$

In this example, Xi is determined only by Xi-1, so m = 1. (A simulation sketch follows below.)

Definition 3.1.6: State of Markov Process
Since each source output x is an element of the alphabet A = {a1, a2, a3, ..., aq}, there are q^m possible conditioning patterns. They are called the states of the Markov process.
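A sketch simulating this example (interpreting "+" as modulo-2 addition, consistent with the derivation above) and estimating the transition probabilities, which should approach 1-p on the diagonal and p off it (p = 0.2 is an assumed value):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.2                             # P(Y_i = 1), an assumed value
n = 100_000

y = (rng.random(n) < p).astype(int) # memoryless binary source Y_i
x = np.zeros(n, dtype=int)
for i in range(1, n):
    x[i] = x[i - 1] ^ y[i]          # X_i = X_{i-1} + Y_i (mod 2)

# Estimated P(X_i = b | X_{i-1} = a): ~1-p when b == a, ~p otherwise -> m = 1
for a in (0, 1):
    idx = np.flatnonzero(x[:-1] == a)
    for b in (0, 1):
        print(f"P(X_i={b} | X_(i-1)={a}) ~ {np.mean(x[idx + 1] == b):.3f}")
```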

School of Information Science

Source Model (7)


Markov Source (state diagram)

The source Xi = Xi-1 ⊕ Yi above has two states, S0 (Xi-1 = 0) and S1 (Xi-1 = 1). With the notation output/probability on each transition:

  S0 → S0 : 0/1-p      S0 → S1 : 1/p
  S1 → S1 : 1/1-p      S1 → S0 : 0/p

Note: The number of states need not be exactly q^m. For example, a three-state process:

  S0 → S0 : 0/1-p1     S0 → S1 : 1/p1
  S1 → S0 : 0/1-p2     S1 → S2 : 1/p2
  S2 → S2 : 1/1-p3     S2 → S0 : 0/p3

School of Information Science

Example: English Letters

School of Information Science

Source Model (8)


Definition 3.1.7: State Probability of Markov Process
Let the probability that the i-th state of an N-state Markov process is occupied at the n-th timing index be denoted by $w_n^i$, which defines the state probability of the Markov process. The state probability vector is then defined as:

$$\mathbf{w}_n = \left( w_n^0, w_n^1, \ldots, w_n^{N-1} \right)^t$$

Definition 3.1.8: State Transition Matrix
By definition of the Markov process, the state probability at the n-th timing index is determined only by the state probability at the (n-1)-th timing index, as

$$\begin{pmatrix} w_n^0 \\ w_n^1 \\ \vdots \\ w_n^{N-1} \end{pmatrix} = \begin{pmatrix} p_{0,0} & p_{0,1} & \cdots & p_{0,N-1} \\ p_{1,0} & p_{1,1} & \cdots & p_{1,N-1} \\ \vdots & & \ddots & \vdots \\ p_{N-1,0} & p_{N-1,1} & \cdots & p_{N-1,N-1} \end{pmatrix} \begin{pmatrix} w_{n-1}^0 \\ w_{n-1}^1 \\ \vdots \\ w_{n-1}^{N-1} \end{pmatrix}$$

or $\mathbf{w}_n = \Pi\, \mathbf{w}_{n-1}$. The matrix $\Pi = \{p_{i,j}\}$ is called the state transition matrix.

School of Information Science

Source Model (9)


Definition 3.1.9: Stationarity
If the matrix U is uniquely determined such that

$$U = \lim_{n \to \infty} \Pi^n,$$

the state probability converges:

$$\mathbf{w}_n \to \mathbf{w},$$

which is called the stationary state vector. The stationary state vector satisfies $\mathbf{w} = U \mathbf{w}$ (equivalently, $\mathbf{w} = \Pi \mathbf{w}$).

Exercise
Derive the stationary state vector for the three-state Markov process shown on the previous slide (transitions S0 → S0 : 0/1-p1, S0 → S1 : 1/p1, S1 → S0 : 0/1-p2, S1 → S2 : 1/p2, S2 → S2 : 1/1-p3, S2 → S0 : 0/p3).
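The iteration tables on the following slides can be reproduced by repeatedly squaring the transition matrix, so that iteration k displays Π^(2^k). Note the numerical examples use the row-stochastic convention (each row sums to one), i.e., the transpose of the column form in Definition 3.1.8. A minimal Python sketch for transition matrix 1:

```python
import numpy as np

# Transition matrix 1 from the next slide (row-stochastic: row i is the
# distribution of the next state given current state i).
Pi = np.array([[0.6, 0.4, 0.0],
               [0.3, 0.0, 0.7],
               [0.2, 0.0, 0.8]])

# Each "iteration" squares the current matrix, so iteration k shows Pi^(2^k).
M = Pi.copy()
for k in range(1, 7):
    M = M @ M
    print(f"Iteration {k}:\n{np.round(M, 3)}\n")

# Every row converges to the stationary vector w (here w = w @ Pi):
# w = (5/14, 2/14, 7/14) ~ (0.357, 0.143, 0.500), matching the tables.
```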

School of Information Science

example: transition matrix 1

  0.6  0.4  0.0
  0.3  0.0  0.7
  0.2  0.0  0.8

Iteration 1              Iteration 2              Iteration 3
0.480 0.240 0.280        0.386 0.166 0.448        0.359 0.144 0.497
0.320 0.120 0.560        0.349 0.136 0.515        0.357 0.142 0.501
0.280 0.080 0.640        0.339 0.128 0.533        0.356 0.142 0.502

Iteration 4              Iteration 5              Iteration 6
0.357 0.143 0.500        0.357 0.143 0.500        0.357 0.143 0.500
0.357 0.143 0.500        0.357 0.143 0.500        0.357 0.143 0.500
0.357 0.143 0.500        0.357 0.143 0.500        0.357 0.143 0.500

School of Information Science

example: transition matrix 2

  0.300  0.200  0.100  0.200  0.200
  0.300  0.000  0.300  0.000  0.400
  0.200  0.000  0.400  0.100  0.300
  0.100  0.100  0.300  0.000  0.500
  0.200  0.300  0.400  0.100  0.000

Iteration 1
0.230 0.140 0.270 0.090 0.270
0.230 0.180 0.310 0.130 0.150
0.210 0.140 0.330 0.110 0.210
0.220 0.170 0.360 0.100 0.150
0.240 0.050 0.300 0.080 0.330

Iteration 2
0.226 0.124 0.308 0.099 0.242
0.224 0.138 0.312 0.103 0.223
0.224 0.130 0.312 0.101 0.233
0.223 0.136 0.312 0.104 0.225
0.227 0.115 0.307 0.096 0.256

Iteration 3
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.238
0.225 0.127 0.310 0.100 0.238
0.225 0.127 0.310 0.100 0.238
0.225 0.126 0.310 0.100 0.239

Iteration 4
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239

Iteration 5
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239

School of Information Science

example: transition matrix 3

  0.000  0.900  0.100
  0.100  0.000  0.900
  0.900  0.100  0.000

Iteration 1              Iteration 2              Iteration 3
0.180 0.010 0.810        0.049 0.660 0.292        0.387 0.149 0.464
0.810 0.180 0.010        0.292 0.049 0.660        0.464 0.387 0.149
0.010 0.810 0.180        0.660 0.292 0.049        0.149 0.464 0.387

Iteration 4              Iteration 5
0.288 0.330 0.381        0.335 0.336 0.329
0.381 0.288 0.330        0.329 0.335 0.336
0.330 0.381 0.288        0.336 0.329 0.335

Iteration 6              Iteration 7
0.333 0.333 0.333        0.333 0.333 0.333
0.333 0.333 0.333        0.333 0.333 0.333
0.333 0.333 0.333        0.333 0.333 0.333

School of Information Science

example: transition matrix 4

  0.200  0.800  0.000  0.000
  0.800  0.000  0.200  0.000
  0.000  0.000  0.300  0.700
  0.000  0.000  0.700  0.300

Iteration 1                        Iteration 2
0.680 0.160 0.160 0.000            0.488 0.211 0.211 0.090
0.160 0.640 0.060 0.140            0.211 0.435 0.158 0.196
0.000 0.000 0.580 0.420            0.000 0.000 0.513 0.487
0.000 0.000 0.420 0.580            0.000 0.000 0.487 0.513

Iteration 3                        Iteration 4
0.283 0.195 0.288 0.234            0.118 0.101 0.399 0.382
0.195 0.234 0.290 0.282            0.101 0.093 0.409 0.397
0.000 0.000 0.500 0.500            0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500            0.000 0.000 0.500 0.500

Iteration 5                        Iteration 6
0.024 0.021 0.479 0.476            0.001 0.001 0.499 0.499
0.021 0.019 0.481 0.479            0.001 0.001 0.499 0.499
0.000 0.000 0.500 0.500            0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500            0.000 0.000 0.500 0.500

Iteration 7
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500

School of Information Science

Channel Model (1)


Channel Encoder → Xi ∈ A → [Binary/Non-binary finite alphabet, Noise = Error source] → Yi ∈ B → Channel Decoder

Channel input sequence: X0, X1, ..., Xi, ...,  Xi ∈ A
Finite Alphabet: A = {a1, a2, a3, ..., aq}, where q is the alphabet size.

Channel output sequence: Y0, Y1, ..., Yi, ...,  Yi ∈ B
Finite Alphabet: B = {b1, b2, b3, ..., br}, where r is the alphabet size.

Let the conditional joint probability of Y0 = y0, Y1 = y1, ..., Yn-1 = yn-1, conditioned upon X0 = x0, X1 = x1, ..., Xn-1 = xn-1, be denoted as

$$P_{Y_0, Y_1, \ldots, Y_{n-1} \mid X_0, X_1, \ldots, X_{n-1}}(y_0, y_1, \ldots, y_{n-1} \mid x_0, x_1, \ldots, x_{n-1})$$

School of Information Science

Channel Model (2)


Definition 3.2.1: Memoryless Channel
The channel is memoryless, if

$$P_{Y_0, Y_1, \ldots, Y_{n-1} \mid X_0, X_1, \ldots, X_{n-1}}(y_0, y_1, \ldots, y_{n-1} \mid x_0, x_1, \ldots, x_{n-1}) = \prod_{i=0}^{n-1} P_{Y \mid X}(y_i \mid x_i)$$

Definition 3.2.2: Channel Matrix
The channel matrix T = {t_{j,i}} is the matrix whose entry t_{j,i} is the probability that the transmitted symbol a_i is received as b_j. Writing x = (x_1, ..., x_q)^t for the input symbol probabilities and y = (y_1, ..., y_r)^t for the output symbol probabilities:

$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_r \end{pmatrix} = \begin{pmatrix} t_{1,1} & t_{1,2} & \cdots & t_{1,q} \\ t_{2,1} & t_{2,2} & \cdots & t_{2,q} \\ \vdots & & \ddots & \vdots \\ t_{r,1} & t_{r,2} & \cdots & t_{r,q} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_q \end{pmatrix}$$

or y = Tx.

[Figure: bipartite transition diagram connecting inputs x1, ..., xq to outputs y1, ..., yr with transition probabilities T = {t_{j,i}}]
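A small sketch of the y = Tx computation (the 3×2 matrix values are assumed for illustration; this particular T has the erasure-channel shape of the next slide with p = p′ = 0.1):

```python
import numpy as np

# Channel matrix T (r x q): T[j, i] = P(Y = b_j | X = a_i); columns sum to 1.
# These 3x2 values are assumed for illustration (two inputs, three outputs).
T = np.array([[0.8, 0.1],
              [0.1, 0.1],
              [0.1, 0.8]])

x = np.array([0.6, 0.4])            # input symbol probabilities P(X = a_i)
y = T @ x                           # output symbol probabilities, y = Tx

print(y, y.sum())                   # -> [0.52 0.1 0.38], sums to 1
```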

School of Information Science

Channel Model (3)


Example: Binary Symmetric Channel (BSC)

$$T = \begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix}$$

  x = 0 → y = 0 with probability 1-p,  x = 0 → y = 1 with probability p
  x = 1 → y = 1 with probability 1-p,  x = 1 → y = 0 with probability p

Example: Binary Erasure Channel (BEC)

With erasure probability p′ and crossover probability p, the outputs are y ∈ {0, E, 1}, where E = "cannot decide" (a pure erasure channel has p = 0):

$$T = \begin{pmatrix} 1-p-p' & p \\ p' & p' \\ p & 1-p-p' \end{pmatrix}$$

  x = 0 → y = 0 with probability 1-p-p′, → E with probability p′, → y = 1 with probability p
  x = 1 → y = 1 with probability 1-p-p′, → E with probability p′, → y = 0 with probability p
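A sketch simulating the BSC and checking the empirical crossover rate (p = 0.1 is an assumed value):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.1                             # crossover probability, an assumed value
n = 100_000

x = rng.integers(0, 2, size=n)      # channel input bits
flips = rng.random(n) < p           # i.i.d. error events (memoryless channel)
y = x ^ flips                       # BSC output

print("empirical crossover rate:", np.mean(y != x))   # ~ 0.1
```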

School of Information Science

Channel Model (4)


Definition 3.2.3: Time-Invariant Channel
The channel is time-invariant if the transition matrix is constant, i.e.,

$$y = Tx$$

Definition 3.2.4: Time-Varying Channel
The channel is time-varying if the transition matrix is a function of the timing index k, i.e.,

$$y = T(k)\, x$$

Observation:
Time-invariant channels cause random errors, while time-varying channels cause burst errors.

School of Information Science

Channel Model (5)


Example 1: Markov burst error model

Two states with transitions labeled output/probability (output 1 = error, 0 = no error):

  S0 → S0 : 0/1-p      S0 → S1 : 1/p
  S1 → S1 : 1/1-q      S1 → S0 : 0/q

When the channel is in S0, it produces no error with probability 1-p, and produces an error with probability p and migrates to S1.
When the channel is in S1, it produces an error with probability 1-q and stays, and produces no error with probability q and migrates back to S0.

The stationary state vector w = (w0, w1)^t satisfies

$$\mathbf{w} = \Pi \mathbf{w} = \begin{pmatrix} 1-p & q \\ p & 1-q \end{pmatrix} \mathbf{w} \quad \text{and} \quad w_0 + w_1 = 1,$$

giving

$$w_0 = \frac{q}{p+q}, \qquad w_1 = \frac{p}{p+q}.$$

Bit error probability of this channel:

$$P_e = w_0\, p + w_1\,(1-q) = w_1 = \frac{p}{p+q}$$

School of Information Science

Channel Model (6)


Observation:
Assume that Pe << 1, which requires p << q.
The average length L̄ of an error burst can then be approximated by:

$$\bar{L} = \sum_{l=1}^{\infty} l\,(1-q)^{l-1} q = \frac{1}{q}$$

[Figure: distribution of burst length for p = 0.0001, q = 0.1]
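A sketch simulating the Markov burst error model with the figure's parameters; it checks Pe ≈ p/(p+q) ≈ 0.001 and the average burst length ≈ 1/q = 10:

```python
import numpy as np

rng = np.random.default_rng(4)
p, q = 1e-4, 0.1                    # parameters from the figure caption
n = 1_000_000

errors = np.zeros(n, dtype=int)
state = 0                           # 0 = S0 (good), 1 = S1 (bad)
for i in range(n):
    if state == 0:
        if rng.random() < p:        # error produced, migrate to S1
            errors[i] = 1
            state = 1
    else:
        if rng.random() < 1 - q:    # error produced, stay in S1
            errors[i] = 1
        else:                       # no error, migrate back to S0
            state = 0

print("Pe ~", errors.mean(), "  theory:", p / (p + q))

# Lengths of runs of consecutive errors; the mean should approach 1/q = 10.
padded = np.concatenate(([0], errors, [0]))
changes = np.flatnonzero(np.diff(padded))
burst_lengths = np.diff(changes)[::2]
print("avg burst length ~", burst_lengths.mean())
```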

School of Information Science

Channel Model (7)


Example 2: Gilbert burst error model

Two states, G (good, pe = 0) and B (bad, pe = h):

  G → G : 1-p      G → B : p
  B → B : 1-q      B → G : q

The channel migrates between the G and B states. When the channel is in G, it produces no errors; it stays in G with probability 1-p and moves to B with probability p.
When the channel is in B, it produces errors with probability h; it stays in B with probability 1-q and moves to G with probability q.

Average bit error probability of this channel:

$$P_e = w_B\, h = \frac{p\,h}{p+q}$$
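A similar sketch for the Gilbert model (the parameter values are assumed): errors occur only in state B, each with probability h:

```python
import numpy as np

rng = np.random.default_rng(5)
p, q, h = 0.01, 0.2, 0.5            # assumed parameter values
n = 1_000_000

errors = 0
in_good = True                      # start in state G
for _ in range(n):
    if in_good:
        in_good = rng.random() >= p         # G -> B with probability p
    else:
        errors += rng.random() < h          # error with probability h in B
        in_good = rng.random() < q          # B -> G with probability q

print("Pe ~", errors / n, "  theory:", p * h / (p + q))   # ~ 0.0238
```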

School of Information Science

Summary

1. Source Model
- Memoryless Source
- Markov Chain
- Source with Memory

2. Channel Model
- Memoryless Channel
- Channel with Memory
- Wireless Channels
