- Information Theory -
[Figure: General description of a communication system: Encoder → Transmitter → Channel (with Noise) → Receiver → Decoder → Destination.]
[Figure: Digital communication system: Analog waveform → Sample & Hold → Quantization → Source Encoder (A/D) → Channel Encoder → Channel → Channel Decoder.]
The transmitter and receiver are ignored (the channel output is connected directly to the channel decoder).
Both the channel encoder output and the noise take the form of a binary or non-binary finite alphabet.
Outline
1. Source Model
- Memoryless Source
- Markov Chain
- Source with Memory
2. Channel Model
- Memoryless Channel
- Channel with Memory
- Wireless Channels
Source Model
Source output sequence: X0, X1, ..., Xi, ..., with Xi ∈ A; each Xi is a random variable.
Finite alphabet: A = {a1, a2, a3, ..., aq}, where q is the alphabet size.
Let the joint probability of X0 = x0, X1 = x1, ..., Xn-1 = xn-1 be denoted as
$$P_{X_0, X_1, \ldots, X_{n-1}}(x_0, x_1, \ldots, x_{n-1})$$
or simply
$$P(x_0, x_1, \ldots, x_{n-1}).$$
- Marginalization:
$$P(x_0) = \sum_{x_1} \sum_{x_2} \cdots \sum_{x_{n-1}} P(x_0, x_1, \ldots, x_{n-1})$$
- Conditioning:
$$P(x_0, x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{j-1}, x_{j+1}, \ldots, x_{n-1} \mid x_i, x_j) = \frac{P(x_0, x_1, \ldots, x_{n-1})}{P(x_i, x_j)}$$
Exercise
(1) Calculate P(x0 = 0), given the joint probabilities P(x0, x1) shown in the table:
(2) Calculate P(x0 = 0 | x1 = 1).

P(x0, x1)   x0 = 0   x0 = 1
x1 = 0       0.26     0.26
x1 = 1       0.27     0.21
Note: A detector that makes decisions based only on P(x0) or P(x1) is called a single-user detector. A detector that makes decisions based on the joint probability P(x0, x1) is called a multi-user detector.
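A minimal sketch of the exercise computations, assuming the joint probabilities are arranged as in the table above (rows indexed by x1, columns by x0): the marginal comes from summing out x1, and the conditional from dividing by the marginal of x1.

```python
import numpy as np

# Joint probabilities P(x0, x1); rows indexed by x1, columns by x0
# (cell assignment assumed from the table above).
P_joint = np.array([[0.26, 0.26],   # x1 = 0
                    [0.27, 0.21]])  # x1 = 1

# (1) Marginalization: P(x0 = 0) = sum over x1 of P(x0 = 0, x1)
p_x0_is_0 = P_joint[:, 0].sum()

# (2) Conditioning: P(x0 = 0 | x1 = 1) = P(x0 = 0, x1 = 1) / P(x1 = 1)
p_x1_is_1 = P_joint[1, :].sum()
p_x0_is_0_given_x1_is_1 = P_joint[1, 0] / p_x1_is_1

print(f"P(x0=0)      = {p_x0_is_0:.2f}")                  # 0.26 + 0.27 = 0.53
print(f"P(x0=0|x1=1) = {p_x0_is_0_given_x1_is_1:.4f}")    # 0.27 / 0.48 = 0.5625
```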
Memoryless Source
For a memoryless source, the joint probability factors into a product of per-symbol marginals:
$$P_{X_0, X_1, \ldots, X_{n-1}}(x_0, x_1, \ldots, x_{n-1}) = \prod_{i=0}^{n-1} P_X(x_i)$$

Ergodic Source
$$\lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} f(x_i) = \sum_{x \in A} f(x)\, p_X(x)$$
Namely, if the time average and the ensemble average of f(·) are the same, the source is ergodic.

Source with Memory
For a non-ergodic source, the time average and the ensemble average can differ:
$$\lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} f(x_i) \neq \sum_{x \in A} f(x)\, p_X(x)$$
For a source with memory, the joint probability does not factor into a product of marginals:
$$P_{X_0, X_1, \ldots, X_{n-1}}(x_0, x_1, \ldots, x_{n-1}) \neq \prod_{i=0}^{n-1} P_X(x_i)$$
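A small sketch, with assumed parameters, that checks the ergodicity condition empirically for a memoryless binary source: the running time average of f(x) = x approaches the ensemble average p as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.3          # P(X = 1) for a memoryless binary source (assumed value)
n = 100_000      # number of emitted symbols

x = rng.random(n) < p            # i.i.d. samples of X
f = x.astype(float)              # f(x) = x, so the ensemble average is p

time_average = f.mean()                  # (1/n) * sum_i f(x_i)
ensemble_average = 0 * (1 - p) + 1 * p   # sum_x f(x) p_X(x)

print(f"time average     = {time_average:.4f}")
print(f"ensemble average = {ensemble_average:.4f}")
# For this i.i.d. (hence ergodic) source the two averages agree as n grows.
```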
Markov Source
Example: a binary Markov source obtained by driving the recursion Xi = Xi-1 ⊕ Yi (unit-delay feedback, modulo-2 addition) with a memoryless binary source Yi, where P(Yi = 1) = p and P(Yi = 0) = 1 - p.

Since Yi is independent of Xi-1, the transition probabilities are
$$P(X_i = 1 \mid X_{i-1} = 1) = \frac{P(Y_i = 0, X_{i-1} = 1)}{P(X_{i-1} = 1)} = 1 - p$$
$$P(X_i = 0 \mid X_{i-1} = 0) = \frac{P(Y_i = 0, X_{i-1} = 0)}{P(X_{i-1} = 0)} = 1 - p$$
$$P(X_i = 0 \mid X_{i-1} = 1) = \frac{P(Y_i = 1, X_{i-1} = 1)}{P(X_{i-1} = 1)} = p$$
$$P(X_i = 1 \mid X_{i-1} = 0) = \frac{P(Y_i = 1, X_{i-1} = 0)}{P(X_{i-1} = 0)} = p$$
Example:
[Figure: Two-state diagram and realization of the binary Markov source. State S0: self-transition 0/(1-p), transition to S1 labeled 1/p; state S1: self-transition 1/(1-p), transition to S0 labeled 0/p (labels are output/probability). Realization: memoryless binary source Yi with P(Yi = 1) = p, P(Yi = 0) = 1-p, feeding Xi = Xi-1 ⊕ Yi through a unit delay.]
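A short simulation sketch (parameter values assumed) of the binary Markov source above: generate Xi = Xi-1 ⊕ Yi and estimate the transition probabilities, which should come out close to 1-p on the diagonal and p off the diagonal.

```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.2          # P(Yi = 1), assumed value
n = 200_000     # number of symbols to generate

y = (rng.random(n) < p).astype(int)   # memoryless binary driving source
x = np.zeros(n, dtype=int)
for i in range(1, n):
    x[i] = x[i - 1] ^ y[i]            # Xi = Xi-1 XOR Yi (unit-delay feedback)

# Estimate P(Xi = b | Xi-1 = a) from symbol-pair counts.
for a in (0, 1):
    idx = np.where(x[:-1] == a)[0]
    for b in (0, 1):
        est = np.mean(x[idx + 1] == b)
        print(f"P(Xi={b} | Xi-1={a}) ~ {est:.3f}")
# Expected: ~1-p when b == a, ~p when b != a.
```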
[Figure: Three-state Markov source with states S0, S1, S2 and transitions labeled output/probability: 1/p1 and 0/(1-p1) from S0, 1/p2 and 0/(1-p2) from S1, 1/(1-p3) and 0/p3 from S2.]
The state probability vector evolves according to
$$\begin{pmatrix} w_{n,0} \\ w_{n,1} \\ \vdots \\ w_{n,N-1} \end{pmatrix} = \begin{pmatrix} p_{0,0} & p_{0,1} & \cdots & p_{0,N-1} \\ p_{1,0} & p_{1,1} & \cdots & p_{1,N-1} \\ \vdots & & & \vdots \\ p_{N-1,0} & p_{N-1,1} & \cdots & p_{N-1,N-1} \end{pmatrix} \begin{pmatrix} w_{n-1,0} \\ w_{n-1,1} \\ \vdots \\ w_{n-1,N-1} \end{pmatrix}$$
or $\mathbf{w}_n = \Pi\, \mathbf{w}_{n-1}$. The matrix $\Pi = \{p_{i,j}\}$ is called the state transition matrix. A stationary state vector satisfies $\mathbf{w} = \Pi\, \mathbf{w}$.
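A minimal numerical sketch (using numpy, with an assumed parameter value) for finding a stationary vector: solve w = Πw together with the normalization Σ wi = 1, shown here for the two-state binary Markov source introduced above.

```python
import numpy as np

def stationary_vector(Pi: np.ndarray) -> np.ndarray:
    """Solve w = Pi @ w with sum(w) = 1, where Pi is column-stochastic,
    i.e. Pi[i, j] = P(next state = i | current state = j)."""
    n = Pi.shape[0]
    # Stack (Pi - I) w = 0 with the normalization row 1^T w = 1.
    A = np.vstack([Pi - np.eye(n), np.ones((1, n))])
    b = np.concatenate([np.zeros(n), [1.0]])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Example: the two-state binary Markov source with p = 0.2 (assumed value).
p = 0.2
Pi = np.array([[1 - p, p],
               [p, 1 - p]])
print(stationary_vector(Pi))   # -> [0.5, 0.5]
```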
Exercise
Derive the stationary state vector for the Markov process shown in the figure on the right-hand side.
[Figure: Three-state Markov process with states S0, S1, S2 and transitions labeled output/probability: 0/(1-p1), 1/p1, 0/(1-p2), 1/p2, 1/(1-p3), 0/p3.]
Example: transition matrix 1
0.6 0.4 0.0
0.3 0.0 0.7
0.2 0.0 0.8

Iteration 1
0.480 0.240 0.280
0.320 0.120 0.560
0.280 0.080 0.640

Iteration 2
0.386 0.166 0.448
0.349 0.136 0.515
0.339 0.128 0.533

Iteration 3
0.359 0.144 0.497
0.357 0.142 0.501
0.356 0.142 0.502

Iteration 4
0.357 0.143 0.500
0.357 0.143 0.500
0.357 0.143 0.500

Iteration 5
0.357 0.143 0.500
0.357 0.143 0.500
0.357 0.143 0.500

Iteration 6
0.357 0.143 0.500
0.357 0.143 0.500
0.357 0.143 0.500
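The printed iterations are consistent with repeatedly squaring the transition matrix (Iteration k holds the matrix raised to the power 2^k). A small sketch reproducing them, showing that every row converges to the same stationary distribution, so the limiting state probabilities do not depend on the initial state:

```python
import numpy as np

np.set_printoptions(precision=3, suppress=True)

# Transition matrix 1 (row i gives P(next state = j | current state = i)).
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.0, 0.7],
              [0.2, 0.0, 0.8]])

M = P.copy()
for k in range(1, 7):
    M = M @ M                      # after this step, M = P ** (2 ** k)
    print(f"Iteration {k}:\n{M}")

# After a few squarings every row equals the stationary distribution,
# here approximately (0.357, 0.143, 0.500).
```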
Example: transition matrix 2
0.300 0.200 0.100 0.200 0.200
0.300 0.000 0.300 0.000 0.400
0.200 0.000 0.400 0.100 0.300
0.100 0.100 0.300 0.000 0.500
0.200 0.300 0.400 0.100 0.000

Iteration 1
0.230 0.140 0.270 0.090 0.270
0.230 0.180 0.310 0.130 0.150
0.210 0.140 0.330 0.110 0.210
0.220 0.170 0.360 0.100 0.150
0.240 0.050 0.300 0.080 0.330

Iteration 2
0.226 0.124 0.308 0.099 0.242
0.224 0.138 0.312 0.103 0.223
0.224 0.130 0.312 0.101 0.233
0.223 0.136 0.312 0.104 0.225
0.227 0.115 0.307 0.096 0.256

Iteration 3
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.238
0.225 0.127 0.310 0.100 0.238
0.225 0.127 0.310 0.100 0.238
0.225 0.126 0.310 0.100 0.239

Iteration 4
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239

Iteration 5
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
0.225 0.127 0.310 0.100 0.239
Example: transition matrix 3
0.000 0.900 0.100
0.100 0.000 0.900
0.900 0.100 0.000

Iteration 1
0.180 0.010 0.810
0.810 0.180 0.010
0.010 0.810 0.180

Iteration 2
0.049 0.660 0.292
0.292 0.049 0.660
0.660 0.292 0.049

Iteration 3
0.387 0.149 0.464
0.464 0.387 0.149
0.149 0.464 0.387

Iteration 4
0.288 0.330 0.381
0.381 0.288 0.330
0.330 0.381 0.288

Iteration 5
0.335 0.336 0.329
0.329 0.335 0.336
0.336 0.329 0.335

Iteration 6
0.333 0.333 0.333
0.333 0.333 0.333
0.333 0.333 0.333

Iteration 7
0.333 0.333 0.333
0.333 0.333 0.333
0.333 0.333 0.333
Example: transition matrix 4
0.200 0.800 0.000 0.000
0.800 0.000 0.200 0.000
0.000 0.000 0.300 0.700
0.000 0.000 0.700 0.300

Iteration 1
0.680 0.160 0.160 0.000
0.160 0.640 0.060 0.140
0.000 0.000 0.580 0.420
0.000 0.000 0.420 0.580

Iteration 2
0.488 0.211 0.211 0.090
0.211 0.435 0.158 0.196
0.000 0.000 0.513 0.487
0.000 0.000 0.487 0.513

Iteration 3
0.283 0.195 0.288 0.234
0.195 0.234 0.290 0.282
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500

Iteration 4
0.118 0.101 0.399 0.382
0.101 0.093 0.409 0.397
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500

Iteration 5
0.024 0.021 0.479 0.476
0.021 0.019 0.481 0.479
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500

Iteration 6
0.001 0.001 0.499 0.499
0.001 0.001 0.499 0.499
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500

Iteration 7
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500
0.000 0.000 0.500 0.500
Channel Model
[Figure: Channel with input Xi ∈ A and output Yi ∈ B, followed by the channel decoder.]
Channel input sequence: X0, X1, ..., Xi, ..., with Xi ∈ A.
Finite alphabet: A = {a1, a2, a3, ..., aq}, where q is the alphabet size.
Channel output sequence: Y0, Y1, ..., Yi, ..., with Yi ∈ B.
Finite alphabet: B = {b1, b2, b3, ..., br}, where r is the alphabet size.
Let the conditional joint probability of Y0 = y0, Y1 = y1, ..., Yn-1 = yn-1, conditioned upon X0 = x0, X1 = x1, ..., Xn-1 = xn-1, be denoted as
$$P(y_0, y_1, \ldots, y_{n-1} \mid x_0, x_1, \ldots, x_{n-1}).$$
Memoryless Channel
For a memoryless channel, the conditional joint probability factors into per-symbol terms:
$$P(y_0, y_1, \ldots, y_{n-1} \mid x_0, x_1, \ldots, x_{n-1}) = \prod_{i=0}^{n-1} P(y_i \mid x_i)$$
Collecting the single-letter transition probabilities $t_{ij} = P(Y = b_i \mid X = a_j)$ into the matrix $T = \{t_{ij}\}$, the output distribution follows from the input distribution:
$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_r \end{pmatrix} = T \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_q \end{pmatrix},$$
or y = T x, where $x_j = P(X = a_j)$ and $y_i = P(Y = b_i)$.
Example: binary symmetric channel,
$$T = \begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix}$$
[Figure: x = 0 → y = 0 and x = 1 → y = 1 with probability 1-p; x = 0 → y = 1 and x = 1 → y = 0 with probability p.]
Example: binary symmetric channel with erasures (output E = cannot decide),
$$T = \begin{pmatrix} 1-p-\varepsilon & p \\ \varepsilon & \varepsilon \\ p & 1-p-\varepsilon \end{pmatrix}$$
[Figure: x = 0 → y = 0 and x = 1 → y = 1 with probability 1-p-ε; x = 0 → y = 1 and x = 1 → y = 0 with probability p; either input → E with probability ε.]
For a time-invariant channel, y = T x; for a time-varying channel, y = T(k) x, where the transition matrix depends on the time index k.
Observation: time-invariant channels cause random errors, while time-varying channels cause burst errors.
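A minimal sketch (with assumed numbers) of the single-letter relation y = T x for the binary symmetric channel: the columns of T hold P(y | x), so multiplying T by the input distribution gives the output distribution.

```python
import numpy as np

p = 0.1                        # BSC crossover probability (assumed value)
T = np.array([[1 - p, p],      # column j holds P(y | x = j)
              [p, 1 - p]])

x = np.array([0.7, 0.3])       # input distribution P(x = 0), P(x = 1)
y = T @ x                      # output distribution, y = T x

print(y)                       # -> [0.66, 0.34]
```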
Channel with Memory
[Figure: Two-state channel model; transition labels are error indicator/probability. From S0: 1/p to S1; from S1: self-loop 1/(1-q) and 0/q back to S0.]
When the channel is in S0, it produces no error with probability 1-p, and produces an error with probability p and migrates to S1.
When the channel is in S1, it produces an error with probability 1-q, and produces no error with probability q and migrates back to S0.
The stationary state probabilities satisfy
$$\begin{pmatrix} w_0 \\ w_1 \end{pmatrix} = \begin{pmatrix} 1-p & q \\ p & 1-q \end{pmatrix} \begin{pmatrix} w_0 \\ w_1 \end{pmatrix} \quad \text{and} \quad w_0 + w_1 = 1,$$
which gives
$$w_0 = \frac{q}{p+q}, \qquad w_1 = \frac{p}{p+q}.$$
The average error probability is
$$P_e = w_0\, p + w_1 (1-q) = w_1 = \frac{p}{p+q},$$
and the mean error-burst length is
$$\bar{L} = \sum_{l=1}^{\infty} l\, (1-q)^{l-1} q = \frac{1}{q}.$$
A more general two-state model:
[Figure: State G with pe = 0 and state B with pe = h; G stays in G with probability 1-p and moves to B with probability p; B stays in B with probability 1-q and moves to G with probability q.]
The channel migrates between the G and B states. When the channel is in G, it produces no errors; it stays in G with probability 1-p and moves to B with probability p.
When the channel is in B, it produces errors with probability h; it stays in B with probability 1-q and moves to G with probability q.
The average error probability is
$$P_e = w_B\, h = \frac{p\, h}{p+q}.$$
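A short Monte Carlo sketch (parameter values assumed) of the two-state channel just described: simulate the G/B state process, draw errors with probability 0 in G and h in B, and compare the empirical error rate with Pe = ph/(p+q).

```python
import numpy as np

rng = np.random.default_rng(2)

p, q, h = 0.05, 0.25, 0.6     # assumed G->B, B->G, and bad-state error probabilities
n = 500_000                   # number of channel uses

errors = 0
state_bad = False             # start in the good state G
for _ in range(n):
    if state_bad:
        errors += rng.random() < h          # in B: error with probability h
        state_bad = rng.random() >= q       # leave B with probability q
    else:
        state_bad = rng.random() < p        # leave G with probability p

empirical = errors / n
theoretical = p * h / (p + q)
print(f"empirical   Pe = {empirical:.4f}")
print(f"theoretical Pe = {theoretical:.4f}")   # 0.05 * 0.6 / 0.30 = 0.1
```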
Summary
1. Source Model
- Memoryless Source
- Markov Chain
- Source with Memory
2. Channel Model
- Memoryless Channel
- Channel with Memory
- Wireless Channels