
Convolutional Code Performance

ECEN 5682 Theory and Practice of Error Control Codes

Peter Mathys
University of Colorado

Spring 2007

Performance Measures
Definition: A convolutional encoder which maps one or more data sequences of infinite weight into code sequences of finite weight is called a catastrophic encoder.

Example: Encoder #5. The binary $R = 1/2$, $K = 3$ convolutional encoder with transfer function matrix

$$G(D) = \begin{bmatrix} 1+D & 1+D^2 \end{bmatrix}$$

has the encoder state diagram shown in Figure 15, with states $S_0 = 00$, $S_1 = 10$, $S_2 = 01$, and $S_3 = 11$.


[Fig. 15: Encoder state diagram for the catastrophic $R = 1/2$, $K = 3$ encoder. Edges are labeled input/output; note the self-loop $1/00$ at $S_3$.]

The self-loop at $S_3$ is what makes this encoder catastrophic: the all-ones data sequence, which has infinite weight, follows the path $S_0 \to S_1 \to S_3 \to S_3 \to \cdots$ and produces the code sequence $11\,01\,00\,00\ldots$ of finite weight 3.
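This is easy to verify numerically. The following sketch (not from the slides; it assumes the shift-register realization of $G(D) = [1+D \;\; 1+D^2]$ above, with outputs $c_0 = u + s_1$ and $c_1 = u + s_2$) encodes a long all-ones input:

```python
# Minimal sketch of encoder #5 (assumed realization of G(D) = [1+D, 1+D^2]).
# State (s1, s2) holds the two most recent input bits.
def encode(bits):
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1, u ^ s2]   # c0 = u + s1 (1+D), c1 = u + s2 (1+D^2)
        s1, s2 = u, s1            # shift the register
    return out

ones = [1] * 100
code = encode(ones)
print(sum(ones), sum(code))       # -> 100 3: infinite-weight data sequence,
                                  #    finite-weight code sequence
```

A ML decoder that confuses this code sequence with the all-zero sequence therefore makes an unbounded number of data errors from only three channel errors.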


[Fig. 16: A detour of weight $w = 7$ and data weight $i = 3$, starting at time $t = 0$, shown on the trellis of the encoder.]


Definition: The complete weight distribution $\{A(w, i, \ell)\}$ of a convolutional code is defined as the number of detours (or codewords), beginning at time 0 in the all-zero state $S_0$ of the encoder, returning again for the first time to $S_0$ after $\ell$ time units, and having code (Hamming) weight $w$ and data (Hamming) weight $i$.

Definition: The extended weight distribution $\{A(w, i)\}$ of a convolutional code is defined by

$$A(w, i) = \sum_{\ell=1}^{\infty} A(w, i, \ell) \,.$$

That is, $\{A(w, i)\}$ is the number of detours (starting at time 0) from the all-zero path with code sequence (Hamming) weight $w$ and corresponding data sequence (Hamming) weight $i$.

Definition: The weight distribution $\{A_w\}$ of a convolutional code is defined by

$$A_w = \sum_{i=1}^{\infty} A(w, i) \,.$$

That is, $\{A_w\}$ is the number of detours (starting at time 0) from the all-zero path with code sequence (Hamming) weight $w$.
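These distributions can be computed mechanically by walking the encoder state machine and tallying every detour. The sketch below (an illustration, not from the slides) does this for the non-catastrophic $R = 1/2$, $K = 3$ encoder with $G(D) = [1+D^2 \;\; 1+D+D^2]$ that appears in the examples later on; truncating at a maximum detour length, it recovers $A(5,1,3) = 1$, $A(6,2,4) = A(6,2,5) = 1$, and so on.

```python
# Sketch (assumed encoder): detour enumeration for the R = 1/2, K = 3
# encoder with G(D) = [1+D^2, 1+D+D^2]; c0 = u+s2, c1 = u+s1+s2 (mod 2).
from collections import defaultdict

def step(state, u):
    s1, s2 = state
    return (u, s1), (u ^ s2) + (u ^ s1 ^ s2)   # next state, branch weight

A = defaultdict(int)    # A[(w, i, l)] = number of detours
MAXLEN = 12             # truncation: ignore detours longer than this

def dfs(state, w, i, l):
    if l > MAXLEN:
        return
    for u in (0, 1):
        nxt, bw = step(state, u)
        if nxt == (0, 0):                      # first return to S0: record
            A[(w + bw, i + u, l + 1)] += 1
        else:
            dfs(nxt, w + bw, i + u, l + 1)

# A detour must leave S0, so its first input bit is u = 1.
first, bw = step((0, 0), 1)
dfs(first, bw, 1, 1)

for (w, i, l) in sorted(A):
    print(f"A({w},{i},{l}) = {A[(w, i, l)]}")
# Summing over l gives A(w,i); summing also over i gives A_w,
# e.g. A_5 = 1, A_6 = 2, A_7 = 4 for this encoder.
```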
Theorem: The probability of an error event (or decoding error) $P_E$ for a convolutional code with weight distribution $\{A_w\}$, decoded by a ML decoder, at any given time $t$ (measured in frames) is upper bounded by

$$P_E \le \sum_{w=d_{\text{free}}}^{\infty} A_w \, P_w(E) \,,$$

where

$$P_w(E) = P\{\text{ML decoder makes detour with weight } w\} \,.$$

Theorem: On a memoryless BSC with transition probability $\epsilon < 0.5$, the probability of error $P_d(E)$ between two detours or codewords distance $d$ apart is given by

$$P_d(E) = \begin{cases} \displaystyle\sum_{e=(d+1)/2}^{d} \binom{d}{e} \epsilon^e (1-\epsilon)^{d-e} \,, & d \text{ odd} \,, \\[2ex] \displaystyle\frac{1}{2}\binom{d}{d/2} \epsilon^{d/2} (1-\epsilon)^{d/2} + \sum_{e=d/2+1}^{d} \binom{d}{e} \epsilon^e (1-\epsilon)^{d-e} \,, & d \text{ even} \,. \end{cases}$$

Proof: Under the Hamming distance measure, an error between two binary codewords distance $d$ apart is made if more than $d/2$ of the bits in which the codewords differ are in error. If $d$ is even and exactly $d/2$ bits are in error, then an error is made with probability 1/2. QED
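As a numerical sketch (not part of the slides), both cases of the theorem can be evaluated directly:

```python
# Sketch: exact pairwise error probability P_d(E) on a BSC with
# transition probability eps; ties at e = d/2 count with prob. 1/2.
from math import comb

def Pd(d, eps):
    def term(e):
        return comb(d, e) * eps**e * (1 - eps)**(d - e)
    s = sum(term(e) for e in range(d // 2 + 1, d + 1))
    return s + (0.5 * term(d // 2) if d % 2 == 0 else 0.0)

print(Pd(5, 0.01))   # ~9.85e-6: dominant term for a dfree = 5 code
```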


Note: A somewhat simpler but less tight bound is obtained by dropping the factor of 1/2 in the first term for $d$ even, as follows:

$$P_d(E) \le \sum_{e=\lceil d/2 \rceil}^{d} \binom{d}{e} \epsilon^e (1-\epsilon)^{d-e} \,.$$

A much simpler, but often also much looser bound is the Bhattacharyya bound

$$P_d(E) \le \frac{1}{2} \bigl[ 4\,\epsilon\,(1-\epsilon) \bigr]^{d/2} \,.$$
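The relative tightness of the three expressions is easy to check numerically (again an illustration, not from the slides; `Pd` is the function from the previous sketch):

```python
# Sketch: compare exact P_d(E), the bound without the 1/2 tie factor,
# and the Bhattacharyya bound, for an even distance d.
from math import comb, ceil

def Pd_simple(d, eps):   # drop the 1/2 on the e = d/2 term
    return sum(comb(d, e) * eps**e * (1 - eps)**(d - e)
               for e in range(ceil(d / 2), d + 1))

def Pd_bhat(d, eps):     # Bhattacharyya bound
    return 0.5 * (4 * eps * (1 - eps)) ** (d / 2)

d, eps = 6, 0.01
print(Pd(d, eps), Pd_simple(d, eps), Pd_bhat(d, eps))
# -> roughly 9.9e-6, 2.0e-5, 3.1e-5: exact <= simple <= Bhattacharyya here
```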


Probability of Symbol Error. Suppose now that $A_w = \sum_{i=1}^{\infty} A(w, i)$ is substituted in the bound for $P_E$. Then

$$P_E \le \sum_{w=d_{\text{free}}}^{\infty} \sum_{i=1}^{\infty} A(w, i) \, P_w(E) \,.$$

Multiplying $A(w, i)$ by $i$ and summing over all $i$ then yields the total number of data symbol errors that result from all detours of weight $w$ as $\sum_{i=1}^{\infty} i \, A(w, i)$. Dividing by $k$, the number of data symbols per frame, thus leads to the following theorem.

Theorem: The probability of a symbol error $P_s(E)$ at any given time $t$ (measured in frames) for a convolutional code with rate $R = k/n$ and extended weight distribution $\{A(w, i)\}$, when decoded by a ML decoder, is upper bounded by

$$P_s(E) \le \frac{1}{k} \sum_{w=d_{\text{free}}}^{\infty} \sum_{i=1}^{\infty} i \, A(w, i) \, P_w(E) \,,$$

where $P_w(E)$ is the probability of error between the all-zero path and a detour of weight $w$.
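To make the bound concrete, it can be truncated and evaluated numerically. The sketch below (not from the slides) assumes the total data weights $\sum_i i\,A(w,i) = (w-4)\,2^{w-5}$ for $w \ge 5$, which is what the detour enumeration above produces for the $R = 1/2$, $K = 3$, $d_{\text{free}} = 5$ encoder used in the following examples:

```python
# Sketch: truncated union bound on P_s(E) = P_b(E) (k = 1) for the
# R = 1/2, K = 3, dfree = 5 encoder; assumes sum_i i*A(w,i) = (w-4)*2^(w-5).
from math import comb

def Pd(d, eps):                      # exact pairwise P_d(E), as before
    t = lambda e: comb(d, e) * eps**e * (1 - eps)**(d - e)
    s = sum(t(e) for e in range(d // 2 + 1, d + 1))
    return s + (0.5 * t(d // 2) if d % 2 == 0 else 0.0)

def Pb_bound(eps, wmax=30, k=1, dfree=5):
    return sum((w - 4) * 2 ** (w - 5) * Pd(w, eps)
               for w in range(dfree, wmax + 1)) / k

print(Pb_bound(0.01))   # roughly 7e-5 at eps = 0.01
```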

The graph on the next slide shows different bounds for the probability of a bit error on a BSC for a binary rate $R = 1/2$, $K = 3$ convolutional encoder with transfer function matrix

$$G(D) = \begin{bmatrix} 1+D^2 & 1+D+D^2 \end{bmatrix} \,.$$


[Figure: Bit error probability for the binary R=1/2, K=3, dfree=5 convolutional code, plotted as $P_b(E)$ versus $\log_{10}(\epsilon)$. Curves: $P_b(E)$ BSC, $P_b(E)$ BSC Bhattacharyya, $P_b(E)$ AWGN soft.]


[Figure: Upper bounds on $P_b(E)$ for convolutional codes on the BSC (hard decisions), plotted versus $\log_{10}(\epsilon)$. Curves: R=1/2, K=3, dfree=5; R=2/3, K=3, dfree=5; R=3/4, K=3, dfree=5; R=1/2, K=5, dfree=7; R=1/2, K=7, dfree=10.]


Transmission Over AWGN Channel


The following figure shows a one-shot model for transmitting a
data symbol with value a0 over an additive Gaussian noise (AGN)
waveform channel using pulse amplitude modulation (PAM) of a
pulse p(t) and a matched filter (MF) receiver. The main reason for
using a one-shot model for performance evaluation with respect
to channel noise is that it avoids intersymbol interference (ISI).
[Figure: One-shot PAM transmission model. The signal $s(t) = a_0\,p(t)$ passes through the channel, where noise $n(t)$ with PSD $S_n(f)$ is added to form $r(t)$; the receiver filters $r(t)$ with $h_R(t)$ to obtain $b(t)$, which is sampled at $t = 0$ to yield $b_0$.]

ECEN 5682 Theory and Practice of Error Control Codes

Convolutional Code Performance

Performance Measures

If the noise is white with power spectral density (PSD) $S_n(f) = N_0/2$ for all $f$, the channel model is called the additive white Gaussian noise (AWGN) model. In this case the matched filter (which maximizes the SNR at its output at $t = 0$) is

$$h_R(t) = \frac{p^*(-t)}{\int |p(\alpha)|^2 \, d\alpha} \,, \qquad H_R(f) = \frac{P^*(f)}{\int |P(\nu)|^2 \, d\nu} \,,$$

where $^*$ denotes complex conjugation. If the PAM pulse $p(t)$ is normalized so that $E_p = \int |p(\alpha)|^2 \, d\alpha = 1$, then the symbol energy at the input of the MF is

$$E_s = E\Bigl[ \int |s(\alpha)|^2 \, d\alpha \Bigr] = E\bigl[ |a_0|^2 \bigr] \,,$$

where the expectation is necessary since $a_0$ is a random variable.


When the AWGN model with $S_n(f) = N_0/2$ is used and $a_0 = \alpha$ is transmitted, the received symbol $b_0$ at the sampler after the output of the MF is a Gaussian random variable with mean $\alpha$ and variance $\sigma_b^2 = N_0/2$. For antipodal binary signaling (e.g., using BPSK), $a_0 \in \{-\sqrt{E_s}, +\sqrt{E_s}\}$, where $E_s$ is the (average) energy per symbol. Thus, $b_0$ is characterized by the conditional pdfs

$$f_{b_0}(\beta \mid a_0 = -\sqrt{E_s}) = \frac{e^{-(\beta + \sqrt{E_s})^2 / N_0}}{\sqrt{\pi N_0}} \,,$$

and

$$f_{b_0}(\beta \mid a_0 = +\sqrt{E_s}) = \frac{e^{-(\beta - \sqrt{E_s})^2 / N_0}}{\sqrt{\pi N_0}} \,.$$

These pdfs are shown graphically on the following slide.


[Figure: The two conditional pdfs $f_{b_0}(\beta \mid a_0 = -\sqrt{E_s})$ and $f_{b_0}(\beta \mid a_0 = +\sqrt{E_s})$, centered at $-\sqrt{E_s}$ and $+\sqrt{E_s}$ and separated by $2\sqrt{E_s}$.]

If the two values of $a_0$ are equally likely, or if a ML decoding rule is used, then the (hard) decision threshold per symbol is $\beta = 0$: decide $\hat{a}_0 = +\sqrt{E_s}$ if $\beta > 0$ and $\hat{a}_0 = -\sqrt{E_s}$ otherwise.


The probability of a symbol error when hard decisions are used is

$$P(E \mid a_0 = -\sqrt{E_s}) = \int_0^{\infty} \frac{e^{-(\beta + \sqrt{E_s})^2 / N_0}}{\sqrt{\pi N_0}} \, d\beta = \frac{1}{2}\,\mathrm{erfc}\Bigl( \sqrt{\tfrac{E_s}{N_0}} \Bigr) \,,$$

where $\mathrm{erfc}(x) = \tfrac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-\xi^2} \, d\xi \le e^{-x^2}$. Because of the symmetry of antipodal signaling, the same result is obtained for $P(E \mid a_0 = +\sqrt{E_s})$, and thus a BSC derived from an AWGN channel used with antipodal signaling has transition probability

$$\epsilon = \frac{1}{2}\,\mathrm{erfc}\Bigl( \sqrt{\tfrac{E_s}{N_0}} \Bigr) \,,$$

where $E_s$ is the energy received per transmitted symbol.
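For instance (a small sketch using Python's math.erfc; not from the slides):

```python
# Sketch: BSC transition probability derived from Es/N0 given in dB.
from math import erfc, sqrt

def bsc_eps(EsN0_dB):
    return 0.5 * erfc(sqrt(10 ** (EsN0_dB / 10)))

for snr in (0, 3, 6, 9):
    print(snr, bsc_eps(snr))   # eps drops from ~7.9e-2 at 0 dB to ~3e-5 at 9 dB
```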


To make a fair comparison in terms of signal-to-noise ratio (SNR) of the transmitted information symbols between coded and uncoded systems, the energy per code symbol of the coded system needs to be scaled by the rate $R$ of the code. Thus, when hard decisions and coding are used in a binary system, the transition probability of the BSC model becomes

$$\epsilon_c = \frac{1}{2}\,\mathrm{erfc}\Bigl( \sqrt{R\,\tfrac{E_s}{N_0}} \Bigr) \,,$$

where $R = k/n$ is the rate of the code.

The figure on the next slide compares $P_b(E)$ versus $E_b/N_0$ for an uncoded and a coded binary system. The coded system uses a $R = 1/2$, $K = 3$ convolutional encoder with

$$G(D) = \begin{bmatrix} 1+D^2 & 1+D+D^2 \end{bmatrix} \,.$$
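Putting the pieces together, the comparison can be sketched numerically (an illustration under the same assumption as before, namely $\sum_i i\,A(w,i) = (w-4)\,2^{w-5}$ for this encoder):

```python
# Sketch: hard-decision coded vs. uncoded Pb(E) at equal Eb/N0
# for the assumed R = 1/2, K = 3, dfree = 5 code.
from math import comb, erfc, sqrt

def Pd(d, eps):                      # exact pairwise P_d(E), as before
    t = lambda e: comb(d, e) * eps**e * (1 - eps)**(d - e)
    s = sum(t(e) for e in range(d // 2 + 1, d + 1))
    return s + (0.5 * t(d // 2) if d % 2 == 0 else 0.0)

def pb_uncoded(EbN0_dB):
    return 0.5 * erfc(sqrt(10 ** (EbN0_dB / 10)))

def pb_coded_hard(EbN0_dB, R=0.5, wmax=30):
    eps_c = 0.5 * erfc(sqrt(R * 10 ** (EbN0_dB / 10)))   # rate-scaled BSC
    return sum((w - 4) * 2 ** (w - 5) * Pd(w, eps_c)
               for w in range(5, wmax + 1))

for snr in (4, 6, 8):                # coding pays off above the threshold
    print(snr, pb_uncoded(snr), pb_coded_hard(snr))
```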


[Figure: $P_b(E)$ versus $E_b/N_0$ [dB] ($E_b$: info bit energy) for the binary R=1/2, K=3, dfree=5 convolutional code on the AWGN channel with hard decisions. Curves: $P_b(E)$ uncoded, $P_b(E)$ union bound, $P_b(E)$ Bhattacharyya.]


Definition: Coding Gain. Coding gain is defined as the reduction in $E_s/N_0$ permissible for a coded communication system to obtain the same probability of error ($P_s(E)$ or $P_b(E)$) as an uncoded system, both using the same average energy per transmitted information symbol.

Definition: Coding Threshold. The value of $E_s/N_0$ (where $E_s$ is the energy per transmitted information symbol) for which the coding gain becomes zero is called the coding threshold.

The graphs on the following slide show $P_b(E)$ (computed using the union bound) versus $E_b/N_0$ for a number of different binary convolutional encoders.


[Figure: Upper bounds on $P_b(E)$ for convolutional codes on the AWGN channel with hard decisions, plotted versus $E_b/N_0$ [dB] ($E_b$: info bit energy). Curves: uncoded; R=1/2, K=3, dfree=5; R=2/3, K=3, dfree=5; R=3/4, K=3, dfree=5; R=1/2, K=5, dfree=7; R=1/2, K=7, dfree=10.]


Soft Decisions and AWGN Channel

Assuming a memoryless channel model used without feedback, the ML decoding rule after the MF and the sampler is: output code sequence estimate $\hat{\mathbf c} = \mathbf c_i$ iff $i$ maximizes

$$f_{\mathbf b}(\boldsymbol\beta \mid \mathbf a = \mathbf c_i) = \prod_{j=0}^{N-1} f_{b_j}(\beta_j \mid a_j = c_{ij}) \,,$$

over all code sequences $\mathbf c_i = (c_{i0}, c_{i1}, c_{i2}, \ldots)$ for $i = 0, 1, 2, \ldots$.

If the mapping $0 \to -1$ and $1 \to +1$ is used so that $c_{ij} \in \{-1, +1\}$, then $f_{b_j}(\beta_j \mid a_j = c_{ij})$ can be written as

$$f_{b_j}(\beta_j \mid a_j = c_{ij}) = \frac{e^{-(\beta_j - c_{ij}\sqrt{E_s})^2 / N_0}}{\sqrt{\pi N_0}} \,.$$

Taking (natural) logarithms and defining $v_j = \beta_j / \sqrt{E_s}$ yields

$$\begin{aligned}
\ln f_{\mathbf b}(\boldsymbol\beta \mid \mathbf a = \mathbf c_i) &= \ln \prod_{j=0}^{N-1} f_{b_j}(\beta_j \mid a_j = c_{ij}) = \sum_{j=0}^{N-1} \ln f_{b_j}(\beta_j \mid a_j = c_{ij}) \\
&= -\sum_{j=0}^{N-1} \frac{(\beta_j - c_{ij}\sqrt{E_s})^2}{N_0} - \frac{N}{2}\ln(\pi N_0) \\
&= -\frac{E_s}{N_0} \sum_{j=0}^{N-1} \bigl( v_j^2 - 2\,v_j c_{ij} + c_{ij}^2 \bigr) - \frac{N}{2}\ln(\pi N_0) \\
&= \frac{2 E_s}{N_0} \sum_{j=0}^{N-1} v_j c_{ij} - \frac{\|\boldsymbol\beta\|^2 + N E_s}{N_0} - \frac{N}{2}\ln(\pi N_0) \\
&= K_1 \sum_{j=0}^{N-1} v_j c_{ij} - K_2 \,,
\end{aligned}$$

where $K_1$ and $K_2$ are constants independent of the codeword $\mathbf c_i$ and thus irrelevant for ML decoding. Hence the soft-decision ML decoder simply selects the code sequence that maximizes the correlation metric $\sum_j v_j c_{ij}$.

Example: Suppose the convolutional encoder with $G(D) = \begin{bmatrix} 1 & 1+D \end{bmatrix}$ is used and the received data is

v = -0.4, -1.7, 0.1, 0.3, -1.1, 1.2, 1.2, 0.0, 0.3, 0.2, -0.2, 0.7, . . .
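The slide leaves the example open; as a sketch of how the correlation metric would be applied (assuming the first 12 values form 6 code frames and simply brute-forcing all 6-bit data words, which is exhaustive ML):

```python
# Sketch: soft-decision ML decoding of the example via the correlation
# metric sum_j v_j*c_ij. Assumptions: G(D) = [1, 1+D], mapping
# 0 -> -1 / 1 -> +1, and exactly 6 data bits behind 12 received values.
from itertools import product

v = [-0.4, -1.7, 0.1, 0.3, -1.1, 1.2, 1.2, 0.0, 0.3, 0.2, -0.2, 0.7]

def codeword(u):
    prev, c = 0, []
    for b in u:
        c += [b, b ^ prev]             # c0 = u_j, c1 = u_j + u_{j-1}
        prev = b
    return [2 * x - 1 for x in c]      # map {0,1} -> {-1,+1}

def metric(u):
    return sum(vj * cj for vj, cj in zip(v, codeword(u)))

best = max(product((0, 1), repeat=6), key=metric)
print(best, metric(best))   # -> (0, 1, 0, 1, 1, 0) with metric 7.0
```

In practice the Viterbi algorithm finds the same maximizing path without the exhaustive enumeration.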


Soft Decisions versus Hard Decisions

To compare the performance of coded binary systems on an AWGN channel when the decoder performs either hard or soft decisions, the energy $E_c$ per coded bit is fixed and $P_b(E)$ is plotted versus the transition probability $\epsilon$ of the hard decision BSC model, where $\epsilon = \frac{1}{2}\,\mathrm{erfc}\bigl( \sqrt{E_c/N_0} \bigr)$ as before. For soft decisions the expression

$$P_w(E) = \frac{1}{2}\,\mathrm{erfc}\Bigl( \sqrt{\tfrac{w\,E_c}{N_0}} \Bigr)$$

is used for the probability that the ML decoder makes a detour with weight $w$ from the correct path. Thus, for soft decisions with fixed SNR per code symbol,

$$P_b(E) \le \frac{1}{2k} \sum_{w=d_{\text{free}}}^{\infty} D_w \, \mathrm{erfc}\Bigl( \sqrt{\tfrac{w\,E_c}{N_0}} \Bigr) \,,$$

where $D_w = \sum_{i=1}^{\infty} i\,A(w,i)$ denotes the total data weight of all detours of code weight $w$, consistent with the symbol error theorem above.

Examples are shown on the next slide.
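Before looking at the graphs, the soft-decision bound can be sketched numerically (using the same assumed $D_w = (w-4)\,2^{w-5}$ series for the $R = 1/2$, $K = 3$ code):

```python
# Sketch: soft-decision union bound on Pb(E) at fixed Ec/N0 per code
# symbol; assumes Dw = (w-4)*2^(w-5), dfree = 5, k = 1.
from math import erfc, sqrt

def pb_soft(EcN0_dB, wmax=30, k=1):
    EcN0 = 10 ** (EcN0_dB / 10)
    return sum((w - 4) * 2 ** (w - 5) * 0.5 * erfc(sqrt(w * EcN0))
               for w in range(5, wmax + 1)) / k

print(pb_soft(3))   # bound at Ec/N0 = 3 dB per code symbol
```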



[Figure: Upper bounds on $P_b(E)$ for convolutional codes with soft decisions (dashed: hard decisions), plotted versus $\log_{10}(\epsilon)$ for the BSC. Curves: R=1/2, K=3, dfree=5; R=2/3, K=3, dfree=5; R=3/4, K=3, dfree=5; R=1/2, K=5, dfree=7; R=1/2, K=7, dfree=10.]

Coding Gain for Soft Decisions

To compare the performance of uncoded and coded binary systems with soft decisions on an AWGN channel, the energy $E_b$ per information bit is fixed and $P_b(E)$ is plotted versus the signal-to-noise ratio (SNR) $E_b/N_0$. For an uncoded system,

$$P_b(E) = \frac{1}{2}\,\mathrm{erfc}\Bigl( \sqrt{\tfrac{E_b}{N_0}} \Bigr) \qquad \text{(uncoded)} \,.$$

For a coded system with soft decision ML decoding on an AWGN channel,

$$P_b(E) \le \frac{1}{2k} \sum_{w=d_{\text{free}}}^{\infty} D_w \, \mathrm{erfc}\Bigl( \sqrt{\tfrac{w\,R\,E_b}{N_0}} \Bigr) \,,$$

where $R = k/n$ is the rate of the code.

Examples are shown in the graph on the next slide.
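As a final sketch (same assumed $D_w$ series for the $R = 1/2$, $K = 3$ code), one can solve numerically for the $E_b/N_0$ needed to reach a target $P_b(E)$ with and without coding; the difference in dB is the coding gain:

```python
# Sketch: soft-decision coding gain at a target Pb(E), via bisection.
# Assumptions as before: R = 1/2, k = 1, Dw = (w-4)*2^(w-5).
from math import erfc, sqrt, log10

def pb_unc(snr):                     # snr = Eb/N0, linear scale
    return 0.5 * erfc(sqrt(snr))

def pb_cod(snr, R=0.5, wmax=30):
    return sum((w - 4) * 2 ** (w - 5) * 0.5 * erfc(sqrt(w * R * snr))
               for w in range(5, wmax + 1))

def snr_dB_for(pb_fun, target, lo=0.1, hi=100.0):
    for _ in range(60):              # bisection; pb_fun is decreasing in snr
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if pb_fun(mid) > target else (lo, mid)
    return 10 * log10(hi)

gain = snr_dB_for(pb_unc, 1e-5) - snr_dB_for(pb_cod, 1e-5)
print(round(gain, 2), "dB coding gain at Pb(E) = 1e-5")
```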

[Figure: Upper bounds on $P_b(E)$ for convolutional codes on the AWGN channel with soft decisions, plotted versus $E_b/N_0$ [dB] ($E_b$: info bit energy). Curves: uncoded; R=1/2, K=3, dfree=5; R=2/3, K=3, dfree=5; R=3/4, K=3, dfree=5; R=1/2, K=5, dfree=7; R=1/2, K=7, dfree=10.]
