

Channel Coding
• Channel Capacity
The channel capacity, C, is defined as
“the maximum mutual information I(X; Y) in any single use of
the channel, where the maximization is over all possible input
probability distributions {p(xj)} on X”.

C = max_{p(xj)} I(X; Y)    (1)

C is measured in bits/channel-use, or bits/transmission.
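As an illustration of Eq. 1, the following minimal Python sketch approximates the
capacity of a binary-input channel by evaluating I(X; Y) over a grid of input
distributions (the channel matrix and the grid resolution are illustrative choices,
and NumPy is assumed to be available):

import numpy as np

def mutual_information(p_x, P_y_given_x):
    # I(X; Y) in bits for input distribution p_x and channel matrix P_y_given_x,
    # where P_y_given_x[i, j] = p(y_j | x_i).
    p_xy = p_x[:, None] * P_y_given_x          # joint distribution p(x, y)
    p_y = p_xy.sum(axis=0)                     # output marginal p(y)
    mask = p_xy > 0                            # avoid log(0) terms
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x[:, None] * p_y)[mask])))

def capacity_grid_search(P_y_given_x, steps=10001):
    # Approximate C = max over {p(x)} of I(X; Y) for a binary-input channel.
    best = 0.0
    for p0 in np.linspace(0.0, 1.0, steps):
        best = max(best, mutual_information(np.array([p0, 1.0 - p0]), P_y_given_x))
    return best

p = 0.1                                        # BSC crossover probability (example value)
bsc = np.array([[1 - p, p],
                [p, 1 - p]])
print(capacity_grid_search(bsc))               # about 0.531, i.e. 1 - H(0.1)

For the binary symmetric channel the maximum is attained at the uniform input
distribution, which is exactly the case worked out in the example below.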

Example:
For the binary symmetric channel discussed previously, I(X; Y)
will be maximum when p(x0) = p(x1) = 1/2. So, we have

C = I(X; Y)|p(x0)=p(x1)=1/2    (2)

Since we know

p(y0|x1) = p(y1|x0) = p    (3)

p(y0|x0) = p(y1|x1) = 1 − p    (4)

Using the probability values in Eq. 3 and Eq. 4 to evaluate Eq. 2, we get

C = 1 + p log2 p + (1 − p) log2(1 − p)
  = 1 − H(p)    (5)

where H(p) is the binary entropy function.
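Eq. 5 is straightforward to evaluate; the short Python sketch below (standard
library only) computes 1 − H(p) for a few illustrative crossover probabilities:

import math

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1 - p)*log2(1 - p), with H(0) = H(1) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p (Eq. 5).
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.1, 0.5):
    print(p, bsc_capacity(p))      # 1.0, about 0.531, and 0.0 bits/channel-use

The two extremes agree with intuition: a noiseless channel (p = 0) carries one bit
per use, while p = 1/2 makes the output independent of the input, so C = 0.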

• Channel Coding Theorem:


Goal: the design of channel coding to increase the resistance of a
digital communication system to channel noise.

The channel coding theorem is stated as follows:


1. Let a discrete memoryless source
– with an alphabet S
– with an entropy H(S)
– produce symbols once every Ts seconds
2. Let a discrete memoryless channel
– have capacity C
– be used once every Tc seconds.
3. Then, if


H(S)/Ts ≤ C/Tc    (6)
there exists a coding scheme for which the source output can be
transmitted over the channel and reconstructed with an arbitrarily
small probability of error. The parameter C/Tc is called the
critical rate.
4. Conversely, if

H(S)/Ts > C/Tc    (7)
it is not possible to transmit information over the channel
and reconstruct it with an arbitrarily small probability of
error.

Example:

Considering the case of a binary symmetric channel, the source
entropy H(S) is 1. Hence, from Eq. 6, we have

1/Ts ≤ C/Tc    (8)

But the ratio Tc/Ts equals the code rate, r, of the channel encoder.
Hence, for a binary symmetric channel, if r ≤ C, then there exists
a code capable of achieving an arbitrarily low probability of error.
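As a numerical illustration of the condition r ≤ C, the Python sketch below uses
hypothetical numbers (a rate-4/7 block code and an assumed crossover probability
p = 0.05) to check whether reliable coding is possible in principle:

import math

k, n = 4, 7                        # hypothetical block code: 4 source bits per 7 channel uses
r = k / n                          # code rate, equal to Tc/Ts
p = 0.05                           # assumed BSC crossover probability
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
C = 1.0 - H                        # BSC capacity (Eq. 5), bits per channel use

print(f"r = {r:.3f}, C = {C:.3f}, r <= C: {r <= C}")   # r = 0.571, C ~ 0.714, True

Since r ≤ C here, the theorem guarantees that some code of this rate can achieve an
arbitrarily low error probability, although it does not say how to construct it.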

Information Capacity Theorem:
The Information Capacity Theorem is stated as follows:
“The information capacity of a continuous channel of bandwidth B hertz,
perturbed by additive white Gaussian noise of power spectral density N0/2
and limited in bandwidth to B, is given by

C = B log2(1 + P/(N0 B))    (9)

where P is the average transmitted power.”

Proof:


Assumptions:
1. A band-limited, power-limited Gaussian channel.
2. A zero-mean stationary process X(t) that is band-limited to B
hertz, sampled at the Nyquist rate of 2B samples per second.
3. These samples are transmitted in T seconds over a noisy
channel, also band-limited to B hertz.
The number of samples, K, is given by

K = 2BT (10)

We refer to Xk as a sample of the transmitted signal. The channel
output is mixed with additive white Gaussian noise (AWGN) of zero
mean and power spectral density N0/2. The noise is band-limited
to B hertz. Let the continuous random variables Yk,
k = 1, 2, . . . , K denote samples of the received signal, as shown by

Yk = Xk + Nk (11)
The noise sample Nk is Gaussian with zero mean and variance
given by

σ² = N0 B    (12)

The transmitter power is limited; we therefore impose the constraint

E[Xk²] = P    (13)

Now, let I(Xk; Yk) denote the mutual information between Xk and
Yk. The capacity of the channel is given by

C = max_{fXk(x)} { I(Xk; Yk) : E[Xk²] = P }    (14)

where the maximization is over all probability densities fXk(x) of
Xk that satisfy the power constraint.

The mutual information I(Xk; Yk) can be expressed as

I(Xk; Yk) = h(Yk) − h(Yk|Xk)    (15)

Since Nk is independent of Xk, we have h(Yk|Xk) = h(Nk), and this takes the form

I(Xk; Yk) = h(Yk) − h(Nk)    (16)

When a symbol is transmitted from the source, noise is added to it,
so the total received power is P + σ².
For the evaluation of the information capacity C, we proceed in
three stages:
1. The variance of sample Yk of the received signal equals P + σ².

Hence, the differential entropy of Yk is

h(Yk) = (1/2) log2[2πe(P + σ²)]    (17)
2. The variance of the noise sample Nk equals σ². Hence, the
differential entropy of Nk is given by

h(Nk) = (1/2) log2(2πeσ²)    (18)
3. Now, substituting the above two equations into
I(Xk; Yk) = h(Yk) − h(Nk) yields

C = (1/2) log2(1 + P/σ²) bits per transmission    (19)
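As a quick numerical check of the step from Eqs. 17 and 18 to Eq. 19, the Python
sketch below (with assumed values P = 4 and σ² = 1) evaluates h(Yk) − h(Nk) and the
closed form (1/2) log2(1 + P/σ²) and confirms that they agree:

import math

P, sigma2 = 4.0, 1.0                                         # assumed signal power and noise variance

h_Y = 0.5 * math.log2(2 * math.pi * math.e * (P + sigma2))   # Eq. 17
h_N = 0.5 * math.log2(2 * math.pi * math.e * sigma2)         # Eq. 18
C_per_use = 0.5 * math.log2(1 + P / sigma2)                  # Eq. 19

print(h_Y - h_N, C_per_use)                                  # both about 1.161 bits per transmission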
With the channel used K times for the transmission of K samples
of the process X(t) in T seconds, we find that the information
capacity per unit time is K/T times the result given above for C.

The number K equals 2BT. Accordingly, we may express the
information capacity in the equivalent form:

C = B log2(1 + P/(N0 B)) bits per second    (20)
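The Python sketch below evaluates Eq. 20 for one hypothetical set of parameters
(B = 3 kHz and a 30 dB signal-to-noise ratio, roughly telephone-channel-like
numbers chosen only for illustration):

import math

def awgn_capacity(P, N0, B):
    # Information capacity C = B*log2(1 + P/(N0*B)) in bits per second (Eq. 20).
    return B * math.log2(1.0 + P / (N0 * B))

B = 3000.0                         # bandwidth in hertz
N0 = 1e-8                          # assumed noise power spectral density, watts per hertz
P = 1000.0 * N0 * B                # transmit power chosen so that P/(N0*B) = 1000 (30 dB SNR)

print(awgn_capacity(P, N0, B))     # about 29,900 bits per second

Note that doubling B at fixed P and N0 less than doubles C, because the noise power
N0*B grows with the bandwidth as well.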

