In two previous posts, we discussed Convolutional Coding and the associated hard decision
Viterbi decoding. In this post, let us extend the Viterbi decoding algorithm to a soft input
decision scheme. The modulation used is BPSK and the channel is assumed to be AWGN alone.
System Model
The received coded sequence is $y = s + n$, where

$s$ is the modulated coded sequence, taking the value $+1$ if the coded bit is 1 and $-1$ if the coded bit is 0,

$n$ is the Additive White Gaussian Noise following the probability distribution function

$p(n) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{n^2}{2\sigma^2}}$.

The conditional probability distribution function (PDF) of $y$ if the coded bit is 0 is

$p(y\,|\,c=0) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(y+1)^2}{2\sigma^2}}$,

and if the coded bit is 1,

$p(y\,|\,c=1) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(y-1)^2}{2\sigma^2}}$.
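The two conditional PDFs can be evaluated and compared directly. The short Python sketch below is purely illustrative and not part of the article's Octave script; the $\pm 1$ mapping and the unit noise variance are assumptions taken from the system model above.

```python
import math

def pdf_given_bit(y, bit, sigma=1.0):
    """Conditional PDF p(y|c) for BPSK over AWGN.
    Assumed mapping: coded bit 0 -> -1, coded bit 1 -> +1."""
    s = +1.0 if bit == 1 else -1.0
    return math.exp(-(y - s)**2 / (2.0*sigma**2)) / (sigma*math.sqrt(2.0*math.pi))

# a positive received value makes coded bit 1 more likely, and vice versa
assert pdf_given_bit(+0.3, 1) > pdf_given_bit(+0.3, 0)
assert pdf_given_bit(-0.3, 0) > pdf_given_bit(-0.3, 1)
```

Picking the bit with the larger conditional PDF is exactly the maximum-likelihood rule that the Euclidean distance below simplifies.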
Euclidean distance
In the hard decision Viterbi decoding, the coded bit was estimated from the location of the
received symbol: if the received symbol is greater than zero, the received coded bit is 1;
if the received symbol is less than or equal to zero, the received coded bit is 0.
In soft decision decoding, rather than estimating the coded bit and finding the Hamming
distance, we find the distance between the received symbol and each probable transmitted
symbol. Maximizing the conditional PDF is equivalent to minimizing the Euclidean distance:

$d_0 = (y+1)^2 = y^2 + 2y + 1$ if the coded bit is 0, and

$d_1 = (y-1)^2 = y^2 - 2y + 1$ if the coded bit is 1.

As the terms $y^2$ and $1$, and the scaling factor $2$, are common to both equations, they can be ignored. The simplified Euclidean distances are

$d_0 = +y$ and $d_1 = -y$.

As the Viterbi algorithm takes two received coded bits at a time for processing, the Euclidean
distance of the pair is the sum of the per-bit distances.
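As a quick sanity check (a Python sketch for illustration, not part of the Octave script): the bit chosen by the simplified metric $\pm y$ always matches the bit chosen by the full squared Euclidean distance, since the discarded terms are common to both hypotheses.

```python
import random

random.seed(0)
for _ in range(1000):
    y = random.uniform(-3.0, 3.0)              # a received soft value
    full = {0: (y + 1)**2, 1: (y - 1)**2}      # full squared distances
    simp = {0: +y, 1: -y}                      # simplified distances
    # both metrics select the same most-likely coded bit
    assert min(full, key=full.get) == min(simp, key=simp.get)
```

This is why the simulation below can accumulate $\pm y$ terms as branch metrics instead of full squared distances.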
Note:
For details on branch metric, path metric computation and trace back unit refer to the post on
hard decision Viterbi decoding.
Simulation Model
Octave/Matlab source code for computing the bit error rate for BPSK modulation in AWGN
using the convolutional coding and soft decision Viterbi decoding is provided.
(d) Received soft bits and hard bits are passed to Viterbi decoder
(e) Counting the number of errors from the output of Viterbi decoder
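To make the processing chain concrete before the Octave listing, here is a small self-contained Python sketch (not the downloadable script) of the rate-1/2 [7,5] encoder and a soft decision Viterbi decoder using the simplified metric $-r\cdot s$; with noiseless BPSK values the decoder recovers the input exactly.

```python
import random

def encode(bits):
    """Rate-1/2 convolutional encoder, generator polynomials [7,5] octal (K = 3)."""
    s1 = s2 = 0  # shift register contents
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]  # g1 = 111, g2 = 101
        s1, s2 = b, s1
    return out

def viterbi_soft(r):
    """Soft decision Viterbi decoder for the [7,5] code.
    r holds one soft value per coded bit (BPSK: bit 0 -> -1, bit 1 -> +1).
    Branch metric is the simplified Euclidean distance -r*s."""
    INF = float('inf')
    pm = [0.0, INF, INF, INF]          # path metrics; trellis starts in state 0
    paths = [[], [], [], []]           # surviving input sequences per state
    for i in range(0, len(r), 2):
        r1, r2 = r[i], r[i + 1]
        new_pm = [INF]*4
        new_paths = [None]*4
        for st in range(4):
            if pm[st] == INF:
                continue
            s1, s2 = (st >> 1) & 1, st & 1
            for b in (0, 1):
                c1, c2 = b ^ s1 ^ s2, b ^ s2        # branch output pair
                bm = -r1*(2*c1 - 1) - r2*(2*c2 - 1)
                nxt = (b << 1) | s1                 # next state
                if pm[st] + bm < new_pm[nxt]:
                    new_pm[nxt] = pm[st] + bm
                    new_paths[nxt] = paths[st] + [b]
        pm, paths = new_pm, new_paths
    return paths[pm.index(min(pm))]    # trace back from the best end state

random.seed(1)
bits = [random.randint(0, 1) for _ in range(50)]
soft = [2*c - 1 for c in encode(bits)]  # noiseless BPSK values
assert viterbi_soft(soft) == bits       # decoder recovers the input exactly
```

The Octave script that follows implements the same trellis with explicit per-state branch metrics and a separate trace back pass.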
Click here to download the Matlab/Octave script for computing BER for BPSK with AWGN using
soft decision Viterbi decoding
(Warning: The simulation took around 5 hours on a desktop to generate the plots)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% All rights reserved by Krishna Pillai, http://www.dsplog.com
% The file may not be re-distributed without explicit authorization
% from Krishna Pillai.
% Checked for proper operation with Octave Version 3.0.0
% Author : Krishna Pillai
% Email : krishna@dsplog.com
% Version : 1.0
% Date : 14th January 2009
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear
N = 10^6; % number of bits or symbols
Eb_N0_dB = [0:1:10]; % multiple Eb/N0 values
Ec_N0_dB = Eb_N0_dB - 10*log10(2); % coded bit energy for the rate 1/2 code
refHard = [0 0 ; 0 1; 1 0 ; 1 1 ]; % the four possible coded bit pairs
refSoft = -1*[-1 -1; -1 1 ;1 -1; 1 1 ]; % corresponding BPSK pairs, negated for the distance metric
ipLUT = [ 0 0 0 0;...
          0 0 0 0;...
          1 1 0 0;...
          0 0 1 1 ]; % input bit as a function of (current state, previous state)
for yy = 1:length(Eb_N0_dB)
% Transmitter
ip = rand(1,N)>0.5; % generating 0,1 with equal probability
% convolutional coding, rate - 1/2, generator polynomial - [7,5] octal
cip1 = mod(conv(ip,[1 1 1 ]),2);
cip2 = mod(conv(ip,[1 0 1 ]),2);
cip = [cip1;cip2];
cip = cip(:).';
s = 2*cip - 1; % BPSK modulation: 0 -> -1, 1 -> +1
n = 1/sqrt(2)*[randn(1,length(cip)) + j*randn(1,length(cip))]; % white gaussian noise, 0dB variance
% Noise addition
y = s + 10^(-Ec_N0_dB(yy)/20)*n; % additive white gaussian noise
% receiver
cipHard = real(y)>0; % hard decision
cipSoft = real(y); % soft decision
% Viterbi decoding
pmHard = zeros(4,1); % hard path metric
svHard_v = zeros(4,length(y)/2); % hard survivor path
pmSoft = zeros(4,1); % soft path metric
svSoft_v = zeros(4,length(y)/2); % soft survivor path
for ii = 1:length(y)/2
rHard = cipHard(2*ii-1:2*ii); % taking 2 hard bits
rSoft = cipSoft(2*ii-1:2*ii); % taking 2 soft bits
% distance of the received pair from the four possible coded pairs
hammingDist = sum(xor(repmat(rHard,4,1),refHard),2); % Hamming distance
euclideanDist = refSoft*rSoft.'; % simplified Euclidean distance, -r*s
if (ii == 1) || (ii == 2)
% trellis starts from state 0: only one branch reaches each state
pmHard_n = pmHard([1;3;1;3]) + hammingDist([1;3;4;2]);
svHard = [1;3;1;3];
pmSoft_n = pmSoft([1;3;1;3]) + euclideanDist([1;3;4;2]);
svSoft = [1;3;1;3];
pmHard = pmHard_n;
svHard_v(:,ii) = svHard;
pmSoft = pmSoft_n;
svSoft_v(:,ii) = svSoft;
else
% branch metric and path metric for state 0
bm1Hard = pmHard(1,1) + hammingDist(1);
bm2Hard = pmHard(2,1) + hammingDist(4);
[pmHard_n(1,1) idx] = min([bm1Hard,bm2Hard]);
svHard(1,1) = idx;
bm1Soft = pmSoft(1,1) + euclideanDist(1);
bm2Soft = pmSoft(2,1) + euclideanDist(4);
[pmSoft_n(1,1) idx] = min([bm1Soft,bm2Soft]);
svSoft(1,1) = idx;
% branch metric and path metric for state 1 (fed from states 2 and 3)
bm1Hard = pmHard(3,1) + hammingDist(3);
bm2Hard = pmHard(4,1) + hammingDist(2);
[pmHard_n(2,1) idx] = min([bm1Hard,bm2Hard]);
svHard(2,1) = idx+2;
bm1Soft = pmSoft(3,1) + euclideanDist(3);
bm2Soft = pmSoft(4,1) + euclideanDist(2);
[pmSoft_n(2,1) idx] = min([bm1Soft,bm2Soft]);
svSoft(2,1) = idx+2;
% branch metric and path metric for state 2 (fed from states 0 and 1)
bm1Hard = pmHard(1,1) + hammingDist(4);
bm2Hard = pmHard(2,1) + hammingDist(1);
[pmHard_n(3,1) idx] = min([bm1Hard,bm2Hard]);
svHard(3,1) = idx;
bm1Soft = pmSoft(1,1) + euclideanDist(4);
bm2Soft = pmSoft(2,1) + euclideanDist(1);
[pmSoft_n(3,1) idx] = min([bm1Soft,bm2Soft]);
svSoft(3,1) = idx;
% branch metric and path metric for state 3 (fed from states 2 and 3)
bm1Hard = pmHard(3,1) + hammingDist(2);
bm2Hard = pmHard(4,1) + hammingDist(3);
[pmHard_n(4,1) idx] = min([bm1Hard,bm2Hard]);
svHard(4,1) = idx+2;
bm1Soft = pmSoft(3,1) + euclideanDist(2);
bm2Soft = pmSoft(4,1) + euclideanDist(3);
[pmSoft_n(4,1) idx] = min([bm1Soft,bm2Soft]);
svSoft(4,1) = idx+2;
pmHard = pmHard_n;
svHard_v(:,ii) = svHard;
pmSoft = pmSoft_n;
svSoft_v(:,ii) = svSoft;
end
end
% trace back unit (refer to the hard decision Viterbi post for details);
% traceback is assumed to start from state 0 (index 1)
currHardState = 1;
currSoftState = 1;
for jj = length(y)/2:-1:1
prevHardState = svHard_v(currHardState,jj);
ipHatHard_v(jj) = ipLUT(currHardState,prevHardState);
currHardState = prevHardState;
prevSoftState = svSoft_v(currSoftState,jj);
ipHatSoft_v(jj) = ipLUT(currSoftState,prevSoftState);
currSoftState = prevSoftState;
end
% counting the number of errors from the Viterbi decoder output
nErrHard(yy) = size(find([ip - ipHatHard_v(1:N)]),2);
nErrSoft(yy) = size(find([ip - ipHatSoft_v(1:N)]),2);
end
simBer_HardViterbi = nErrHard/N; % simulated BER - hard decision Viterbi
simBer_SoftViterbi = nErrSoft/N; % simulated BER - soft decision Viterbi
theoryBer = 0.5*erfc(sqrt(10.^(Eb_N0_dB/10))); % theoretical BER - uncoded BPSK
close all
figure
semilogy(Eb_N0_dB,theoryBer,'bd-','LineWidth',2);
hold on
semilogy(Eb_N0_dB,simBer_HardViterbi,'mp-','LineWidth',2);
semilogy(Eb_N0_dB,simBer_SoftViterbi,'cd-','LineWidth',2);
axis([0 10 10^-5 0.5])
grid on
legend('theory - uncoded', 'simulation - hard Viterbi', 'simulation - soft Viterbi');
xlabel('Eb/No, dB');
ylabel('Bit Error Rate');
title('BER for BCC with Viterbi decoding for BPSK in AWGN');
Figure: BER plot for BPSK with AWGN in soft decision Viterbi decoding
Summary
1. When compared with hard decision decoding, soft decision decoding provides around 2dB of
coding gain at low bit error rates.
2. In the current simulation model, the soft bits are used with full precision for obtaining the
BER curves. However, in typical implementations, the soft bits will be quantized to a finite number of bits.
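For instance, a uniform soft-bit quantizer might look like the Python sketch below; the clipping level and bit width here are illustrative assumptions, not taken from the simulation above.

```python
def quantize_soft(y, nbits=3, clip=1.0):
    """Uniformly quantize a soft value to 2^nbits levels spanning [-clip, +clip].
    Illustrative only: practical receivers tune clip and step to the noise level."""
    levels = 2**nbits
    step = 2.0*clip/(levels - 1)
    y = max(-clip, min(clip, y))                 # saturate out-of-range values
    return -clip + round((y + clip)/step)*step   # snap to the nearest level

# the sign (i.e. the hard decision) survives quantization
assert quantize_soft(0.8) > 0
assert quantize_soft(-0.3) < 0
```

Quantized soft decoding typically loses only a fraction of a dB relative to full precision once a few bits per soft value are used.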