
Channel Equalization

Introduction

In a communication system, the information is transmitted over an RF channel.


The RF channel distorts the transmitted signal:

amplitude, frequency, and phase are changed.

A receiver should compensate for the distortions to recover the transmitted signal.


Channel equalization is the process of compensating for the distortions introduced in an RF transmission. To counter the intersymbol interference (ISI) effect, the observed signal is passed through an equalizer whose characteristics are the inverse of the channel characteristics. If the equalizer is exactly matched to the channel, the combination of channel and equalizer is just a gain, so no intersymbol interference is present at the output of the equalizer.
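A minimal sketch of this idea, assuming a hypothetical two-tap channel h(z) = 1 + 0.5*z^-1: passing the received signal through the exact inverse of the channel leaves no ISI at the output.

```python
# Hypothetical two-tap channel and its exact inverse: the cascade is a pure
# gain, so the transmitted symbols are recovered with no ISI.
import numpy as np

def channel(x, h=(1.0, 0.5)):
    """FIR channel h(z) = 1 + 0.5*z^-1 that introduces intersymbol interference."""
    return np.convolve(x, h)[:len(x)]

def zero_forcing_equalizer(r, h=(1.0, 0.5)):
    """IIR inverse 1/h(z): y(n) = r(n) - 0.5*y(n-1) undoes the channel."""
    y = np.zeros(len(r))
    for n in range(len(r)):
        y[n] = r[n] - h[1] * (y[n - 1] if n > 0 else 0.0)
    return y

symbols = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
received = channel(symbols)                   # neighbouring symbols leak into each other
recovered = zero_forcing_equalizer(received)  # exact inverse: no ISI at the output
```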

Need for equalizer


The purpose of an equalizer is to reduce the ISI as much as possible to maximize the probability of correct decisions.

[Block diagram: transmitted signal → Channel → + Noise → Equalizer → decisions]

Channel Equalizer

An equalizer performs these important functions: (1) Filtering: extracting information about the data of interest at time t using data measured up to time t. (2) Prediction: deriving information about a future quantity of interest using data measured up to the present time.

Linear and Nonlinear Filters

A filter is said to be linear if the filtered and predicted quantity at its output is a linear function of the quantity applied at its input; otherwise it is a nonlinear filter.

Wiener Filter
(A Linear Optimum Filter)

In these filters, an error signal is first obtained by subtracting the actual filter output from a desired response, and the mean-square value of this error signal is then minimized.
When the input is stationary, the filter used is the Wiener filter. The Wiener filter is not able to cope with the problem of nonstationary input.

Adaptive filters

The objective is to adapt the coefficients to minimize the noise and intersymbol interference (depending on the type of equalizer) at the output. The adaptation of the equalizer is driven by an error signal.
The aim is to minimize: J = E[e²(k)]

[Block diagram: the equalizer output is compared with the desired response to form the error signal that drives the adaptation]

Linear adaptive filters

The operation of a linear adaptive filtering algorithm involves two basic processes:

(1) A filtering process designed to produce an output in response to a sequence of input data. (2) An adaptive process, the purpose of which is to provide a mechanism for the adaptive control of an adjustable set of parameters used in the filtering process.
These two processes work interactively with each other.

Adaptive equalization

Linear Adaptive Equalization


There are two modes in which adaptive equalizers work:

Decision Directed Mode:


The receiver decisions are used to generate the error signal. Decision-directed equalizer adjustment is effective in tracking slow variations in the channel response. However, this approach is not effective during initial acquisition.

Training Mode:
To make the equalizer usable during the initial acquisition period, a training signal is needed. In this mode of operation, the transmitter generates a data symbol sequence known to the receiver. Once an agreed training time has elapsed, the slicer output is used in place of the training signal and the actual data transmission begins.

Minimum Mean-Squared-Error Equalization

The mean-squared-error cost function is defined as:

J = E[|e(k)|²],  where e(k) = d(k) − f^H u(k)

When the filter coefficients are fixed, the cost function can be rewritten as follows:

J = σ_d² − f^H p − p^H f + f^H R f

where p = E[u(k) d*(k)] is the cross-correlation vector and R = E[u(k) u^H(k)] is the input signal correlation matrix.


The gradient of the MSE cost function with respect to the equalizer tap weights is defined as follows:

∇J = 2(R f − p)

The optimal equalizer taps f_o required to obtain the MMSE can be determined by replacing f with f_o and setting the gradient above to zero:

R f_o = p,  so  f_o = R⁻¹ p


Finally, the MMSE is expressed as follows:

J_min = σ_d² − p^H R⁻¹ p = σ_d² − p^H f_o
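A small numeric sketch of the MMSE solution; the values of R, p, and the desired-signal variance below are illustrative assumptions, not taken from the slides.

```python
# Solve the normal equations R f_o = p and evaluate the MMSE for assumed values.
import numpy as np

R = np.array([[1.1, 0.5],
              [0.5, 1.1]])        # input correlation matrix (assumed)
p = np.array([0.6, -0.4])         # cross-correlation vector (assumed)
sigma_d2 = 1.0                    # desired-signal variance (assumed)

f_o = np.linalg.solve(R, p)       # optimal taps from R f_o = p
J_min = sigma_d2 - p @ f_o        # MMSE = sigma_d^2 - p^H f_o
grad = 2 * (R @ f_o - p)          # the gradient vanishes at the optimum
```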

Method of Steepest Descent

(1) We begin with an initial value f(0) for the tap-weight vector, which provides a guess as to where the minimum of the error-performance surface may be located. (2) Using this guess, we compute the gradient vector, whose real and imaginary parts are the derivatives of the mean-square error with respect to the real and imaginary parts of the tap-weight vector. (3) We compute the next guess for the tap-weight vector by changing the present guess in the direction opposite to that of the gradient vector. (4) We go back to step (2) and repeat the process.

Hence the recursive relation can be written as:

f(n+1) = f(n) + μ[p − R f(n)]

where μ is the step-size parameter (the factor 1/2 from the gradient is absorbed into μ).
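The recursion above can be sketched on a small assumed 2-tap problem; R, p, and the step size μ below are illustrative values.

```python
# Steepest descent: repeatedly step against the gradient until the taps
# converge to the closed-form Wiener solution.
import numpy as np

R = np.array([[1.1, 0.5],
              [0.5, 1.1]])        # input correlation matrix (assumed)
p = np.array([0.6, -0.4])         # cross-correlation vector (assumed)

f = np.zeros(2)                   # step 1: initial guess f(0)
mu = 0.1                          # step size (must satisfy 0 < mu < 2/lambda_max)
for _ in range(500):              # steps 2-4: move opposite to the gradient, repeat
    f = f + mu * (p - R @ f)

f_o = np.linalg.solve(R, p)       # closed-form optimum for comparison
```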

LMS Filter

A computationally simpler version of the gradient search method is the least-mean-square (LMS) filter, in which the gradient of the mean square error is replaced by the gradient of the instantaneous squared-error function. Note that the feedback equation for the time update of the filter coefficients is essentially a recursive (infinite-impulse-response) system with input μ[y(m)·e(m)].


The LMS adaptation method is defined as:

w(m+1) = w(m) − μ·∇e²(m)

The instantaneous gradient of the squared error can be expressed as:

∇e²(m) = −2·e(m)·y(m)

Substituting this equation into the recursion update equation of the filter parameters yields the LMS adaptation equation (with the factor of 2 absorbed into the step size μ):

w(m+1) = w(m) + μ[y(m)·e(m)]
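A minimal sketch of this update, identifying an assumed two-tap system; the signals, the system taps, and the step size are all illustrative assumptions.

```python
# LMS adaptation w(m+1) = w(m) + mu*y(m)*e(m) on a noise-free toy
# system-identification problem: the taps converge to the unknown system.
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5])              # unknown taps the filter should learn
x = rng.standard_normal(5000)         # input signal
d = np.convolve(x, h)[:len(x)]        # desired response (noise-free for clarity)

w = np.zeros(2)                       # adaptive filter taps
mu = 0.01                             # step size
for m in range(1, len(x)):
    y = np.array([x[m], x[m - 1]])    # current tap-input vector
    e = d[m] - w @ y                  # instantaneous error
    w = w + mu * y * e                # LMS update
```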


The main advantage of the LMS algorithm is its simplicity, both in terms of the memory requirement and the computational complexity, which is O(P), where P is the filter length.

Leaky LMS Algorithm

The stability and the adaptability of the recursive LMS adaptation can be improved by introducing a so-called leakage factor λ as:

w(m+1) = λ·w(m) + μ[y(m)·e(m)]

When the parameter λ < 1, the effect is to introduce more stability and to accelerate the adaptation of the filter to changes in the input.
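The leaky update can be sketched on an assumed toy identification problem; with λ slightly below 1 the taps are biased slightly toward zero but the recursion stays well behaved.

```python
# Leaky LMS: w(m+1) = lam*w(m) + mu*y(m)*e(m); without excitation the taps
# decay, which prevents unbounded drift of the recursion.
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.5])              # unknown taps (assumed)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]

w = np.zeros(2)
mu, lam = 0.01, 0.9999                # step size and leakage factor (< 1)
for m in range(1, len(x)):
    y = np.array([x[m], x[m - 1]])
    e = d[m] - w @ y
    w = lam * w + mu * y * e          # leaky update
```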

RLS Algorithm

In contrast to the LMS algorithm, the RLS algorithm uses information from all past samples (and not only the current tap-input samples) to estimate the autocorrelation matrix of the input vector.

The weight given to past samples decays exponentially with each sample; the parameter λ (0 < λ ≤ 1) controlling this decay is known as the decaying (forgetting) factor, so the cost function becomes J(n) = Σ_{i=1..n} λ^(n−i) |e(i)|².

Now we search for the minimum of the cost function not by following its descent but by immediately setting its gradient to zero.
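A minimal RLS sketch with forgetting factor λ: the inverse of the exponentially weighted correlation matrix (called P in this snippet) and the taps w are updated recursively. The toy system and all parameter values are assumptions.

```python
# RLS adaptation on a noise-free toy identification problem; converges to the
# unknown taps far faster than LMS at the price of O(P^2) work per sample.
import numpy as np

rng = np.random.default_rng(2)
h = np.array([1.0, 0.5])              # unknown taps to identify (assumed)
x = rng.standard_normal(500)
d = np.convolve(x, h)[:len(x)]

lam = 0.99                            # forgetting (decaying) factor
P = 100.0 * np.eye(2)                 # inverse correlation estimate, large init
w = np.zeros(2)
for m in range(1, len(x)):
    u = np.array([x[m], x[m - 1]])    # tap-input vector
    k = (P @ u) / (lam + u @ P @ u)   # gain vector
    e = d[m] - w @ u                  # a priori error
    w = w + k * e                     # tap update
    P = (P - np.outer(k, u @ P)) / lam  # recursive update of the inverse matrix
```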

Blind Equalization

When training signals are entirely absent, the transmission is called blind, and adaptive algorithms for estimating the transmitted symbols, and possibly the channel or equalizer parameters, are called blind algorithms.

Problem At Hand

To solve the blind equalization problem, we need to provide a probabilistic model for the signal x(n). Two assumptions are made for x(n):

(1) its samples are statistically independent and identically distributed (i.i.d.);

(2) it has a UNIFORM distribution.

Cost Function

The Bussgang family of blind algorithms (the DECISION DIRECTED algorithm, the SATO algorithm, and the Constant Modulus Algorithm) differ in the memoryless nonlinearity g(·) applied to the equalizer output to generate the error signal.

Constant Modulus Algorithm

The CMA chooses g(·) so that the equalizer minimizes the dispersion cost J = E[(|y(n)|² − R₂)²], where R₂ = E[|x(n)|⁴]/E[|x(n)|²] is the constant-modulus dispersion constant.
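A minimal CMA sketch: BPSK symbols (constant modulus, R₂ = 1) pass through an assumed two-tap channel and the equalizer adapts with no training signal. The channel, equalizer length, and step size are illustrative assumptions.

```python
# Stochastic-gradient CMA: f <- f + mu*(R2 - y^2)*y*u drives |y| toward the
# constant modulus of the source, opening the eye without any training data.
import numpy as np

rng = np.random.default_rng(3)
s = rng.choice([-1.0, 1.0], size=20000)   # unknown transmitted BPSK symbols
r = np.convolve(s, [1.0, 0.4])[:len(s)]   # channel output with ISI (assumed channel)

L = 8                                     # equalizer length
f = np.zeros(L); f[0] = 1.0               # centre-spike style initialization
mu, R2 = 0.001, 1.0                       # step size and dispersion constant
for n in range(L, len(r)):
    u = r[n - L + 1:n + 1][::-1]          # tap-input vector
    y = f @ u                             # equalizer output
    f = f + mu * (R2 - y * y) * y * u     # stochastic-gradient CMA update

y_out = np.convolve(r, f)[:len(s)]        # equalize the whole block with final taps
ser = np.mean(np.sign(y_out[100:]) != s[100:])  # symbol error rate
```

Note the sign and delay ambiguity inherent to blind equalization; the centre-spike initialization keeps the taps in the basin of the zero-delay, positive-sign solution here.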

Practical Implementation:

[Block diagram: desired signal d(n) → CHANNEL → designed Wiener filter → output = desired signal again]

Designing the Wiener Filter

(1) Find the variance of the desired signal. (2) Find the autocorrelation R of u(n) (the output from the channel). (3) Find the cross-correlation P of u(n) and d(n) (the desired signal). (4) Then the weights are W = R⁻¹·P. (5) The cost function is J(W) = variance − 2WᵀP + WᵀRW, which at the optimum reduces to Jmin = variance − WᵀP. Given the transfer function of the channel,

Theoretical Results

Desired signal variance = 0.9486
R = [1.1, 0.5; 0.5, 1.1] and P = [0.5272, -0.4478]
Weights: W1 = 0.8360, W2 = -0.7853
Jmin = 0.1579
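These numbers can be cross-checked; assuming the autocorrelation matrix is the symmetric matrix R = [1.1, 0.5; 0.5, 1.1], solving W = R⁻¹P and evaluating Jmin reproduces the reported values to within the rounding of R and P.

```python
# Recompute the Wiener weights and the minimum cost from the slide's values.
import numpy as np

var_d = 0.9486                # reported desired-signal variance
R = np.array([[1.1, 0.5],
              [0.5, 1.1]])    # autocorrelation matrix (assumed symmetric)
P = np.array([0.5272, -0.4478])

W = np.linalg.solve(R, P)     # Wiener weights W = R^-1 P
J_min = var_d - W @ P         # Jmin = variance - W'P
```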

Practical Output Results

Desired signal variance = 0.9486
R = [1.1, 0.5212; 0.5212, 1.1] and P = [0.5533, -0.4899]
Weights: W1 = 0.8112, W2 =

Cost Function Graph

Using CMA equalizer, with different input


FACTORS DETERMINING THE CHOICE OF ALGORITHM

Rate of convergence: the number of iterations required for the algorithm, in response to stationary inputs, to converge to the optimum.
Misadjustment: the amount by which the final value of the mean-square error deviates from the minimum mean-square error.
Tracking: the algorithm is required to track statistical variations in a nonstationary environment.
Robustness: for an adaptive filter to be robust, small disturbances can only result in small estimation errors.

CHOICE OF ALGORITHM: Computational requirements.

(a) the number of operations required to make one complete iteration of the algorithm,
(b) the size of the memory locations required to store the data and the program, and
(c) the investment required to program the algorithm on a computer.

THE CHOICE OF ALGORITHM: Numerical properties.

Numerical stability is an inherent characteristic of an adaptive filtering algorithm. Numerical accuracy, on the other hand, is determined by the number of bits used in the numerical representation of data samples and filter coefficients. An adaptive filtering algorithm is said to be numerically robust when it is insensitive to variations in the word length used in its digital implementation.

Applications of adaptive filter

System identification: given an unknown system, the purpose is to design an adaptive filter that provides an approximation of it.
Equalization: given a channel of unknown impulse response, the purpose of an adaptive equalizer is to operate on the channel output such that the cascade connection of the channel and the equalizer approximates an ideal transmission medium.
Predictive coding: adaptive prediction is used to develop a model of the signal of interest, rather than encoding the signal directly.
Noise cancellation: the purpose of an adaptive noise canceller is to subtract noise from a received signal in an adaptively controlled manner.

Conclusion

Linear adaptive filters and linear optimum filters such as the Wiener filter are well suited to recovering the desired signal in channel equalization. For cases where the channel coefficients are not known, Bussgang algorithms such as CMA give a fair estimate of the input signal. For most applications where a probabilistic model for the input signal can be assumed, algorithms like CMA [based on higher-order statistics] can be used satisfactorily.
