
CHANNEL/SYSTEM IDENTIFICATION USING

TOTAL LEAST MEAN SQUARES ALGORITHM


by
Viput Subharngkasen

Final Project
in
ECE 539
Introduction to Artificial Neural Networks and Fuzzy Systems

ABSTRACT

The Total Least Mean Squares (TLMS) algorithm is a recursive technique used to
identify the transfer function of an unknown system from input and output data that are
corrupted by noise. This paper evaluates the accuracy and stability of the TLMS
algorithm. The TLMS algorithm demonstrates better performance than the Least Mean
Squares (LMS) and Total Least Squares (TLS) algorithms: it provides accurate results,
and its computation time is shorter than that of the conventional TLS approach.

TABLE OF CONTENTS

I.   INTRODUCTION
II.  REVIEW OF LITERATURE
         Least Mean Squares (LMS) algorithm
         Total Least Mean Squares (TLMS) algorithm
III. METHODOLOGY
         Part I
         Part II
IV.  RESULTS
         Part I
         Part II
V.   DISCUSSION
VI.  CONCLUSION
VII. REFERENCES
APPENDIX A
APPENDIX B

I. INTRODUCTION

In adaptive signal processing, the adaptive linear combiner is the simplest and
most widely used structure. Its basic properties are time-varying and self-adjusting
performance. Thus, adaptive linear combiner systems are adjustable, and their adaptation
usually depends upon the average values of a finite duration of the input rather than upon
its instantaneous values. Adaptive systems can be used in various applications, such as
prediction, inverse modeling, interference canceling, channel equalization, and system
identification.

In the system identification application, the Least Mean Squares (LMS) algorithm
is widely used to determine the transfer function of an unknown system. Using the inputs
and outputs of that system, the LMS algorithm is applied in an adaptive process based on
the minimum mean squared error. The LMS algorithm is an unsupervised learning
algorithm that is used to extract features from data, and it can be used with both
stationary and non-stationary signals. When there is no interference in either the inputs
or the outputs, or interference only in the outputs of the unknown system, the LMS
algorithm can always achieve the optimal solution. However, if interference exists in
both the input and the output of the unknown system, the LMS algorithm can obtain only
a sub-optimal solution for that unknown system.

The Total Least Squares (TLS) algorithm, proposed in 1901, is also used in
adaptive signal processing to identify the impulse response of an unknown system. The
TLS solution is found by minimizing the Frobenius norm of the matrix formed by
concatenating the input interference matrix and the output interference matrix, and this
minimization is achieved through the singular value decomposition (SVD) of that
combined matrix. The TLS results rest on the assumption that the interference in the
input and the output are independent of each other. The computation of the TLS
algorithm is substantially time consuming; for an N-by-N problem it requires about 6N³
operations per iteration. Like the LMS algorithm, the TLS algorithm cannot find the
global optimal solution when interference exists in both the input and the output signals
of the system.
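For illustration only, the classical SVD-based TLS solution can be sketched in a few lines of MATLAB. This sketch is not part of the original study; the data matrix A, the output vector b, the noise level, and the variable names are assumptions chosen for the example. The estimate is taken from the right singular vector associated with the smallest singular value of the augmented matrix [A b]:

% Illustrative sketch: classical SVD-based TLS solution (names are hypothetical)
h_true = [-0.3; -0.9; 0.8; -0.7; 0.6];        % example 5-tap system
M      = 1000;                                % number of data rows
A      = randn(M, 5);                         % clean input data matrix
b      = A*h_true + 0.1*randn(M, 1);          % noisy output vector
An     = A + 0.1*randn(M, 5);                 % noisy copy of the input data
[~, ~, V] = svd([An b], 'econ');              % SVD of the augmented matrix [A | b]
v      = V(:, end);                           % right singular vector of the smallest singular value
w_tls  = -v(1:end-1) / v(end);                % scale so that the last component equals -1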

One way to find the optimal impulse response of the unknown system when
interference is present in both the input and the output is the Total Least Mean Squares
(TLMS) algorithm. Instead of basing the approach on the minimum mean squared error
as the LMS algorithm does, the TLMS algorithm is based on the total least squares
criterion, or minimum Rayleigh quotient, approach. Like the LMS algorithm, the TLMS
algorithm is an unsupervised learning algorithm. The TLMS algorithm was derived from
Oja and Xu's learning algorithm and, unlike the LMS algorithm, extracts only the minor
features from the data sequence. By extracting only this subsidiary information, the
effect of the interference can be eliminated. Moreover, the TLMS algorithm also has an
advantage over the Total Least Squares (TLS) algorithm in computation time: the TLMS
algorithm requires about 4N operations per iteration for an N-by-N problem, whereas the
TLS algorithm requires about 6N³.

In this paper, I develop a methodology for comparing the LMS and TLMS
algorithms. These comparisons help to assess the accuracy of the two methods and also
show how far the results of each method deviate from the ideal solution.

II. REVIEW OF LITERATURE

Least Mean Squares (LMS) algorithm

The LMS algorithm is simple to compute and widely useful. It performs well if
the adaptive system is an adaptive linear combiner and if both the n-dimensional input
vector X(k) and the desired output d(k) are available at each iteration, where X(k) is

X(k) = [x_1(k), x_2(k), ..., x_n(k)]^T

and the corresponding n-dimensional vector of adjustable weights W(k) is

W(k) = [w_1(k), w_2(k), ..., w_n(k)]^T

Given the input vector X(k), the estimated output y(k) can be computed as a linear
combination of the input vector X(k) and the weight vector W(k):

y(k) = X^T(k) W(k)

Thus, the estimated error e(k), the difference between the desired signal d(k) and the
estimated output y(k), can be computed as

e(k) = d(k) - y(k) = d(k) - X^T(k) W(k)

From a mathematical perspective, finding the optimal solution means minimizing
E{e^2(k)}, where E{·} denotes the expectation. In order to find the minimum value of
e^2(k), let us expand e^2(k):

e^2(k) = d^2(k) - 2 d(k) X^T(k) W(k) + W^T(k) X(k) X^T(k) W(k)

After that, take the expectation of both sides of the equation,

E{e^2(k)} = E{d^2(k)} - 2 E{d(k) X^T(k)} W(k) + W^T(k) E{X(k) X^T(k)} W(k)

Let R be defined as the autocorrelation matrix,

R = E{X(k) X^T(k)}

and also let P be defined as the cross-correlation vector,

P = E{d(k) X(k)}

Thus, E{e^2(k)} can be rewritten as

E{e^2(k)} = E{d^2(k)} - 2 P^T W(k) + W^T(k) R W(k)

Since the LMS algorithm descends on this performance surface, we use e^2(k) to form an
instantaneous estimate of the gradient vector:

∇(k) = ∂e^2(k)/∂W(k) = -2 e(k) X(k)

The steepest descent type of adaptive algorithm then updates the weights as

W(k+1) = W(k) - μ ∇(k)

Replacing ∇(k), we get

W(k+1) = W(k) + 2μ e(k) X(k)

Substituting e(k), we get

W(k+1) = W(k) - 2μ [y(k) - d(k)] X(k)

This equation is the LMS algorithm, which uses the difference between the
desired signal d(k) and the estimated output y(k) to drive each weight tap toward the
impulse response of the unknown system. W(k) is the present value of the weights and
W(k+1) is the new value, which eventually converges. The parameter μ, called the
learning rate, is a positive constant that controls the rate of convergence and the stability
of the algorithm. The learning rate has to be smaller than 1/(2λ_max), where λ_max is the
largest eigenvalue of the correlation matrix R; alternatively, the learning rate can simply
be chosen very small to make sure that the algorithm converges. The smaller the learning
rate, the longer the training sequence needs to be.
Figure 1: LMS algorithm diagram (the inputs x_1(k), ..., x_n(k) are weighted by
w_1(k), ..., w_n(k) and summed to form the output y(k), which is compared with the
desired signal d(k) to form the error e(k))

As shown in the LMS algorithm diagram in Figure 1, the estimated output y(k) is
computed as a linear combination of the input signal X(k) and the weight vector W(k).
The estimated output y(k) is then compared to the desired output d(k) to find the error
e(k). If there is any difference, the error signal is used as a feedback mechanism to adjust
the weight vector W(k). On the other hand, if there is no difference between these
signals, no adjustment is needed, since the estimated output already matches the desired
output.
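To make the recursion concrete, a minimal MATLAB sketch of the LMS identification loop is given below. It is an illustration only, not the simulation code used in this study (that code is in Appendix A); the example system, learning rate, record length, and variable names are assumptions chosen for the example.

% Minimal LMS sketch (illustration only; the full simulation code is in Appendix A)
h_true = [-0.3 -0.9 0.8 -0.7 0.6];          % example unknown system
Ns     = 20000;                              % number of samples
mu     = 0.001;                              % learning rate
x      = randn(1, Ns);                       % input signal
d      = filter(h_true, 1, x);               % desired signal = output of the unknown system
n      = numel(h_true);
w      = zeros(1, n);                        % adaptive weight vector W(k)
for k = n:Ns
    X = x(k:-1:k-n+1)';                      % current input vector X(k)
    e = d(k) - w*X;                          % error e(k) = d(k) - y(k)
    w = w + 2*mu*e*X';                       % LMS update W(k+1) = W(k) + 2*mu*e(k)*X(k)
end                                          % w now approximates h_true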

Total Least Mean Squares (TLMS) algorithm

Let Z(k) be the augmented vector of the input vector X(k) and the desired output d(k),

Z(k) = [X^T(k) | d(k)]^T = [x_1(k), x_2(k), ..., x_n(k), d(k)]^T

Assume that interference exists in both the input and the output, denoted ΔX(k) and Δd(k)
respectively, and define the augmented interference vector ΔZ(k) as

ΔZ(k) = [ΔX^T(k) | Δd(k)]^T = [Δx_1(k), Δx_2(k), ..., Δx_n(k), Δd(k)]^T

Thus, the augmented observation vector can be written as

Z̃(k) = Z(k) + ΔZ(k) = [X̃^T(k) | d̃(k)]^T = [x̃_1(k), ..., x̃_n(k), d̃(k)]^T

where

X̃(k) = X(k) + ΔX(k)
d̃(k) = d(k) + Δd(k)

Recall that the weight vector W(k) is

W(k) = [w_1(k), w_2(k), ..., w_n(k)]^T

Therefore, let the augmented weight vector be

W̃(k) = [W^T(k) | w_{n+1}(k)]^T = [w_1(k), ..., w_n(k), w_{n+1}(k)]^T

In the TLMS algorithm, the estimate of the desired output, y(k), is expressed as a
linear combination of the input X(k) and the weight vector W(k), as in the LMS
algorithm:

y(k) = X^T(k) W(k)

To find the optimal solution, the effect of the interference has to be minimized.
Thus,

min E{ ||[ΔX^T(k) | Δd(k)]^T||_2^2 } = min E{ ||ΔZ(k)||_2^2 }

where ||·||_2 is the Euclidean norm. The above minimization can also be achieved through
the minimization of


min E{ |X̃^T(k) W_TLMS - d̃(k)|^2 }

which, in terms of the augmented vectors, is equivalent (up to the scaling of the weight
vector) to

min E{ |Z̃^T(k) W̃(k)|^2 }    subject to    ||W̃(k)||_2 = η

Note that in the above equation W_TLMS changes at each iteration, and η can be any
positive constant.
When the expectation is expanded, we get

min E{ W̃^T(k) Z̃(k) Z̃^T(k) W̃(k) } = min W̃^T(k) R̃ W̃(k)

where R̃ is the augmented correlation matrix

R̃ = E{ Z̃(k) Z̃^T(k) }

Applying the Lagrange multiplier method, min W̃^T(k) R̃ W̃(k) can be expressed as the
minimization of the cost function

J(k) = W̃^T(k) R̃ W̃(k) + λ ( η^2 - ||W̃(k)||_2^2 )

where λ is a constant (the Lagrange multiplier).
Next, in order to find the minimum of J, differentiate with respect to the weight vector
and set the result to zero:

∂J/∂W̃(k) = 2 R̃ W̃(k) - 2λ W̃(k) = 0

Hence, the solution of this optimization problem is the eigenvector associated with the
smallest eigenvalue of the augmented correlation matrix.
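For illustration only, this batch eigenvector solution can be computed directly in MATLAB; the sketch below is shown for comparison with the recursive TLMS update derived next, and the example system, noise level, and variable names are assumptions chosen for the example (it is not part of the original simulation code).

% Illustrative sketch: batch solution from the augmented correlation matrix (names are hypothetical)
h_true = [-0.3; -0.9; 0.8; -0.7; 0.6];       % example 5-tap system
M  = 20000;                                  % number of samples
x  = randn(1, M);  d = filter(h_true, 1, x); % clean input/output
xn = x + 0.3*randn(1, M);  dn = d + 0.3*randn(1, M);   % noisy observations
Z  = zeros(6, M - 4);                        % augmented observation vectors as columns
for k = 5:M
    Z(:, k-4) = [xn(k:-1:k-4), dn(k)]';      % Z~(k) = [x~(k), ..., x~(k-4), d~(k)]'
end
R_aug  = (Z*Z') / size(Z, 2);                % sample augmented correlation matrix
[V, D] = eig(R_aug);                         % eigen-decomposition
[~, i] = min(diag(D));                       % index of the smallest eigenvalue
w_aug  = V(:, i);                            % corresponding eigenvector
w_hat  = -w_aug(1:5) / w_aug(6);             % scale so that the last component equals -1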
By analogy with the LMS algorithm, the search equation for finding this eigenvector of
the augmented correlation matrix R̃ is

W̃(k+1) = W̃(k) + μ [ W̃(k) - ||W̃(k)||_2^2 R̃ W̃(k) ]

where k is the iteration number. The parameter μ is a positive constant that controls the
rate of convergence and the stability of the algorithm. The augmented correlation matrix
R̃ is positive definite; this makes the term ||W̃(k)||_2^2 R̃ W̃(k) a higher-order decay term,
and hence the weight vector W̃(k) remains bounded.
Recall that the augmented correlation matrix R̃ is

R̃ = E{ Z̃(k) Z̃^T(k) } ≈ (1/M) Σ_{k=1}^{M} Z̃(k) Z̃^T(k)

where M is the total number of samples. In the TLMS algorithm, however, each
instantaneous product Z̃(k) Z̃^T(k) is taken as the estimate of the augmented correlation
matrix at that iteration. The search equation can then be rewritten as

ỹ(k) = Z̃^T(k) W̃(k)

W̃(k+1) = W̃(k) + μ [ W̃(k) - ||W̃(k)||_2^2 ỹ(k) Z̃(k) ]

The above equation is the TLMS algorithm used in the simulation processes in this
study. As a final note, the constant μ has to be very small in order to ensure that the
TLMS algorithm converges. However, if the chosen value of μ is too small, the algorithm
needs a very long training sequence to converge.
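A minimal MATLAB sketch of this recursion is given below for concreteness. It is an illustration only, written with the squared norm exactly as in the update equation above; the example system, noise level, and variable names are assumptions chosen for the example, and the full simulation code used in this study is in Appendix A.

% Minimal TLMS sketch (illustration only; the full simulation code is in Appendix A)
h_true = [-0.3 -0.9 0.8 -0.7 0.6];              % example unknown system
Ns     = 50000;   mu = 0.001;   n = numel(h_true);
x      = randn(1, Ns);                          % clean input
d      = filter(h_true, 1, x);                  % clean output of the unknown system
xn     = x + 0.3*randn(1, Ns);                  % input corrupted by interference
dn     = d + 0.3*randn(1, Ns);                  % output corrupted by interference
w      = randn(1, n+1)/10;                      % augmented weight vector W~(k)
for k = n:Ns
    Z = [xn(k:-1:k-n+1), dn(k)]';               % augmented observation Z~(k)
    y = Z' * w';                                % y~(k) = Z~'(k) W~(k)
    w = w + mu*(w - norm(w)^2 * y * Z');        % TLMS update
end
w_hat = -w(1:n) / w(n+1);                       % scale so that the last component equals -1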


III. METHODOLOGY

Part I

Using the inputs and the outputs of the unknown system, the Least Mean
Squares (LMS) and the Total Least Mean Squares (TLMS) algorithms are employed to
identify its impulse response. In this simulation, inputs and outputs corrupted with
zero-mean Gaussian noise are used. First, signals with a signal-to-noise ratio (SNR) of
10 dB are applied to the algorithms while the lengths of the input and output records are
varied from 1,000 to 100,000 samples. Next, the LMS and TLMS algorithms are used to
compute the transfer function of the unknown system at each record length (see Figure 2).
Figure 2: Flow chart of the simulation process for each SNR in Methodology Part I
(starting at 1,000 samples, compute the transfer function with the TLMS and LMS
algorithms, average the last 100 results, compute the error ||H_Average - H_Ideal||_2,
and increase the number of samples until 100,000 is reached; the errors of both
algorithms are then plotted against the number of samples on a semilog graph)


The last 100 results were averaged. The error of each averaged result with respect
to the ideal solution is then computed by the formula

Error = ||H_Average - H_Ideal||_2

where ||·||_2 is the Euclidean norm.

The errors at each SNR are shown in semi-logarithmic graphs in which the X-axis
represents the number of samples used in the simulations and the Y-axis represents the
error in dB. The process is repeated for SNRs of 5, 2, 0, and -2 dB.
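For concreteness, one way to corrupt a signal at a prescribed SNR and to evaluate the resulting error in dB is sketched below. This is an illustration only, assuming the SNR is defined on a power basis; the exact scaling used in this study is the one in the Appendix A code, and the H_Average values shown here are placeholders.

% Illustrative sketch: zero-mean Gaussian noise at a target SNR, and the error measure in dB
SNR_dB  = 10;
x       = randn(1, 1000);                        % clean signal
Px      = mean(x.^2);                            % signal power
Pn      = Px * 10^(-SNR_dB/10);                  % noise power for the target SNR (power definition)
x_noisy = x + sqrt(Pn)*randn(size(x));           % corrupted signal
H_Average = [-0.30 -0.90 0.81 -0.70 0.62];       % placeholder for the averaged weight estimate
H_Ideal   = [-0.3  -0.9  0.8  -0.7  0.6];        % ideal transfer function used in this study
Err_dB    = 20*log10(norm(H_Average - H_Ideal)); % error expressed in dB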

Part II

To better demonstrate the ability and stability of the TLMS algorithm relative to
the LMS algorithm, a fixed record of 50,000 input and output samples is used in the
second simulation. These inputs and outputs are also corrupted with Gaussian noise, and
fixed SNRs of 10, 5, 2, 0, -2, and -5 dB are used in turn. As in Part I, the simulation
begins with an SNR of 10 dB, and the averages of the last 100 results are computed for
both the LMS and the TLMS techniques. The simulation and computation are then
repeated 50 times in order to find the true averages and variances of the results (see
Figure 3). The whole process is repeated for the different SNRs, and the results at each
SNR are compared to the ideal solution in a table.
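In code form, the repetition step amounts to the loop sketched below (an illustration only; run_once is a dummy stand-in for one full 50,000-sample simulation pass, which in the actual study is the Appendix B code):

% Illustrative sketch: repeat the fixed-length simulation 50 times and summarize the results
H_Ideal  = [-0.3 -0.9 0.8 -0.7 0.6];
run_once = @(N) H_Ideal + 0.05*randn(1, numel(H_Ideal));  % dummy stand-in for one simulation run
T = 50;                                                   % number of repetitions
W = zeros(T, numel(H_Ideal));
for t = 1:T
    W(t, :) = run_once(50000);       % averaged weight estimate from one run of N samples
end
W_mean = mean(W, 1);                 % average of the 50 averaged weight vectors
W_var  = var(W, 0, 1);               % variance of each tap across the 50 runs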


Figure 3: Flow chart of the simulation process for each SNR in Methodology Part II
(with the number of samples fixed at 50,000, compute the transfer function with the
TLMS and LMS algorithms, average the last 100 results, and compute the error
||H_Average - H_Ideal||_2; repeat 50 times, then compute the average of the averages and
its variance)


IV. RESULTS

Part I

In the first simulation, I used the TLMS and LMS algorithms to find the transfer
function of an unknown system from input and output records of varying length. As
mentioned in the methodology section, the last 100 values of the results are averaged, and
these averages represent the estimates of the TLMS and LMS algorithms for each record
length. To determine the performance of the TLMS and LMS algorithms, the errors
between the averaged results of both algorithms and the ideal solution are computed for
every record length as

Error = 20 log10( ||H_Average - H_Ideal||_2 )

where ||·||_2 is the Euclidean distance. At each SNR, the errors of both the LMS
(dash-dot line) and TLMS (solid line) algorithms are shown in semi-logarithmic graphs
in which the X-axis represents the number of samples and the Y-axis represents the error
in dB (see Figures 5-9).

According to the error plots, the TLMS algorithm performs better than the LMS
algorithm after being subjected to a long enough training sequence; in this simulation,
that is about 20,000 samples. Moreover, the errors of the TLMS algorithm appear to
converge after about 50,000 training samples. Even after convergence, however, the
error plots of both the LMS and TLMS algorithms still fluctuate: for the TLMS algorithm
the error moves back and forth in the range of roughly -10 dB to -25 dB, while for the
LMS algorithm it swings around 0 to 5 dB.
Figure 5: The error of the TLMS and LMS algorithms vs. the number of samples at SNR = 10 dB


Figure 6: The error of the TLMS and LMS algorithms vs. the number of samples at SNR = 5 dB
Figure 7: The error of the TLMS and LMS algorithms vs. the number of samples at SNR = 2 dB

Figure 8: The error of the TLMS and LMS algorithms vs. the number of samples at SNR = 0 dB
Figure 9: The error of the TLMS and LMS algorithms vs. the number of samples at SNR = -2 dB


Part II

In the second simulation, I developed a method to compare the ability and the
stability of the TLMS algorithm and the LMS algorithm using a fixed number of input
samples. The simulations in Part I showed that the TLMS algorithm's results converge
once 50,000 samples are used; thus, 50,000 samples are used in this simulation to ensure
the convergence of the TLMS algorithm. Like the first simulation, the last 100 converged
results are used to compute the averages that represent the results of the TLMS and LMS
algorithms. To find the true average and variance of the computed results, I repeated the
simulation 50 times and computed the average of those 50 averages and their variances.
Afterwards, the whole process (see Figure 3) is repeated for the different SNRs to
determine the averages and variances. The results at each SNR are shown in Table 1.

According to Table 1, the averages computed from the TLMS algorithm are much
closer to the ideal solution than the results computed from the LMS algorithm. Already
at an SNR of 10 dB, the average weight computed from the LMS algorithm starts to
deviate from the ideal weight, and for the remaining SNRs the LMS results continue to
deviate further from the ideal transfer function. The results of the TLMS algorithm, in
contrast, remain close to the ideal solution even at low SNR. Furthermore, the variances
of the averages computed from both the TLMS and the LMS algorithms are considerably
small; most of the variances are less than 0.01. This means that, for each algorithm, the
computed averages are close to one another and to their mean.

Table 1: The TLMS and LMS average weights and their variances

SNR = 10 dB
  H1   = [-0.3003, -0.9016, 0.8091, -0.7046, 0.6180]
  Var1 = [0.0020, 0.0035, 0.0028, 0.0028, 0.0026]
  H2   = [-0.2270, -0.6757, 0.6033, -0.5262, 0.4583]
  Var2 = [0.0038, 0.0023, 0.0025, 0.0032, 0.0023]

SNR = 5 dB
  H1   = [-0.3054, -0.9014, 0.7969, -0.7044, 0.5966]
  Var1 = [0.0024, 0.0032, 0.0044, 0.0030, 0.0033]
  H2   = [-0.1977, -0.5550, 0.4932, -0.4351, 0.3714]
  Var2 = [0.0053, 0.0039, 0.0037, 0.0049, 0.0049]

SNR = 2 dB
  H1   = [-0.2924, -0.9080, 0.8053, -0.7016, 0.6054]
  Var1 = [0.0038, 0.0049, 0.0052, 0.0055, 0.0041]
  H2   = [-0.1538, -0.5097, 0.4336, -0.3777, 0.3291]
  Var2 = [0.0051, 0.0054, 0.0046, 0.0055, 0.0050]

SNR = 0 dB
  H1   = [-0.3148, -0.8988, 0.7875, -0.6937, 0.6152]
  Var1 = [0.0030, 0.0043, 0.0049, 0.0048, 0.0039]
  H2   = [-0.1678, -0.4230, 0.3912, -0.3481, 0.3209]
  Var2 = [0.0050, 0.0043, 0.0069, 0.0059, 0.0039]

SNR = -2 dB
  H1   = [-0.2983, -0.8934, 0.8041, -0.6942, 0.5920]
  Var1 = [0.0051, 0.0057, 0.0050, 0.0046, 0.0047]
  H2   = [-0.1384, -0.3882, 0.3428, -0.3088, 0.2382]
  Var2 = [0.0090, 0.0085, 0.0080, 0.0095, 0.0065]

SNR = -5 dB
  H1   = [-0.2966, -0.8950, 0.7934, -0.7026, 0.5984]
  Var1 = [0.0039, 0.0099, 0.0069, 0.0056, 0.0044]
  H2   = [-0.0869, -0.3227, 0.2699, -0.2470, 0.2173]
  Var2 = [0.0105, 0.0106, 0.0102, 0.0097, 0.0072]

Notes:
1. Ideal transfer function = [-0.3, -0.9, 0.8, -0.7, 0.6]
2. Number of samples = 50,000
3. μ_TLMS = 0.001
4. μ_LMS = 0.001
5. H1 = TLMS average weight
6. H2 = LMS average weight
7. Var1 = Variance of TLMS weight
8. Var2 = Variance of LMS weight


V. DISCUSSION

According to the results shown in the previous section, the performance of the
Total Least Mean Squares (TLMS) algorithm is superior to that of the Least Mean
Squares (LMS) algorithm. From Figures 5-9, the errors resulting from the TLMS
algorithm are minimized after about 30,000 samples, whereas the LMS algorithm's errors
cannot be minimized even when larger numbers of samples are used; recall that these
simulations employ up to 100,000 samples.

When the SNR is decreased, the errors of both the TLMS and LMS algorithms
increase. Although both increase, only the errors of the TLMS algorithm are eventually
reduced to an acceptable value: the TLMS errors average around -15 dB, which is
considerably small, while the errors computed from the LMS algorithm average around
0 dB.

As shown in Figures 5-9, the errors of the simulations fluctuate. This variation is
due to the nature of the Gaussian interference, and reducing its effect requires more
simulations in order to find the true average result. As a consequence, the simulation in
Part II was developed. From Part I, I learned that the errors of the TLMS algorithm
converge after about 50,000 training samples; therefore, 50,000 samples are used for the
second simulation. In order to find the true averages of the results, the simulation is
repeated 50 times, as mentioned in the methodology section.


In Table 1, the results show that the averages computed from the TLMS algorithm
are closer to the ideal solution than those computed from the LMS algorithm. The
variances of both algorithms are significantly small, which means that the averages of
each algorithm do not vary much from one another or from their mean; these small
variances support the stability of both algorithms. However, the TLMS algorithm has
smaller variances than the LMS algorithm, which implies that the TLMS algorithm is
more stable in its computation. Although the LMS algorithm gives small variances, its
average results still deviate significantly from the desired values. As mentioned in the
introduction, the LMS algorithm by nature extracts features from the input signals; under
heavy noise corruption it cannot extract this information, because it extracts the noise
effects from the signal instead of the original information (the input signals). Therefore,
the LMS algorithm gives poor performance.

On the other hand, the TLMS algorithm provides even smaller variances, and its
average results are much closer to the ideal solution. Because the TLMS algorithm,
unlike the LMS algorithm, extracts only the minor features from the signals, it
outperforms the LMS algorithm in both ability and stability. Consequently, the TLMS
algorithm is recommended for system identification applications in most situations.


In general, when no interference is involved, both the TLMS and LMS algorithms
can be used to achieve the global optimal solution, although the LMS algorithm's
computation is slightly faster than the TLMS algorithm's. On the contrary, when
interference is present, only the TLMS algorithm produces the desired results, whereas
the LMS algorithm offers unacceptable solutions (see Table 1).

One of the difficulties in using the TLMS algorithm is choosing an appropriate
value of the learning rate, μ. If the selected μ is too small, the TLMS algorithm needs a
substantial number of samples to converge; however, if the value of μ is too large, the
algorithm will fail to converge. For that reason, we need to trade off between the value
of the learning rate μ and the number of samples used in the simulation. As a result, I
recommend that further study be conducted to discover the best way to select an
appropriate learning rate in order to fully exploit the efficiency of the TLMS algorithm.


VI. CONCLUSION

The Total Least Mean Squares (TLMS) algorithm is an unsupervised learning
algorithm for the adaptive linear combiner based on the total least squares, or minimum
Rayleigh quotient, criterion, which extracts the minor features from training sequences.
This algorithm is best used in situations where interference is present in both the input
and output signals of the system. According to the simulations in this study, the TLMS
algorithm provides better ability and stability, and a more accurate identification of the
transfer function of the unknown system, than the LMS algorithm does. In addition, the
TLMS algorithm's computation, about 4N operations per iteration, is also faster than the
TLS algorithm's 6N³ per iteration. However, one of the problems of using the TLMS
algorithm is the difficulty of selecting a suitable learning rate, μ. In conclusion, the
advantages of the TLMS algorithm are its ability to perform in a system that is heavily
corrupted with noise and the stability of its results.


VII. REFERENCES

1. Abatzoglou, T.J., Mendel, J.M., and Harada, G.A. (1991). The constrained total
least squares technique and its application to harmonic superresolution. IEEE
Transactions on Signal Processing, vol. 39, pp. 1053-1086.
2. Feng, D.Z., Bao, Z., and Jiao, L.C. (1998). Total least mean squares algorithm. IEEE
Transactions on Signal Processing, vol. 46, pp. 2122-2130.
3. Mendel, J.M. (1995). Lessons in Estimation Theory for Signal Processing,
Communications, and Control. Englewood Cliffs, NJ: Prentice-Hall.
4. Proakis, J.G., Rader, C.M., Ling, F., and Nikias, C.L. (1992). Advanced Digital
Signal Processing. New York, NY: Macmillan.
5. Widrow, B. and Stearns, S.D. (1985). Adaptive Signal Processing. Upper Saddle
River, NJ: Prentice-Hall.


APPENDIX A

Part 1: The simulation in Methodology Part I

This program uses the TLMS and LMS algorithms to identify the system (Hi),
which is corrupted with noise at an SNR equal to s. The program simulates from
1,000 samples up to the maximum number of samples. The results are the transfer
functions computed by the TLMS and LMS algorithms together with their errors, and a
plot showing the relation between the errors and the number of samples at the selected SNR.

Input
  Hi     : Transfer function used in the simulation.
  s      : Signal-to-noise ratio.
  maxnum : Maximum number of samples in the simulation.
  u_tlms : The learning rate of the TLMS algorithm.
  u_lms  : The learning rate of the LMS algorithm.

Output
  h1 : Transfer function computed from the TLMS algorithm.
  h2 : Transfer function computed from the LMS algorithm.
  e1 : Error in dB of the TLMS algorithm's results.
  e2 : Error in dB of the LMS algorithm's results.


function [h1,h2,e1,e2] = part1(Hi,s,maxnum,u_tlms,u_lms)


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Part1 : The simulation in Methodology part 1
%         Find the error of the TLMS and LMS algorithms vs.
%         the number of samples at the different SNRs.
%
% Input
%   Hi     : Transfer function used in the simulation.
%   s      : Signal to Noise Ratio.
%   maxnum : Maximum number of samples in the simulation.
%   u_tlms : The learning rate of the TLMS algorithm.
%   u_lms  : The learning rate of the LMS algorithm.
%
% Output
%   h1 : Transfer function computed from the TLMS algorithm.
%   h2 : Transfer function computed from the LMS algorithm.
%   e1 : Error in dB of the TLMS algorithm's results.
%   e2 : Error in dB of the LMS algorithm's results.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Ideal System
H_Ideal = Hi;
% Signal to Noise ratio
SNR = s
% Maximum number of the simulation
MaxNumber = maxnum
% Learning rate for TLMS and LMS algorithms
mu_tlms = u_tlms
mu_lms = u_lms
% Number of sample using in the simulation
num1 = 1000:1000:9000;
num2 = 10000:2000:MaxNumber;
Number = [num1, num2];
t = length(Number);
Taps = length(H_Ideal);
PP = 1;
Last = 50;
AA = eye(Taps);
AA = [AA, zeros(Taps,1)];
H_TLMS = [];
H_LMS = [];
Error_TLMS = [];
Error_LMS = [];


for i = 1:length(SNR),
for j = 1:t,
Signal = PP*randn(1,Number(j));
% Find the Power of the Signal
Power = 0;
for k = 1:Number(j),
Power = Power + Signal(k)^2;
end
Power = Power/Number(j);
% Noise
% SNR = 20 log(Px/No)
% No = Px*10^(-SNR/20)
No = Power* 10.^(-SNR/10);
% Ideal Signal come out of the system
DesireSignal = conv(H_Ideal,Signal);
Signal = Signal + No*randn(1,Number(j));
DesireSignal = DesireSignal + No*norm(H_Ideal)^2*(randn(1,length(DesireSignal)));
% Beginning the weight with random number
h_tlms = zeros(Number(j)-Taps+1,Taps+1);
h_tlms = [randn(1,Taps+1)/10; h_tlms];
h_lms = zeros(Number(j),Taps);
for k = 1:Number(j)-Taps+1,
X = [];
for q = 1:Taps,
X = [X; Signal(k+Taps-q)];
end
Z = [X; DesireSignal(k+Taps-1)];
Y = Z'*h_tlms(k,:)';
h_tlms(k+1,:) = h_tlms(k,:) + mu_tlms*(h_tlms(k,:) - norm(h_tlms(k,:))*Y*Z');   % TLMS weight update
Error = DesireSignal(k+Taps-1) - h_lms(k,:)*X;
h_lms(k+1,:) = h_lms(k,:) + mu_lms*Error*X';
end
W1 = zeros(Taps+1,1);
for k = 1:Taps+1,
for q = Number(j)-Last-Taps:Number(j)-Taps,
W1(k) = W1(k) + h_tlms(q,k);
end
end
H1 = W1/Last;
H1 = H1/(-H1(6));
H1 = AA*H1;
H_TLMS = [H_TLMS H1];
E1 = norm(H_Ideal - H1);


Error_TLMS = [Error_TLMS, E1];


W2 = zeros(Taps,1);
for k = 1:Taps,
for q = Number(j)-Last:Number(j),
W2(k) = W2(k) + h_lms(q,k);
end
end
H2 = W2/Last;
H_LMS = [H_LMS H2];
E2 = norm(H_Ideal - H2);
Error_LMS = [Error_LMS, E2];
end
end
h1 = H_TLMS
h2 = H_LMS
e1 = 20*log10(Error_TLMS)
e2 = 20*log10(Error_LMS)

figure;
plot(Number,e1,'-',Number,e2,'-.');
title(['TLMS Vs LMS Algorithm at SNR = ' num2str(SNR)]);   % label the plot with the SNR used
xlabel('Number of samples');
ylabel('Error in dB');
legend('Error TLMS','Error LMS');


APPENDIX B

Part 2: The simulation in Methodology Part II

This program uses the TLMS and LMS algorithms to identify the transfer
function of the system (Hi), which is corrupted with noise at an SNR equal to s, using a
fixed number of samples (num). The program repeats the simulation T times. The results
are the average transfer functions computed by the TLMS and LMS algorithms together
with their variances at the selected SNR.

Input
  Hi     : Transfer function used in the simulation.
  s      : Signal-to-noise ratio.
  num    : Number of samples used in the simulation.
  u_tlms : The learning rate of the TLMS algorithm.
  u_lms  : The learning rate of the LMS algorithm.
  T      : Number of repetitions.

Output
  h1 : Average transfer function computed from the TLMS algorithm.
  h2 : Average transfer function computed from the LMS algorithm.
  v1 : Variance of the transfer function computed by the TLMS algorithm.
  v2 : Variance of the transfer function computed by the LMS algorithm.


function [h1,h2,v1,v2] = part2(Hi,s,num,u_tlms,u_lms,T)


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Part2 : The simulation in Methodology part 2
%         Find the average transfer function of the TLMS and LMS
%         algorithms and their variances.
%
% Input
%   Hi     : Transfer function used in the simulation.
%   s      : Signal to Noise Ratio.
%   num    : Number of samples used in the simulation.
%   u_tlms : The learning rate of the TLMS algorithm.
%   u_lms  : The learning rate of the LMS algorithm.
%   T      : Number of repetitions used to compute the average.
%
% Output
%   h1 : Average transfer function computed from the TLMS algorithm.
%   h2 : Average transfer function computed from the LMS algorithm.
%   v1 : Variance of the transfer function computed by the TLMS alg.
%   v2 : Variance of the transfer function computed by the LMS alg.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Ideal System
H_Ideal = Hi
% Signal to Noise ratio
SNR = s
% Number of sample using in the simulation
Number = num
% Learning rate for TLMS and LMS algorithms
mu_tlms = u_tlms
mu_lms = u_lms
time = T;
Last = 50;
Taps = length(H_Ideal);
PP = 1;
AA = eye(Taps);
AA = [AA, zeros(Taps,1)];
H_T = [];
H_L = [];
E_T = [];
E_L = [];
% Note: the two lines below override the u_tlms and u_lms input arguments with 0.001.
mu_tlms = 0.001;
mu_lms = 0.001;
for i = 1:length(SNR),


for iteration = 1:time,


Signal = PP*randn(1,Number);
% Find the Power of the Signal
Power = 0;
for k = 1:Number,
Power = Power + Signal(k)^2;
end
Power = Power/Number;
% Noise
% SNR = 20 log(Px/No)
% No = Px*10^(-SNR/20)
No = Power* 10.^(-SNR/20);
% Ideal Signal come out of the system
DesireSignal = conv(H_Ideal,Signal);
Signal = Signal + sqrt(No)*randn(1,Number);
DesireSignal = DesireSignal + sqrt(No)*randn(1,length(DesireSignal));
h_tlms = zeros(Number-Taps+1,Taps+1);
h_tlms = [randn(1,Taps+1)/10; h_tlms];
h_lms = zeros(Number,Taps);
for k = 1:Number-Taps+1,
X = [];
for q = 1:Taps,
X = [X; Signal(k+Taps-q)];
end
Z = [X; DesireSignal(k+Taps-1)];
Y = Z'*h_tlms(k,:)';
h_tlms(k+1,:) = h_tlms(k,:) + mu_tlms*(h_tlms(k,:) - norm(h_tlms(k,:))*Y*Z');   % TLMS weight update
Error = DesireSignal(k+Taps-1) - h_lms(k,:)*X;
h_lms(k+1,:) = h_lms(k,:) + mu_lms*Error*X';
end
W1 = zeros(Taps+1,1);
for k = 1:Taps+1,
for q = Number-Last-Taps:Number-Taps,
W1(k) = W1(k) + h_tlms(q,k);
end
end
H1 = W1/Last;
H1 = H1/(-H1(6));
H1 = AA*H1;
H_T = [H_T, H1];
E1 = norm(H_Ideal - H1);


E_T = [E_T, E1];


W2 = zeros(Taps,1);
for k = 1:Taps,
for q = Number-Last:Number,
W2(k) = W2(k) + h_lms(q,k);
end
end
H2 = W2/Last;
H_L = [H_L, H2];
E2 = norm(H_Ideal - H2);
E_L = [E_L, E2];
end
end
h1 = sum(H_T')/time
h2 = sum(H_L')/time
v1 = var(H_T')
v2 = var(H_L')

