ADAPTIVE FILTERS IN MATLAB: FROM NOVICE TO EXPERT

Scott C. Douglas(1)    Ricardo Losada(2)

(1) Department of Electrical Engineering, Southern Methodist University, Dallas, Texas 75275 USA
(2) DSP Development Group, The MathWorks, Inc., Natick, Massachusetts 01760 USA

ABSTRACT

Adaptive filters are ubiquitous tools for numerous real-world scientific and industrial applications. Many educators and practitioners employ the MATLAB technical computing environment to implement and study adaptive filters. This paper describes the design and implementation issues regarding a recently-developed set of comprehensive MATLAB adaptive FIR filtering tools. In addition to a complete suite of algorithms, the tool set includes analysis functions that enable users to quickly characterize the average performance of selected algorithms when limited data are available. We provide execution speed comparisons to guide users in algorithm selection when MATLAB execution time is most critical.

1. INTRODUCTION

Since the development of the least-mean-square (LMS) adaptive filter over forty years ago, adaptive filters have become well-known signal processing tools for numerous applications in communications, signal processing, and control. The widespread use of adaptive filters has created a strong need for educational materials that allow practicing engineers to learn about and explore their use. Many texts have been written on adaptive filters, including [10, 17, 29]. Many are used in senior and graduate-level courses on adaptive filters in colleges and universities worldwide.

Computer simulation is an important learning tool for understanding how adaptive filters work. Simulations give students an opportunity to understand how signal statistics affect the convergence properties of these methods and to study implementation issues. Many educators and practitioners find the technical computing software package MATLAB to be an ideal environment in which to perform these studies.
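For concreteness, the basic LMS recursion that such simulations explore can be sketched in a few lines. The snippet below is an illustrative Python/NumPy translation rather than the toolset's MATLAB code; the function name, signal model, and parameter values are hypothetical:

```python
import numpy as np

def lms(x, d, L, mu):
    """Sample-by-sample LMS adaptive FIR filter (illustrative sketch)."""
    w = np.zeros(L)                      # filter coefficients
    u = np.zeros(L)                      # delay line: [x[n], x[n-1], ...]
    y = np.zeros(len(x))                 # filter output
    e = np.zeros(len(x))                 # error signal
    for n in range(len(x)):
        u = np.concatenate(([x[n]], u[:-1]))   # shift in the new sample
        y[n] = w @ u                     # filter the input
        e[n] = d[n] - y[n]               # compare with the desired response
        w = w + mu * e[n] * u            # LMS coefficient update
    return y, e, w

# Hypothetical system-identification setup: recover a 4-tap response h
# from a white input and a noisy desired-response signal.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.4, 0.3, 0.2])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
y, e, w = lms(x, d, L=4, mu=0.01)
```

With a white, unit-variance input and small observation noise, the coefficient vector w converges toward the unknown response h at a rate governed by the step size mu, which is exactly the behavior such simulations are used to study.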
The "language" of linear algebra that MATLAB uses makes it a natural adaptive filtering simulation environment. The pseudo-random sequence generators included in MATLAB are extremely useful for making synthetic signals for adaptive processing. Some adaptive filtering textbook authors (cf. [29]) have based their computer simulations on MATLAB. The Filter Design Toolbox shipped with Version 6.5 of MATLAB contains only a few adaptive filtering functions, including the LMS, normalized LMS, and conventional RLS algorithms.

This paper describes the design and implementation issues regarding a recently-developed definitive set of comprehensive MATLAB adaptive FIR filtering tools.(1) The algorithms in this tool set include gradient and signed LMS variants; recursive least-squares methods in conventional, square root, lattice, and fast transversal filter forms; frequency domain, transform domain, and subband approaches; and fast affine projection algorithms. The algorithms run efficiently in MATLAB; in some cases, the new algorithm versions are over one hundred times faster than their old versions. Analysis functions enable users to quickly characterize the average performance of selected algorithms when limited data are available. We provide a simple example to show how this analysis functionality can aid in the understanding of an adaptive filter's behavior.

(1) To be included in the next release of the Filter Design Toolbox for MATLAB.

0-7803-8116-5/02/$17.00 ©2002 IEEE.

2. ALGORITHM DESIGN ISSUES

The design of any large algorithm toolset in software requires careful planning. Specifications as to which algorithms to implement, their desired features, and the way they are likely to be used must be carefully delineated. Once the algorithmic functionality has been specified, similarities between algorithms should then be exploited to reduce coding time and make future maintenance easier.
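As an illustration of how such similarities between algorithms can be exploited, methods like LMS and NLMS differ only in how the step size is scaled, so both can share a single update kernel. The sketch below is in Python/NumPy rather than MATLAB, and the helper names are hypothetical:

```python
import numpy as np

def _adapt(x, d, w, mu, normalize=False, eps=1e-8):
    """Shared update kernel; LMS and NLMS differ only in step-size scaling."""
    e = np.zeros(len(x))
    u = np.zeros(len(w))
    for n in range(len(x)):
        u = np.concatenate(([x[n]], u[:-1]))   # update tapped delay line
        e[n] = d[n] - w @ u                    # a priori error
        step = mu / (eps + u @ u) if normalize else mu
        w = w + step * e[n] * u                # gradient-style update
    return w, e

def lms(x, d, L, mu):
    return _adapt(x, d, np.zeros(L), mu)

def nlms(x, d, L, mu):
    return _adapt(x, d, np.zeros(L), mu, normalize=True)

# Both variants identify the same hypothetical 4-tap system:
rng = np.random.default_rng(0)
h = np.array([0.5, -0.4, 0.3, 0.2])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_lms, _ = lms(x, d, 4, 0.02)
w_nlms, _ = nlms(x, d, 4, 0.5)
```

Sharing one kernel in this way reduces coding time and means a bug fix or speedup in the common path benefits every variant at once, which is the maintenance advantage the design aims for.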
Enhancements to the tools, such as demonstrations, should also be considered.

The design of the adaptive filter functionality began with a specification for which algorithms to include in the toolset. A taxonomy of adaptive filtering algorithms was developed, as shown in Fig. 1. This connected graph depicts all of the algorithms chosen for implementation. The nodes on the right contain stochastic gradient and related methods, whereas the nodes on the left are deterministic (projection and least-squares) methods. The links between the methods indicate similarities between algorithm pairs in terms of implementation structure or cost function. This taxonomy has a number of interesting features:

- The most-popular and widely-used algorithms (LMS, RLS, and NLMS) are located in the center and have numerous links to related methods. Most special-purpose algorithms, such as FTF, FAP, adjoint LMS, and partitioned block FDAF, appear near the geographical limits of the graph.
- There are two distinct classes of adaptive algorithms (stochastic and deterministic methods) with relatively few links between them. Within these algorithm classes are clusters of algorithms that are similar either in implementation or in the principles underlying their derivation.

We can use the natural clustering provided by this taxonomy to group the various algorithms implemented in this toolset. Table 1 lists all of these adaptive algorithms.

Fig. 1: Taxonomy of adaptive filters; deterministic methods (left) and stochastic methods (right). ©2002, S. C. Douglas. Used with permission.

Table 1: Adaptive algorithms within the adaptive filters toolset.

  Gradient methods (single update):
    LMS       least-mean-square
    NLMS      normalized LMS
    SD        sign-data
    SE        sign-error
    SS        sign-sign
    FXLMS     filtered-X LMS
    ADJLMS    adjoint LMS

  Gradient methods (delayed update):
    DLMS      delayed LMS

  Block and frequency-domain methods:
    BLMS      block LMS
    BLMSFFT   block LMS (FFT-based)
    FDAF      frequency-domain adaptive filter
    UFDAF     unconstrained FDAF
    PBFDAF    partitioned-block FDAF
    PBUFDAF   partitioned-block unconstrained FDAF

  Transform-domain and subband methods:
    TDAFDFT   transform-domain adaptive filter (DFT)
    TDAFDCT   transform-domain adaptive filter (DCT)
    SBAF      subband adaptive filter

  Projection methods:
    AP        affine projection, O(N^3)
    AP2       affine projection, O(N^2)
    FAP       fast affine projection
    BAP       block affine projection

  Recursive least-squares methods:
    RLS       recursive least-squares
    SWRLS     sliding-window RLS
    HRLS      Householder RLS
    HSWRLS    Householder sliding-window RLS
    QRDRLS    QR-decomposition RLS
    FTF       fast transversal filter
    SWFTF     sliding-window fast transversal filter
    LSL       least-squares lattice
    QRDLSL    QR-decomposition least-squares lattice

This taxonomy was also used for the technical documentation within the algorithm files; the "see also" references within the help resources employ this graph's structure.

With the algorithm list defined, the core functionality of each algorithm was then implemented as a single inline .m program in MATLAB. Great care was taken to ensure that the implementations ran as fast as possible under the current operating structure of MATLAB. All of the algorithms employ state variables that allow one to partition signals into blocks to manage memory issues within MATLAB. In addition, all algorithms save for the sign-data and sign-sign algorithms have been tested and work for complex-valued signals and coefficients. Other implementation notes:

1. BLMSFFT uses overlap-save fast convolution for its implementation, whereas BLMS employs traditional convolution using the filter command in MATLAB. The latter implementation allows an arbitrary block size to provide much finer control of the computational speedup for different block sizes and filter lengths.

2. The three affine projection algorithms AP, AP2, and FAP differ in the way the input signal autocorrelation information needed for the N-dimensional projection is updated. AP uses direct inversion of the (N x N) input signal autocorrelation matrix.
AP2 uses a pair of Kalman-gain-type updates to propagate the autocorrelation matrix inverse. Finally, FAP uses least-squares linear prediction via an embedded sliding-window FTF algorithm.

3. The transform-domain adaptive filtering algorithms TDAFDFT and TDAFDCT differ in the choice of sliding transform used. Both algorithms use efficient recursive estimators for the DFT or DCT input signal frequency bin values.

4. The SBAF algorithm is a special-purpose function that can be called with one or more of the other adaptive filtering functions along with prototype analysis and synthesis filters for the filter bank structures. This algorithm then runs individual adaptive filters on the individual input and desired response signals generated from the subband structures.

The rich choice of algorithms means that one has many possibilities if implementation structure is not an issue. For example, six different functions (RLS, HRLS, QRDRLS, FTF, LSL, and QRDLSL) implement an exponentially-windowed recursive least-squares adaptive filter. Three different versions of the affine projection algorithm are also available. Studies described later provide some guidance in algorithm selection in such cases.

3. ANALYSIS FUNCTIONS

Adaptive filtering has been called a "twiddler's paradise," whereby the multitude of algorithm choices is dwarfed only by the near-infinite choices of parameter values within each algorithm. An unaided novice user of adaptive filters will invariably run an adaptive algorithm to divergence with out-of-range parameter values. Alternatively, he or she will choose conservative values, resulting in an adaptive filter that "isn't doing anything." Textbooks provide answers to such difficulties, but they can potentially be avoided with the right assistive tools.

To this end, a set of analysis tools has been developed: five of the adaptive filters (LMS, NLMS, BLMS, BLMSFFT, and SE) have optional analysis functions. These tools encapsulate existing knowledge about how these adaptive filters work [7, 9, 11, 12, 20]. Each of these procedures calculates the statistical information needed to evaluate the adaptive filter analytically, such as the input signal autocorrelation matrix, the optimum coefficient vector, and the minimum MSE, from the signals provided. These estimates are then used to calculate maximum step size values, predictions of mean-squared error (MSE) at each time instant, the mean coefficient values at each time instant, predictions of the coefficient error powers at each time instant, and final total and excess MSEs at convergence. Another general-purpose function provided in the toolset allows the user to calculate ensemble-averaged values of the aforementioned quantities for any adaptive filter supported in the toolset. The user can then plot the ensemble-averaged curves against the predicted values to check the accuracy of the analytical results. The analysis tools also have some convenient built-in features that help the novice user avoid mistakes. For example, the analysis functions return a warning if the chosen step size exceeds half of the maximum step size value predicted from the mean or mean-square analysis, depending on which one is called.

Fig. 2: Analysis of a 32-coefficient LMS adaptive filter. (a) Sample coefficient trajectories computed from analysis and estimated from simulations. (b) Mean-square error performance. (c) Sum of squared coefficient errors.

Shown in Fig. 2 are the results produced by the LMS adaptive filtering and analysis functionality, where a 32-coefficient LMS adaptive filter with correlated Gaussian input and desired response signals is being studied. All input signals were created such that E{x(n)x(n-l)} = 0.5^|l|. The desired response signals were created using a noisy system identification model corresponding to a low-pass FIR filter with digital cutoff frequency w = 0.5*pi and an observation noise variance of 0.01. For the adaptive filter, mu = 0.008 and zero initial coefficients were chosen.
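The flavor of what such an analysis pass computes can be sketched as follows. For the LMS filter with input autocorrelation matrix R and input/desired-response cross-correlation vector p, independence theory predicts the mean coefficient recursion E{w(n+1)} = (I - mu*R) E{w(n)} + mu*p, which can be iterated once and compared against an ensemble average of actual LMS runs. The snippet below is an illustrative Python/NumPy sketch, not the toolbox's MATLAB analysis function; the system, signal model, and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
L, mu, N, runs = 4, 0.05, 400, 200
h = np.array([0.4, -0.3, 0.2, 0.1])   # hypothetical system to identify
R = np.eye(L)                          # autocorrelation of a white, unit-variance input
p = R @ h                              # cross-correlation with the desired response

# "Analysis pass": iterate the predicted mean-coefficient recursion once.
theory = np.zeros((N, L))
w_mean = np.zeros(L)
for n in range(N):
    theory[n] = w_mean
    w_mean = (np.eye(L) - mu * R) @ w_mean + mu * p

# Ensemble average of actual LMS runs for comparison.
average = np.zeros((N, L))
for _ in range(runs):
    w = np.zeros(L)
    x = rng.standard_normal(N + L)
    for n in range(N):
        u = x[n:n + L][::-1]                       # delay line at this instant
        d = h @ u + 0.01 * rng.standard_normal()   # noisy desired response
        average[n] += w / runs                     # record before the update
        w = w + mu * (d - w @ u) * u               # LMS update
```

The predicted trajectory costs one pass, while the ensemble average costs hundreds of simulation runs; the close agreement between the two is what plots such as Fig. 2 display.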
The theoretical curves are computed from one pass of the analysis function, whereas the simulation curves are computed by fifty simulation runs using the adaptive filter function. The closeness of the corresponding curves shows that the analysis is accurate for the chosen step size and data statistics. These results can be starting points for further explorations with different signals, step sizes, and filter lengths. This study does not take significant computing time; the generation of these results, including all signals and plots, took 25.2 seconds on a 700 MHz Pentium laptop computer.

Fig. 3: Execution times for six different RLS algorithms as a function of filter length over 2000-sample signal sets.

4. INITIALIZATION ISSUES

The most important internal variables of any adaptive FIR filter are the coefficient values being updated at each time instant and the stored input signal samples used to compute the output and error signals from the coefficients. All but the simplest gradient methods, however, have additional state variables that specify the adaptive filter's current operating point. For example, all direct-form least-squares adaptive filters have state variables associated with the updating of the Kalman gain vector, such as the inverse of the exponentially-weighted input signal autocorrelation matrix. In some cases, the number of these internal variables is large; for example, the FAP algorithm has 3L + 10N - 8 such internal variables for an L-coefficient filter and a projection order of N. Keeping track of these variables and their values can be problematic without a systematic procedure for handling them within the toolset. In addition, certain algorithms, such as the fast transversal filter, require special care in the initialization of their state variables or divergence will occur.

As part of the adaptive filters toolset, we have developed a set of initialization routines for each adaptive filter.
These programs create a structure variable for the adaptive filter that includes placeholders for all necessary state information for the adaptive filter's operation. Wherever possible, we have also included default values that represent the most-neutral choices for their values. For example, when calling the FAP algorithm, one only needs to specify (i) the initial coefficient vector, (ii) the step size, (iii) the projection offset, and (iv) the (N-1)th-order input signal cross-correlation vector whose length determines the projection order. The help resources for this function suggest an all-zero vector for this cross-correlation vector as well. The initialization of the sliding-window FTF algorithm embedded within the FAP algorithm is built into the initialization routine. At any time during the adaptive filter's operation, the values of these various states are accessible as well, allowing one to track diagnostic information if needed.

The use of a state variable structure also allows one to handle memory issues with care when local memory is at a premium. Every adaptive filtering routine has been tested to run in a block fashion, where the input and desired response signals are parsed into blocks and the adaptive filter successively processes each block.

Fig. 4: Execution times for the AP, AP2, and FAP algorithms as a function of projection order over 10000-sample signal sets, where L = 100.

5. PERFORMANCE RELATIONSHIPS

Many signal processing experts use MATLAB to prototype systems and methods. The execution time associated with a prototype numerical procedure is a primary concern in such explorations; one doesn't want to waste time waiting for numerical results to become available. Our adaptive filtering tools have been designed to provide the fastest execution time possible without going to extreme steps such as loop unrolling. We also have provided several implementations of a particular approach wherever possible to allow the user a choice of algorithms.
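The kind of execution-time trade-off measured in these studies is easy to reproduce. The sketch below (in Python/NumPy for illustration; the toolset's MATLAB functions are not used here) compares a sample-by-sample FIR loop with an equivalent vectorized block convolution: both produce the same output, but the block version is typically far faster in an interpreted environment:

```python
import time
import numpy as np

def fir_loop(x, h):
    """Direct-form FIR filter computed sample by sample (slow when interpreted)."""
    L = len(h)
    hr = h[::-1]                                   # reversed taps for slicing
    xp = np.concatenate((np.zeros(L - 1), x))      # zero prehistory
    y = np.empty(len(x))
    for n in range(len(x)):
        y[n] = hr @ xp[n:n + L]                    # y[n] = sum_k h[k] x[n-k]
    return y

def fir_block(x, h):
    """Equivalent result via the library's fast block convolution."""
    return np.convolve(x, h)[:len(x)]

x = np.random.default_rng(2).standard_normal(20000)
h = np.ones(64) / 64.0
t0 = time.perf_counter()
y_loop = fir_loop(x, h)
t1 = time.perf_counter()
y_block = fir_block(x, h)
t2 = time.perf_counter()
loop_time, block_time = t1 - t0, t2 - t1
```

The two outputs agree to numerical precision, so the choice between implementations is purely one of speed, which is the basis for the comparisons reported below.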
Fig. 3 shows the execution times of six different RLS algorithms as a function of filter length over 2000-sample signal records on the aforementioned 700 MHz laptop computer. All algorithms give identical behavior up to numerical errors with an equivalent initialization and a unity forgetting factor. While the O(L) stabilized FTF algorithm is more efficient for longer filter lengths (L > 26), both the conventional RLS and (square root) Householder RLS algorithms run faster for shorter filter lengths and are to be preferred over the "faster" FTF algorithm if MATLAB execution time is of primary concern.

Fig. 4 shows the execution times of the AP, AP2, and FAP versions of the affine projection algorithm as a function of projection order N for an L = 100-tap filter over 10000-sample records. In this example, the "brute force" O(N^3) version of the algorithm runs the fastest for N < 15, and the O(N^2) version of the algorithm runs the fastest for all projection orders between N = 15 and N = 80. The O(N) FAP algorithm version requires the most execution time in all cases. Thus, unless specific issues regarding the numerical behavior or initialization of the FAP version of the affine projection algorithm are under scrutiny, the "more complex" versions of the algorithm are to be preferred.

Fig. 5 shows the execution times of the LMS, BLMS, and BLMSFFT adaptive filters as a function of filter length on 81920-sample signal records. Here, we have chosen block sizes equal to the filter lengths, such that BLMS and BLMSFFT perform the same. The implementations, however, allow for block sizes that differ from the filter lengths. In the case of the BLMS algorithm, the choice of filter length L and block size N is completely arbitrary, whereas the BLMSFFT implementation requires that L + N be a power of two. For small step sizes and stationary signals, the convergence behaviors of all three algorithms are similar [22]. Except for the shortest filter lengths, the block algorithms run faster. Moreover, the speedup is significant as L and N are increased. The speedup of the BLMS algorithm is particularly useful given the freedom with which L and N can be chosen, allowing the user to employ BLMS in a "fast prototyping" design procedure. In this procedure, the BLMS algorithm is used to choose approximate values of L and the algorithm step size given candidate input and desired response signals and a small block length N. Then, the LMS algorithm can be employed to fine-tune the algorithm's behavior and to verify the adaptation performance.

Fig. 5: Execution times for the LMS, BLMS, and BLMSFFT algorithms as a function of filter (equal to block) length over 81920-sample signal sets.

6. DEMONSTRATIONS AND EXAMPLES

When first learning about adaptive filters, it is useful to have a set of practical examples that illustrate the main application areas of adaptive filters. These examples are essentially specified by the choice of the input and desired response signals, as any adaptive FIR filter is defined by these signals and the chosen parameters. In this adaptive filter toolset, two types of examples have been incorporated:

- Every adaptive filter function has an associated simple example contained within the help resources for the adaptive filter. This example consists of a series of commands that, when run within MATLAB, (1) generate the input and desired response signals, (2) specify the adaptive FIR filter parameters, (3) apply the adaptive filter to the generated signals, and (4) plot quantities related to the performance of the adaptive filter.
- Demonstrations illustrating the use of the adaptive filter toolset in well-known signal processing tasks are also provided.
At the time of publication, these demonstrations included (i) acoustic echo cancellation, (ii) adaptive equalization, (iii) adaptive noise cancellation, (iv) adaptive line enhancement of speech, and (v) active noise control.

Fig. 6 shows one of the simple examples contained within the help resources for the adaptive filters toolset. This example depicts a baseband adaptive equalization task, in which the input signal is a complex-valued 4-QAM signal with pseudo-random real and quadrature components. The complex-valued channel used in the example is described by a transfer function in which a 16-tap delay has been introduced to improve the equalizer's convergence performance. The desired signal is generated as the output of the above channel corrupted by complex-valued uncorrelated Gaussian noise whose real and imaginary components have identical variances of 0.01. Shown in the upper two windows of this figure are the errors generated from the TDAFDFT algorithm on one realization of these signals, where a 32-tap filter has been used. As can be seen, both the real and imaginary components of the error signal converge to small values. The lower two windows show scatter plots of the input and output signals of the adaptive filter, respectively, for the last 101 samples of both signals in the simulation. The ordered nature of the output signal scatter plot illustrates that equalization has been obtained.

Fig. 6: Adaptive equalization example for the TDAFDFT algorithm. (a) Real-valued component of the error signal. (b) Imaginary-valued component of the error signal. (c) Scatter plot of the last 101 samples of the input signal. (d) Scatter plot of the last 101 samples of the output signal.

Fig. 7 shows the behavior of the sign-error algorithm as applied to an adaptive line enhancement task. The desired signal for this example is generated as

    d(n) = sin(0.1*pi*n) + eta(n),                                  (1)

where eta(n) is a real-valued pseudo-random Gaussian noise sequence with unit variance.
A 32-tap adaptive filter in a one-step linear prediction structure is employed to extract an estimate of the sinusoidal signal in the adaptive filter's output. Shown in the upper portion of the figure are the original desired response signal, the adaptive filter's output signal, and the clean sinusoidal signal over the last 101 samples of the 5000-sample simulation. The similarity between the adaptive filter's output and the clean sinusoid indicates that the adaptive line enhancer is functioning properly. The lower portion of the figure shows the power spectra of the desired and output signals, respectively, over the last 1000 samples of the simulation. Like its namesake, the adaptive line enhancer enhances the sinusoidal "line" within the signal spectrum, effectively reducing the noise power in the system's output.

Fig. 7: Adaptive line enhancement example for the sign-error algorithm. (a) The original noisy sinusoidal signal, the output of the adaptive filter, and the noiseless sinusoidal signal. (b) Power spectra of the desired response and adaptive filter output signals.

7. CONCLUSIONS

Adaptive filters are important signal processing tools. This paper describes a comprehensive adaptive filter toolset developed for the MATLAB technical computing software package. The toolset includes adaptive filter functions, analysis functions, examples, and demonstrations. Great care has been taken to make the toolset easy to use for the novice yet powerful and flexible enough for the expert. It is hoped that the capabilities contained in this toolset will enrich ongoing developments in the adaptive filtering field.

REFERENCES

[1] B. Widrow and M. E. Hoff, Jr., "Adaptive switching circuits," IRE WESCON Conv. Rec., pt. 4, pp. 96-104, 1960.
[2] R. W. Lucky, "Techniques for adaptive equalization of digital communication systems," Bell Syst. Tech. J., vol. 45, pp. 255-286, Feb. 1966.
[3] J. Nagumo and A. Noda, "A learning method for system identification," IEEE Trans. Automat. Contr., vol. AC-12, pp. 282-287, June 1967.
[4] J. L. Moschner, "Adaptive filter with clipped input data," Ph.D. thesis, Stanford Univ., Stanford, CA.
[5] L. J. Griffiths, "A continuously-adaptive filter implemented as a lattice structure," Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Hartford, CT, pp. 683-686, May 1977.
[6] A. Gersho, "Adaptive filtering with binary reinforcement," IEEE Trans. Inform. Theory, vol. IT-30, pp. 191-199, Mar. 1984.
[7] W. A. Gardner, "Learning characteristics of stochastic-gradient-descent algorithms: A general study, analysis, and critique," Signal Processing, vol. 6, no. 2, pp. 113-133, Apr. 1984.
[8] K. Ozeki and T. Umeda, "An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties," Electron. Commun. Japan, vol. 67-A, no. 5, pp. 19-27, May 1984.
[9] A. Feuer and E. Weinstein, "Convergence analysis of LMS filters with uncorrelated Gaussian data," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, no. 1, pp. 222-230, Feb. 1985.
[10] B. Widrow and S. D. Stearns, Adaptive Signal Processing (Englewood Cliffs, NJ: Prentice Hall, 1985).
[11] V. J. Mathews and S. H. Cho, "Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 450-454, Apr. 1987.
[12] N. J. Bershad, "Behavior of the e-normalized LMS algorithm with Gaussian inputs," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 636-644, May 1987.
[13] G. Long, F. Ling, and J. G. Proakis, "The LMS algorithm with delayed coefficient adaptation," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-37, pp. 1397-1405, Sept. 1989.
[14] J. S. Soo and K. K. Pang, "Multidelay block frequency domain adaptive filter," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 373-376, Feb. 1990.
[15] Y. Maruyama, "A fast method of projection algorithm," Proc. 1990 IEICE Spring Conf., 1990.
[16] D. T. M. Slock and T. Kailath, "Numerically stable fast transversal filters for recursive least squares adaptive filtering," IEEE Trans. Signal Processing, vol. 39, no. 1, pp. 92-114, Jan. 1991.
[17] S. Haykin, Adaptive Filter Theory, 2nd ed. (Englewood Cliffs, NJ: Prentice Hall, 1991).
[18] J. J. Shynk, "Frequency-domain and multirate adaptive filtering," IEEE Signal Processing Mag., vol. 9, no. 1, pp. 14-37, Jan. 1992.
[19] D. T. M. Slock and T. Kailath, "A modular prewindowing framework for covariance FTF RLS algorithms," Signal Processing, vol. 28, pp. 47-61, 1992.
[20] S. C. Douglas and T. H.-Y. Meng, "Normalized data nonlinearities for LMS adaptation," IEEE Trans. Signal Processing, vol. 42, pp. 1352-1365, June 1994.
[21] M. Montazeri and P. Duhamel, "A set of algorithms linking NLMS and block RLS algorithms," IEEE Trans. Signal Processing, vol. 43, pp. 444-453, Feb. 1995.
[22] S. C. Douglas, "Analysis of the multiple-error and block least-mean-square adaptive algorithms," IEEE Trans. Circuits Syst. II: Analog Digital Signal Processing, vol. 42, pp. 92-101, Feb. 1995.
[23] S. L. Gay and S. Tavathia, "The fast affine projection algorithm," Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Detroit, MI, vol. 5, pp. 3023-3026, May 1995.
[24] M. Tanaka, Y. Kaneda, S. Makino, and J. Kojima, "Fast projection algorithm for adaptive filtering," IEICE Trans. Fund. Electron., Commun., Comput. Sci., vol. E78-A, no. 10, pp. 1358-1361, Oct. 1995.
[25] M. Hayes, Statistical Digital Signal Processing and Modeling (New York: Wiley, 1996).
[26] E. A. Wan, "Adjoint LMS: An efficient alternative to the filtered-X LMS and multiple error LMS algorithms," Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Atlanta, GA, vol. 3, pp. 1842-1845, May 1996.
[27] A. A. Rontogiannis and S. Theodoridis, "Inverse factorization adaptive least-squares algorithms," Signal Processing, vol. 52, no. 1, pp. 35-47, July 1996.
[28] S. C. Douglas, "Numerically-robust O(N^2) RLS algorithms using least-squares prewhitening," Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Istanbul, Turkey, June 2000.
[29] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications (New York: Wiley, 1998).
