CHAPTER 4

Adaptive Tapped-delay-line Filters Using the Least Squares

Adaptive Filtering


In this presentation the method of least squares will be used to derive a recursive algorithm for automatically adjusting the coefficients of a tapped-delay-line filter, without invoking assumptions on the statistics of the input signals. This procedure, called the recursive least-squares (RLS) algorithm, is capable of realizing a rate of convergence that is much faster than that of the LMS algorithm, because the RLS algorithm utilizes all the information contained in the input data from the start of the adaptation up to the present.

The Deterministic Normal Equations

The requirement is to design the filter in such a way that it minimizes the residual sum of squares of the error.

Figure 4.1 Tapped-delay-line filter

$$ J(n) = \sum_{i=1}^{n} e^2(i) \qquad (4.1) $$

The filter output is the convolution sum

$$ y(i) = \sum_{k=1}^{M} h(k, n)\, u(i - k + 1), \qquad i = 1, 2, \ldots, n \qquad (4.2) $$

Substituting the error e(i) = d(i) - y(i) into Eq. (4.1), the residual sum of squares becomes

$$ J(n) = \sum_{i=1}^{n} d^2(i) - 2 \sum_{k=1}^{M} h(k, n) \sum_{i=1}^{n} d(i)\, u(i - k + 1) + \sum_{k=1}^{M} \sum_{m=1}^{M} h(k, n)\, h(m, n) \sum_{i=1}^{n} u(i - k + 1)\, u(i - m + 1) \qquad (4.3) $$

where it is assumed that M ≤ n.

Introduce the following definitions:

1) We define the deterministic correlation between the input signals at taps k and m, summed over the data length n, as

$$ \phi(n; k, m) = \sum_{i=1}^{n} u(i - k)\, u(i - m), \qquad k, m = 0, 1, \ldots, M - 1 \qquad (4.4) $$

2) We define the deterministic correlation between the desired response and the input signal at tap k, summed over the data length n, as

$$ \theta(n; k) = \sum_{i=1}^{n} d(i)\, u(i - k), \qquad k = 0, 1, \ldots, M - 1 \qquad (4.5) $$

3) We define the energy of the desired response as

$$ E_d(n) = \sum_{i=1}^{n} d^2(i) \qquad (4.6) $$

The residual sum of squares is now written as

$$ J(n) = E_d(n) - 2 \sum_{k=1}^{M} h(k, n)\, \theta(n; k - 1) + \sum_{k=1}^{M} \sum_{m=1}^{M} h(k, n)\, h(m, n)\, \phi(n; k - 1, m - 1) \qquad (4.7) $$

We may treat the tap coefficients as constants for the duration of the input data, from 1 to n. Hence, differentiating Eq. (4.7) with respect to h(k, n), we get

$$ \frac{\partial J(n)}{\partial h(k, n)} = -2\, \theta(n; k - 1) + 2 \sum_{m=1}^{M} h(m, n)\, \phi(n; k - 1, m - 1), \qquad k = 1, 2, \ldots, M \qquad (4.8) $$

Let ĥ(k, n) denote the value of the kth tap coefficient for which the derivative ∂J(n)/∂h(k, n) is zero at time n. Thus, from Eq. (4.8) we get

$$ \sum_{m=1}^{M} \hat{h}(m, n)\, \phi(n; k - 1, m - 1) = \theta(n; k - 1), \qquad k = 1, 2, \ldots, M \qquad (4.9) $$

This set of M simultaneous equations constitutes the deterministic normal equations whose solution determines the "least-squares filter".

The vector form of the least-squares filter is

$$ \hat{\mathbf{h}}(n) = \left[ \hat{h}(1, n),\ \hat{h}(2, n),\ \ldots,\ \hat{h}(M, n) \right]^T \qquad (4.10) $$

The deterministic correlation matrix of the tap inputs is

$$ \boldsymbol{\Phi}(n) = \begin{bmatrix} \phi(n; 0, 0) & \phi(n; 0, 1) & \cdots & \phi(n; 0, M-1) \\ \phi(n; 1, 0) & \phi(n; 1, 1) & \cdots & \phi(n; 1, M-1) \\ \vdots & \vdots & & \vdots \\ \phi(n; M-1, 0) & \phi(n; M-1, 1) & \cdots & \phi(n; M-1, M-1) \end{bmatrix} \qquad (4.11) $$

and the deterministic cross-correlation vector is

$$ \boldsymbol{\theta}(n) = \left[ \theta(n; 0),\ \theta(n; 1),\ \ldots,\ \theta(n; M - 1) \right]^T \qquad (4.12) $$

With these definitions the normal equations are expressed as

$$ \boldsymbol{\Phi}(n)\, \hat{\mathbf{h}}(n) = \boldsymbol{\theta}(n) \qquad (4.13) $$

Assuming Φ(n) is nonsingular,

$$ \hat{\mathbf{h}}(n) = \boldsymbol{\Phi}^{-1}(n)\, \boldsymbol{\theta}(n) \qquad (4.14) $$

and for the resulting filter the residual sum of squares attains the minimum value

$$ J_{\min}(n) = E_d(n) - \hat{\mathbf{h}}^T(n)\, \boldsymbol{\theta}(n) \qquad (4.15) $$
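As an illustration of Eqs. (4.4), (4.5) and (4.13)-(4.14), here is a minimal NumPy sketch of the batch (non-recursive) least-squares filter; the function name, test signals and noise level are illustrative assumptions, not part of the original text.

```python
import numpy as np

def batch_least_squares(u, d, M):
    """Batch least-squares tapped-delay-line filter, Eqs. (4.4)-(4.5) and (4.14).

    u : input samples u(1)..u(n)      (1-D array)
    d : desired response d(1)..d(n)   (1-D array)
    M : number of tap coefficients
    Returns the coefficient vector h_hat of length M.
    """
    n = len(u)
    Phi = np.zeros((M, M))     # deterministic correlation matrix, Eq. (4.11)
    theta = np.zeros(M)        # deterministic cross-correlation vector, Eq. (4.12)
    for i in range(n):
        # tap-input vector [u(i), u(i-1), ..., u(i-M+1)], zeros before the data start
        ui = np.array([u[i - k] if i - k >= 0 else 0.0 for k in range(M)])
        Phi += np.outer(ui, ui)        # phi(n; k, m), Eq. (4.4)
        theta += d[i] * ui             # theta(n; k),  Eq. (4.5)
    return np.linalg.solve(Phi, theta)  # normal equations, Eqs. (4.13)-(4.14)

# Example: identify a 3-tap FIR system from noisy data (illustrative only)
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
h_true = np.array([0.8, -0.4, 0.2])
d = np.convolve(u, h_true)[:500] + 0.01 * rng.standard_normal(500)
print(batch_least_squares(u, d, M=3))   # approximately h_true
```

Recomputing this solution from scratch every time a new sample arrives is wasteful, which is what motivates the recursive formulation derived in the following sections.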

Properties of the Least-squares Estimate

Property 1. The least-squares estimate of the coefficient vector approaches the optimum Wiener solution as the data length n approaches infinity, if the filter input and the desired response are jointly stationary ergodic processes.

Property 2. The least-squares estimate of the coefficient vector is unbiased if the error signal e(i) has zero mean for all i.

Property 3. The covariance matrix of the least-squares estimate equals Φ⁻¹, except for a scaling factor, if the error vector e₀ has zero mean and its elements are uncorrelated.

Property 4. If the elements of the error vector e₀ are statistically independent and Gaussian-distributed, then the least-squares estimate is the same as the maximum-likelihood estimate.

The Matrix-Inversion Lemma

Let A and B be two positive definite, M-by-M matrices related by

$$ \mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\, \mathbf{D}^{-1} \mathbf{C}^T \qquad (4.16) $$

where D is another positive definite, N-by-N matrix and C is an M-by-N matrix. According to the matrix-inversion lemma, we may express the inverse of the matrix A as follows:

$$ \mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\, \mathbf{C} \left( \mathbf{D} + \mathbf{C}^T \mathbf{B}\, \mathbf{C} \right)^{-1} \mathbf{C}^T \mathbf{B} \qquad (4.17) $$
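The lemma can be checked numerically. The short NumPy sketch below builds random positive definite matrices B and D and an arbitrary C (the sizes are chosen arbitrarily for the example) and compares the direct inverse of A with the right-hand side of Eq. (4.17); it is an illustration only, not part of the original derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 2

# Build positive definite B (M x M) and D (N x N), and an arbitrary C (M x N)
B = np.eye(M) + 0.1 * rng.standard_normal((M, M)); B = B @ B.T
D = np.eye(N) + 0.1 * rng.standard_normal((N, N)); D = D @ D.T
C = rng.standard_normal((M, N))

A = np.linalg.inv(B) + C @ np.linalg.inv(D) @ C.T                   # Eq. (4.16)
A_inv_lemma = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B  # Eq. (4.17)

print(np.allclose(np.linalg.inv(A), A_inv_lemma))   # True
```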

The Recursive Least-Squares (RLS) Algorithm

The deterministic correlation matrix Φ(n) is now modified term by term as

$$ \phi(n; k, m) = \sum_{i=1}^{n} u(i - k)\, u(i - m) + c\, \delta_{mk} \qquad (4.18) $$

where c is a small positive constant and δ_mk is the Kronecker delta:

$$ \delta_{mk} = \begin{cases} 1, & m = k \\ 0, & m \neq k \end{cases} \qquad (4.19) $$

This expression can be reformulated as

$$ \phi(n; k, m) = \left[ \sum_{i=1}^{n-1} u(i - k)\, u(i - m) + c\, \delta_{mk} \right] + u(n - k)\, u(n - m) \qquad (4.20) $$

where the first (bracketed) term equals φ(n - 1; k, m), yielding

$$ \phi(n; k, m) = \phi(n - 1; k, m) + u(n - k)\, u(n - m), \qquad k, m = 0, 1, \ldots, M - 1 \qquad (4.21) $$

Note that this recursive equation is independent of the arbitrarily small constant c.
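Although it is not stated explicitly above, Eq. (4.18) also fixes the starting point of this recursion: at n = 0 the sum in Eq. (4.18) is empty, so

$$ \boldsymbol{\Phi}(0) = c\, \mathbf{I} \qquad \text{and hence} \qquad \boldsymbol{\Phi}^{-1}(0) = c^{-1}\, \mathbf{I} $$

which is the usual initial value supplied to the recursive algorithm developed below.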

Defining the M-by-1 tap-input vector

$$ \mathbf{u}(n) = \left[ u(n),\ u(n - 1),\ \ldots,\ u(n - M + 1) \right]^T \qquad (4.22) $$

we can express the correlation matrix as

$$ \boldsymbol{\Phi}(n) = \boldsymbol{\Phi}(n - 1) + \mathbf{u}(n)\, \mathbf{u}^T(n) \qquad (4.23) $$

and make the following associations in order to use the matrix-inversion lemma:

$$ \mathbf{A} = \boldsymbol{\Phi}(n), \qquad \mathbf{B}^{-1} = \boldsymbol{\Phi}(n - 1), \qquad \mathbf{C} = \mathbf{u}(n), \qquad \mathbf{D} = 1 $$

Thus the inverse of the correlation matrix takes the following recursive form:

$$ \boldsymbol{\Phi}^{-1}(n) = \boldsymbol{\Phi}^{-1}(n - 1) - \frac{\boldsymbol{\Phi}^{-1}(n - 1)\, \mathbf{u}(n)\, \mathbf{u}^T(n)\, \boldsymbol{\Phi}^{-1}(n - 1)}{1 + \mathbf{u}^T(n)\, \boldsymbol{\Phi}^{-1}(n - 1)\, \mathbf{u}(n)} \qquad (4.24) $$

For convenience of computation, let

$$ \mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n) \qquad (4.25) $$

and

$$ \mathbf{k}(n) = \frac{\mathbf{P}(n - 1)\, \mathbf{u}(n)}{1 + \mathbf{u}^T(n)\, \mathbf{P}(n - 1)\, \mathbf{u}(n)} \qquad (4.26) $$

Then, we may rewrite Eq. (4.24) as follows:

$$ \mathbf{P}(n) = \mathbf{P}(n - 1) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n - 1) \qquad (4.27) $$

The M-by-1 vector k(n) is called the gain vector.

Postmultiplying both sides of Eq. (4.27) by the tap-input vector u(n), we get

$$ \mathbf{P}(n)\, \mathbf{u}(n) = \mathbf{P}(n - 1)\, \mathbf{u}(n) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n - 1)\, \mathbf{u}(n) \qquad (4.28) $$

Rearranging Eq. (4.26), we find that

$$ \mathbf{k}(n) = \mathbf{P}(n - 1)\, \mathbf{u}(n) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n - 1)\, \mathbf{u}(n) \qquad (4.29) $$

Therefore, substituting Eq. (4.29) in Eq. (4.28) and simplifying, we get

$$ \mathbf{k}(n) = \mathbf{P}(n)\, \mathbf{u}(n) \qquad (4.30) $$

Recall that the recursion ĥ(n) = Φ⁻¹(n) θ(n) requires not only updates for P(n) = Φ⁻¹(n), as given by Eq. (4.27), but also recursive updates for the deterministic cross-correlation θ(n) defined by Eq. (4.5),

$$ \theta(n; k) = \sum_{i=1}^{n} d(i)\, u(i - k), \qquad k = 0, 1, \ldots, M - 1 \qquad (4.5) $$

which can be rewritten as

$$ \theta(n; k) = d(n)\, u(n - k) + \sum_{i=1}^{n-1} d(i)\, u(i - k) = d(n)\, u(n - k) + \theta(n - 1; k) \qquad (4.31) $$

yielding the recursion

$$ \boldsymbol{\theta}(n) = \boldsymbol{\theta}(n - 1) + d(n)\, \mathbf{u}(n) \qquad (4.32) $$

As a result,

$$ \hat{\mathbf{h}}(n) = \mathbf{P}(n)\, \boldsymbol{\theta}(n) = \mathbf{P}(n) \left[ \boldsymbol{\theta}(n - 1) + d(n)\, \mathbf{u}(n) \right] = \mathbf{P}(n)\, \boldsymbol{\theta}(n - 1) + d(n)\, \mathbf{P}(n)\, \mathbf{u}(n) = \mathbf{P}(n)\, \boldsymbol{\theta}(n - 1) + d(n)\, \mathbf{k}(n) \qquad (4.33) $$

where the last step uses Eq. (4.30). Substituting Eq. (4.27) for P(n), we get

$$ \hat{\mathbf{h}}(n) = \left[ \mathbf{P}(n - 1) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n - 1) \right] \boldsymbol{\theta}(n - 1) + d(n)\, \mathbf{k}(n) = \mathbf{P}(n - 1)\, \boldsymbol{\theta}(n - 1) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n - 1)\, \boldsymbol{\theta}(n - 1) + d(n)\, \mathbf{k}(n) \qquad (4.34) $$

which, recalling that ĥ(n - 1) = P(n - 1) θ(n - 1), can be expressed as

$$ \hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n - 1) + \mathbf{k}(n) \left[ d(n) - \mathbf{u}^T(n)\, \hat{\mathbf{h}}(n - 1) \right] = \hat{\mathbf{h}}(n - 1) + \mathbf{k}(n)\, \alpha(n) \qquad (4.35) $$

where α(n) is a "true" estimation error defined as

$$ \alpha(n) = d(n) - \mathbf{u}^T(n)\, \hat{\mathbf{h}}(n - 1) \qquad (4.36) $$

Equations (4.35) and (4.36) constitute the recursive least-squares (RLS) algorithm.

Summary of the RLS Algorithm

1. Let n = 1.

2. Compute the gain vector

$$ \mathbf{k}(n) = \frac{\mathbf{P}(n - 1)\, \mathbf{u}(n)}{1 + \mathbf{u}^T(n)\, \mathbf{P}(n - 1)\, \mathbf{u}(n)} $$

3. Compute the true estimation error

$$ \alpha(n) = d(n) - \mathbf{u}^T(n)\, \hat{\mathbf{h}}(n - 1) $$

4. Update the estimate of the coefficient vector

$$ \hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n - 1) + \mathbf{k}(n)\, \alpha(n) $$

5. Update the error correlation matrix

$$ \mathbf{P}(n) = \mathbf{P}(n - 1) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n - 1) $$

6. Increment n by 1 and go back to step 2.

Side result: recursion for the minimum value of the residual sum of squares,

$$ J_{\min}(n) = J_{\min}(n - 1) + \alpha(n)\, e(n) \qquad (4.37) $$
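The summary above translates almost line by line into code. The following NumPy sketch is an illustrative implementation of steps 1-6; the function name, the value of the initialization constant c, and the test signals are assumptions made for this example, not part of the original text. The initialization P(0) = (1/c) I follows from Eq. (4.18).

```python
import numpy as np

def rls_filter(u, d, M, c=1e-3):
    """Recursive least-squares adaptation of an M-tap delay-line filter.

    Follows the RLS summary: gain vector k(n), true estimation error alpha(n),
    coefficient update, and update of P(n) = Phi^{-1}(n).
    """
    n_samples = len(u)
    h = np.zeros(M)            # h_hat(0): initial coefficient vector
    P = np.eye(M) / c          # P(0) = (1/c) I, from Phi(0) = c I, Eq. (4.18)
    for n in range(n_samples):
        # tap-input vector u(n) = [u(n), u(n-1), ..., u(n-M+1)]^T, zeros before the data start
        un = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        Pu = P @ un
        k = Pu / (1.0 + un @ Pu)        # gain vector, Eq. (4.26)
        alpha = d[n] - un @ h           # true estimation error, Eq. (4.36)
        h = h + k * alpha               # coefficient update, Eq. (4.35)
        P = P - np.outer(k, un @ P)     # inverse-correlation update, Eq. (4.27)
    return h

# Example: same illustrative system-identification setup as before
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
h_true = np.array([0.8, -0.4, 0.2])
d = np.convolve(u, h_true)[:500] + 0.01 * rng.standard_normal(500)
print(rls_filter(u, d, M=3))   # converges close to h_true
```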

Comparison of the RLS and LMS Algorithms

Figure 4.2 Multidimensional signal-flow graph: (a) RLS algorithm, (b) LMS algorithm

1. In the LMS algorithm, the correction that is applied in updating the old estimate of the coefficient vector is based on the instantaneous sample value of the tap-input vector and the error signal. On the other hand, in the RLS algorithm the computation of this correction utilizes all the past available information.

2. In the LMS algorithm, the correction applied to the previous estimate consists of the product of three factors: the (scalar) step-size parameter μ, the error signal e(n-1), and the tap-input vector u(n-1). On the other hand, in the RLS algorithm this correction consists of the product of two factors: the true estimation error α(n) and the gain vector k(n). The gain vector itself consists of Φ⁻¹(n), the inverse of the deterministic correlation matrix, multiplied by the tap-input vector u(n). (A minimal code sketch contrasting the two update rules is given after this list.)

The major difference between the LMS and RLS algorithms is therefore the presence of Φ⁻¹(n) in the correction term of the RLS algorithm, which has the effect of decorrelating the successive tap inputs, thereby making the RLS algorithm self-orthogonalizing. Because of this property, we find that the RLS algorithm is essentially independent of the eigenvalue spread of the correlation matrix of the filter input.

3. The LMS algorithm requires approximately 20M iterations to converge in mean square, where M is the number of tap coefficients contained in the tapped-delay-line filter. On the other hand, the RLS algorithm converges in mean square within less than 2M iterations. The rate of convergence of the RLS algorithm is therefore, in general, faster than that of the LMS algorithm by an order of magnitude.

4. Unlike the LMS algorithm, there are no approximations made in the derivation of the RLS algorithm. Accordingly, as the number of iterations approaches infinity, the least-squares estimate of the coefficient vector approaches the optimum Wiener value, and correspondingly, the mean-square error approaches the minimum value possible. In other words, the RLS algorithm, in theory, exhibits zero misadjustment. On the other hand, the LMS algorithm always exhibits a nonzero misadjustment; however, this misadjustment may be made arbitrarily small by using a sufficiently small step-size parameter μ.


5. The superior performance of the RLS algorithm compared to the LMS algorithm, however, is attained at the expense of a large increase in computational complexity. The complexity of an adaptive algorithm for real-time operation is determined by two principal factors: (1) the number of multiplications (with divisions counted as multiplications) per iteration, and (2) the precision required to perform arithmetic operations. The RLS algorithm requires a total of 3M(3 + M )/2 multiplications, which increases as the square of M, the number of filter coefficients. On the other hand, the LMS algorithm requires 2M + 1 multiplications, increasing linearly with M. For example, for M = 31 the RLS algorithm requires 1581 multiplications, whereas the LMS algorithm requires only 63.
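To make the contrast drawn in points 2 and 5 concrete, here is a minimal sketch of a single coefficient update for each algorithm. The step-size parameter mu for LMS is the one mentioned in point 2; the function and variable names, and the per-step structure, are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def lms_update(h, un, dn, mu):
    """One LMS step: h(n) = h(n-1) + mu * e(n) * u(n); about 2M + 1 multiplications."""
    e = dn - un @ h                     # error signal
    return h + mu * e * un

def rls_update(h, P, un, dn):
    """One RLS step following Eqs. (4.26), (4.35)-(4.36), (4.27); cost grows as M^2."""
    Pu = P @ un
    k = Pu / (1.0 + un @ Pu)            # gain vector k(n)
    alpha = dn - un @ h                 # true estimation error alpha(n)
    h_new = h + k * alpha               # coefficient update
    P_new = P - np.outer(k, un @ P)     # update of P(n) = Phi^{-1}(n)
    return h_new, P_new
```

In a loop over the data, the LMS step touches only the current tap-input vector and error, whereas the RLS step also carries and updates the M-by-M matrix P(n), which is where the additional, quadratically growing number of multiplications comes from.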
