
SELECTED ASPECTS OF DISCRETE-TIME

FILTERING TECHNIQUES AS APPLIED TO

SENSOR CONTROL AND SIGNAL PROCESSING

PROBLEMS

Thesis

for the award of the degree of Doctor at the Technische Universiteit Delft, on the authority of the Rector Magnificus, Prof. dr. J. M. Dirken, to be defended in public before a committee appointed

by the Board of Deans on 21 January 1988 at 16.00

by

Lars Bjørset


The author is employed by SHAPE Technical Centre, The Hague, The Netherlands, and gratefully acknowledges the permission and support given by that Centre to write and publish this thesis.

The support rendered by my previous employer, Norsk Forsvarsteknologi A/S, Kongsberg, Norway, is also gratefully acknowledged.


TABLE OF CONTENTS

SUMMARY

CHAPTER 1
A BRIEF INTRODUCTION TO DISCRETE-TIME SIGNALS AND THEIR BASIC PROPERTIES
1.1 INTRODUCTION
1.2 CONTINUOUS-TIME SIGNALS AND SYSTEMS
1.3 DISCRETE-TIME SIGNALS AND SEQUENCES
1.4 CONTINUOUS-TIME SIGNAL SAMPLING
1.5 BASIC DISCRETE-TIME FILTER STRUCTURE
1.6 THE Z-TRANSFORM AND THE SYSTEM FUNCTION

CHAPTER 2
SOME DISCRETE-TIME FILTERING TECHNIQUES AS APPLIED TO CONTROL SYSTEMS ANALYSIS AND REALIZATIONS
2.1 INTRODUCTION
2.2 THE FILTER AS A SYSTEM COMPONENT
2.2.1 The basic filter element
2.2.2 The paralleling of filter elements
2.2.3 Cascading filter elements
2.2.4 Combined paralleling and cascading of filter elements
2.2.5 Feedback
2.2.6 Feedforward
2.2.7 Initial and final value analysis
2.3 FILTER DESIGN
2.3.1 IIR filter design

CHAPTER 3
ASPECTS OF SENSOR SIGNAL PROCESSING
3.1 INTRODUCTION
3.2 COMPLEX SIGNAL REPRESENTATION
3.2.1 Hilbert transform representation
3.2.2 Exponential representation
3.3 COMPLEX SYSTEMS AND FILTERS
3.3.1 Basic complex filter representation
3.3.2 Complex filter properties
3.3.3 Centre frequency variable filters
3.3.4 Input/output relations
3.3.5 Linear phase FIR filters
3.4 NONUNIFORM SAMPLING
3.4.1 Sampled signal characteristics and representation
3.4.2 Some properties of the complex sampled signal autocorrelation function
3.4.3 Classes of nonuniform sampling processes
3.4.4 Autocorrelation sequences of some nonuniform sampled signals
3.5 OPTIMUM LINEAR FILTERS
3.5.1 Maximizing signal-to-noise ratio, known complex signal
3.5.2 Maximizing signal-to-noise ratio, signal characteristics defined by the covariance matrix
3.5.3 Maximizing signal-to-noise ratio with upper limit
3.5.4 Maximizing the probability of target detection
3.5.5 Constraints and simplifications
3.5.6 Frequency translation of the transfer function of an optimum filter
3.5.7 Linear mean square estimation by FIR filters

CHAPTER 4
EXPERIMENTS AND PRACTICAL VERIFICATIONS
4.1 INTRODUCTION
4.2 DEFINITIONS AND NOTATION
4.3 CONVENTIONAL FILTERS AND UNIFORM SAMPLING
4.3.1 Conventional matched filters
4.3.2 Conventional MTI filters
4.4 OPTIMUM FILTERS - UNIFORM SAMPLING
4.4.1 Signal waveform known in the time domain
4.4.2 Signal characteristics known from statistical properties
4.5 NONUNIFORM SAMPLING AND FILTERING
4.5.1 Matched filters and nonuniform sampling
4.5.2 MTI filters and nonuniform sampling

CHAPTER 5
CONCLUDING REMARKS

REFERENCES

SAMENVATTING

SUMMARY

Discrete-time signal processing has become an important discipline in a variety of scientific and technical applications.

Although the theory is based on centuries-old mathematical knowledge, practical applications first became possible in the mid 1960s, when high-speed digital system components became available.

Large-scale integration (LSI) and, more recently, very large scale integration (VLSI) technologies have resulted in digital components becoming smaller, cheaper and faster.

For example, commercially available single-chip processors can now (1987) carry out a 64-point fast Fourier transform (FFT) in 150 μsec and a 50-tap finite impulse response (FIR) filter algorithm in 5 μsec.

The analysis and synthesis of discrete-time filters are important topics in digital signal processing and control problems. An introduction to basic terms and definitions commonly found in textbooks is given in Chapter 1. The discrete-time signal is defined as an extension of the continuous-time signal but is deliberately not regarded as merely an approximation to the continuous-time signal.

In Chapter 2, the general structure of the discrete-time linear filter and basic rules for paralleling and cascading multiple filters are defined.

Rules for feedback and feedforward within complexes of interconnected filters are established.

Discrete-time initial and final value theorems are defined, and some applications for the analysis of control systems are discussed.

Basic filter synthesis techniques described in the literature are defined.

Some implications of sampling rate conversions (decimation and interpolation) in discrete-time control systems are analysed, and a number of applications to sensor systems are considered.


Chapter 3 is devoted to some aspects of the solution of sensor signal processing problems through the application of discrete-time finite impulse response (FIR) filters.

Complex signal representations are defined, together with the generalized complex filter, and some basic properties are discussed.

Particular attention is paid to the feasibility of parallel-shifting the characteristics of FIR filters along the frequency axis. The resulting filters are shown to have close similarities to filter banks realized by windowing and subsequent discrete Fourier transform (DFT) processing.
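The frequency-shifting idea can be sketched numerically: multiplying the coefficients of an FIR prototype by a complex exponential translates its frequency response along the frequency axis. A minimal illustration, not taken from the thesis itself; the five-tap lowpass prototype and the shift w0 = 1.0 are arbitrary choices:

```python
import cmath

# Frequency-shifting an FIR filter: multiplying the coefficients by
# exp(j*w0*n) translates the response, so H_shift(e^{jw}) = H(e^{j(w-w0)}).
def freq_resp(h, w):
    """Frequency response H(e^{jw}) = sum_n h(n) exp(-jwn)."""
    return sum(h[n] * cmath.exp(-1j * w * n) for n in range(len(h)))

h = [0.2] * 5                 # crude 5-tap lowpass prototype (illustrative)
w0 = 1.0                      # shift along the frequency axis (illustrative)
h_shift = [h[n] * cmath.exp(1j * w0 * n) for n in range(len(h))]

lhs = freq_resp(h_shift, 1.4)        # shifted filter evaluated at w = 1.4
rhs = freq_resp(h, 1.4 - w0)         # prototype evaluated at w - w0
```

The two evaluations agree, and the shifted filter's passband is now centred at w0 rather than at zero.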

Linear and, in some cases, zero-phase filters are desirable in some applications. The conditions for linear and zero-phase complex filters are defined.

Extensions of Shannon's sampling theorem to include nonuniform sampling have been suggested by several authors. The present thesis suggests an approach based on the representation of the nonuniform sampling process and the resulting discrete-time signals by statistical properties and characteristics.

Nonuniform sampling has a special application in radar signal processing, to overcome so-called blind speed problems. This application is usually referred to as pulse repetition frequency (PRF) staggering.

Representative signals and nonuniform sampling processes are investigated and their statistical characteristics established.

These characteristics of signals and noise are then utilized as parameters in the synthesis of complex filters optimized in accordance with predefined criteria.

By maximizing the output signal-to-noise power ratio, complex FIR filter coefficients are established when the complex input signal is known in the time domain. In cases in which the noise is white, the resulting filters have been referred to in the literature as matched filters.
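For white noise, the SNR-maximizing coefficients are the conjugated, time-reversed signal samples; a minimal sketch under that standard result (the chirp-like test signal is an arbitrary illustration, not one of the thesis waveforms):

```python
import cmath

# Matched filter sketch: for white noise, the SNR-maximizing FIR
# coefficients are the conjugated, time-reversed signal samples,
# h(n) = s*(N-1-n).
def matched_filter(s):
    return [v.conjugate() for v in reversed(s)]

def fir_output(h, x):
    """y(n) = sum_k h(k) x(n-k), with x zero outside its support."""
    return [sum(h[k] * (x[n - k] if 0 <= n - k < len(x) else 0)
                for k in range(len(h)))
            for n in range(len(x) + len(h) - 1)]

# A chirp-like complex test pulse (illustrative values only).
s = [cmath.exp(0.3j * n * n) for n in range(8)]
h = matched_filter(s)
y = fir_output(h, s)

# The output magnitude peaks at n = N-1, where it equals the signal
# energy sum |s(n)|^2 (the Cauchy-Schwarz bound is attained there).
energy = sum(abs(v) ** 2 for v in s)
peak = max(abs(v) for v in y)
```

The peak output equals the pulse energy, which no other unit-energy filter can exceed at that instant.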

Optimum complex FIR filter coefficients are achieved when signal and noise characteristics, defined using statistical parameters, are such that the output signal-to-noise power ratio is maximized.

In his classic paper, Emerson [1] solved this problem for the special case in which all signal frequencies are equally probable.

This thesis presents an extension to Emerson's solution valid for more general signal characteristics.

In addition to the solutions mentioned above, a numerical steepest descent search algorithm has been devised.

Variations of this optimization criterion are discussed and algorithms established.

Constraints on the filter coefficients (symmetry, restriction to real coefficients) which reduce complexity and cost are discussed. These constraints may be imposed as conditions in optimization algorithms.

The object of optimizing filter functions in this thesis is to enhance a subsequent detection process. No account is taken of the effects of any distortion of the filtered signal in the time domain. As far as signal estimation is concerned, minimization of the mean square estimation error is often used for optimization. In Chapter 3, Section 3.5.7 of this thesis, a discrete-time finite impulse response counterpart to the continuous-time Wiener filter is suggested.

Chapter 4 describes experimental and practical work relating to the thesis.

The procedures and algorithms described in previous chapters have been programmed in FORTRAN IV and implemented on CDC 825 and NORD 10 computer facilities.

The validity of some of the derivations described in earlier chapters has been demonstrated using this programming and implementation.

A number of filter configurations have been investigated and their performances assessed, taking account of variations of the sampling process, which was, in general, nonuniform.


CHAPTER 1

A BRIEF INTRODUCTION TO DISCRETE-TIME SIGNALS AND THEIR BASIC PROPERTIES

1.1 INTRODUCTION

This chapter introduces basic terms and definitions relating to discrete-time signals. The material covered may be found in textbooks, except for a proof on page 15. It is reviewed only briefly here.

A signal contains information about the condition or state of a system. It may be represented mathematically as a function of one or more independent variables. In this thesis, the independent variable is referred to as time, although this is not strictly correct in all cases. Time can be either continuous or discrete.

A continuous-time signal is one in which time is continuous. A discrete-time signal is one which is defined at discrete instants. It is represented here by a sequence of numbers, {x(n)}, where n can vary over a finite or countably infinite range.

In the past, there has been a tendency to consider discrete-time signals as approximations to their continuous-time counterparts. Thus analogue signal processing methodology has been applied to discrete-time signals. Some early approaches were based on treating discrete-time signals as analogue signals represented by weighted (Dirac) impulse trains [2, 3].

1.2 CONTINUOUS-TIME SIGNALS AND SYSTEMS

A continuous-time, or analogue, signal can be described as a function of time, f(t).

If the signal is completely defined by its instantaneous value for all values of t, the signal is said to be determi­ nistic.

The Fourier transform of f(t) then normally exists, under the usual conditions [4]. It is given by the integral:

F(jω) = ∫_{-∞}^{+∞} f(t) exp(-jωt) dt . (1.1)

The inverse Fourier transform is given by:

f(t) = (1/2π) ∫_{-∞}^{+∞} F(jω) exp(jωt) dω . (1.2)

F(jω) may be referred to as the frequency spectrum of the signal f(t). The variable ω represents frequency in radians per second.

F(jω)F*(jω) is called the power spectrum of the signal (* denotes complex conjugation).

Closely related to the Fourier transform is the Laplace transform:

L(s) = ∫_{0}^{∞} f(t) exp(-st) dt . (1.3)

From (1.1) and (1.3), the Fourier transform of a signal where f(t) = 0 for t < 0 is identical to the Laplace transform if s = jω.

The inverse Laplace transform is given by:

f(t) = (1/2πj) ∫_{-j∞}^{+j∞} L(s) exp(st) ds, for t > 0 . (1.4)

If the signal f(t) is specified by statistical properties rather than its actual values throughout time, the signal is said to be stochastic.

The first order moment and the autocorrelation function of the signal amplitude are important statistical properties and, for some signals (those which have a Gaussian distribution), are sufficient to define the signal completely, in a statistical sense.

The first order moment (mean value or average) may be defined as the expected value of the function f(t) at an arbitrary time t1:

m = E[f(t1)] . (1.5)

This is normally referred to as the ensemble average of x = f(t1), and can be expressed by:

m_e = ∫_{-∞}^{+∞} x p(x) dx , (1.6)

where p(x) is the probability density function of x = f(t1). Alternatively, a time average, m_t, of f(t) can be defined as:

m_t = lim_{T→∞} (1/2T) ∫_{-T}^{+T} f(t) dt . (1.7)

The autocorrelation function φ(t1,t2) is defined as the ensemble average of f(t1)f*(t2):

φ(t1,t2) = E[f(t1)f*(t2)]

= ∫_{-∞}^{+∞}∫_{-∞}^{+∞} x1 x2* p(x1,x2) dx1 dx2 , (1.8)

where x1 = f(t1), x2 = f(t2) and p(x1,x2) is the second order or joint probability density function of the signal values at t1 and t2.

The signal is said to be wide sense stationary if the following two conditions are fulfilled:

1) The first moment, m, is independent of time, i.e. E[f(t)] = constant.

2) The autocorrelation function, φ(t1,t2), depends only on the time difference, τ = t2 - t1.

A signal is said to be strict sense stationary if none of its statistical properties are affected by a shift in the time origin. Signals conforming to a Gaussian process will also be strict sense stationary if the two conditions for them to be wide sense stationary are fulfilled. In referring to stationary signals below, it is assumed that the two conditions required for the signals to be wide sense stationary are fulfilled.

If it is assumed that signals are stationary, the autocorrelation function can be defined as a function of the time difference, τ = t2 - t1:

φ(τ) = E[f(t)f*(t+τ)]

= ∫_{-∞}^{+∞}∫_{-∞}^{+∞} x1 x2* p(x1,x2) dx1 dx2 . (1.9)

The time average of the product f(t)f*(t+τ), φ_t(τ), is given by the following equation:

φ_t(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{+T} f(t) f*(t+τ) dt . (1.10)


A signal is said to be ergodic if the time averages equal the ensemble averages.

The first moment can, therefore, be established either from the ensemble average (1.6) or the time average (1.7), since m_e = m_t under the condition of ergodicity.

Similarly, in the case of an ergodic signal, the autocorrelation function can also be obtained from equation (1.10).

Signal variance, v, and standard deviation, σ, are defined by the first and second order moments m and φ(0), as follows:

v = σ² = φ(0) - m² . (1.11)

The Fourier transform of the autocorrelation function represents the signal's power spectral density, Φ(ω):

Φ(ω) = ∫_{-∞}^{+∞} φ(τ) exp(-jωτ) dτ . (1.12)

The inverse transform relationship is:

φ(τ) = (1/2π) ∫_{-∞}^{+∞} Φ(ω) exp(jωτ) dω . (1.13)

A specific input signal can be imposed upon a system, affecting the output signal.

Systems in which input and output signals are related through sets of linear differential equations are of particular interest. These are referred to as linear systems. In the following, linear time-invariant systems will be considered.

It has been assumed that basic definitions and terms are known to the reader. Those unfamiliar with them may refer to the literature.

A linear time-invariant system can be described by its transfer function or the corresponding unit impulse response, as shown in Fig. 1.1.

Fig. 1.1 System transfer function

A linear time-invariant system can be characterized by its unit impulse response, h(t):

h(t) = f_o(t) , (1.14)

when the input signal, f_i(t), is given as a unit impulse δ(t) (Dirac delta function):

f_i(t) = δ(t) .

The Laplace transform of (1.14) is the system's transfer function in the s-domain:

L(s) = ∫_{0}^{∞} h(t) exp(-st) dt . (1.15)

A realizable, or causal, system will not respond to an input signal event before the event has occurred.

Therefore, if

f_i(t) = 0 for t < 0 ,

then

f_o(t) = 0 for t < 0 .

From (1.14), for a causal system, the unit impulse response will satisfy the condition

h(t) = 0 for t < 0 . (1.16)

The Fourier transform and the Laplace transform of the unit impulse response, h(t), will, therefore, be identical:

H(jω) = L(s) , s = jω . (1.17)

With reference to Fig. 1.1, the following input/output relationships will then exist:

1) Convolution in the time domain:

f_o(t) = ∫_{-∞}^{+∞} h(τ) f_i(t-τ) dτ . (1.18)

2) Signal frequency spectrum transfer:

F_o(jω) = H(jω) F_i(jω) . (1.19)

3) Autocorrelation function transfer, from (1.9) and (1.18):

φ_o(τ) = ∫_{-∞}^{+∞}∫_{-∞}^{+∞} h(t1) h*(t2) φ_i(τ + t1 - t2) dt1 dt2 . (1.20)

4) Power spectral density transfer:

Φ_o(ω) = H(jω) H*(jω) Φ_i(ω) . (1.21)

5) Coherent signal transfer, for an input f_i(t) = A exp(jω1t):

f_o(t) = A H(jω1) exp(jω1t) . (1.22)


1.3 DISCRETE-TIME SIGNALS AND SEQUENCES

A discrete-time signal can be described by a sequence of numbers x, the nth number in the sequence being denoted by x(n).

It has become common to refer to "the sequence x(n)" rather than give the more formal definition:

x = {x(n)}, -∞ < n < ∞ . (1.23)

This practice has been adopted in this thesis.

The index n plays the role of the independent time variable t for continuous-time signals and can in many, but not all, cases define a sampling instant, t = t(n).

Again, the discrete-time signal as represented in Fig. 1.2 should not be considered as a sequence of weighted Dirac delta functions where the signal is zero between the impulses.

Fig. 1.2

By adopting discrete-time sequence signal representation, use of the Dirac delta function is avoided.

As for continuous-time signals, some sequences are of particular interest in the case of discrete-time signals:

Unit Sample Sequence δ(n)

δ(n) = { 0, n ≠ 0 ; 1, n = 0 } . (1.24)

Unit Step Sequence u(n)

u(n) = { 0, n < 0 ; 1, n ≥ 0 } . (1.25)

Sinusoidal Sequence

y(n) = A sin(ω0 n + φ) . (1.26)

A sequence is said to be periodic, with a period N, if x(n) = x(n+N) for all n. The sinusoidal sequence (1.26) will therefore not be periodic for all values of ω0.

The sequences can be manipulated in a number of fundamental ways, defined below:

1) Sum of two sequences

The sum of two sequences x(n) and y(n) forms a new sequence z(n), in which individual elements are given by

z(n) = x(n) + y(n) . (1.27)

2) Product of two sequences

The product of two sequences x(n) and y(n) is a new sequence z(n), in which individual elements are given by

z(n) = x(n)y(n) . (1.28)

3) Multiplication by a constant

Multiplication of a sequence x(n) by a constant k gives a new sequence y(n), the elements of which are given by

y(n) = kx(n) . (1.29)

4) Delay or shift

A sequence y(n) is said to be a delayed or shifted version of a sequence x(n) if:

y(n) = x(n-N) , (1.30)

where N is an integer. N > 0 indicates a delay.

The Fourier transform of a sequence x(n) exists under certain conditions [5]. It is given by the sum

X(e^{jω}) = Σ_{n=-∞}^{+∞} x(n) exp(-jωn) . (1.31)

The inverse Fourier transform is given by:

x(n) = (1/2π) ∫_{-π}^{+π} X(e^{jω}) exp(jωn) dω . (1.32)

As in the continuous-time case, the Fourier transform of the sequence (1.31) represents a continuous frequency spectrum of the signal.

The z-transform of a sequence is closely related to the Fourier transform. It is dealt with in greater detail in Section 1.6.

A sequence of N elements, x(n), n = 0,...,N-1, has a discrete Fourier transform (DFT), defined by

X(k) = Σ_{n=0}^{N-1} x(n) exp(-j2πkn/N) . (1.33)

Clearly, this transform exists if all elements are finite. The inverse DFT is defined by:

x(n) = (1/N) Σ_{k=0}^{N-1} X(k) exp(j2πkn/N) . (1.34)

Unlike the continuous frequency spectrum defined previously in (1.31), the DFT represents a discrete frequency spectrum of the signal. The spectrum is represented by a sequence X(k) of N elements, k = 0,...,N-1. It is therefore well adapted for handling by modern digital processing equipment. Fast algorithms have been developed for effective calculation of the equations (1.33) and (1.34). These are referred to as fast Fourier transforms (FFT).
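The DFT pair (1.33)/(1.34) can be evaluated directly from the definitions; an FFT computes the same sums more efficiently, but a direct sketch makes the transform and its inverse explicit (the 4-point sequence is an arbitrary illustration):

```python
import cmath

# Direct evaluation of the DFT pair (1.33)/(1.34); an FFT computes the
# same sums in O(N log N) operations.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

x = [1.0, 2.0, 0.5, -1.0]      # illustrative 4-point sequence
X = dft(x)
x_back = idft(X)               # recovers x to rounding accuracy
```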

These FFTs have become particularly important in modern digital signal processing applications. Modern radar and sonar systems with phased array antennas/transducers are examples of systems operating with signals represented by finite sequences.

Like continuous-time signals, discrete-time signals can be specified by their statistical properties rather than their time-domain values.

The first order moment and the autocorrelation function or sequence are the properties to which attention is usually paid in this connection.

The first order moment or ensemble average is defined by

m(n) = E[x(n)]

= ∫_{-∞}^{+∞} x(n) p[x(n)] dx(n) , (1.35)

where p[x(n)] is the probability density function of the individual sample x(n).


The alternative time average, m_t, is given by:

m_t = lim_{N→∞} (1/(2N+1)) Σ_{n=-N}^{+N} x(n) . (1.36)

The time average of a sequence of N elements is

m_t = (1/N) Σ_{n=0}^{N-1} x(n) . (1.37)

The autocorrelation sequence is defined by

φ(i,j) = E[x(i)x*(j)]

= ∫_{-∞}^{+∞}∫_{-∞}^{+∞} x(i)x*(j) p[x(i),x(j)] dx(i) dx(j) , (1.38)

where p[x(i),x(j)] is the second order or joint probability density function of the signal at indices i and j.

The covariance sequence c(i,j) is defined by

c(i,j) = E[{x(i)-m(i)}{x(j)-m(j)}*] . (1.39)

The autocorrelation and covariance sequences are, clearly, identical when the first moment m(i) = m(j) = 0.

A sequence which has N elements and the first moment of which is equal to zero can thus have associated with it a two-dimensional autocorrelation or covariance sequence defined by an N x N matrix C. This matrix is called the covariance matrix. It is made up of elements c_ij = φ(i-1, j-1).


The discrete-time signal is said to be wide sense stationary if, as in the continuous-time case, the two following condi­ tions are fulfilled:

1) The first moment, m, is independent of discrete "time" or index n.

2) The autocorrelation sequence ^>(i,j) depends only on the difference between the indices k = j-i.

The autocorrelation can then be defined as a one-dimensional sequence of k = j-i:

φ(k) = E[x(i)x*(i+k)]

= ∫_{-∞}^{+∞}∫_{-∞}^{+∞} x(i)x*(i+k) p[x(i),x(i+k)] dx(i) dx(i+k) . (1.40)

The autocorrelation sequence can also be defined in such cases as the time (index) average φ_t(k) of the product x(i)x*(i+k), if the signals are ergodic:

φ_t(k) = lim_{N→∞} (1/(2N+1)) Σ_{i=-N}^{+N} x(i)x*(i+k) . (1.41)

Since the signals are ergodic,

φ_t(k) = φ(k) for all k . (1.42)
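The time-average estimate (1.41) can be sketched for a simple ergodic signal, a complex exponential in additive complex noise; the frequency, noise level and record length below are arbitrary illustration choices, and the estimate approaches φ(k) = exp(-jω0k), plus the noise power at k = 0:

```python
import cmath
import random

# Time-average estimate (1.41) for an ergodic signal: a complex
# exponential at w0 in additive complex noise (illustrative parameters).
random.seed(0)
w0, N = 0.7, 20000
x = [cmath.exp(1j * w0 * n)
     + 0.1 * complex(random.gauss(0, 1), random.gauss(0, 1))
     for n in range(N)]

def acf(x, k):
    """Estimate phi(k) = E[x(i)x*(i+k)] by averaging over the record."""
    M = len(x) - k
    return sum(x[i] * x[i + k].conjugate() for i in range(M)) / M

# For this signal, phi(k) ~ exp(-j*w0*k), plus the noise power 0.02 at k = 0.
phi0 = acf(x, 0)
phi5 = acf(x, 5)
```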

The discrete-time signal variance, v, and standard deviation, σ, are similarly defined by the first and second order moments m and φ(0):

v = σ² = φ(0) - m² . (1.43)

The two-dimensional autocorrelation sequence or corresponding covariance matrix is symmetrical, as can easily be seen from equation (1.39):

φ(i,j) = φ*(j,i) . (1.44)

The covariance matrix C is therefore a Hermitian matrix. Assuming stationarity, the one-dimensional autocorrelation sequence as defined by (1.40) has the following symmetry:

φ(-k) = φ*(k) . (1.45)

In such cases, the covariance matrix C degenerates to a Toeplitz matrix.

The covariance matrix C is always positive definite. This property results from the general condition

yy* ≥ 0 , (1.46)

therefore,

E[yy*] ≥ 0 . (1.47)

If it is assumed that y is formed as a weighted linear combination of terms in the sequence x(i), i = 0,...,N-1, i.e.

y = Σ_{i=0}^{N-1} w_i x(i) , (1.48)

then, introducing (1.48) into (1.47),

E[yy*] = E[ (Σ_{i=0}^{N-1} w_i x(i)) (Σ_{j=0}^{N-1} w_j* x*(j)) ]

= Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} w_i w_j* E[x(i)x*(j)]

= Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} w_i w_j* φ(i,j)

or, in matrix form,

W†CW ≥ 0 , (1.49)

where W† denotes the Hermitian conjugate of W.

No restrictions are imposed upon the vector elements w_i or the corresponding column matrix representation W. Therefore (1.49) defines C to be positive definite.

As in the continuous-time case, the Fourier transform is defined when conditions for stationarity are fulfilled.

In such cases, a continuous frequency power spectral density of the signal can be defined as

P(ω) = Σ_{k=-∞}^{+∞} φ(k) exp(-jωk) . (1.50)

The inverse transform gives

φ(k) = (1/2π) ∫_{-π}^{+π} P(ω) exp(jωk) dω . (1.51)


1.4 CONTINUOUS-TIME SIGNAL SAMPLING

A sequence x(n) with values x(n) = f(nT) can be derived from the continuous-time signal f(t) by uniform sampling.

The time increment T is called the sampling period. The reciprocal of T is called the sampling frequency or samp­ ling rate.

If it is assumed that the continuous-time signal f(t) is lowpass limited and contains no spectral components above ω_c, as shown in Fig. 1.3(a),

Fig. 1.3 Fourier transforms of the continuous-time signal (a) and the corresponding discrete-time signal obtained by sampling (b)

then, according to the sampling theorem, the original continuous-time signal f(t) can be determined from the sequence x(n) if the sampling rate f_s = 1/T satisfies the condition

ω_s = 2π/T ≥ 2ω_c . (1.52)

This sampling rate is usually referred to as the Nyquist rate.

Figure 1.3(b) shows the effect of aliasing or folding if the condition in (1.52) is not satisfied.
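The folding effect can be sketched numerically: two sinusoids whose frequencies differ by the sampling rate 1/T produce identical samples. The 10 Hz rate and the 3 Hz / 13 Hz pair are arbitrary illustration values:

```python
import math

# Folding/aliasing sketch: sampled at fs = 1/T = 10 Hz, a 13 Hz cosine
# is indistinguishable from a 3 Hz cosine (illustrative values).
T = 0.1
f1, f2 = 3.0, 13.0       # f2 = f1 + 1/T

x1 = [math.cos(2 * math.pi * f1 * n * T) for n in range(50)]
x2 = [math.cos(2 * math.pi * f2 * n * T) for n in range(50)]
# x1 and x2 agree sample for sample: f2 has aliased onto f1.
```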

In some cases, sampling will be nonuniform, by intention or because of physical constraints on realization. If so, there will be a more complex relationship between the continuous-time signal and the resulting sequence. In such instances, determination of the original continuous-time signal f(t) from the sequence x(n) is a more complex task. It can be achieved only under certain conditions. This topic is discussed in Section 3.4.

Radar and sonar systems with staggered pulse repetition frequencies (PRFs) are examples of systems in which nonuniform sampling techniques are used intentionally.

Nonuniform sampling in such applications is discussed more extensively in Section 3.4, where terms and definitions are also suggested.


1.5 BASIC DISCRETE-TIME FILTER STRUCTURE

Input and output sequences inter-related by a set of linear difference equations are of special interest, since comprehensive mathematical tools exist for their solution. These difference equations are referred to below as linear discrete-time systems or filters [5-7].

The class of linear filters considered in this report is defined by the equation

y(n) = Σ_{i=1}^{N} a_i y(n-i) + Σ_{k=0}^{M} b_k x(n-k) . (1.53)

This equation represents a causal system, since the output y(n) is independent of future inputs and outputs, when initial conditions are satisfied [5].

The equation will satisfy the criteria for linearity if the coefficients a_i and b_k are independent of the x(n) and y(n) values.

In addition, if the filter coefficients a_i and b_k are also constants independent of n, the system is said to be linear shift-invariant.

In the general case, filter coefficients are complex numbers.

Filters described by (1.53) are divided into two classes:

1) Infinite Impulse Response (IIR) filters

If N > 0 in (1.53), previous filter outputs influence the instantaneous filter output y(n).

If the filter input x(n) in such cases is equivalent to the unit impulse sequence δ(n), the impulse response h(n) = y(n) will be of infinite duration.

The similarity of (1.53) to the autoregressive moving-average (ARMA) process [8] is noteworthy.


2) Finite Impulse Response (FIR) filters

If N = 0, filter outputs depend on present and previous inputs, but are independent of present or previous outputs.

The unit impulse response sequence will have zero elements for n > M and can be shown to be

h(n) = { b_n , n = 0,...,M ; 0 , n > M } . (1.54)

This corresponds closely to a moving average (MA) process [8].
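The difference equation (1.53) can be evaluated directly, showing the finite response (1.54) for N = 0 and an infinite, decaying response for N > 0; the coefficients below are arbitrary illustration values:

```python
# Direct evaluation of the difference equation (1.53); terms with
# negative indices are omitted (zero initial conditions).
def filt(a, b, x):
    """y(n) = sum_{i=1..N} a[i-1]*y(n-i) + sum_{k=0..M} b[k]*x(n-k)."""
    y = []
    for n in range(len(x)):
        acc = sum(a[i - 1] * y[n - i]
                  for i in range(1, len(a) + 1) if n - i >= 0)
        acc += sum(b[k] * x[n - k]
                   for k in range(len(b)) if n - k >= 0)
        y.append(acc)
    return y

delta = [1.0] + [0.0] * 9                 # unit sample sequence (1.24)

h_fir = filt([], [0.5, 0.3, 0.2], delta)  # N = 0: response ends after n = M
h_iir = filt([0.9], [1.0], delta)         # N = 1: y(n) = 0.9*y(n-1) + x(n)
```

The FIR response reproduces the b-coefficients and is zero thereafter, while the IIR response decays geometrically without ever reaching zero.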


1.6 THE Z-TRANSFORM AND THE SYSTEM FUNCTION

The z-transform of a sequence is a generalization of the Fourier transform as defined by equation (1.31). The z-transform X(z) of a sequence x(n) is defined by

X(z) = Σ_{n=-∞}^{+∞} x(n) z^{-n} . (1.55)

The inverse transform is given by a contour integral derived from the Cauchy integral theorem:

x(n) = (1/2πj) ∮_C X(z) z^{n-1} dz , (1.56)

where C is a closed contour in the region of convergence of X(z) [9].

For the special case in which z = exp(jω), (1.55) and (1.56) reduce to the Fourier transform pair of (1.31) and (1.32). Basic z-transform theorems, properties and applications may be found, e.g., in Oppenheim and Schafer [5].

From these theorems, it can be shown that a linear shift-invariant system can be described in terms of the z-transform of the unit-sample response.

Denoting the input, output and unit-sample response by x(n), y(n) and h(n), respectively, and their corresponding z-transforms by X(z), Y(z) and H(z), the following equations can be derived:

y(n) = Σ_{k=-∞}^{+∞} h(k) x(n-k)

= Σ_{k=-∞}^{+∞} x(k) h(n-k) . (1.57)


In addition,

Y(z) = X(z)H(z) . (1.58)

The z-transform of the unit-sample response is known as the system function or transfer function.

By letting z = exp(jω), the frequency response of the system can be obtained.

If the system is described by the causal, linear shift-invariant filter structure of (1.53), the system function will be given as:

H(z) = ( Σ_{j=0}^{M} b_j z^{-j} ) / ( 1 - Σ_{i=1}^{N} a_i z^{-i} ) . (1.59)

The system function can also be expressed in factored form:

H(z) = A ( Π_{j=1}^{M} (1 - C_j z^{-1}) ) / ( Π_{i=1}^{N} (1 - D_i z^{-1}) ) . (1.60)

The numerator of (1.60) contributes zeros at all z = C_j and M poles at z = 0. The denominator contributes poles at all z = D_i and N zeros at z = 0.

The (causal) system is stable if all poles are contained within the unit circle (i.e. the circle with radius 1 with its centre at the origin).
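The stability condition can be sketched for a first-order element of the factored form (1.60): H(z) = 1/(1 - Dz^{-1}) has unit-sample response h(n) = D^n, which decays exactly when |D| < 1. The pole values below are arbitrary illustrations:

```python
# First-order element of (1.60): H(z) = 1/(1 - D z^-1) has unit-sample
# response h(n) = D^n, stable exactly when the pole D lies inside the
# unit circle (pole values are illustrative).
def impulse_response(D, length):
    return [D ** n for n in range(length)]

stable = impulse_response(0.8 + 0.3j, 60)     # |D| ~ 0.854 < 1: decays
unstable = impulse_response(1.1, 60)          # |D| = 1.1 > 1: grows

decays = abs(stable[-1]) < 1e-3
grows = abs(unstable[-1]) > 1e2
```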

A third approach is to expand (1.59) into a sum of partial fractions:

H(z) = Σ_{i=1}^{N} A_i / (1 - D_i z^{-1}) . (1.61)


A system having a real unit-sample response h(n) has a symmetrical frequency response around ω = 0:

|H(e^{jω})| = |H(e^{-jω})|

∠H(e^{jω}) = -∠H(e^{-jω}) .

A system described by (1.53) with real and constant coefficients a_i and b_k will have a real unit sample response. The corresponding poles and zeros in the transfer function of (1.59) will then be real and/or occur in complex conjugated pairs.

It will be shown in subsequent sections that these properties do not exist in the general case with a complex unit sample response.


CHAPTER 2

SOME DISCRETE-TIME FILTERING TECHNIQUES AS APPLIED TO CONTROL SYSTEMS ANALYSIS

AND REALIZATIONS

2.1 INTRODUCTION

Modern radar and sonar systems use discrete-time filtering techniques in many applications, ranging from sensor signal processing, discussed in Chapter 3, to control system applications such as tracking loops and servo systems. Discrete-time filtering techniques are also used as tools in the analysis and simulation of systems and system complexes. Discrete-time filtering fundamentals and topics relevant to systems analysis, simulations and control problems are discussed in this chapter.

The related subject of optimum state estimation is not considered in this thesis, since it is a broad topic which has been extensively dealt with in the literature [2, 10]. A non-recursive, linear, mean-square estimation filter is, however, derived in Section 3.5.7.

Section 2.2 deals with discrete-time filters as system components. The combination of filter elements into complexes, and the concepts of feedback and feedforward, are discussed. A basis for initial and final value analysis is established. Section 2.3 describes some established techniques for discrete-time filter design. Optimum filter design is a major topic of Section 3.5 and is discussed there in detail.

The concept of sampling rate conversion is introduced in Section 2.4, where some applications are also discussed: firstly, oversampling with subsequent filtering and decimation as a means of signal-to-noise improvement in some radar applications; secondly, decimation and interpolation techniques to compensate for limitations in data transfer rates in complex feedback control systems; and, thirdly, the use of sampling rate conversion to realize filters with characteristics not otherwise available.


2.2 THE FILTER AS A SYSTEM COMPONENT

2.2.1 The basic filter element

The system function in factorized form (1.60) indicates that the simplest filter will be either of the form

H(z) = A(1 − C z^-1) ,   (2.1)

or of the form

H(z) = A / (1 − D z^-1) .   (2.2)

If C or D is complex, a corresponding complex conjugated filter element will exist for real filters.

This will not apply, however, as far as complex filters are concerned. This is discussed in Section 3.3.

The partial fraction expansion (1.61) indicates that the simplest filter element will be of the form of (2.2), because a higher order filter can be expanded into partial fraction elements only when the order of the denominator is at least as high as that of the numerator [11].

Basic filter elements can be combined in parallel or cascaded configurations to form higher order filters.

2.2.2 Paralleling of filter elements

Filter elements (not necessarily of the forms of (2.1) and (2.2)) can be connected in parallel, as indicated in Fig. 2.1.

Basic z-transform properties [5] are such that the resulting transfer function H(z) = Y(z)/X(z) is given by

H(z) = Σ_{i=1}^{N} H_i(z) .   (2.3)


One approach to the synthesis of a higher order filter is to undertake a partial fraction expansion (provided that N ≥ M), to define the individual elements of the form (2.2), and then to undertake paralleling in accordance with (2.3).

Fig. 2.1  Paralleling of filter elements

2.2.3 Cascading filter elements

Filter elements (of any complexity) can be connected in a cascade, as indicated in Fig. 2.2.

Fig. 2.2  Cascading filter elements

The transfer function in such cases is

H(z) = Π_{i=1}^{N} H_i(z) .   (2.4)

Higher order filters can be realized by cascading lower order elements.

One approach to the synthesis of a higher order filter is to express the filter in the form of (1.60), identifying the individual elements, of form (2.1) or (2.2), to be cascaded.
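The two combination rules can be checked numerically. The sketch below (an illustrative Python fragment, not from the thesis; the coefficient values are arbitrary) represents each element by its numerator and denominator coefficient arrays in powers of z^-1 and forms cascade and parallel combinations per (2.4) and (2.3):

```python
import numpy as np

def cascade(b1, a1, b2, a2):
    # (2.4): numerators and denominators multiply
    return np.polymul(b1, b2), np.polymul(a1, a2)

def parallel(b1, a1, b2, a2):
    # (2.3): H1 + H2 = (b1*a2 + b2*a1) / (a1*a2)
    num = np.polyadd(np.polymul(b1, a2), np.polymul(b2, a1))
    return num, np.polymul(a1, a2)

# Two arbitrary first-order elements of the form (2.2): A / (1 - D z^-1)
b1, a1 = [2.0], [1.0, -0.5]    # 2 / (1 - 0.5 z^-1)
b2, a2 = [1.0], [1.0, -0.25]   # 1 / (1 - 0.25 z^-1)

b_cas, a_cas = cascade(b1, a1, b2, a2)
b_par, a_par = parallel(b1, a1, b2, a2)
```

Both combinations share the same denominator; only the numerators differ, as expected from (2.3) and (2.4).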


2.2.4 Combined paralleling and cascading filter elements

Fig. 2.3  Combined paralleling and cascading

From the sections above, the transfer function H(z) = Y(z)/X(z) resulting from a combination of paralleling and cascading structures is found by determining local transfer functions through use of (2.3) or (2.4); these correspond to H_A(z) and H_B(z) in Fig. 2.3. The same equations are then used to determine the next level of transfer functions, and the process is repeated until the overall transfer function is determined.


2.2.5 Feedback

Fig. 2.4  Simple feedback: (a) continuous-time system, (b) discrete-time system

A continuous-time system with feedback is shown in Fig. 2.4(a). The resulting transfer function H(s) = Y(s)/X(s) is

H(s) = H_C(s) / (1 + H_C(s)) .   (2.5)

Equation (2.5) cannot be used directly in relation to the corresponding discrete-time system, because of the inherent delay between input and output.

The resulting transfer function H(z) = Y(z)/X(z) for the discrete-time case illustrated in Fig. 2.4(b) is

H(z) = H_D(z) / (1 + H_D(z) z^-1) .   (2.6)

This may be proved as follows.

The input/output relation of the forward transfer function H_D(z) is expressed in the time domain by the difference equation (1.53):

y(n) = Σ_{i=1}^{N} a_i y(n−i) + Σ_{j=0}^{M} b_j d(n−j) .   (2.7)


The output of the filter at time n, y(n), is subtracted from the input for the next calculation step, x(n+1), forming the difference d(n+1):

d(n+1) = x(n+1) − y(n) ,

or

d(n) = x(n) − y(n−1) .   (2.8)

Combining (2.7) and (2.8) and solving for y(n):

y(n) = Σ_{i=1}^{N} a_i y(n−i) − Σ_{j=0}^{M} b_j y(n−j−1) + Σ_{j=0}^{M} b_j x(n−j) .   (2.9)

Taking the z-transform of (2.9) and solving for H(z) = Y(z)/X(z) gives

H(z) = Y(z)/X(z) = Σ_{j=0}^{M} b_j z^-j / [1 − Σ_{i=1}^{N} a_i z^-i + z^-1 Σ_{j=0}^{M} b_j z^-j] .   (2.10)

The z-transform of (2.7) is

H_D(z) = Y(z)/D(z) = Σ_{j=0}^{M} b_j z^-j / [1 − Σ_{i=1}^{N} a_i z^-i] .   (2.11)

Insertion of (2.11) into (2.6) gives the function defined by (2.10), providing the necessary proof.
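The equivalence just proved can also be checked numerically. In the sketch below (illustrative Python, with an arbitrary first-order forward path H_D(z) = b0/(1 − a1 z^-1), not a model from the thesis), the loop of Fig. 2.4(b) is simulated sample by sample using (2.8) and compared with a direct realization of the closed-loop function (2.10):

```python
import numpy as np

a1, b0 = 0.5, 1.0               # hypothetical forward-path coefficients
N = 50
x = np.ones(N)                  # unit step input

# Simulate the loop: d(n) = x(n) - y(n-1), then the forward path.
y_loop = np.zeros(N)
for n in range(N):
    y_prev = y_loop[n - 1] if n > 0 else 0.0
    d = x[n] - y_prev                       # (2.8)
    y_loop[n] = a1 * y_prev + b0 * d        # forward difference equation

# Direct realization of (2.10): here H(z) = b0 / (1 - (a1 - b0) z^-1)
y_direct = np.zeros(N)
for n in range(N):
    y_prev = y_direct[n - 1] if n > 0 else 0.0
    y_direct[n] = (a1 - b0) * y_prev + b0 * x[n]
```

The two output sequences agree sample for sample, and both settle at b0/(1 − a1 + b0) for the step input.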

If the feedback loop also contains a transfer function, as shown in Fig. 2.5, the resulting transfer function H(z) = Y(z)/X(z) is given by

H(z) = H_A(z) / (1 + H_A(z) H_B(z) z^-1) .   (2.12)

Fig. 2.5  Local transfer function in feedback loop


2.2.6 Feedforward

Fig. 2.6  Feedforward

A system comprising both feedback and feedforward sections is shown in Fig. 2.6(a).

Linearity implies that the feedforward sumpoint can be moved to the input of H_A(z), as in Fig. 2.6(b), with the feedforward transfer function H_F(z) simultaneously divided by H_A(z).


Since the order of summation is irrelevant, the two sumpoints in Fig. 2.6(b) may be rearranged to give the configuration shown in Fig. 2.6(c).

The resulting transfer function H(z) = Y(z)/X(z) is given by

H(z) = [1 + H_F(z)/H_A(z)] · [H_A(z) H_B(z) / (1 + H_A(z) H_B(z) H_C(z) z^-1)] ,   (2.13)

where the first factor represents the feedforward portion and the second the feedback loop.

It is noteworthy that the feedforward portion resembles the corresponding continuous-time case. The feedback portion differs, because of the delay inherent in the feedback.

2.2.7 Initial and final value analysis

A sensor pedestal will, ideally, direct the sensor's boresight axis towards the target with zero or minimal boresight error under all target manoeuvring conditions.

Under realistic conditions, the synthesis or design of tracking systems gives rise to conflicting requirements: suppression of noise and target glint calls for a low servo bandwidth, while keeping up with target dynamics calls for a wide bandwidth and minimum lag.

A common practice in servo system design is to specify the ability to follow an input transient defined by an impulse, a unit step, a ramp, etc.

In continuous-time systems, the initial and final value theorems are well known, and have been applied in these analyses.

In what follows, the discrete-time system transfer function is denoted by H(z), input and output sequences are denoted by x(n) and y(n), and their z-transforms by X(z) and Y(z), respectively.


Discrete-time initial value theorem

y(0) = lim_{z→∞} X(z)H(z) = lim_{z→∞} Y(z) .   (2.14)

Equation (2.14) is derived from the condition

Y(z) = X(z)H(z) ,

and the definition of the z-transform (1.55), with

lim_{z→∞} z^-n = 1 for n = 0, and 0 for n > 0 ,

and x(n) = 0 for n < 0.
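A quick numerical illustration of (2.14) (a Python sketch with an arbitrary first-order filter, not from the thesis): evaluating X(z)H(z) at a very large |z| reproduces the first output sample.

```python
b0, b1, a1 = 2.0, 0.5, 0.3      # hypothetical H(z) = (b0 + b1 z^-1)/(1 - a1 z^-1)

def H(z):
    return (b0 + b1 / z) / (1 - a1 / z)

def X_step(z):                  # unit step input, as in (2.20)
    return 1.0 / (1 - 1.0 / z)

# First output sample of the causal filter for a step input: y(0) = b0 * x(0)
y0 = b0
iv = X_step(1e9) * H(1e9)       # approximates the limit as z -> infinity
```

The approximation of the limit agrees with y(0) to well within numerical precision.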

Discrete-time final value theorem

To derive the discrete-time final value theorem, the one-sided z-transform for the output sequence may be rewritten as

Y(z) = lim_{n→∞} Σ_{i=0}^{n} y(i) z^-i .   (2.15)

Assuming causality and an input sequence x(n) = 0 for n < 0, (2.15) is clearly identical to (1.55).

The basic z-transform property of an incremental time shift may be written as

Z[y(n−1)] = z^-1 Y(z) ,   (2.16)

where Z[·] denotes the z-transform of the function within the square brackets.


Assuming causality, the following equation can then be written:

z^-1 Y(z) = lim_{n→∞} Σ_{j=0}^{n} y(j−1) z^-j .   (2.17)

Subtracting (2.17) from (2.15) gives

lim_{n→∞} y(n) = lim_{z→1} (1 − z^-1) Y(z) ,   (2.18)

which defines the discrete-time final value theorem.
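The theorem can be illustrated numerically (a Python sketch with an arbitrary first-order filter; the coefficients are illustrative). For a unit step input, X_u(z) = 1/(1 − z^-1), so (1 − z^-1)Y(z) = H(z) and the limit in (2.18) is simply H(1):

```python
import numpy as np

a, b0 = 0.8, 0.5               # hypothetical H(z) = b0 / (1 - a z^-1)
x = np.ones(200)               # unit step input
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = a * (y[n - 1] if n > 0 else 0.0) + b0 * x[n]

H_at_1 = b0 / (1 - a)          # lim (1 - z^-1) Y(z) as z -> 1
```

After 200 samples the simulated output has converged to H(1) = 2.5, as (2.18) predicts.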

Representative input sequences and their z-transforms for final value analysis may be derived from (2.15):

Unit sample sequence δ(n):

X_δ(z) = 1 .   (2.19)

Unit step sequence u(n):

X_u(z) = 1 + z^-1 + z^-2 + ... = 1/(1 − z^-1) .   (2.20)

Unit ramp sequence r(n):

X_r(z) = z^-1/(1 − z^-1)^2 .   (2.21)

The poles at z = 1 in (2.20) and (2.21), and the zero at z = 1 in (2.18), are noteworthy.

A tracking system can be described by the discrete-time transfer function H(z), as shown in Fig. 2.7.


The ability of the tracker to follow the input signal can be described by the difference between the output and the input sequences, d(n):

d(n) = y(n) − x(n) .   (2.22)

The transfer function between the input sequence and this difference is illustrated in Fig. 2.8.

Fig. 2.7  Tracker servo transfer function

Fig. 2.8  Tracker boresight error transfer function

Introducing (2.22) into (2.18),

lim_{n→∞} d(n) = lim_{z→1} (1 − z^-1)[H(z) − 1] X(z) .   (2.23)

It can easily be shown that for d(n) to be bounded, H'(z) = H(z) − 1 must have at least one zero at z = 1 for a step input and at least two zeros at z = 1 for a ramp input.

For d(n) to converge to zero as n increases, H'(z) = H(z) − 1 must have at least two zeros at z = 1 for a step input and at least three zeros at z = 1 for a ramp input.
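Equation (2.23) can be checked by simulation (illustrative Python; the tracker model H(z) = b0/(1 − a z^-1) is an arbitrary choice, not a design from the thesis). For a unit step input, the limit in (2.23) evaluates to H(1) − 1:

```python
import numpy as np

a, b0 = 0.8, 0.18              # hypothetical tracker: H(1) = 0.18/0.2 = 0.9
x = np.ones(300)               # unit step input
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = a * (y[n - 1] if n > 0 else 0.0) + b0 * x[n]

d = y - x                      # boresight error, (2.22)
limit = b0 / (1 - a) - 1.0     # (2.23) evaluated for a step input
```

The simulated error settles at H(1) − 1 = −0.1, i.e. a bounded but non-zero steady-state boresight error for this model.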


2.3 FILTER DESIGN

Various methods for design of linear discrete-time filters exist, for both IIR and FIR filter types.

The design process generally starts with specification of the desired properties of the filter. This is followed by determination of the filter coefficients using an appropriate design algorithm.

Filter specifications are often given in the frequency domain, for example as desired transfer function gain and phase characteristics.

In this section, methods based on frequency domain specifications in common use, as recorded in the literature [5, 7], are reviewed.

For other applications, filter specifications may be formulated in terms of optimization criteria, on the basis of a priori information relating to signal and noise characteristics. Such filters are generally referred to as optimum filters. They should not be confused with the filters resulting from optimal (minimax error) filter design methods of the kind referred to in Section 2.3.2, described, e.g., in [6].

Optimum filters form one of the major topics of Section 3.4. Experiments and investigations carried out are described in Chapter 4.

2.3.1 IIR filter design

There are three main classes of design techniques in relation to IIR filters. These are transformation of an analogue filter to a discrete-time filter, direct design of a discrete-time filter in the z-plane, and optimization methods. Transformation methods are widely used because there are powerful design methods for continuous-time filters. For systems analysis purposes, it is also often desirable to use a discrete-time filter equivalent to simulate continuous-time linear time-invariant filters or transfer functions. Rabiner and Gold [6] have described and discussed transformation methods involving the mapping of differentials, impulse invariance, and the bilinear transformation.


For continuous-time filter or systems analysis, the bilinear transformation has attractions. The resulting filter is always stable when it is derived from a stable continuous-time filter. Problems of aliasing, which are encountered with other transformation methods, are also avoided. One disadvantage is, however, distortion of the frequency axis.

Morin and Labbe [12] have described two extensions to the impulse-invariance transformation method, referred to as step-invariance and ramp-invariance transformations. They are claimed to be less sensitive to reduced sampling rates than the impulse-invariance method itself.

Direct design methods include, inter alia, magnitude-squared function design and time domain design. These methods do not appear, however, to have been used to any great extent [6]. Optimization methods seem to have been used more widely; a number of techniques exist [6]. In such methods, a mathematical optimization procedure is used to determine filter coefficients which minimize some error criterion, e.g. the mean squared error between desired and actual filter transfer functions at a discrete set of frequencies.

2.3.2 FIR filter design

An FIR filter can easily be designed by obtaining an impulse response of finite length through truncation of an impulse response sequence of infinite duration.

Such truncation can, however, lead to undesired sidelobes (the Gibbs phenomenon) [5] in the filter characteristics. To reduce sidelobe effects, the technique of windowing can be applied to the truncated impulse response sequence. A number of types of window (e.g. the Kaiser, Hanning, and Hamming windows) have been described and discussed in the literature [5, 6].

Another design technique described in the literature is known as frequency sampling [6]. The desired frequency response is approximated by sampling at N evenly spaced points and then obtaining an interpolated frequency response which passes through the frequency samples. N in this case is the number of filter taps.


A third common class of design methods is the class resulting in what are known as optimal (minimax error) filters. The methods are similar to the corresponding methods for IIR filters [6].

FIR filters with linear phase characteristics are desirable in many applications. Conditions for the linear phase characteristics of general complex filters are discussed in Section 3.3.5 and can be introduced as a constraint in the design procedures mentioned above.


2.4 SAMPLING RATE CONVERSION

Sampling rate conversion is needed in many practical applications, such as reduction of data rate because of transmission channel bandwidth limitations or for economy of use of digital signal processing equipment, signal-to-noise enhancement, improvement of digital feedback control system characteristics, and smoothing of output signals to (analogue) mechanical servos.

The process of decreasing the sampling rate is known as decimation and that of increasing the sampling rate as interpolation.

Sampling rate conversion is described in a number of sources [13-20].

As will be shown later, a nearly ideal low-pass filter plays an important role in both decimation and interpolation processes. In such processes, an FIR-filter structure is generally preferable because of its phase characteristics (linear, or even zero phase in some non-causal applications).

2.4.1 Decimation

The cost and complexity of digital (signal) processing equipment and signal transfer systems increase very considerably as speed requirements become high.

An important consideration in system optimization is therefore minimization of the data sampling frequency.

The requirement for sampling frequency reduction implies a need for bandwidth reduction before the final sampling process.

In many cases, the bandwidth reduction needed is achieved by analogue filters prior to analogue-to-digital (A/D) conversion. Often, however, a digital filter is desirable or necessary following the converter, to allow a second sampling process to take place at a lower rate.

As has been pointed out in [21], digital filters are particularly desirable for input filtering in systems which are required to process signals from several sources with different centre frequencies and bandwidths. In addition, phase distortions introduced by analogue filters with sharp cutoff characteristics can cause problems.


Because of the importance of phase linearity, a preference for the use of FIR-filters has been expressed [13, 16, 21].

In the following sections, some basic properties and relationships of the decimation process are reviewed. These properties and relationships have been described in [16, 17, 21], inter alia.

Two applications of signal-to-noise enhancement by filtering and decimation after initial A/D conversion are then discussed.

2.4.1.1 Properties of the decimation process

According to the sampling theorem, the original signal is retrievable when the sampling rate is at least twice the signal bandwidth.

The information signal can consist of a desired signal component and an unwanted noise component.

Aliasing of the noise component causes an unwanted reduction in the signal-to-noise ratio.

Figure 2.9 shows the normal procedure of signal and noise bandwidth reduction prior to A/D-conversion.

In some cases, a significant noise component may be associated with the A/D conversion process, as quantizing noise. This is discussed further in Section 2.4.1.3.

A nearly ideal analogue low-pass filter is difficult to achieve in practice.

In some systems, e.g. pulse radars, an inherently high sampling rate, considerably higher than the information signal and noise bandwidth, can exist.

For these reasons, a two stage sampling process is often used, as illustrated in Fig. 2.10.

The analogue signal, s(t), is here first sampled at a sufficiently high sampling rate to ensure no aliasing of input signal and noise components (oversampling).


The bandwidth of the sampled signal is then reduced by filtering through an (ideal) digital low-pass filter. For the reasons stated above, only FIR-filters will be discussed in relation to this application.

Conditions for causal linear phase and non-causal zero phase FIR-filters are discussed in Section 3.3.5.

Fig. 2.9  Bandwidth reduction prior to A/D conversion

Fig. 2.10  Two-stage sampling process

The resulting output sequence x(n) is then decimated by forming a new sequence consisting of every Mth element of x(n). Thus

y(n) = x(nM) ,   (2.24)

where M is an integer, known as the decimation ratio.
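In code, the decimation step (2.24) is a simple stride operation. The Python sketch below (illustrative data only) assumes x(n) has already been band-limited by the digital low-pass filter:

```python
import numpy as np

M = 4                          # decimation ratio
x = np.arange(20.0)            # stand-in for the filtered sequence x(n)
y = x[::M]                     # y(n) = x(nM), per (2.24)
```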

From Figures 2.9 and 2.10, it is evident that the sampled signal sequences x(n) from the two processes are identical. Assuming stationarity, the output power of the desired signal, P_SO, can be found from (3.57):

P_SO = Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} a_i a_j φ_S(|j−i|) ,   (2.25)

where a_i are FIR-filter coefficients and φ_S is the autocorrelation sequence of the signal after the first (high rate) sampling.

Alternatively, if the filter is defined by its transfer function in the frequency domain, H(e^jω), and the signal by its power spectral density Φ_S(ω), the output power P_SO can be found by combining (3.36) and (1.48):

P_SO = (1/2π) ∫_{−π}^{+π} Φ_S(ω) H(e^jω) H(e^-jω) dω .   (2.26)

The noise output power, P_NO, can be determined from similar expressions:

P_NO = Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} a_i a_j φ_N(|j−i|) ,   (2.27)

or

P_NO = (1/2π) ∫_{−π}^{+π} Φ_N(ω) H(e^jω) H(e^-jω) dω ,   (2.28)

where φ_N and Φ_N are the autocorrelation sequence and the power spectral density, respectively, of the noise after the first sampling.

From (2.26) and (2.28) it is clearly evident that a sampling rate higher than twice the bandwidths of the signal and noise components (after digital filtering) yields no further improvement in the output signal-to-noise ratio.
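The autocorrelation sums (2.25) and (2.27) are easy to evaluate directly. The Python sketch below uses an arbitrary 4-tap filter, an exponentially correlated signal model and white noise; all values are illustrative, not taken from the thesis:

```python
import numpy as np

a = np.array([0.1, 0.4, 0.4, 0.1])       # hypothetical FIR coefficients
sigma_S2, sigma_N2, rho = 2.0, 1.0, 0.9  # assumed powers and correlation

def phi_S(lag):                          # assumed signal autocorrelation
    return sigma_S2 * rho**lag

def phi_N(lag):                          # white noise autocorrelation
    return sigma_N2 if lag == 0 else 0.0

N = len(a)
P_SO = sum(a[i] * a[j] * phi_S(abs(j - i)) for i in range(N) for j in range(N))
P_NO = sum(a[i] * a[j] * phi_N(abs(j - i)) for i in range(N) for j in range(N))
snr_gain = (P_SO / P_NO) / (sigma_S2 / sigma_N2)
```

For this filter the correlated signal passes with far less attenuation than the white noise, so the output signal-to-noise ratio exceeds the input ratio.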

2.4.1.2 Target angular data extraction by monopulse tracking radars

The target angular information produced by the first stages of a monopulse tracking radar receiver is a typical example of application of oversampling, low-pass filtering and subsequent decimation.

Target angular data normally have narrow-band characteristics, with a typical bandwidth of a few Hz.

Samples are, however, produced at the pulse repetition rate of the radar, which normally lies in the kHz region.

Angular measurements can be disturbed by noise, consisting of both wide- and narrow-bandwidth components.

Major angular noise contributions are caused by what are termed angular glint and multipath effects [20, 22].

For a fixed frequency radar tracking a nearly stationary target, these disturbances consist mainly of narrow-band components with a standard deviation, σ, depending upon target size and shape, and tracking geometry.

Figure 2.11 shows signal and noise in the case of narrow-band noise.

However, techniques exist for decorrelation of angular glint and multipath distortion components [20].

Using these techniques, the standard deviation of the disturbances (and, consequently, the noise power) remains unchanged. The correlation between samples is reduced or ceases to exist, in which case the noise is white.

In this ideal case, the noise autocorrelation function, φ_N(n), is

φ_N(n) = σ_N^2 for n = 0, and 0 for n ≠ 0 .   (2.29)

The corresponding noise power spectral density, Φ_N(ω), can be found from (1.50):

Φ_N(ω) = σ_N^2 .   (2.30)

Thus the noise power is uniformly distributed over all frequencies.

Fig. 2.11  Narrowband signal and noise: (a) continuous-time signal and noise spectra; (b) sampled signal and noise spectra, with digital low-pass filter; (c) sampled noise autocorrelation sequence; (d) sampled signal autocorrelation sequence

Figure 2.12 shows signal and noise in the decorrelated case.

< I > ( W )* D F r O R R F L A T F D <») SAMPLED SIGNAL D E C O R R E L A T E D A N D N 0|S E NOISE SPECTRA

-7^sr>

r

~zi"V

r-

/^*V,

r^ '/x \T\

-211 - 7 1 - 0 > 0 ♦<>> IC 2TI <PN J ( b ) SAMPLED NOISE A U T O C O R R E L A T I O N SEQUENCE —o—o o—o—*—o—o o — Ö — o — ^ n - 1 - 3 - 2 - 1 0 1 2 3 1 S

il

( c ) SAMPLED S I G N A L A U T O C O R R E L A T I O N SEQUENCE

l l L U n

- 1 - 3 - 2 - 1 0 1 2 3 1 S

(60)

By introducing an (ideal) low-pass filter, H(e^jω), with bandwidth ω_0 just wide enough to cover the desired signal component, most of the decorrelated noise spectrum is rejected.

Introducing (2.30) into (2.28), together with the filter characteristics and ω_0 = kπ:

P_NO = (1/2π) ∫_{−ω_0}^{+ω_0} σ_N^2 dω = k σ_N^2 .   (2.31)

The parameter k is given by

k = Ω_0/(π f_s) ,   (2.32)

where Ω_0 is the continuous-time signal bandwidth corresponding to the discrete-time filter, and f_s is the sampling rate.

Without filtering, the output noise power would be equal to the input noise power σ_N^2.

Thus the parameter k expresses the noise suppression ratio between input and output and (2.32) indicates clearly that a high sampling rate is desirable.

Decimation for further signal processing and subsequent target state estimation can then be carried out at the rate

f_s' ≥ k f_s

to avoid aliasing.

2.4.1.3 Oversampling to suppress A/D conversion quantizing errors

Radar video signals are normally sampled at the pulse repetition rate for each resolution cell and range increment. The sampling process described in Section 1.4 assumes the samples of the continuous-time signal to be known with infinite precision.


Present-day (1987) A/D converter technology restricts resolution in relation to radar video conversion to 12 bits or less.

In many cases, the effects of restrictions on sampling precision can be modelled as an additive quantization error e(n) [5] with the following idealized properties:

1) The error sequence e(n) is a stationary random process.

2) The error sequence is not correlated with the exact samples of the continuous-time signal.

3) The error sequence is a white noise process.

4) Individual elements within the error sequence are uniformly distributed over the range of quantization errors.

Assuming that the quantization error is uniformly distributed, in accordance with property 4 above, its standard deviation, σ_e, is [5]

σ_e = L/√12 ,   (2.33)

where L is the resolution increment of the A/D converter.

As discussed in the preceding section, unwanted noise should be decorrelated, i.e. spread over as wide a frequency range as possible, prior to filtering.
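A quick empirical check of (2.33) (a Python sketch; the step size and test signal are arbitrary choices): quantizing a densely sampled ramp and measuring the error spread reproduces L/√12 closely.

```python
import numpy as np

L = 0.05                                  # assumed quantizer step size
x = np.linspace(0.0, 1.0, 200001)         # fine ramp spanning many steps
xq = np.round(x / L) * L                  # uniform (rounding) quantizer
e = xq - x                                # quantization error sequence
sigma_e = np.std(e)                       # should approach L/sqrt(12)
```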

If the input to the A/D converter (signal plus noise) is narrow-band, the samples taken will be highly correlated, and property 3 above will thus not hold.

Decorrelation in this application depends on inherent wide-bandwidth noise causing the A/D outputs to "jitter" around the least significant bits during subsequent sampling.

For this reason, the continuous-time signal (e.g. radar video) should have a wider bandwidth than that prescribed by a matched filter approach.

The sampling frequency for this application will typically lie in the MHz region. Accordingly, the subsequent digital filters must be simple.


To realize such filters, simple FIR structures realized by digital accumulators have been used. The output consists of the average of the last N samples:

y = (1/N) Σ_{i=1}^{N} x_i ,   (2.34)

where x_i are individual samples within a resolution cell. This constitutes an FIR-filter with all coefficients

a_i = 1/N .

The corresponding transfer function in the z-domain is

H(z) = (1/N) Σ_{i=0}^{N-1} z^-i .   (2.35)

The sampling rate and bandwidth of this filter are large. The wanted signals (target echoes) vary slowly or remain practically constant in amplitude during the period of observation.

The signal's autocorrelation sequence, φ_S(i), is then

φ_S(i) = σ_S^2 for all i .   (2.36)

Assuming ideal decorrelation of the quantizing errors, achieving white noise characteristics, the noise autocorrelation sequence, φ_N(i), is

φ_N(i) = σ_N^2 for i = 0, and 0 for i ≠ 0 .   (2.37)

The signal power after filtering and decimation, P_SO, can then be found from (2.25) and (2.36):

P_SO = (1/N^2) Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} σ_S^2 = σ_S^2 .   (2.38)

The noise power after filtering and decimation, P_NO, can also be found from (2.27) and (2.37):

P_NO = (1/N^2) Σ_{i=0}^{N-1} σ_N^2 = σ_N^2/N .   (2.39)

The signal-to-noise ratio at the output is therefore increased by N, the number of samples at the first sampling stage. This is also identical to the decimation ratio.
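The two results can be verified directly from the autocorrelation sums (a Python sketch; N and the power values are illustrative):

```python
import numpy as np

N = 16
a = np.full(N, 1.0 / N)                  # averaging filter, a_i = 1/N
sigma_S2, sigma_N2 = 3.0, 2.0            # assumed signal and noise powers

# (2.38): constant signal, phi_S(|j-i|) = sigma_S2 for all lags
P_SO = sum(a[i] * a[j] * sigma_S2 for i in range(N) for j in range(N))
# (2.39): white noise, only the i == j terms survive
P_NO = sum(a[i] * a[i] * sigma_N2 for i in range(N))
```

The signal power is unchanged while the noise power is divided by N, giving the stated SNR improvement.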

2.4.2 Interpolation

According to the sampling theorem, the original continuous-time signal is retrievable at any time between samples if the sampling rate is higher than the Nyquist rate (Section 1.4).

An interpolation function has been derived by, inter alia, Oppenheim and Schafer [5] to allow retrieval of the original continuous-time signal:

x(t) = Σ_{k=−∞}^{∞} x(k) · sin[(π/T)(t−kT)] / [(π/T)(t−kT)] ,   (2.40)

where x(k) is the sampled sequence and T is the sampling period.

This equation is impractical because all past and future samples are required, i.e. the process is non-causal.

As an alternative, linear interpolation between adjacent samples can be undertaken. However, this process is inefficient and is associated with deficiencies [13].

Handling of interpolation as a linear filtering process is generally recommended.


2.4.2.1 Interpolation as a linear filtering process

To assist visualization of the initial sampling and subsequent interpolation processes, the continuous-time signal s(t) shown in Fig. 2.13(a) may be considered. On being sampled at the desired (high) sampling rate f_I = 1/T_I, the sequence x(n) is produced.

The continuous-time signal has a Fourier transform S(jω_c). The signal is lowpass limited to Ω_0 (rad/sec).

According to Oppenheim and Schafer [5] (1.28), the discrete-time Fourier transform of the sampled signal x(n), X(e^jω), is

X(e^jω) = (1/T_I) Σ_{i=−∞}^{+∞} S[(jω/T_I) + (j2πi/T_I)] .   (2.41)

As discussed in Section 2.4.1, any sequence y(n) formed from every Mth sample of an original sequence x(n) can be considered as sampled directly from s(t) at the rate f_D = 1/T_D, where T_D = M T_I.

The Fourier transform of the (decimated) sequence y(n), Y(e^jω), is then, similarly,

Y(e^jω) = (1/T_D) Σ_{i=−∞}^{+∞} S[(jω/T_D) + (j2πi/T_D)] .   (2.42)

Given the decimated sequence y(n), the problem is to find a procedure to derive the sequence x(n) with the higher sampling rate.

This constitutes the interpolation process.

The first step is to form a new sequence v(n) at the higher sampling rate f_I = M f_D, as shown in Fig. 2.13(d):

v(n) = y(n/M) for n = 0, ±M, ±2M, ..., and 0 otherwise .   (2.43)
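In code, forming v(n) is a zero-insertion ("zero-stuffing") operation. A Python sketch with illustrative data:

```python
import numpy as np

M = 3                               # interpolation factor
y = np.array([1.0, 2.0, 3.0, 4.0])  # low-rate sequence y(n)
v = np.zeros(M * len(y))
v[::M] = y                          # v(n) = y(n/M) at n = 0, M, 2M, ...
```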

Fig. 2.13  The interpolation process: (a) continuous-time signal s(t) and spectrum S(jω_c); (b) high-rate sequence x(n) and X(e^jω); (c) decimated sequence y(n) and Y(e^jω); (d) zero-filled sequence v(n) and V(e^jω); (e) sample-and-hold sequence v'(n)

The z-transform of v(n) is

V(z) = Σ_{n=−∞}^{∞} y(n/M) z^-n = Σ_{n=−∞}^{∞} y(n) z^-Mn = Y(z^M) .   (2.44)

By setting z = e^jω and combining (2.44) with (2.42), the Fourier transform of v(n), V(e^jω), is

V(e^jω) = (1/T_D) Σ_{k=−∞}^{∞} S[(jωM/T_D) + (j2πk/T_D)] .

Substituting T_D = M T_I and rearranging:

V(e^jω) = (1/M) { (1/T_I) Σ_{k=−∞}^{∞} S[(jω/T_I) + (j2πk/T_I)] + (1/T_I) Σ_L S[(jω/T_I) + (j2πL/(M T_I))] } ,   (2.45)

with L = ±1, ±2, ±3, ..., excluding ±M, ±2M, ±3M, ...

The first sum term within the main brackets is clearly identical to the Fourier transform of the desired sequence x(n), as defined by (2.41) and indicated by solid lines in Fig. 2.13(d).

The second sum term constitutes undesired spectral components, indicated by dashed lines in Fig. 2.13(d).

On suppression of these spectral components by (ideal) filtering, the filter output is the desired sequence, x(n).


In some applications, the missing samples are replaced by the value of the last previous sample, rather than being set to zero, before low-pass filtering. This can, in some cases, simplify realization. Its effect is discussed further in the next section.

This may be considered as a discrete-time counterpart of the continuous-time sample and hold function shown in Fig. 2.13(e).

2.4.2.2 Interpolator filters

The ideal low-pass filter, as foreseen in the sections above, cannot be realized. Digital filters which approximate the ideal filtering process are therefore used.

Several authors [13, 18] have expressed preferences for FIR filters because they allow causal linear phase and non-causal zero phase filters to be realized.

Such filters have been extensively analysed in the literature, where design criteria are also discussed and described.

If the sequence w(n) before low-pass filtering is formed by the discrete-time "sample and hold" process described in Section 2.4.2.1, some caution is required.

On considering a corresponding sequence v(n) formed by the conventional procedure of substituting zeros for the intermediate samples, it is obvious that

w(n) = Σ_{i=0}^{M-1} v(n−i) .   (2.46)

This equation describes a linear filter process with low-pass characteristics and with the corresponding z-transform

G(z) = W(z)/V(z) = Σ_{i=0}^{M-1} z^-i .   (2.47)
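The relation (2.46) can be confirmed numerically (a Python sketch with illustrative data): convolving the zero-filled sequence with M unit coefficients reproduces the discrete-time sample-and-hold sequence.

```python
import numpy as np

M = 3
y = np.array([1.0, 2.0, 3.0])
v = np.zeros(M * len(y))
v[::M] = y                                 # zero-filled sequence, (2.43)

w = np.convolve(v, np.ones(M))[:len(v)]    # w(n) = sum of v(n-i), (2.46)
w_hold = np.repeat(y, M)                   # direct sample-and-hold
```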

When the discrete-time "sample and hold" process is implemented, the resulting interpolator filter constitutes the inherent transfer function G(z) cascaded with the subsequent digital filter H(z), as shown in Fig. 2.14.

Fig. 2.14  Interpolation with discrete-time sample and hold

2.4.2.3 Sample rate conversion in feedback control systems

Control systems made up of distributed processing units can exchange data over transmission links with restrictions on information flow rate.

For example, consider the remote-controlled servo system shown in Fig. 2.15.
