
AN ALGORITHM FOR DIGITAL WIENER FILTERS AND ITS APPLICATION TO SEISMIC WAVELETS IN NOISE

N. DE VOOGD

AN ALGORITHM FOR DIGITAL WIENER FILTERS AND ITS APPLICATION TO SEISMIC WAVELETS IN NOISE

DISSERTATION

submitted in fulfilment of the requirements for the degree of Doctor in the Technical Sciences at the Technische Hogeschool Delft, by authority of the Rector Magnificus, Dr. Ir. H. van Bekkum, professor in the Department of Chemical Technology, to be defended before a committee appointed by the College of Deans on Tuesday 25 May 1976 at 16.00 hours

BY

NICOLAS DE VOOGD

Doctorandus in geophysics, born in Sevilla, Spain


1976


CONTENTS

PREFACE 7

1. INTRODUCTION 9

2. COMPUTATION OF DIGITAL WIENER FILTERS
2.1. Definition of least squares filters 12
2.2. The normal equations 14
2.3. Filter determination by means of orthogonalization 15
2.4. Determination of the optimum spike position 20
2.5. Determination of the optimum output shape position 21
2.6. Derivation of the filter 23
2.7. Computational aspects 23
2.8. The phase-lag of a seismic wavelet 26

3. SEISMIC SIGNALS IN NOISE 28

4. THE CENTER OF A RANDOM NOISE REALIZATION OF FINITE DURATION
4.1. Introduction 30
4.2. The center of a realization of white noise 30
4.2.1. Characterization of white noise 30
4.2.2. Experimental evaluation of mean, variance and skewness 31
4.2.3. Derivation of mean and variance 34
4.3. The center of a realization of seismic noise 36
4.3.1. Characterization of seismic noise 36
4.3.2. Experimental evaluation of mean, variance and skewness 37
4.3.3. Derivation of mean and variance 39
4.3.4. Approximate evaluation of mean and variance 42
4.4. Conclusions 51

5. THE PHASE-LAG OF A NOISE-CONTAMINATED SEISMIC WAVELET
5.1. Introduction 52
5.2. The center of two added signals 52
5.3. The Fourier spectrum 55
5.4. Consequences for the derivation of stabilized Wiener filters 59

6. WAVELET SHAPING AND NOISE REDUCTION
6.1. Application of the orthogonalization scheme to seismic wavelets in noise 61
6.2. Examples and discussion 64

SUMMARY AND CONCLUSIONS 70
SAMENVATTING EN CONCLUSIES 72


PREFACE

A part of the investigations reported in this thesis was carried out at Texaco's European Research Center in Ghent, Belgium. The author is grateful to Texaco Inc. for the permission to publish the results of that work. Special thanks are due to Dr. Roger De Meersman for his guidance and inspiration during that stage of the investigations.

Helpful discussions with Dr. K. Kubik of Rijkswaterstaat's Analysis Group, as well as the assistance given by Wim Jan Adolfs, Eddy Maitimo and Job Roosma while working for their engineering thesis, are gratefully acknowledged.

Thanks are also due to Mr. A. F. G. Faessen of the Drawing and Photography section, Department of Mining Engineering, who made the figures, and to Miss Netty Gramsma for typing a sizeable portion of the manuscript. Finally, the author wishes to thank his colleague-members of various University committees for their understanding during the last months that this thesis was being prepared.


1. INTRODUCTION

An ideal seismic reflection record y(t) is built up by plane waves reflected from the interfaces which separate the geological layers in the subsurface. This record can be considered as the result of the convolution

$y(t) = \int_0^\infty a(\tau)\, h(t-\tau)\, d\tau$

where a(t) is the seismic signal wavelet and h(t) the unit impulse response of the subsurface, representing the distribution in time of primary and multiple reflections.

The shape of the signal wavelet a(t) results from the frequency-selective absorption of seismic energy as the signal propagates through the earth. This effect can be described by convolution of the initial pulse p(t), which is ideally a Dirac pulse, with the unit impulse response f(t) of the 'earth filter':

$a(t) = \int_0^\infty p(\tau)\, f(t-\tau)\, d\tau$

The result of 'earth filtering' is an increase in the duration of the seismic disturbance. This pulse-broadening is a major weakness of the seismic reflection method since it causes loss of resolution. When, for instance, the two-way traveltime between two interfaces is less than the duration of the signal, the wavelets reflected by these interfaces overlap each other and cannot be distinguished as separate events on the record. One of the main objectives of modern seismic processing is therefore to improve the resolution of the registrations. This is done by compensating the earth filtering by means of an appropriate inverse or deconvolution filter. When, in this process, the initial pulse is assumed to be a Dirac function, the compensating filter is in fact designed to transform the seismic signal into a 'spike' and is therefore called a spiking filter.

The objective of resolution improvement can also be satisfied by transforming the seismic signal into any wavelet shape which is noticeably sharper. A filter which transforms an input into some desired output is called a 'shaping filter'. It has been shown (Treitel and Robinson, 1966) that shaping filters perform better than spiking filters of the same length.

Seismic data processing is almost entirely done by means of digital computers. Calculations therefore involve digital filters, digital signals and discrete convolution. The action of the earth filter on the initial disturbance is, for instance, described by

$a_t = \sum_{\tau} p_\tau\, f_{t-\tau}$


An ideal deconvolution filter is a time series of infinite duration. In practical applications, of course, a digital filter is of finite length: in computer processing the filter length has to satisfy constraints imposed by available memory space and computing time. To obtain the best approximation to a desired output by means of a digital filter with finite length, the filter coefficients are calculated so that they satisfy some criterion of optimization. Such a filter is called a Wiener or optimum filter. In this thesis the least squares criterion features as a measure of optimization. The usual procedure is to formulate the sum of squared deviations between desired output and actual filter output as a functional of the filter coefficients and to form the so-called normal equations by equating to zero the derivatives of this functional with respect to the filter coefficients. The approach used in this thesis is based on vector space methods and avoids the normal equations.

A study on inverse filters was published by Treitel and Robinson (1964). The theoretical basis for least squares filters was developed by Wiener (1949). The application of the original concept to seismic reflection records was reported by Rice (1962), Simpson et al. (1963), Ford and Hearne (1966), Robinson and Treitel (1967) and others.

The performance of a least squares filter is numerically measurable and is usually expressed by the filter performance parameter P, given by

$P = 1 - E$

where E is the normalized minimum error energy (Levinson, 1947). The number P lies in the range 0 ≤ P ≤ 1, and the closer P is to 1, the better the filter is. The parameter P is a function of filter length (Claerbout and Robinson, 1964) and of the time-lag between input and desired output. This time-lag can be varied, the number of possible output positions depending on input length and filter length. The filter, of certain length, which has the largest P value is the one which incorporates the correct time-lag between input and desired output (Treitel and Robinson, 1966). This correct time-lag depends on the phase-lag characteristic of the input signal, which is usually unknown. In conventional calculations, therefore, a filter is calculated for each possible output position. The optimum-lag shaping filter is selected by monitoring the performance P for all possible lags.

Chapter 2, on least squares filters, deals with an algorithm that avoids forming the normal equations. It is demonstrated that the optimum output position can be determined without filter computations. Once the optimum output position is known, the optimum-lag Wiener filter can be calculated.


Another major problem encountered in seismic data processing is the presence of noise and unwanted signals on the seismogram. An effective method, being widely used, to enhance the signal-to-noise ratio is common datum point stacking (Mayne, 1962). This method is essentially based on the analysis of effective seismic velocities (Taner and Koehler, 1969). It has been found, however, that velocity analyses are strongly improved if the data are previously deconvolved. Therefore, Wiener filter algorithms are employed to calculate filters that perform two duties: to shape the signal as well as possible into some desired wavelet and to reduce noise. These filters are often called stabilized filters, in contrast with ordinary least squares filters, because the latter magnify high-frequency components on the record. Work on this subject has been reported by Robinson (1967), Treitel and Robinson (1969), Deregowski (1971) and Douze (1971).

Chapters 3, 4 and 5 deal with seismic noise, and Chapter 6 presents a new method to derive stabilized Wiener filters. The author previously published a paper on this subject (De Voogd, 1974).

Firm arguments have been put forward lately for the application of deterministic Wiener filters rather than deconvolution filters based on statistical assumptions, since they give the improved results that are necessary for detailed interpretation of seismic records (Neidell and Lindsey, 1974).

Throughout this thesis it has been assumed that noise characteristics and seismic signal have been determined. Methods to do so have been published, for instance, by Dash and Obaidullah (1970) and White and O'Brien (1974). The problem of non-stationarity of seismic data is supposed to have been dealt with by subdivision of the record into gates in which stationarity is more or less valid. Optimum gate lengths can be determined with a method proposed by Wang (1969).


2. COMPUTATION OF DIGITAL WIENER FILTERS

2.1. Definition of least squares filters

The calculation of a least squares Wiener filter x = (x₀, x₁, x₂, ..., xₙ) is based on the shape of the input signal a = (a₀, a₁, a₂, ..., a_m) to which the filter is to be applied and on the desired output d = (d₀, d₁, d₂, ..., d_{m+n}). The filter has to be solved from the system of equations

$\sum_{\tau=0}^{n} x_\tau\, a_{t-\tau} = d_t, \quad \text{for } t = 0, 1, 2, \ldots, m+n \qquad (1)$

These equations represent discrete convolution. They can also be written in matrix form,

$\begin{pmatrix} a_0 & 0 & \cdots & 0 \\ a_1 & a_0 & & \vdots \\ a_2 & a_1 & \ddots & 0 \\ \vdots & a_2 & & a_0 \\ a_m & \vdots & & a_1 \\ 0 & a_m & & a_2 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & a_m \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} d_0 \\ d_1 \\ d_2 \\ \vdots \\ d_{m+n} \end{pmatrix} \qquad (2)$

or Ax = d.

The matrix A will be called the convolution matrix. The vector d contains zeros except for the elements d_t which are equal to the sample values of the desired signal shape.

The convolution matrix A is a rectangular cyclic matrix: its j-th column is obtained by cyclic permutation of column j − 1. Therefore, A is completely determined by the elements in its first column. The properties of A are summarized by

$a_{i,j} = a_{i+1,\,j+1}, \quad i \text{ and } j \text{ modulo } m+n+1,$

$a_{i,0} = 0 \quad \text{for } i > m. \qquad (3)$

Matrix A represents a linear mapping from an (n+1)-dimensional vector space V^{n+1}, with basis e₀, e₁, e₂, ..., eₙ, into an (m+n+1)-dimensional vector space V^{m+n+1}, with basis f₀, f₁, f₂, ..., f_{m+n}, where the vectors e_j and f_i are unit vectors (0, 0, ..., 1, ..., 0) with the i-th element equal to one. The j-th column of A contains therefore the coordinates, with respect to the basis vectors f_i, of the image Ae_j of the j-th basis vector of V^{n+1}.
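To make the structure of A concrete, the following sketch (Python with NumPy; not part of the original thesis, and the wavelet and filter values are arbitrary) builds the convolution matrix of eq. (2) and checks that Ax reproduces the discrete convolution of eq. (1).

```python
import numpy as np

def convolution_matrix(a, n_filt):
    """Build the (m+n+1) x (n+1) convolution matrix A of eq. (2):
    column j is the wavelet a shifted down by j samples, zeros elsewhere."""
    m1 = len(a)                                # m+1 wavelet samples
    A = np.zeros((m1 + n_filt - 1, n_filt))
    for j in range(n_filt):
        A[j:j + m1, j] = a
    return A

a = np.array([1.0, -0.5, 0.25, 0.1])           # arbitrary wavelet, m+1 = 4
x = np.array([0.3, 0.2, -0.1])                 # arbitrary filter, n+1 = 3
A = convolution_matrix(a, len(x))
assert np.allclose(A @ x, np.convolve(a, x))   # Ax is the convolution a * x
```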

The vectors Ae_j generate a subspace R(A) of V^{m+n+1} which is called the range, or column space, of A.

In general there is no exact solution for (2) since the system of equations is overdetermined:

$d \notin R(A)$

The system is therefore solved in the least squares sense, which means that a vector x ∈ V^{n+1} is sought such that

$\|Ax - d\| = \|d' - d\| = \text{minimal},$

where ‖·‖ denotes the Euclidean norm, given by the square root of a vector's scalar product with itself, and where of course

$Ax = d' \in R(A)$

The vector d′ represents the actual output of the filter, while d contains the desired output. Minimization of ‖d′ − d‖ is equivalent to minimization of the energy I of the error signal:

$I = \sum_{t=0}^{m+n} \Big( d_t - \sum_{\tau=0}^{n} x_\tau\, a_{t-\tau} \Big)^2 = \sum_{t=0}^{m+n} (d_t - d'_t)^2$

The performance P is the one's complement of the normalized minimum error energy:

$P = 1 - \frac{\|d - d'\|^2}{\|d\|^2}$
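As a minimal illustration of these definitions (NumPy assumed; the wavelet, filter length and spike position are arbitrary choices), the defining least squares problem can be solved directly and the performance P evaluated; this is not yet the orthogonalization algorithm derived below.

```python
import numpy as np

a = np.array([2.0, 1.0, -0.5])                    # input wavelet, m+1 = 3
n1 = 4                                            # filter length n+1
A = np.column_stack([np.concatenate([np.zeros(j), a, np.zeros(n1 - 1 - j)])
                     for j in range(n1)])         # convolution matrix, eq. (2)
d = np.zeros(A.shape[0]); d[2] = 1.0              # desired output: spike at t = 2

x, *_ = np.linalg.lstsq(A, d, rcond=None)         # filter minimizing ||Ax - d||
d_act = A @ x                                     # actual output d'
P = 1.0 - np.sum((d - d_act) ** 2) / np.sum(d ** 2)   # performance
print(P)
```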


2.2. The normal equations.

The conventional method to solve the system of equations (2) is to set the derivatives of the error energy ‖Ax − d‖² with respect to the n+1 filter coefficients equal to zero. This leads to the so-called normal equations (Robinson and Treitel, 1967).

More generally, the least squares solution of equation (2) is based on the following theorem (Luenberger, 1969):

For a fixed d ∈ V^{m+n+1}, the vector x ∈ V^{n+1} minimizes ‖Ax − d‖ if and only if

$A^* A x = A^* d$

where A* is the adjoint of A.

In real vector spaces, together with the scalar product which is used here, the adjoint of A is equal to the transpose of A. Eq. (2) must therefore be multiplied from the left by A′, which leads to

$A'Ax = A'd$

This equation can be rewritten as follows:

$\begin{pmatrix} r_0 & r_1 & \cdots & r_n \\ r_1 & r_0 & \cdots & r_{n-1} \\ \vdots & & \ddots & \vdots \\ r_n & r_{n-1} & \cdots & r_0 \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} g_0 \\ g_1 \\ \vdots \\ g_n \end{pmatrix} \qquad (4)$

or

$R_n x = g \qquad (5)$

The matrix R_n is called the autocorrelation matrix because the elements r_i represent the autocorrelations of a_t:

$r_i = \sum_{t=0}^{m-i} a_{t+i}\, a_t \quad \text{for } i = 0, 1, 2, \ldots, n.$

The right-hand member of eq. (4) contains the cross-correlations of a_t with the desired output d_t and is given by


$g_i = \sum_{t=0}^{m+n} d_t\, a_{t-i} \quad \text{for } i = 0, 1, 2, \ldots, n.$

The autocorrelation matrix has a Toeplitz structure: the elements on any given diagonal are the same. Moreover, it is symmetrical and contains only n+1 distinct elements, namely r₀, r₁, r₂, ..., rₙ. The special structure of R_n enables the solution of eq. (5) by means of special algorithms. First, there is the well-known Levinson algorithm, which performs recursive solution of eq. (5) (Levinson, 1947).

An optimum-lag filter for a mixed-phase input can subsequently be found by means of the Simpson sideways recursion algorithm (Simpson et al., 1963). This scheme starts from a zero-lag filter as computed with the Levinson algorithm, and subsequently computes the filters for all possible lags by shifting the desired output one position at a time. The optimum output position is then determined by calculating the actual output and the performance P of all these filters and selecting for the best performance. Second, a special algorithm for the inversion of a Toeplitz matrix can be applied. This leads to

$x = R_n^{-1} g$

The matrix R_n^{-1} can be applied to any number of vectors g. An efficient scheme for the inversion of a Toeplitz matrix is the algorithm of Trench (Zohar, 1969).
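For comparison, the conventional normal-equations route of eq. (5) can be sketched with SciPy's Toeplitz solver (a Levinson-type method) standing in for the Levinson code of the references; the wavelet, filter length and desired output are again arbitrary.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

a = np.array([2.0, 1.0, -0.5])             # input wavelet
n = 3                                      # filter has n+1 coefficients
d = np.zeros(len(a) + n); d[2] = 1.0       # desired output: spike at t = 2

# Autocorrelations r_i and cross-correlations g_i as defined above:
r = np.array([a[i:] @ a[:len(a) - i] for i in range(n + 1)])
cc = np.correlate(d, a, mode="full")       # sum_t d_t a_{t-i} for all lags i
g = cc[len(a) - 1:len(a) + n]              # keep lags i = 0 .. n

x = solve_toeplitz(r, g)                   # solves R_n x = g, eq. (5)
```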

2.3. Filter determination by means of orthogonalization

There are some advantages in solving directly the system Ax = d by means of orthogonalization of the columns of A. This approach is based on the classical projection theorem, which states that the norm of the error vector, ‖Ax − d‖, is minimal if and only if this vector is orthogonal to R(A) (Luenberger, 1969). The vector d′ = Ax must therefore be the orthogonal projection of d on R(A) and the error vector d′ − d its projecting vector.

In order to find the orthogonal projection of d it is necessary to have an orthogonal basis for the column space R(A). This basis can be constructed by applying Gram-Schmidt orthogonalization to the original basis vectors

{Ae₀, Ae₁, ..., Aeₙ}

(Luenberger, 1969). Because of the cyclic structure of A it is, however, possible to derive special orthogonalization procedures. Special algorithms for the orthogonalization of rectangular cyclic matrices have been developed by


De Meersman (1975), and an algorithm for the computation of optimum-lag spiking filters based on such a procedure was published by De Meersman and De Voogd (1974). These algorithms exploit the special structure of cyclic matrices, resulting in a considerable reduction in the number of arithmetic operations as compared with more conventional orthogonalization processes. They can be adapted to the convolution case by taking account, for instance, of the zeros in matrix A. One of these algorithms, in the form in which it has been implemented in the present work, is derived below.

The following derivations apply to real vectors and matrices.

Let a₀, a₁, ..., aₙ be the columns of the convolution matrix A. The vectors a_k satisfy the relation

$a_k = P a_{k-1} = P^k a_0, \qquad (6)$

where P is an (m+n+1) by (m+n+1) permutation matrix given by

$P = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}$

A set of n+1 orthogonal vectors z_k can be derived from the vectors a_k such that

$R(A) = [a_0, a_1, \ldots, a_n] = [z_0, z_1, \ldots, z_n], \quad (z_i, z_j) = 0 \text{ for } i \neq j,$

where ( , ) denotes the inner product of two vectors and a set of vectors between square brackets denotes a subspace.

The conventional derivation by means of the Gram-Schmidt process is effected in the following way. First let

$z_0 = a_0,$

next let

$z_1 = a_1 - \frac{(a_1, z_0)}{(z_0, z_0)}\, z_0.$

(16)

The remaining z_k are found by induction. The vector z_{k−1} is formed according to the equation

$z_{k-1} = a_{k-1} - \sum_{i=0}^{k-2} \frac{(a_{k-1}, z_i)}{(z_i, z_i)}\, z_i$

or

$z_{k-1} = a_{k-1} - a'_{k-1}, \qquad (7)$

where a′_{k−1} is the projection of a_{k−1} on [z₀, ..., z_{k−2}].
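The conventional process of eq. (7) can be sketched as follows (Python/NumPy assumed; this is the general Gram-Schmidt procedure, not yet the cyclic shortcut derived next).

```python
import numpy as np

def gram_schmidt_columns(A):
    """Eq. (7): z_k = a_k minus its projection on [z_0, ..., z_{k-1}]."""
    Z = []
    for a_k in A.T:
        z = a_k.astype(float).copy()
        for z_i in Z:
            z = z - (a_k @ z_i) / (z_i @ z_i) * z_i
        Z.append(z)
    return np.column_stack(Z)

# The columns of Z are mutually orthogonal and span R(A):
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 2.0]])
Z = gram_schmidt_columns(A)
assert abs(Z[:, 0] @ Z[:, 1]) < 1e-12
```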

A special process can be derived for the case of a rectangular cyclic matrix by the introduction of an auxiliary set of vectors, defined by

$y_0 = a_0$

$y_1 = a_0 - (\text{projection of } a_0 \text{ on } a_1)$

$\vdots$

$y_{k-1} = a_0 - (\text{projection of } a_0 \text{ on } [a_1, \ldots, a_{k-1}]). \qquad (8)$

Premultiplication of (7) by P gives

$P z_{k-1} = a_k - P a'_{k-1} \qquad (9)$

The vector P z_{k−1} is orthogonal to [a₁, ..., a_{k−1}] since

$(P z_{k-1}, a_1) = (z_{k-1}, P^{-1} a_1) = (z_{k-1}, a_0) = 0$

according to (6) and the definition of the z_k. Analogous relations apply to the vectors a₂, ..., a_{k−1}.

Furthermore, P a′_{k−1} ∈ [a₁, ..., a_{k−1}] since a′_{k−1} can be constructed by a linear combination of the vectors a₀, ..., a_{k−2}.

Next the vector z_k is formed by

$z_k = P z_{k-1} + \nu_k\, y_{k-1} \qquad (10)$

From the previous reasoning it follows that z_k is orthogonal to [a₁, ..., a_{k−1}] for any ν_k. It will, moreover, be orthogonal to [a₀, ..., a_{k−1}] if

$(z_k, a_0) = 0.$


This is the case for

$\nu_k = -\frac{(P z_{k-1}, a_0)}{(y_{k-1}, a_0)} \qquad (11)$

Since furthermore the vector −P a′_{k−1} + ν_k y_{k−1} ∈ [a₀, ..., a_{k−1}] (9 and 10), it follows that z_k is the projecting vector of a_k on this subspace.

In order to derive the vector z_{k+1} it is necessary to find y_k. From (8) it follows that y_{k−1} is the projecting vector of a₀ on [a₁, ..., a_{k−1}]. The projection of a₀ on this subspace can be represented by a linear combination of the vectors a₁, ..., a_{k−1}. Premultiplying (8) by P⁻¹ therefore gives

$P^{-1} y_{k-1} = P^{-1} a_0 - (\text{linear combination of } a_0, \ldots, a_{k-2})$

or

$P^{-1} y_{k-1} = P^{-1} a_0 - \sum_{i=0}^{k-2} \frac{(P^{-1} a_0, z_i)}{(z_i, z_i)}\, z_i,$

which leads to

$P^{-1} y_k = P^{-1} y_{k-1} - \mu_k z_{k-1} \qquad (12)$

where

$\mu_k = \frac{(P^{-1} a_0, z_{k-1})}{(z_{k-1}, z_{k-1})} \qquad (13)$

Premultiplying (12) by P finally gives a convenient recurrent relation:

$y_k = y_{k-1} - \mu_k P z_{k-1} \qquad (14)$

The process can be continued until k = n.

In the computations, account should be taken of the zeros in the a_k, z_k and y_k vectors in order to save computer time.

The following relations can be derived:

$(y_{k-1}, a_0) = -\Big(\mu_{k-1} + \frac{1}{\nu_{k-1}}\Big)(P z_{k-2}, a_0) \qquad (15)$

and a companion relation (16) for $(P z_{k-1}, a_0)$, which lead to a recurrent relation (17) for ν_k. From (11) and (13) it follows that ν₂ = −μ₂, and (17) then leads to

$\nu_k = -\mu_k \quad \text{for all } k \qquad (18)$

The vectors z_{k−1} and y_{k−1} belong to [a₀, ..., a_{k−1}] and can therefore be constructed by linear combination of the vectors a₀, ..., a_{k−1}:

$z_{k-1} = t_{0,k-1}\, a_0 + t_{1,k-1}\, a_1 + \cdots + t_{k-1,k-1}\, a_{k-1} \qquad (19)$

$y_{k-1} = s_{0,k-1}\, a_0 + s_{1,k-1}\, a_1 + \cdots + s_{k-1,k-1}\, a_{k-1} \qquad (20)$

The coefficients t_{0,k−1}, ..., t_{k−1,k−1} are the first k elements of a vector t_{k−1} ∈ R_{n+1}, the remaining elements being zero. Similarly, the vector s_{k−1} ∈ R_{n+1} can be defined. When the vectors z_k and y_k are considered as the columns of, respectively, the matrices Z and Y, equations (19) and (20) can be written Z = AT and Y = AS, where T and S are upper triangular matrices.

Combining (19) with (10) and (20) with (14), one finds

$t_k = P t_{k-1} + \nu_k s_{k-1} \qquad (21)$

and

$s_k = s_{k-1} - \mu_k P t_{k-1}.$

By means of (18) the last equation can be rewritten as

$s_k = s_{k-1} + \nu_k P t_{k-1} \qquad (22)$

It follows that the first k elements of s_k are the first k elements of t_k, reversed in order.

Once the n+1 basis vectors a_i of R(A) have been replaced by a set of orthogonal vectors z_i,

$R(A) = [a_0, a_1, a_2, \ldots, a_n] = [z_0, z_1, z_2, \ldots, z_n], \quad (z_i, z_j) = 0 \text{ for } i \neq j,$

the projection of d on R(A) is found by means of

$d' = \alpha_0 z_0 + \alpha_1 z_1 + \alpha_2 z_2 + \cdots + \alpha_n z_n, \quad \alpha_i = \frac{(d, z_i)}{(z_i, z_i)} \qquad (23)$


The calculation of the least squares error then is straightforward, and subsequently the filter performance

$P = 1 - \frac{\|Ax - d\|^2}{\|d\|^2} \qquad (24)$

can be computed.

When the input signal is mixed phase, the performance can be significantly enhanced by giving the desired output a certain time-lag with respect to the input origin (Treitel and Robinson, 1966). This will first be investigated for the case that the desired output is a spike, and subsequently for the shaping filter.

2.4. Determination of the optimum spike position

With a signal wavelet of length m+1 and a filter with n+1 coefficients there is an output array of length m+n+1. The spike can be placed anywhere in this array. Therefore, there are m+n+1 possible desired output unit vectors. They can be arranged in a square matrix to form the identity matrix I. The m+n+1 least squares filters of length n+1 which respectively can be computed for these desired outputs can be arranged in a matrix with n+1 rows and m+n+1 columns. This leads to the matrix equation

$AX = I,$

in the least squares sense. From this equation it follows that X is equal to the pseudo-inverse of A:

$X = A^\dagger$

The m+n+1 digital filters which approximate the m+n+1 possible desired outputs are therefore contained in the columns of the pseudo-inverse A†. The filter of this set which produces the smallest least squares error is the one which incorporates the optimum spike position and is therefore called the optimum-lag deconvolution filter. It is, in other words, the filter for which

$\epsilon_k = \|d_k - A x_k\|^2 = \text{minimal}.$

Since d_k is a unit vector and since d′_k is the orthogonal projection of d_k on R(A), it follows that

$\|d_k - d'_k\|^2 = \|d_k\|^2 - \|d'_k\|^2 = 1 - \|d'_k\|^2.$


It follows from eq. (24) that the largest ‖d′_k‖² corresponds to the maximum performance P_max. From eq. (23) it follows that

$\|d'_k\|^2 = \Big\| \sum_{i=0}^{n} \frac{(d_k, z_i)}{(z_i, z_i)}\, z_i \Big\|^2$

Since the vectors z_i are orthogonal and since d_k is a unit vector, this expression can be reduced to

$\|d'_k\|^2 = \sum_{i=0}^{n} \frac{(d_k, z_i)^2}{(z_i, z_i)} = \sum_{i=0}^{n} \frac{z_{ki}^2}{(z_i, z_i)} \qquad (25)$

where z_{ki} is the k-th element of z_i.

Once the orthogonalization has been terminated it is therefore possible to predict the filter performances for the m+n+1 spiking filters without calculating these filters and their actual outputs. Subsequently the maximum P and the corresponding optimum spike position can be determined.
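A sketch of this prediction (NumPy assumed, with Z holding the orthogonalized columns z_i, e.g. from the Gram-Schmidt sketch in section 2.3): eq. (25) yields every P_k in one pass, and the argmax is the optimum spike position.

```python
import numpy as np

def spike_performances(Z):
    """Eq. (25): P_k = ||d'_k||^2 = sum_i z_{ki}^2 / (z_i, z_i),
    one value per possible spike position k, no filters computed."""
    norms = np.sum(Z ** 2, axis=0)           # (z_i, z_i), one per column
    return np.sum(Z ** 2 / norms, axis=1)    # row k sums over the columns i

# k_opt = np.argmax(spike_performances(Z))   # optimum spike position
```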

2.5. Determination of the optimum output shape position

With a signal wavelet of length m+1 and a filter with n+1 coefficients there is an output array of length m+n+1. The objective is to transform the input signal into a signal shape of shorter duration, with for instance k+1 elements. Let the desired output shape be given by

$b = (b_0, b_1, \ldots, b_k), \quad 0 \le k < m.$

The origin of this signal can be shifted in the desired output array. There are (m+n+1−k) possible output positions and therefore as many vectors d_j and least squares filters:

$d_j = (0, \ldots, 0, b_0, b_1, \ldots, b_k, 0, \ldots, 0) \qquad (26)$

where b₀ occupies position t = j, for j = 0, 1, 2, ..., m+n−k.

Of this set of filters, the one for which

$\epsilon_j = \|d_j - A x_j\|^2 = \text{minimal} \qquad (27)$

is the optimum-lag shaping filter.

Since d′_j is the orthogonal projection of d_j on R(A), it follows that

$\|d_j - d'_j\|^2 = \|d_j\|^2 - \|d'_j\|^2$


From (26) it follows that

$\|d_j\|^2 = \|b\|^2 \quad \text{for all } j. \qquad (28)$

The normalized least squares error is obtained by division of ε_j by (28):

$E_j = \frac{\epsilon_j}{\|b\|^2} = 1 - \frac{\|d'_j\|^2}{\|b\|^2} \qquad (29)$

From the definition of the performance parameter P then follows

$P_j = \frac{\|d'_j\|^2}{\|b\|^2}$

In order to satisfy (27) it is therefore sufficient to find the vector d_j for which ‖d′_j‖² = maximal.

By means of (23) it is found that

$\|d'_j\|^2 = \Big\| \sum_{i=0}^{n} \frac{(d_j, z_i)}{(z_i, z_i)}\, z_i \Big\|^2 \qquad (30)$

At this point it should be noted that the scalar product (d_j, z_i) is the cross-correlation of b and z_i for time-shift j. Eq. (30) obviously leads to

$\|d'_j\|^2 = \sum_{i=0}^{n} \frac{(d_j, z_i)^2}{(z_i, z_i)} \qquad (31)$

By means of (31), ‖d′_j‖² can be rapidly calculated for all j. This calculation can be performed simultaneously with the orthogonalization of the columns of the convolution matrix A. Once the orthogonalization has been terminated it is therefore possible to predict the filter performances of the (m+n+1−k) shaping filters without calculating these filters and their actual outputs. Subsequently, the maximum P and the corresponding optimum output position can be determined.
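The corresponding sketch for the shaping case (NumPy assumed): the cross-correlations (d_j, z_i) of eq. (31) are computed for all lags at once, so the optimum output position again follows without any filter calculation.

```python
import numpy as np

def shape_performances_sq(Z, b):
    """Eq. (31): ||d'_j||^2 for every output position j of the shape b,
    using (d_j, z_i) = sum_t b_t z_{i, j+t} (cross-correlation at shift j)."""
    norms = np.sum(Z ** 2, axis=0)                        # (z_i, z_i)
    cc = np.column_stack([np.correlate(Z[:, i], b, mode="valid")
                          for i in range(Z.shape[1])])    # cc[j, i] = (d_j, z_i)
    return np.sum(cc ** 2 / norms, axis=1)                # one value per lag j

# P_j = shape_performances_sq(Z, b) / (b @ b)   # eq. (29); j_opt = argmax of P_j
```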


2.6. Derivation of the filter

The advantage of the direct procedure is evident from (25) and (31): the optimum spike position or the optimum output shape position can be selected without filter calculations. When the orthogonalization and the selection of the optimum output position have been performed, the computation of the optimum-lag filter is straightforward. The orthogonalization process can be represented by a transformation

$AT = Z$

where T is an upper triangular matrix with columns t_i and Z the resulting matrix containing the orthogonal vectors z_i as columns.

The derivation of T was given in section 2.3. It follows that

$A t_i = z_i$

Equation (23) can thus be changed into

$d' = \sum_{i=0}^{n} \alpha_i\, A t_i$

The least squares solution x now satisfies the equation

$Ax = d' = \sum_{i=0}^{n} \alpha_i\, A t_i$

and therefore

$x = \sum_{i=0}^{n} \alpha_i\, t_i. \qquad (32)$

2.7. Computational aspects

The direct procedure needs fewer arithmetic operations than the Levinson-Simpson algorithm. For the calculation of an optimum-lag shaping filter of length 25, for a 25-point input and a 5-point desired output, a program based on direct solution of Ax = d by means of orthogonalization performs 16350 arithmetic operations. For the same task, the Levinson-Simpson algorithm needs 25250 arithmetic operations, roughly 55% more.

The direct procedure needs less anthmetic operations than the Levinson-Simpson algorithm For the calculation of an optimum-lag shaping filter of length 25, for a 25 points input and a 5 points desired output, a program based on direct soultion of Ax = d by means of orthogonalization performs 16350 arithmetic operations For the same task, the Levinson-Simpson al-gorithm needs 25250 arithmetic operations, roughly 55% more

For the derivation of an optimum-lag spiking filter of length k, for an input of length l, the procedure again performs substantially fewer operations; for the same problem the combined algorithms of Levinson and Simpson need

$(4k + 3)l^2 + (4l + 16.5)k^2 + 10lk - 7.5l - 15k$

arithmetic operations.

To illustrate the difference in computation time which this represents, both methods have been compared by means of numerical examples. The results are presented in the tables below. The computation times are for an IBM 1130.

Example 1: 30-point input signal

a_t = (0.0, 0.1, 0.25, 0.5, 0.85, 1.35, 2.0, 5.05, 6.0, 5.3, 1.4, -4.85, -9.8, -13.0, -13.85, -13.6, -10.45, 0.0, 3.6, 4.7, 5.0, 4.15, 2.15, 1.4, 0.95, 0.65, 0.4, 0.15, 0.0, 0.0)

number of filter coefficients:        8     12     15     20     25
computation time in seconds,
  with Levinson/Simpson:           25.1   42.0   55.8   81.8  111.7
  with new method:                  2.9    4.6    6.1    8.7   11.9

Example 2: 16-point input signal

a_t = (0.0, -2.317626, -6.067625, -5.182374, 2.317616, 11.24998, 13.56763, 6.067675, -6.067564, -13.5676, -11.25006, -2.317722, 5.182322, 6.067654, 3.37680, 0.0)

number of filter coefficients:       10     15     20     25     30
computation time in seconds,
  with Levinson/Simpson:           13.2   23.1   35.4   48.6   67.6
  with new method:                  2.5    4.2    6.3    8.6   11.3

It is well known that the solution of an overdetermined system of linear equations (2) by means of orthogonalization is in principle more accurate than solution by means of the normal equations. In order to explain this statement it is convenient to introduce the condition number K(A) of a matrix A. This number is defined by

$K(A) = \|A\|\, \|A^\dagger\|,$

where ‖A‖, the norm of matrix A, is defined by ‖A‖ = max(‖Ax‖ / ‖x‖), ‖x‖ ≠ 0. It can be shown that

$\|A\| = \sqrt{\lambda_n}, \qquad \|A^\dagger\| = 1/\sqrt{\lambda_1} = 1/\min(\|Ax\| / \|x\|),$

where λ₁ and λₙ are the smallest and the largest eigenvalues of A′A. A matrix with a large condition number is said to be ill-conditioned. If Ax = d has been solved in the least squares sense to give a solution x = A†d with Ax = d′, the projection of d on R(A), a perturbation δA on A will result in a perturbation δx on x such that

$(A + \delta A)(x + \delta x) = d \quad \text{in the least squares sense}.$

It can be shown that the relation between the perturbation on A and the resulting perturbation on x is given by

$\frac{\|\delta x\|}{\|x + \delta x\|} \le K(A)\, \frac{\|\delta A\|}{\|A\|}$

Therefore, a small perturbation on A may cause a large error δx.

If the linear equation Ax = d is replaced by Bx = A′d, where B = A′A, this is done at the expense of the condition number. It has been shown (Kowalik and Osborne, 1968) that K(B) = {K(A)}². In the case of an ill-conditioned problem, the error in the solution obtained by solving the normal equations can therefore be expected to be appreciably larger than the error in the solution obtained by the method described in this chapter.

If the Euclidean matrix norm

$\|A\| = \Big( \sum_{i,j} a_{ij}^2 \Big)^{1/2}$

is employed, it follows from the cyclic structure of the convolution matrix that

$\|A\| = \{ (n+1)\, E_a \}^{1/2},$

where E_a is the energy of the wavelet a_t.


Furthermore, the perturbation matrix δA can be thought to represent either measurement errors on the wavelet a_t or noise components. If A contains noise, then

$\|\delta A\| = \{ (n+1)\, E_N \}^{1/2} = \{ (n+1)(m+1)\, P \}^{1/2},$

where E_N is the noise energy and P the noise power. It then follows that

$\|\delta A\| / \|A\| = (E_N / E_a)^{1/2} = \{ (m+1) P / E_a \}^{1/2}.$

The conclusion therefore is that, if the wavelet a_t is measured together with noise, the perturbation δx on the derived least squares filter will depend on the noise power to signal energy ratio and on the condition number of A. It should be mentioned that the convolution matrices that were encountered in this work were all well conditioned. It is however known that the condition number will tend to increase with a decrease of the sampling interval of a_t.
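A small numerical check of this point (NumPy assumed; the wavelet is an arbitrary example): the condition number of a convolution matrix follows from its singular values, i.e. from the eigenvalues of A′A.

```python
import numpy as np

a = np.array([1.0, -0.5, 0.25, 0.1])     # arbitrary wavelet
n1 = 8                                    # filter length n+1
A = np.zeros((len(a) + n1 - 1, n1))
for j in range(n1):
    A[j:j + len(a), j] = a                # convolution matrix of eq. (2)

K = np.linalg.cond(A)                     # = sqrt(lambda_max / lambda_min)
print(K, K ** 2)                          # K(A), and K(B) = K(A)^2 for B = A'A
```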

2.8. The phase-lag of a seismic wavelet

In this thesis the concepts of minimum-phase and mixed-phase wavelets have already been used repeatedly. These concepts will again occur in the sequel and therefore a brief review of the subject is in order.

The z-transform of a digital wavelet a_t is defined as

$A(z) = \sum_{t=0}^{m} a_t\, z^t, \quad \text{where } z = e^{-i\omega\Delta t}.$

If the polynomial A(z) has no poles and zeros inside the unit circle in the complex plane, |z| = 1, the signal a_t is minimum phase.

It can be shown that, of all the wavelets with the same amplitude spectrum, the energy of the minimum-phase correspondent is optimally concentrated toward the origin t = 0. Of the entire set of wavelets with the same amplitude spectrum, the minimum-phase wavelet has a larger coefficient for time index zero than any other wavelet in the set (Robinson, 1967a). An extensive treatise on digital minimum-phase signals has been published by Berkhout (1970).

Berkhout (1970 and 1973) has proposed an elegant yardstick for the phase of digital signals which he calls the effective signal length. This entity is defined by

$L = \sum_{t=0}^{m} w_t\, s_t^2 \Big/ \sum_{t=0}^{m} s_t^2$


for real-valued signal coefficients s_t, where w_t is a positive non-decreasing series with w₀ = 0.

In this thesis the first-order length of the wavelet a_t will be employed:

$L_1 = \sum_{t=1}^{m} t\, a_t^2 \Big/ \sum_{t=0}^{m} a_t^2$

This entity is identical with the center of a_t.

A number of conclusions will be based on the following theorem (Berkhout, 1970):

Of all the wavelets with identical amplitude spectra, the effective length is minimal if and only if the corresponding signal is minimum-phase.

If L is not minimal, the signal is mixed-phase.
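A minimal illustration of the theorem (NumPy assumed; the two-point wavelet is an arbitrary example): the first-order length L₁ of a minimum-phase wavelet is smaller than that of its time-reversed, maximum-phase correspondent, which has the same amplitude spectrum.

```python
import numpy as np

def first_order_length(a):
    """L1 = sum_t t a_t^2 / sum_t a_t^2, the center of the wavelet."""
    t = np.arange(len(a))
    return np.sum(t * a ** 2) / np.sum(a ** 2)

a_min = np.array([1.0, -0.5])       # zero of A(z) at z = 2, outside |z| = 1
a_max = a_min[::-1]                 # maximum-phase correspondent
assert first_order_length(a_min) < first_order_length(a_max)
```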

In seismic prospecting it is often assumed that the information wavelet is minimum-phase. This is partly based on physical considerations regarding the nature of earth-filtering, but is also a matter of convenience since the minimum-phase assumption enables the determination of the wavelet shape from its autocorrelation (Robinson, 1967b; White and O'Brien, 1974).

That optimum-lag filtering is required for mixed-phase input signals is well known (Treitel and Robinson, 1966) and is shown in fig. 1, a plot of filter performance as a function of lag.

Claerbout and Robinson (1964) have proved that, for a non-minimum-phase input wavelet, the error of a digital Wiener filter will tend to zero for increasing filter length only if the output is chosen to come after a sufficiently long time delay.

Fig. 1. Filter performance and norm as a function of lag for a mixed-phase input: (a) input; (b) gaussian desired output; (c) performance; (d) norm. Lag is in number of sampling intervals.


3. SEISMIC SIGNALS IN NOISE

In chapter 2 the derivation of least squares filters for noise-free signals was presented. Naturally, in practice one has to deal with seismic wavelets that are recorded against a background of noise. If the unit impulse response of the subsurface can be represented by a white series w_t, the distribution in record time of primary and multiple reflections, an ideal noise-free seismic trace can be represented by

$s_t = a_t * w_t$

where a_t is the seismic information wavelet and the asterisk denotes discrete convolution. It has been assumed that all individual seismic events may differ in amplitude and polarity but have the same shape. This assumption may well be valid for respective time gates in the trace, as was already mentioned in chapter 1. The noise on a trace can be represented by

$r_t = b_t * v_t$

where v_t is another white series representing disturbances caused by buried scatterers, wind, instrument noise and so on, and where b_t is a smoothing operator called the disturbance wavelet.

The seismic trace x_t can be considered as the sum

$x_t = s_t + r_t = (a_t * w_t) + (b_t * v_t)$

It is known that deconvolution filters may grossly magnify high-frequency components on the record that are due to the noise background (Deregowski, 1971).

In the case of white noise, the power of the filtered noise equals the power of the received white noise times the energy of the filter (Robinson, 1967a). The filter energy

$\|x\|^2 = (x, x) = x_0^2 + x_1^2 + \cdots + x_n^2$

can therefore be used as a measure of expected white-noise amplification. The square root of the filter energy is equal to the norm of the filter vector, which is mathematically a more convenient entity. Line d in fig. 1 is a plot of filter norm against time lag. Clearly, this norm is greater than one in the entire region where filter performance has an acceptable value. This means that all the filters in this example with a good performance will amplify high-frequency noise.


The optimum-lag filter in this example has a norm of 1.63, which corresponds to a white noise amplification of approximately 4 dB.

A deconvolution filter that is also able to reduce noise can be determined by adding noise energy to the autocorrelation of the input signal before filter calculation. This technique is well known and has been treated (Treitel and Robinson, 1966) as the design of a filter that will perform two duties, namely to (i) shape as well as possible the signal wavelet into the desired output and (ii) produce as little output power as possible when stationary noise is its only input. Such a filter can be solved from the augmented normal equations

$(R + \lambda Q)\, x = g \qquad (33)$

where Q is the matrix containing the autocorrelations of the noise. The relative weighting between the two duties can be adjusted by means of the parameter λ.

The present work has been restricted to the case of white noise addition, but there is no essential difference with more general cases. When white noise energy is added to the autocorrelation of the input signal, (33) can be simplified by replacing Q by the identity matrix:

$(A'A + \lambda I)\, x = A'd \qquad (34)$

The procedure of (34) is identical with the ridge regression technique, and it can be shown (Marquardt, 1963) that ‖x‖ decreases for increasing λ. By means of this method one can derive a filter subject to the constraint that its norm be less than one.
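A sketch of (34) (NumPy assumed; A and d as in the setting of chapter 2): raising λ shrinks the filter norm, so λ can be increased until the expected white-noise amplification is acceptable.

```python
import numpy as np

def stabilized_filter(A, d, lam):
    """Augmented normal equations (34): (A'A + lam I) x = A'd."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ d)

# ||x|| decreases monotonically with lam (Marquardt, 1963):
# for lam in (0.0, 0.1, 1.0):
#     print(lam, np.linalg.norm(stabilized_filter(A, d, lam)))
```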

In section 2.8 it was already mentioned that a mixed-phase seismic wavelet requires an optimum-lag filter. Such a filter can be selected by monitoring the performance of the set of filters corresponding to all possible output positions. As was shown by Treitel and Robinson (1966), the least squares filter will have the easiest job if the desired output position coincides with the area of the input signal where most of the energy is concentrated. This explains in a qualitative manner that the phase-lag characteristic of the input wavelet influences the filter derivation through the cross-correlation terms in the right-hand side of the normal equations. When a noise-contaminated seismic wavelet is considered, one has to deal with the sum of the wavelet and a noise realization of the same duration. This noise realization will have an expected phase-lag or, equivalently, an expected effective length. The effective length of the noise will have no influence on the filter derived by means of equations (33) and (34), since the noise is only represented by its autocorrelation, which is present in the left-hand side. Therefore, a noise-contaminated minimum-phase wavelet will lead to a zero-lag optimum filter, although the sum of signal and noise may very well be mixed phase.


4. THE CENTER OF A RANDOM NOISE REALIZATION OF FINITE DURATION

4.1. Introduction

In this chapter, a realization of duration nΔt of a real random process is considered. The sampling interval Δt is defined as 1 time unit, and so the amplitude at a certain time instant tΔt can, without ambiguity, be designated by the coefficient x_t, where the suffix t is an integer. The entire realization is represented by the vector

$x = (x_0, x_1, \ldots, x_t, \ldots, x_n)$

It is assumed that the process is stationary and that the x_t are gaussian with zero mean and fixed variance; thus x_t is N(0, σ²_x). The center of gravity is a functional of x defined by

$z = f(x) = \frac{(x, Dx)}{(x, x)} = \frac{\sum_{t=0}^{n} t\, x_t^2}{\sum_{t=0}^{n} x_t^2} \qquad (35)$

where D = diag(0, 1, ..., n).

This ratio of quadratic forms is itself a random variable; therefore its behaviour must be described statistically.

4.2. The center of a realization of white noise

4.2.1. Characterization of white noise

A stationary random process {x_t} is called a white noise process if its autocorrelation function is a spike at zero time-shift:

$\varphi_{xx}(r) = E\{x_{t+r}\, x_t\} = \begin{cases} \sigma_x^2 & \text{for } r = 0 \\ 0 & \text{for } r \neq 0 \end{cases}$

The height of the spike is equal to the power σ²_x of the process. Since x_t is a random variable from a fixed gaussian distribution function N(0, σ²_x), as was assumed in the introduction, the power of the process is equal to the variance of x_t, since E{x_t²} = σ²_x. Moreover, since gaussian random variables are statistically independent if they are uncorrelated, the elements of the vector x are statistically independent.


4.2.2. Experimental evaluation of mean, variance and skewness.

A pseudo-white gaussian process can be simulated in a digital computer by means of a random number generator. The plot of a sequence of such random numbers placed at regular intervals is shown in fig. 2, together with the autocorrelation function of the sequence, estimated by means of the time average

$\hat{\varphi}_{xx}(\tau) = \sum_{t=0}^{T-\tau} x_{t+\tau}\, x_t \Big/ \sum_{t=0}^{T} x_t^2$

It is well known that the resemblance of this time average to the ensemble average improves with increasing T.

A sequence of independent center-values z_i may be computed by

$z_i = \frac{\sum_{t=k_i}^{k_i+n} (t - k_i)\, x_t^2}{\sum_{t=k_i}^{k_i+n} x_t^2} \quad \text{for } k_i = i(n+1),\ i = 0, 1, \ldots, m,$

and the mean and variance of z may be estimated by

$\bar{z} = \frac{z_0 + z_1 + \cdots + z_m}{m+1}, \qquad \hat{\sigma}^2(z) = V = \frac{(z_0 - \bar{z})^2 + \cdots + (z_m - \bar{z})^2}{m+1}.$

Since knowledge of the variance is sufficient only if the probability density function of z is symmetric, it is necessary to compute the coefficient of skewness

$a = \frac{\frac{1}{m+1} \sum_{j=0}^{m} (z_j - \bar{z})^3}{V^{3/2}}.$

For a symmetrical density function, a = 0.

The experimental results for realizations of increasing duration nΔt and for m+1 = 1000 are presented in table 1.

It is apparent that z̄ ≈ n/2 and that the skewness remains small for all values of n.
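The experiment is easy to reproduce in outline (NumPy assumed; seed and sample count are arbitrary choices): draw 1000 independent realizations, compute the center (35) of each, and estimate mean, variance and skewness.

```python
import numpy as np

rng = np.random.default_rng(0)

def center(x):
    """Center of gravity of a realization, eq. (35)."""
    t = np.arange(len(x))
    return np.sum(t * x ** 2) / np.sum(x ** 2)

n = 24                                        # realization duration in time units
z = np.array([center(rng.standard_normal(n + 1)) for _ in range(1000)])
V = z.var()
skew = np.mean((z - z.mean()) ** 3) / V ** 1.5
print(z.mean(), V, skew)                      # compare with the n = 24 row of table 1
```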


Fig. 2. (a) Plot of a computer-generated pseudo-white sequence. (b) Autocorrelation function of sequence (a).


A plot of the experimental variance as a function of n is shown in fig. 3. It should be noted that for this experiment the random numbers may be generated from a standardized N(0,1) distribution. A gaussian variable x with zero mean and variance σ²_x is related to a standardized gaussian variable s by x = σ_x s. Substitution of this expression in (35) shows that the variance in numerator and denominator cancels.

Table 1. Experimental results for the center of a white noise realization.

realization duration
in time units     mean      variance   stand. dev.   skewness
  4              1.9671     0.5759     0.7589         0.0673
  6              2.9573     0.8986     0.9480        -0.0725
  8              4.0228     1.2121     1.1010         0.0316
 10              4.9522     1.5596     1.2488        -0.0244
 12              5.9563     1.8320     1.3535        -0.0138
 14              6.9769     2.3009     1.5169        -0.0411
 16              7.9412     2.5478     1.5962        -0.1697
 18              9.0540     2.6198     1.6186        -0.0788
 20              9.9917     2.9921     1.7298        -0.0784
 22             10.9265     3.3866     1.8403        -0.0269
 24             12.0086     3.7352     1.9327         0.0252
 26             13.0083     4.1256     2.0312         0.0705
 28             14.0047     4.2273     2.0560         0.0361
 30             14.9734     4.5469     2.1323         0.0767
 32             15.8680     5.2878     2.2995         0.0039
 34             17.0570     5.3759     2.3186        -0.0345
 36             18.0302     5.6933     2.3861        -0.1118
 38             18.8513     6.2328     2.4966         0.0140
 40             20.0789     6.2769     2.5054         0.0514
 42             21.0446     6.6542     2.5796         0.0788
 44             21.7894     7.0307     2.6515        -0.0070
 46             23.0531     7.0569     2.6565         0.0406
 48             24.0836     7.9669     2.8226         0.0060
 50             24.9650     8.1150     2.8487        -0.0514


4.2.3. Derivation of mean and variance

The ratio of quadratic forms (35) may be reformulated as follows:

$z = \frac{x_1^2}{x_0^2 + \cdots + x_n^2} + 2\,\frac{x_2^2}{x_0^2 + \cdots + x_n^2} + \cdots + n\,\frac{x_n^2}{x_0^2 + \cdots + x_n^2} = y_1 + 2y_2 + \cdots + n y_n$

If the random variables x₀, x₁, ..., xₙ are independent and all have the distribution N(0,1), then (y₁, y₂, ..., yₙ) has the n-dimensional Dirichlet distribution D(½, ..., ½; ½) (Wilks, 1962).

The means, variances and covariances of the y's from a k-dimensional Dirichlet distribution D(ν₁, ..., ν_k; ν_{k+1}) are given by

$E(y_i) = \frac{\nu_i}{\nu_1 + \cdots + \nu_{k+1}}, \quad i = 1, \ldots, k$

$\sigma^2(y_i) = \frac{\nu_i\,(\nu_1 + \cdots + \nu_{k+1} - \nu_i)}{(\nu_1 + \cdots + \nu_{k+1})^2\,(\nu_1 + \cdots + \nu_{k+1} + 1)}, \quad i = 1, \ldots, k$

$\sigma(y_i, y_j) = \frac{-\nu_i\, \nu_j}{(\nu_1 + \cdots + \nu_{k+1})^2\,(\nu_1 + \cdots + \nu_{k+1} + 1)}, \quad i \neq j = 1, \ldots, k.$

Since in the present case ν_i = ½, i = 0, 1, ..., n, the following results are straightforward:

$E(y_i) = \frac{1}{n+1}, \quad i = 1, \ldots, n \qquad (36)$

$\sigma^2(y_i) = \frac{2n}{(n+1)^2 (n+3)}, \quad i = 1, \ldots, n \qquad (37)$

$\sigma(y_i, y_j) = \frac{-2}{(n+1)^2 (n+3)}, \quad i \neq j = 1, \ldots, n \qquad (38)$

The n-dimensional random variable (y₁, ..., yₙ) has a constant mean given by 1/(n+1) and a covariance matrix Σ, the elements of which are given by (37) and (38).


Fig. 3. Variance of the center position of a white noise realization, as a function of realization duration. The + symbols are the experimental values; the full line is the theoretical relation; the dashed line is n/6.

The center is a linear function of the y's and its mean is therefore given by

$E(z) = \sum_{t=1}^{n} t\, E(y_t) = \frac{1}{n+1} \sum_{t=1}^{n} t = \frac{n}{2} \qquad (39)$

The variance is given by

$\sigma^2(z) = \sum_{i=1}^{n} \sum_{j=1}^{n} i\, j\, \sigma_{ij}$

or, in matrix notation, σ²(z) = (d, Σd), where d = (1, ..., n). By straightforward calculation it is found that

$\sigma^2(z) = \frac{n(n^2 + n + 1)}{6(n+1)(n+3)} \qquad (40)$


This rational function of n is equivalent to

$R(n) = \frac{n}{6} + \frac{7}{4(n+3)} - \frac{1}{12(n+1)} - \frac{1}{2} \qquad (41)$

which is the sum of a linear function and two hyperbolas. A plot of expression (40) is shown in fig. 3, where the agreement between experimental and theoretical outcome is evident.

Expression (41) suggests that for practical purposes the standard deviation of the center position may be estimated by √(n/6). So, as a final rule of thumb, the center of a white noise realization of duration nΔt may be expressed by

$z = \frac{n}{2} \pm \sqrt{\frac{n}{6}} \qquad (42)$

Anderson (1948) has pointed out that for large n, expressions like (35) are approximately normally distributed. This is in agreement with the experimental results, where the skewness remains small for all employed values of n. Second-order statistics therefore seem a valid approach to describe the behaviour of the center position.

4.3. The center of a realization of seismic noise

4.3.1. Characterization of seismic noise

The noise that is measured in seismic exploration is usually considered as a linear process:

$u_t = \sum_{s=0}^{m} a_s\, x_{t-s} \qquad (43)$

where x_t is a white series which represents disturbances caused by buried scatterers, wind, instrument noise and so on, and where a_t = (a₀, a₁, ..., a_m) is some seismic wavelet, which in this context is usually called the disturbance wavelet. Again it is assumed that x_t is a random variable from a fixed gaussian distribution with zero mean, N(0, σ²_x).

Since the output of a linear system which is excited by a gaussian process is also gaussian, the random variable u_t has a normal distribution N(0, σ²_u):

$E(u_t) = E\Big( \sum_s a_s\, x_{t-s} \Big) = \sum_s a_s\, E(x_{t-s}) = 0$


A realization of duration nΔt is represented by the vector u = (u₀, ..., uₙ) and the center is given by

$z = \frac{(u, Du)}{(u, u)} \qquad (44)$

where D = diag(0, 1, ..., n).

The coloured noise generated by (43) is correlated and its covariance matrix is given by

$V = E(uu') = \sigma_u^2 \begin{pmatrix} 1 & \rho_1 & \cdots & \rho_n \\ \rho_1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \rho_1 \\ \rho_n & \cdots & \rho_1 & 1 \end{pmatrix} \qquad (45)$

where the apostrophe denotes transposition.

This is a real, symmetric Toeplitz matrix. There are n+1 distinct elements instead of (n+1)².

4.3.2. Experimental evaluation of mean, variance and skewness.

The seismic noise described in the previous section can be simulated by convolving a seismic wavelet with a white series from a gaussian random number generator, such as was described in section 4.2.2. As a representative seismic signal one may choose the well-known Ricker wavelet (Ricker, 1944). This is a wavelet whose shape depends on a number of medium parameters and on the traveltime. If the Ricker wavelet of figure 4 is used as disturbance wavelet, a coloured noise is obtained of which a sample of 1.2 sec is shown in fig. 5, together with a noise record obtained in the field, for comparison. The autocorrelation function of this sample of simulated seismic noise is also shown in fig. 5.

The medium parameters for the employed wavelet are: density = 2.3 g/cm³, velocity = 2230 m/sec, viscosity = 3.8 x 10^ gr/cm/sec, while the traveltime is 0.6 sec.

The sample mean, variance, standard deviation and skewness, averaged over 1000 experimental center positions, for realizations of increasing duration, are given in table 2. The experimental results suggest that z̄ ≈ n/2, that skewness is negligible, and that there is no significant influence of the phase-lag characteristic of the disturbance wavelet, since there is but little difference between the results obtained from a noise series formed with a minimum-phase wavelet and the results obtained from a realization formed with the maximum-phase correspondent of that wavelet.
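In outline, the simulation can be sketched as follows (NumPy assumed). The zero-phase Ricker wavelet below, parameterized by a peak frequency, is a stand-in for the damped-medium wavelet of the thesis; the frequency, sampling interval and record length are illustrative values, not the thesis's medium parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def ricker(f, dt, npts):
    """Zero-phase Ricker wavelet (1 - 2u) exp(-u), with u = (pi f t)^2."""
    t = (np.arange(npts) - npts // 2) * dt
    u = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * u) * np.exp(-u)

b = ricker(f=25.0, dt=0.004, npts=25)        # disturbance wavelet (assumed values)
v = rng.standard_normal(300)                 # standardized white series
u = np.convolve(v, b, mode="same")           # coloured 'seismic' noise, cf. (43)
```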


Table 2. Experimental results for the center of synthetic seismic noise realizations of duration nΔt.

        with minimum-phase disturbance wavelet    with maximum-phase correspondent
  n     mean      variance  st.dev.  skewness     mean      variance  st.dev.  skewness
  4     2.0030    0.6473    0.8046    0.0319      1.9850    0.6777    0.8232   -0.0235
  6     3.0383    1.0787    1.0387    0.0385      3.0015    1.0727    1.0357   -0.0167
  8     3.9910    1.7597    1.3265   -0.0617      4.0686    1.7213    1.3120    0.0150
 10     4.9395    2.4951    1.5796    0.0655      5.0543    2.6448    1.6263   -0.0198
 12     5.9536    3.4864    1.8672   -0.0414      5.9461    3.3294    1.8247   -0.0209
 14     7.1914    4.2324    2.0573   -0.0987      6.8760    4.3971    2.0969    0.0904
 16     7.9477    5.5576    2.3575    0.0373      8.0350    5.6257    2.3719    0.0460
 18     9.0066    6.6744    2.5835    0.0247      8.9738    6.6795    2.5845   -0.0123
 20    10.0305    8.3755    2.8940    0.0058      9.9812    7.6153    2.7596    0.0621
 22    10.9801    8.7144    2.9520    0.0936     11.0349    8.9340    2.9890   -0.0457
 24    12.0313   10.8251    3.2901    0.0369     11.9294   10.1708    3.1892    0.0240
 26    13.0961   10.9559    3.3100   -0.0079     13.1405   11.5991    3.4057    0.0751
 28    14.0587   12.4672    3.5309   -0.1007     14.0756   12.4730    3.5317   -0.0163
 30    14.9791   13.8121    3.7165    0.0103     15.1788   14.3263    3.7850    0.0175
 32    16.0819   15.8940    3.9867   -0.0452     16.0540   15.8463    3.9807    0.1374
 34    17.0218   16.9678    4.1192    0.0380     16.9626   16.5263    4.0652   -0.0016
 36    17.8702   18.2908    4.2768   -0.0411     18.0894   18.2095    4.2673    0.1039
 38    18.8808   19.8851    4.4593   -0.0412     18.7789   19.4205    4.4069    0.0436
 40    19.7735   21.4402    4.6304    0.1649     19.8117   21.2827    4.6133    0.0036
 42    20.9621   23.0036    4.7962   -0.0029     20.8320   21.4941    4.6362    0.0621
 44    21.9240   24.4893    4.9487   -0.0220     21.8655   23.9192    4.8907   -0.0023
 46    22.9594   25.7836    5.0778    0.0719     22.9009   24.5027    4.9500   -0.0085
 48    24.1749   27.8822    5.2804   -0.0136     24.0624   26.1029    5.1091    0.0845
 50    24.8711   27.7210    5.2651    0.0395     25.1535   26.2099    5.1196    0.0566

A plot of the experimental variance as a function of the duration of the noise realization is shown in figure 6.

Naturally, the noise power cancels in the calculation of the center position, and the simulated noise may therefore be generated by a standardized N(0,1) white series convolved with a normalized disturbance wavelet.

4.3.3. Derivation of mean and variance

The derivation of E(z) and σ²(z) given in 4.2.3 for white noise is not applicable to correlated noise since the random variables u_t are not independent. By means of an orthogonal transformation the u's may be transformed into uncorrelated random variables v_t. Since the u_t are gaussian, the v_t are also gaussian. Since uncorrelated gaussian variables are independent, the v_t are independent.


Fig. 5. (a) Specimen of synthetic seismic noise. (b) A specimen of seismic noise recorded in the field. (c) Autocorrelation function of the synthetic noise; time-shift in sec.


The transformation matrix P contains the n+1 eigenvectors of the covariance matrix V (45) in its columns. It may therefore be derived from

$P'VP = \Lambda, \quad P'P = PP' = I, \quad P\Lambda P' = V$

where I is the identity matrix and Λ a diagonal matrix containing the eigenvalues of V. Substitution of u = Pv into (44) leads to

$z_c = \frac{v' P' D P\, v}{v' v} = \frac{v' Q\, v}{v' v} \qquad (46)$

The matrix Q = P′DP is symmetric, so that for (46) the following expression may be written:

$z_c = \frac{\sum_{i=0}^{n} Q_{ii}\, v_i^2}{\sum_{i=0}^{n} v_i^2} + \frac{\sum\sum_{i \neq j} Q_{ij}\, v_i v_j}{\sum_{i=0}^{n} v_i^2} \qquad (47)$

The first term in the right-hand side of (47) has the same value as the center position of white noise, since

$Q_{ii} = (P'DP)_{ii} = (DP'P)_{ii} = D_{ii} = i, \quad i = 0, 1, \ldots, n$

and the v_i are N(0, σ²_i) and independent. Therefore

$z_c = z_w + r \qquad (48)$

where z_w is the center of a white noise realization of equal duration and r is the second term of (47), containing the cross-products v_i v_j.

The first conclusion that may be drawn from (47) is that the behaviour of the center of a seismic noise realization depends on the eigenvectors of the covariance matrix (45). Since the covariance of a signal contains only information on the amplitude spectrum and none on the phase spectrum of the signal in question, it is thus established that the center position is independent of the phase-lag of the disturbance wavelet.

A second conclusion is that the expectation of the center position is equal to that for the white noise case, for

$E(z_c) = E(z_w) + E(r) = E(z_w) = \frac{n}{2}$

since E(r) = 0, because it contains the cross-products of the independent random variables v_i.


A further statistical description of z_c is impeded since the variance cannot be formulated as a simple function of the covariance matrix of v, as in the white noise case.

Abrahamse (1969) has published a numerical method to calculate the significant points of the distribution of quotients of quadratic forms that are similar to the expression for the center position z_c. The distribution must however be calculated for every individual case, and his approach naturally does not lead to a simple expression for the variance of z_c. An approximate evaluation of the variance is therefore in order.

4.3.4. Approximate evaluation of mean and variance

The center position z(x) is a functional of the vector x whose components x_t are normally distributed with zero mean and fixed standard deviation σ_x:

$z(x) = \frac{(x, Dx)}{(x, x)} = \frac{\sum_{t=0}^{n} t\, x_t^2}{\sum_{t=0}^{n} x_t^2}$

The center position can also be expressed as a functional of a vector ξ whose components contain the squared values of the respective components of x:

$\xi_t = x_t^2.$

The mean and variance of ξ_t are

$\mu = E(\xi_t) = E(x_t^2) = \sigma_x^2, \qquad \sigma^2(\xi_t) = 2\sigma_x^4 \qquad (49)$

since E(x_t) = 0. The center position as a function of ξ is given by

$z(\xi) = \frac{\sum_{t=0}^{n} t\, \xi_t}{\sum_{t=0}^{n} \xi_t}$

The probability density function of ξ_t is

$f(\xi_t) = \frac{e^{-\xi_t / 2\sigma_x^2}}{\sigma_x \sqrt{2\pi \xi_t}}, \quad \xi_t > 0.$


Expanding z(ξ) in a Taylor series one obtains

$z(\xi) = z(\mu) + \sum_t z'_t(\mu)(\xi_t - \mu) + \tfrac{1}{2} \sum_{s,t} z''_{st}(\mu)(\xi_s - \mu)(\xi_t - \mu) + \cdots$

where

$z'_t(\mu) = \Big( \frac{\partial z}{\partial \xi_t} \Big)_{\xi = \mu} \quad \text{and} \quad z''_{st}(\mu) = \Big( \frac{\partial^2 z}{\partial \xi_s\, \partial \xi_t} \Big)_{\xi = \mu}.$

The expectation of z(ξ) can now be approximated by

$E\{z(\xi)\} = z(\mu) + 0 + \tfrac{1}{2} \sum_t z''_{tt}(\mu)\, \sigma^2(\xi) + \tfrac{1}{2} \sum_{s \neq t} z''_{st}(\mu)\, \rho_{st}\, \sigma^2(\xi) \qquad (50)$

where p^., is the normalized autocorrelation function of ^j. The variance of z(5) is given by

a' [z(|)l = E [{z(|)-z(?)}^] = E[{z(?) z(//)-(r(?)-z(ju))}^] =

= E[{z(0-z(M)}' + {z(|)-z(M)}'-2{z(|)-z(M)} {z(|)-z(M)}i = = E[{z(?)-z(A.)}']- E[{z(?)-z(M)y ]

where z(|)=E[z(?)].

In first approximation the second term of this expression is zero and

a' {z(?)} = E[{2 z[in) (|,-/i)}^] = 2 {z'^in)Y E { ( | , - M ) ^ }+

t t

+ 2 z;(M)z;(,i)E{(?,-M)(|,-/i)}

S T t t

= 2 {z;(M)}' o^(?,) + 2 2 z>)z;(/i) P„ oHkt) (51) t s<t


The derivatives z′_t and z″_st are given by

$z'_t = \frac{t \sum_{r=0}^{n} \xi_r - \sum_{r=1}^{n} r\, \xi_r}{\Big( \sum_{r=0}^{n} \xi_r \Big)^2} \qquad (52)$

$z''_{st} = \frac{2 \sum_{r=1}^{n} r\, \xi_r - (s+t) \sum_{r=0}^{n} \xi_r}{\Big( \sum_{r=0}^{n} \xi_r \Big)^3} \qquad (53)$

Finally it is necessary to express the autocorrelation of ξ_t = x_t² in terms of the autocorrelation of x_t.

If x_t is a stationary gaussian process with zero mean and standard deviation σ_x, then the autocorrelation of ξ_t is given by

$R_\xi(\tau) = \sigma_x^4 + 2 R_x^2(\tau) \qquad (54)$

(Davenport and Root, 1958).

The relation between the non-normalized and normalized autocorrelation functions is

$R_\xi(\tau) = \sigma^2(\xi)\, \rho_\xi(\tau) + \mu_\xi^2$

and the correlation factor in the expressions (50) and (51) is therefore

$E\{(\xi_s - \mu)(\xi_t - \mu)\} = \sigma^2(\xi)\, \rho_{st} = R_\xi(\tau) - \mu_\xi^2$

where τ = |s−t|.

Combination of this expression with (54) and (49) gives

$\sigma^2(\xi)\, \rho_\xi(\tau) = 2 R_x^2(\tau) = 2 [\sigma_x^2\, \rho_x(\tau)]^2 \qquad (55)$
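Relation (54) is easy to check numerically (NumPy assumed; the smoothing operator and sample size are arbitrary): for a zero-mean gaussian series, the autocovariance of the squared series equals twice the squared autocovariance of the series itself.

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([0.5, 1.0, 0.5])                       # arbitrary smoothing operator
x = np.convolve(rng.standard_normal(200_000), h, mode="valid")  # gaussian, zero mean

def acov(y, tau):
    y = y - y.mean()
    return np.mean(y[:len(y) - tau] * y[tau:])

tau = 1
print(acov(x ** 2, tau), 2.0 * acov(x, tau) ** 2)   # R_xi - mu^2 vs 2 R_x^2, eq. (54)
```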

As a check, the theory described above can be applied to the white noise of section 4.2. Since this noise has zero mean and is uncorrelated, it follows from (55) that the normalized autocorrelation of the squared series ξ_t is zero for τ ≠ 0.

The fourth term of (50) is therefore zero. The third term can be evaluated by substitution of μ for ξ_r in equation (53), which leads to


$\tfrac{1}{2}\, \sigma^2(\xi) \sum_{t=0}^{n} z''_{tt}(\mu) = \sigma_x^4 \sum_{t=0}^{n} \frac{n(n+1)\mu - 2t(n+1)\mu}{\mu^3 (n+1)^3} = \frac{1}{(n+1)^2} \sum_{t=0}^{n} (n - 2t) = 0,$

therefore

$E\{z(\xi)\} = z(\mu) = \frac{n}{2},$

which confirms the result of section 4.2.3.

The second term of (51) is zero since ρ_ξ(τ) = 0. The first term is found by squaring (52) and substituting μ for ξ_t. This leads to

$\sigma^2\{z(\xi)\} = \frac{2}{(n+1)^2} \sum_{t=0}^{n} \Big( t^2 - tn + \tfrac{1}{4}n^2 \Big) = \frac{n(n+2)}{6(n+1)} \qquad (56)$

Comparison with the exact expression (40) shows that (56) will produce too large values for σ²{z}. This discrepancy will occur specifically for small values of n. For n = 24, for instance, expression (40) gives σ²{z} = 3.56 while (56) gives σ²{z} = 4.16. However, for large n, (56) approaches n/6, which for practical purposes may be accepted, as was suggested at the end of section 4.2. The approximation then leads to the same result as was obtained by simplification of the exact expression, which resulted in equation (42).

The approximate evaluation of the center mean and variance for a coloured noise series u_t is somewhat more involved. Let the corresponding squared series be denoted by η_t. Since u_t is correlated and has zero mean, it follows from (49) and (55) that

$\rho_\eta(\tau) = \rho_u^2(\tau) \qquad (57)$

In second approximation the expected value of z(η) is

$E\{z(\eta)\} = z(\mu) = \frac{n}{2} \qquad (58)$


For the third term this can be shown in the same manner as it was done in the white noise case. The fourth term is

$\tfrac{1}{2} \sum_{s \neq t} z''_{st}(\mu)\, \rho_{st}\, \sigma^2(\eta) = \frac{2}{(n+1)^2} \sum_{s=0}^{n-1} \sum_{t=s+1}^{n} (n - s - t)\, \rho_{st} = \frac{2}{(n+1)^2} \Big\{ n \sum_{s=0}^{n-1} \sum_{t=s+1}^{n} \rho_{st} - \sum_{s=0}^{n-1} \sum_{t=s+1}^{n} (s+t)\, \rho_{st} \Big\}$

Since u_t is a stationary process, the autocorrelation depends on |s−t| rather than on s and t. With τ = |s−t| the double summations between the braces can both be shown to reduce to

$n \sum_{\tau=1}^{n} (n - \tau + 1)\, \rho(\tau)$

The fourth term is therefore zero.

Application of expression (51) to the squared series $\eta_t$ gives for the first term the same result (56) as in the white noise case. The second term, however, is not equal to zero. It can be evaluated as follows, by substitution of (52):

$$2\sum_{s<t} z_s'(\mu)\,z_t'(\mu)\,\rho_{st}\,\sigma^2(\eta) = \frac{1}{(n+1)^2}\sum_{s=0}^{n-1}\sum_{t=s+1}^{n}(2s-n)(2t-n)\,\rho_{st}$$

and with $\tau = t - s$ this reduces to

$$\frac{1}{(n+1)^2}\sum_{\tau=1}^{n}\rho_\eta(\tau)\sum_{s=0}^{n-\tau}(2s-n)(2s+2\tau-n)$$

The second summation can be reformulated:

$$\sum_{s=0}^{n-\tau}(2s-n)(2s+2\tau-n) = 4\sum_{s=1}^{n-\tau}s^2 - 4(n-\tau)\sum_{s=1}^{n-\tau}s + \sum_{s=0}^{n-\tau}n(n-2\tau) =$$
$$= 4\,\frac{(n-\tau)(n-\tau+1)(2n-2\tau+1)}{6} - 4(n-\tau)\,\frac{(n-\tau)(n-\tau+1)}{2} + n(n-2\tau)(n-\tau+1) =$$
$$= \tfrac{1}{3}\left\{n(n+2)(n+1) - \tau\left(3n^2+6n+2\right) + 2\tau^3\right\}$$

Two expressions may therefore be written for the variance of $z(\eta)$:

$$\sigma^2\{z(\eta)\} = \frac{n(n+2)}{6(n+1)} + \frac{1}{(n+1)^2}\sum_{\tau=1}^{n}\rho_\eta(\tau)\sum_{s=0}^{n-\tau}(2s-n)(2s+2\tau-n) \qquad (59)$$

and

$$\sigma^2\{z(\eta)\} = \frac{n(n+2)}{6(n+1)} + \frac{n(n+2)}{3(n+1)}\sum_{\tau=1}^{n}\rho_\eta(\tau) - \frac{3n^2+6n+2}{3(n+1)^2}\sum_{\tau=1}^{n}\tau\,\rho_\eta(\tau) + \frac{2}{3(n+1)^2}\sum_{\tau=1}^{n}\tau^3\,\rho_\eta(\tau) \qquad (60)$$
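The passage from (59) to (60) is pure summation algebra and can be verified numerically. In the sketch below (Python with NumPy; the $\rho$ values are arbitrary test numbers, not a genuine autocorrelation) the second term is evaluated both by the double sum of (59) and by the three weighted sums of (60):

```python
# Numerical check that the double sum in (59) equals the
# reformulated three-sum expression appearing in (60).
import numpy as np

rng = np.random.default_rng(2)
n = 30
rho = rng.uniform(-0.5, 0.5, n + 1)       # arbitrary test values rho[1..n]

# second term of (59): direct double summation
term59 = sum(rho[tau] * sum((2*s - n) * (2*s + 2*tau - n)
                            for s in range(n - tau + 1))
             for tau in range(1, n + 1)) / (n + 1)**2

# second term of (60): three weighted sums over tau
taus = np.arange(1, n + 1)
r = rho[1:]
term60 = (n*(n + 2) / (3*(n + 1)) * r.sum()
          - (3*n**2 + 6*n + 2) / (3*(n + 1)**2) * (taus * r).sum()
          + 2 / (3*(n + 1)**2) * (taus**3 * r).sum())

print(term59, term60)   # the two values agree to machine precision
```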

The initial assumption is that $u_t$ is generated by a linear process

$$u_t = \sum_{s=0}^{\infty} a_s\, x_{t-s} \qquad (61)$$

where $a_s$ is the disturbance wavelet and $x_t$ is white noise of zero mean and power 1, that is

$$E(x_t) = 0, \qquad E(x_s x_t) = \begin{cases} 1, & s = t \\ 0, & s \neq t \end{cases} \qquad (62)$$

The autocorrelation of the stationary time series $u_t$ is

$$\varphi_{uu}(\tau) = E(u_{t+\tau}\, u_t)$$

Expression (61), written explicitly, gives

$$u_t = a_0 x_t + a_1 x_{t-1} + a_2 x_{t-2} + \dots$$

and

$$u_{t+\tau} = \sum_s a_s\, x_{t+\tau-s} = a_0 x_{t+\tau} + a_1 x_{t+\tau-1} + \dots + a_\tau x_t + a_{\tau+1} x_{t-1} + \dots$$

and therefore

$$\varphi_{uu}(\tau) = E[(a_0 x_{t+\tau} + \dots + a_\tau x_t + a_{\tau+1} x_{t-1} + \dots)(a_0 x_t + a_1 x_{t-1} + \dots)]$$

By equation (62) one sees that the cross-product terms vanish. Furthermore, the constant coefficients $a_s$ can be taken outside the expectation symbol, and thus

$$\varphi_{uu}(\tau) = a_0 a_\tau\, E(x_t^2) + a_1 a_{\tau+1}\, E(x_{t-1}^2) + \dots = a_0 a_\tau + a_1 a_{\tau+1} + \dots = \sum_{t=0}^{\infty} a_t\, a_{t+\tau}$$

The autocorrelation of the noise series represented by the linear process of equation (61) is therefore equal to the autocorrelation of the disturbance wavelet:

$$\varphi_{uu}(\tau) = \varphi_{aa}(\tau)$$

To obtain the normalized autocorrelation coefficients $\rho_u(\tau)$ it is therefore valid to compute the autocorrelation of the disturbance wavelet and to subsequently normalize it:

$$\rho_u(\tau) = \frac{\sum_{t=0}^{\infty} a_t\, a_{t+\tau}}{\sum_{t=0}^{\infty} a_t^2}$$
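This normalization is straightforward to program. The sketch below (Python with NumPy; the short wavelet is a made-up example) computes $\rho_u(\tau)$ from the wavelet and checks it against the sample autocorrelation of a long series $u_t$ generated by the linear process (61):

```python
# Normalized autocorrelation of a disturbance wavelet, checked against
# the sample autocorrelation of u_t = sum_s a_s x_{t-s}, eq. (61).
import numpy as np

rng = np.random.default_rng(3)
a = np.array([0.0, 1.0, 2.0, 1.0, 0.5])     # made-up test wavelet a_t

# rho_u(tau) = sum_t a_t a_{t+tau} / sum_t a_t^2
phi_aa = np.correlate(a, a, mode="full")[len(a) - 1:]
rho_u = phi_aa / phi_aa[0]

# generate u_t by the linear process (61) with unit-power white noise
x = rng.standard_normal(500_000)
u = np.convolve(x, a, mode="valid")
rho_hat = np.array([np.dot(u[:len(u) - k], u[k:]) / np.dot(u, u)
                    for k in range(len(a))])

for k in range(len(a)):
    print(f"tau={k}: from wavelet {rho_u[k]:+.3f}   from series {rho_hat[k]:+.3f}")
```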

If the wavelet $a_t$ has duration $m\Delta t$, or, in other words, contains m+1 samples, and the noise realization is of shorter duration, n < m, then the variance of $z(\eta)$ should be computed by means of equation (59). When n > m, on the other hand, one can exploit the fact that the three summations in (60) remain constant, which leads to an expression for the variance that depends only on n.

It is, however, cumbersome having to determine $\rho_\eta(\tau)$ for each individual case of a seismic noise realization. The remedy for this is to make a general and acceptable assumption about the nature of the disturbance wavelet $a_t$. If it is, for instance, assumed that $a_t$ is a Ricker wavelet (Ricker, 1944) like the one employed in section 4.3.2, with m = 24, the autocorrelation coefficients can be computed and subsequently the summations of expression (60) can be evaluated, using $\rho_\eta(t) = \rho_a^2(t)$ from (57). This results in

$$\sum_{t=1}^{m}\rho_a^2(t) = 1.84 \qquad \sum_{t=1}^{m} t\,\rho_a^2(t) = 6.46 \qquad \sum_{t=1}^{m} t^3\,\rho_a^2(t) = 201.94$$


Substitution of these values into equation (60) leads to

$$\sigma^2(z) = \frac{n(n+2)}{6(n+1)} + \frac{n(n+2)}{3(n+1)} \times 1.84 - \left\{1 - \frac{1}{3(n+1)^2}\right\} \times 6.46 + \frac{2}{3(n+1)^2} \times 202 =$$
$$= 4.68\,\frac{n(n+2)}{6(n+1)} + \frac{410.46}{3(n+1)^2} - 6.46, \qquad n > m = 24 \qquad (63)$$

For n = 24,

$$\sigma^2(z) = 4.68 \times 4.16 + 0.22 - 6.46 = 13.23$$

whereas the corresponding experimental value is 10.82. The variances estimated by equation (59) are given in table 3. As was to be expected, the estimates are larger than the experimental values of table 2. A plot of the estimated variances as a function of n is shown in fig. 6.

[Figure 6: variance (vertical axis, 0 to 30) plotted against duration of the noise realization in nr of sampling intervals (horizontal axis, 10 to 50).]

Fig. 6. Variance of the center position of a seismic noise realization, as a function of realization duration (full line). The + symbols represent experimental values; the dashed line is the relation for white noise.


Table 3. Results of approximate evaluation of the variance of the center position of seismic noise realizations with duration nΔt.

  n    first term of    second term    variance    standard
       approximation                               deviation
  4        0.80             0.25          1.05        1.03
  6        1.14             0.20          1.35        1.16
  8        1.48             0.58          2.06        1.44
 10        1.82             1.35          3.17        1.78
 12        2.15             2.29          4.44        2.11
 14        2.48             3.32          5.81        2.41
 16        2.82             4.42          7.24        2.69
 18        3.16             5.56          8.71        2.95
 20        3.49             6.72         10.21        3.19
 22        3.82             7.89         11.72        3.42
 24        4.16             9.09         13.25        3.64
 26        4.49            10.28         14.78        3.84
 28        4.83            11.49         16.32        4.04
 30        5.16            12.70         17.86        4.23
 32        5.49            13.92         19.41        4.41
 34        5.83            15.13         20.96        4.57
 36        6.16            16.35         22.51        4.74
 38        6.50            17.57         24.06        4.91
 40        6.83            18.79         25.62        5.06
 42        7.16            20.01         27.17        5.21
 44        7.50            21.23         28.73        5.36
 46        7.83            22.46         30.29        5.50
 48        8.16            23.68         31.85        5.64
 50        8.50            24.91         33.40        5.78

The quantity that is employed for practical purposes is the standard deviation, which is found by taking the square root of the variance. The percent discrepancy between "true" and estimated standard deviation will thus be one half of the percent discrepancy between "true" and estimated variance.

Expression (63) may be simplified for practical purposes. For large n the variance of the center is

$$\sigma^2(z) = 0.78\,\frac{n(n+2)}{n+1} - 6.46 \approx \frac{3n - 26}{4}$$


And, as a final rule-of-thumb, the center of a seismic noise realization of duration $n\Delta t$ may be expressed by

$$z = \frac{n}{2} \pm \tfrac{1}{2}\sqrt{3n - 26}, \qquad n > m = 24 \qquad (64)$$

4.4. Conclusions

The center position, or first order length, of a realization of a random noise process has an expected value that is equal to half its time duration. In terms of poles and zeros in the complex plane this means that the z-transform of the realization has an equal number of poles and zeros inside and outside the unit circle. The center may deviate from its expected position by an amount of order $\pm\sqrt{n/6}$ for a white noise realization of duration $n\Delta t$, and by an amount of order $\pm\tfrac{1}{2}\sqrt{3n-26}$, n > m = 24, for a seismic noise realization generated by a linear process with a Ricker-shaped disturbance wavelet of duration $m\Delta t$, with m = 24. The deviation from the expected position, in the seismic case, depends on the autocorrelation and thus on the amplitude spectrum of the disturbance wavelet. The phase-lag of the disturbance wavelet has no influence.

The center position is a random variable equal to a quotient of quadratic forms. A method to derive the distribution function of some quotients of quadratic forms has been published by Abrahamse (1969); it is, however, a rather involved numerical procedure. It is therefore perhaps best to confine oneself to the experimental evidence, which suggests that the probability density function of the center position is symmetrical.


5. THE PHASE-LAG OF A NOISE-CONTAMINATED SEISMIC WAVELET.

5.1. Introduction.

In this section the phase-lag of the sum of a seismic wavelet of duration $n\Delta t$ and a noise realization of equal duration is studied. The following definitions will be required:

A digital signal with coefficients $a_t$ is an energy signal if its energy is finite:

$$E = \sum_{t=-\infty}^{\infty} a_t^2 < \infty \qquad (65)$$

A digital signal with coefficients $x_t$ is a power signal if its power is finite:

$$P = \lim_{T\to\infty}\frac{1}{2T+1}\sum_{t=-T}^{T} x_t^2 < \infty \qquad (66)$$

A wavelet with coefficients $b_t$ is a one-sided energy signal:

$$b_t = 0 \ \text{for} \ t < 0, \qquad \sum_{t=0}^{\infty} b_t^2 \ \text{finite} \qquad (67)$$

A wavelet is therefore a transient.
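The distinction between (65) and (66) can be made concrete with a small numerical sketch (Python with NumPy; the geometrically decaying wavelet and the sample sizes are arbitrary illustrative choices): a transient has energy that converges as more samples are included, while a stationary noise realization has ever-growing energy but roughly constant power.

```python
# Energy (65) vs. power (66): a decaying wavelet has finite energy;
# stationary noise has finite power but unbounded accumulated energy.
import numpy as np

rng = np.random.default_rng(4)
for T in (10**3, 10**5):
    t = np.arange(T)
    b = 0.9**t                     # made-up wavelet: b_t = 0.9^t
    x = rng.standard_normal(T)     # realization of a stationary noise process
    print(f"T={T}:",
          f"wavelet energy {np.sum(b**2):.2f} (converges),",
          f"noise energy {np.sum(x**2):.0f} (grows),",
          f"noise power {np.mean(x**2):.2f} (stays near 1)")
```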

5.2. The center of two added signals.

A digital signal built up by the addition of two other signals is considered:

$$s_t = x_t + y_t \qquad (68)$$

It is assumed that $x_t$ and $y_t$ are wavelets. The center of $s_t$ is given by

$$z(s) = \frac{\sum_{t=0} t\,s_t^2}{\sum_{t=0} s_t^2} = \frac{\sum_{t=0} t\,(x_t+y_t)^2}{\sum_{t=0}(x_t+y_t)^2} = \frac{\sum_{t=0} t\,x_t^2 + \sum_{t=0} t\,y_t^2 + 2\sum_{t=0} t\,x_t y_t}{\sum_{t=0} x_t^2 + \sum_{t=0} y_t^2 + 2\sum_{t=0} x_t y_t} \qquad (69)$$
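Expression (69) translates directly into code. The sketch below (Python with NumPy; the two short wavelets are made-up examples) evaluates the center of the sum both directly and via the expanded numerator and denominator of (69), making the cross-terms explicit:

```python
# Center of the sum of two wavelets, eq. (69), with the cross-terms explicit.
import numpy as np

def center(s):
    """First-order length z = sum t*s_t^2 / sum s_t^2 of a digital signal."""
    t = np.arange(len(s))
    return np.dot(t, s**2) / np.dot(s, s)

x = np.array([1.0, 2.0, 1.0, 0.0, 0.0])   # made-up wavelet x_t
y = np.array([0.0, 0.0, 0.5, 1.0, 0.5])   # made-up wavelet y_t
s = x + y

t = np.arange(len(s))
num = np.dot(t, x**2) + np.dot(t, y**2) + 2 * np.dot(t, x * y)
den = np.dot(x, x) + np.dot(y, y) + 2 * np.dot(x, y)

print("z(x) =", center(x), "  z(y) =", center(y))
print("z(s) =", center(s), "  via (69):", num / den)   # identical by construction
```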
