
Inversion methods for the separation of blended data

PROEFSCHRIFT (dissertation)

for the degree of doctor at Delft University of Technology,

by authority of the Rector Magnificus, prof. ir. K.C.A.M. Luyben,

chairman of the Board for Doctorates,

to be defended in public

on Monday 14 January 2013 at 10:00

by

Panagiotis DOULGERIS

electrical and computer engineer, Aristotle University of Thessaloniki


This dissertation has been approved by the promotor: Prof. dr. W.A. Mulder

and the copromotor: Dr. ir. G. Blacquière

Composition of the doctoral committee:

Rector Magnificus, chairman
Prof. dr. W.A. Mulder, Technische Universiteit Delft, promotor
Dr. ir. G. Blacquière, Technische Universiteit Delft, copromotor
Prof. dr. ir. A. Gisolf, Technische Universiteit Delft
Prof. dr. ir. E.C. Slob, Technische Universiteit Delft
Prof. dr. ir. C. Vuik, Technische Universiteit Delft
Dr. ir. D.J. Verschuur, Technische Universiteit Delft, advisor
Dr. ir. Z. Tang, Shell Global Solutions International, advisor
Prof. dr. ir. R.F. Hanssen, Technische Universiteit Delft, reserve member

ISBN 978-94-6182-218-5

Copyright © 2012 by P. Doulgeris.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the author.

SUPPORT

The research for this thesis was financially supported by the Delphi consortium.

Typesetting system: LaTeX.


Contents

1 Introduction
  1.1 The Seismic experiment
  1.2 Modelling the seismic experiment
  1.3 Blending
  1.4 Objective
  1.5 Literature review
  1.6 Thesis outline

2 Blending and pseudodeblending
  2.1 Formulating blending
  2.2 Inverse problem
  2.3 The minimum-norm solution

3 Deblending with coherency constraints
  3.1 Coherency constraint
  3.2 Iteration
  3.3 Coherency-pass filter
  3.4 Extended algorithm

4 Convergence analysis
  4.1 Convergence to the unique solution of the subproblem
  4.2 Leakage subspace
  4.3 Convergence in the presence of leakage subspace
  4.4 Convergence rate
  4.5 Convergence check
  4.6 Connecting acquisition with deblending via the leakage subspace
  4.7 The role of the outer iteration
  4.8 Acknowledgement

5 Application to field data
  5.1 Deblending with different coherency-pass filters
  5.2 A real blended experiment
  5.3 Blended experiments with more sources
  5.4 Acknowledgements

6 Integrating surface-related multiple elimination
  6.1 Motivation
  6.2 Surface-related multiple elimination
  6.3 Forward model with surface multiples
  6.4 Deblending and surface multiple removal
  6.5 Example
  6.6 Discussion
  6.7 Acknowledgement

7 Conclusions and Discussion
  7.1 Conclusions
  7.2 Discussion: to deblend or not to deblend

A Proofs
  A.1 Change of notation
  A.2 Convergence to the unique solution
  A.3 Convergence to the minimum-norm solution
  A.4 Convergence rate

B Computation of the leakage subspace
  B.1 Method

C Coherency-based separation methods
  C.1 Assumptions
  C.2 Methods

D Estimation of deblended primaries by sparse inversion
  D.1 Introduction
  D.2 Method
  D.3 Example
  D.4 Conclusions

E Business value of a deblending algorithm
  E.1 The value chain of the upstream business
  E.3 Computation of the added value

Bibliography

Summary

Samenvatting

CV note

1 Introduction

1.1 The Seismic experiment

The seismic experiment belongs to the broader category of non-destructive testing. As the name suggests, such methods are utilized when we wish to explore the properties of the interior of an object without destroying it. In the oil & gas industry, the objective is the detection of hydrocarbon reservoirs in the subsurface; hence, the object we wish to explore is the subsurface.

Exploration methods utilized today can be divided into active and passive. Active methods require sources, which excite the earth, and detectors that measure the earth's response. Passive methods require only detectors, since they rely on excitation by noise sources, which are always present. In both cases, in order to retrieve information about the subsurface, we need to apply an inversion procedure. This process can be described as making sophisticated guesses about what the subsurface looks like, and then evaluating them using the laws of physics. The disciplines of physics most frequently deployed are concerned with electromagnetic fields, elastic fields, ground movement, gravity and others. Among these, pressure and ground movement are typically used for oil & gas exploration purposes as part of the seismic experiment.

In this thesis, I will study only active surface seismic experiments. The minimum gear needed to perform such an experiment consists of a source, a detector and a recording device. The source emits acoustic waves into the ground and the detector records the reflected wave field that is generated mainly by the boundaries between different layers in the subsurface. In principle, the source should have an infinite bandwidth in order to recover the exact impulse response of the subsurface. In practice, the sources currently used produce a band-limited signal, typically within the range of 5–250 Hz. The direct wave of the source as well as the reflected waves are then received by at least one detector. The signal is then stored in a recording device. In practice, measurements are done such that sufficient time intervals exist between the firing of successive sources. In this way, there is no overlap between the responses of different sources.
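The band-limited nature of a real source can be illustrated with a Ricker wavelet, a zero-phase pulse commonly used as a stand-in for a seismic source signature. The wavelet itself and the 30 Hz peak frequency below are illustrative choices, not values taken from this thesis:

```python
import numpy as np

def ricker(f_peak, dt, n):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz),
    a common model of a band-limited seismic source signature."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.002                        # 2 ms time sampling
w = ricker(30.0, dt, 256)         # 30 Hz peak, well inside the 5-250 Hz range
spec = np.abs(np.fft.rfft(w))     # amplitude spectrum of the wavelet
freqs = np.fft.rfftfreq(256, dt)
peak = freqs[np.argmax(spec)]     # spectral peak lies near 30 Hz
```

The amplitude spectrum vanishes towards both zero frequency and high frequencies, which is exactly the band limitation that prevents recovery of the exact impulse response.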

The simplest configuration of a seismic experiment is to deploy the source and the detector along a straight line. In this way we can disregard one dimension (the 2D assumption), which simplifies the processing of the data. Figure 1.1 illustrates such a simple seismic experiment. Since the 1970s, the seismic exploration industry has moved to more sophisticated 3D experiments. In such experiments, the sources and/or receivers extend over two dimensions at the surface. This enables better illumination and sensing of the boundaries in the subsurface from several angles. Taking into account the logistics involved, these experiments are organized in large-scale seismic surveys. These surveys fall under two main categories, depending on location, namely onshore (or land) and offshore (or marine) surveys.

Land surveys

Land surveys can extend up to many thousands of square kilometres and typically employ large crews. Most land surveys use 3D acquisition designs, and the duration of the experiments, combined with the cost of the necessary infrastructure, turns these surveys into multi-million investments with a great impact on local societies and economies.

Depending on the exact location, certain challenges arise in such surveys. For example, the complexity of the near surface, i.e., the first few layers below the surface, can distort the acquired images of the underlying formations. Such challenges are strongly present in data acquired in the deserts of the Middle East, e.g., see Keho and Kelamis [2012]. Another challenge might be the surface terrain itself, since such surveys often take place near or even within populated areas. Such challenges have been addressed in the last decades by the use of non-impulsive sources. The shift from dynamite to vibroseis, as this technology of vibrating seismic sources is called, has introduced both complexity and opportunities to seismic acquisition designs.

Marine surveys

Several configurations for acquiring seismic data in marine environments have been proposed and implemented throughout the years. Among them, the towed-streamer technology has been the most popular and most widely deployed method for offshore exploration. The equipment for conducting a seismic experiment consists of at least one source and one receiver, as depicted in Figure 1.1. In the towed-streamer case, a vessel tows both a source and a streamer. The source is typically an array of airguns that acts as an impulsive source. A streamer is a cable carrying hydrophones that receive the acoustic waves and send them to recording devices, which are kept on the vessel.

Following the paradigm of land surveys, 3D designs have been introduced in the marine case as well. In these cases, more than one streamer sails in parallel, all towed by the main vessel. Figure 1.2 illustrates a simple 3D configuration, which includes the vessel, an airgun as the source, and four streamers.

During the last decade, another type of acquisition design has been introduced, namely wide-azimuth (WAZ) acquisition. In these cases, additional boats carrying seismic sources sail in parallel to the main vessel that tows the streamers. Such surveys can sometimes become prohibitively expensive since they require more than one pass over a certain area. An overview of the different types of acquisition for the towed-streamer configuration can be found in Figure 1.3. Other possible configurations that are common in marine environments include the ocean-bottom cable (OBC) and ocean-bottom node (OBN) technologies. In these cases, the receivers are located on the sea floor and vessels equipped with seismic sources sail at the surface. Such systems are frequently installed for monitoring purposes in areas where a reservoir is already being produced. The area covered during a marine seismic survey can range from a few hundred to hundreds of thousands of square kilometres. During these surveys, the seismic sources emit large amounts of acoustic energy that can have a direct impact on the populations of the regional or oceanic ecosystems. Although precautions for marine mammals are part of the standard procedure of a seismic survey, awareness has risen lately. This has led to the establishment of regulations for securing the safety of marine populations, which can impose additional constraints on the duration and design of acquisition surveys.

Figure 1.2: A towed-streamer configuration requires (1) a seismic vessel, (2) an airgun or airgun array that acts as the source, and (3) a network of streamers.


Figure 1.3: The basic elements of 3 acquisition types for a towed-streamer configuration: 2D, 3D, and WAZ. The stars denote marine sources, whereas the triangles denote hydrophones.

1.2 Modelling the seismic experiment

The main goal of the seismic experiment is to obtain structural or property information on the formations that are present in the subsurface. The problem of extracting this information from the acquired data is an inverse problem, see Tarantola [2005]. One of the most crucial parts in solving such problems is forming an accurate forward problem, in this case a method for generating seismic data for a given model of the subsurface.

In this thesis I generally follow wave field extrapolation-based modelling as described by the WRW model, see Berkhout [1982]. The WRW model offers not only a modelling technique but also a simplified matrix notation for describing complex operations. At the core of this notation lies the way we arrange the acquired data in the so-called data matrix.

The data matrix

During the data acquisition phase in the field, the data recorded by each detector are stored as separate traces that in addition carry information about the source and detector locations. Based on these locations, these traces can be arranged in a data matrix.

The traces recorded during a seismic experiment can be arranged in a rectangular parallelepiped. The dimensions are the source location, the receiver location and time, see Figure 1.4(a). If we compute the Fourier transform of every trace along the time direction and select one frequency component from every trace, we can form the data matrix P(zd, zs), see Figure 1.4(b). The depth indication (zd, zs) follows the convention (detectors' depth, sources' depth).
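The construction of the monochromatic data matrix can be sketched numerically. This is a minimal sketch, assuming the recorded traces are stored as a NumPy cube of shape (number of sources, number of receivers, number of time samples); all sizes and the 25 Hz pick are illustrative:

```python
import numpy as np

# Hypothetical data cube: n_src shot records, each with n_rec traces of n_t samples.
n_src, n_rec, n_t, dt = 8, 16, 128, 0.004
rng = np.random.default_rng(0)
cube = rng.standard_normal((n_src, n_rec, n_t))

# Fourier transform every trace along the time direction:
spectra = np.fft.rfft(cube, axis=-1)       # shape (n_src, n_rec, n_freq)
freqs = np.fft.rfftfreq(n_t, dt)
k = np.argmin(np.abs(freqs - 25.0))        # frequency component nearest 25 Hz

# Monochromatic data matrix P(zd, zs): rows = receiver position, columns =
# source position, so each column is one (monochromatic) common-source gather.
P = spectra[:, :, k].T                     # complex matrix, shape (n_rec, n_src)
```

Repeating this for every frequency component reproduces the full frequency-domain representation of the data cube.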

Figure 1.4: (a) Seismic data arranged in a rectangular parallelepiped (axes: source position, receiver position, and time). (b) The monochromatic data matrix. (c) The monochromatic data matrix for 3D data.

In the 3D case, every trace has a unique set of coordinates, namely the source x and y coordinates and the receiver x and y coordinates. Following the data-matrix convention described above, a 3D seismic dataset would be arranged as a 5D volume. Kinneging et al. [1989] describe an efficient way of representing such volumes by appending 2D datasets. Figure 1.4(c) shows an example of what a 3D (monochromatic) data matrix looks like; x and y stand for the x and y coordinates, whereas the s and r subscripts denote the source and receiver side, respectively.

The WRW parametrisation

The key feature of the WRW model is that it describes the seismic data in terms of matrix operators in the frequency domain. Assuming that both sources and receivers are located at the surface (zd = zs = z0), each monochromatic component P(z0, z0) of the primary wave field can be described in the space-frequency domain as

P(z_0, z_0) = D(z_0) \sum_{m=1}^{M} \left[ W(z_0, z_m) \, R(z_m, z_m) \, W(z_m, z_0) \right] S(z_0).   (1.2.1)

The depth levels zm are the levels where the waves are reflected. The spatial location indices xs, xr, ys, and yr are implicit in the matrix notation. The designation 'WRW model' stems from the matrix operators W and R, which are explained below. It is worthwhile to mention that the WRW model serves as the vehicle for the CFP technology (see Thorbecke [1997]). Figure 1.5 shows a schematic representation of the model.

The matrix operators in equation (1.2.1) have the following meaning:

• S(z0): source matrix; each column represents the source wave field for one experiment at depth level z0, typically at the surface. In the simplest case a column contains only one non-zero element, meaning that only one location is active at a time. Complex source arrays can be easily expressed by the source matrix.

• W(zm, z0): forward wave field propagation matrix. Each column contains a discretised Green's function describing wave propagation from one point (one lateral location) at depth level z0 to many points at depth level zm.

• R(zm, zm): reflectivity matrix, which performs the conversion of an incident wave field into a reflected wave field at depth level zm.

• W(z0, zm): forward wave field propagation matrix. Each column contains a discretised Green's function describing wave propagation from one point at the reflection level zm to many points at the receiver level z0.

• D(z0): detector matrix, the equivalent of the source matrix. It contains the impulse response of the detecting devices, e.g., hydrophones, and their locations. Complex detector arrays can be easily expressed by the detector matrix.
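As a toy illustration, equation (1.2.1) can be evaluated for one frequency component with small random matrices standing in for the operators. The sizes, the random stand-ins for the Green's functions, and the transpose shortcut for the return propagation are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16        # lateral grid points at the surface and at each depth level
M = 3         # number of reflecting depth levels

D = np.eye(n)                     # detector matrix: ideal point receivers
S = np.eye(n, dtype=complex)      # source matrix: one point source per experiment

# Toy stand-ins for the operators: random complex matrices replace the
# discretised Green's functions of the real W operators.
P = np.zeros((n, n), dtype=complex)
for m in range(M):
    W_up   = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    R      = np.diag(rng.uniform(-0.3, 0.3, n)).astype(complex)
    W_down = W_up.T               # assumed reciprocity: W(zm,z0) = W(z0,zm)^T
    P += W_up @ R @ W_down        # contribution of depth level zm
P = D @ P @ S                     # eq. (1.2.1): P = D * sum_m [W R W] * S
```

With point sources and receivers and the assumed reciprocity, the resulting monochromatic data matrix is symmetric, i.e., interchanging a source and a receiver position leaves the trace unchanged.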

Since every column of the data matrix contains traces that recorded the response from the same source, it represents a common source gather (shot record). Similarly, every row represents a common receiver gather. Data gathers such as common offset or mid-point gathers can also be identified in the data matrix of Figure 1.4(b) as diagonals or anti-diagonals, respectively.

The Reflectivity Matrix

In the previous section, we reviewed the general description of the WRW model. The reflectivity matrix was introduced as a matrix that converts the incident wave field into a reflected wave field. This matrix will be further studied in this section.

Figure 1.5: The WRW model according to equation (1.2.1). For every depth level, a


The most compact form of a reflectivity operator can be given in the wavenumber-frequency domain. In this domain, the reflected (upgoing) wave field at depth z1, P̃−(kx, z1, z0, ω), is given by an element-by-element multiplication of the downgoing wave field, P̃+(kx, z1, z0, ω), with the reflectivity operator, R̃(kx, z1, z1, ω), as follows:

\tilde{P}^-(k_x, z_1, z_0, \omega) = \tilde{R}(k_x, z_1, z_1, \omega) \times \tilde{P}^+(k_x, z_1, z_0, \omega).   (1.2.2)

For example, in the simple case of a horizontal reflector, the reflectivity operator R̃ is given by

\tilde{R}(k_x, z_1, z_1, \omega) = \frac{\rho_2 k_{z,1} - \rho_1 k_{z,2}}{\rho_2 k_{z,1} + \rho_1 k_{z,2}},   (1.2.3)

according to Wapenaar and Berkhout [1989], where

k_{z,(1,2)} = \sqrt{k_{1,2}^2 - k_x^2}   (1.2.4)

and

k_x = k_{1,2} \sin \alpha_{i,t}.   (1.2.5)

Here, ρ is the density, and subscripts 1 and 2 refer to the upper and lower layer around depth level z1, respectively. αi and αt are the angles of incidence and transmission, respectively. In the following equations, kx and kz denote two spatial components of the wavenumber k, see eq. (1.2.4). In the concept of plane-wave decomposition, each plane wave and the corresponding vector k have a certain dip. kx is k's projection on the x axis, which in this case is the receivers' axis. Therefore, it is inversely proportional to the apparent velocity of the plane wave. Moreover, the spatial sampling on the acquisition surface required for alias-free results is directly related to the maximum kx value present in the wave field.
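Equations (1.2.3)–(1.2.5) can be evaluated numerically for a hypothetical horizontal reflector. The layer parameters below are invented for illustration, and k_{1,2} = ω/c_{1,2} is an assumption not spelled out in the text. At normal incidence (kx = 0), the expression reduces to the familiar impedance contrast (ρ2c2 − ρ1c1)/(ρ2c2 + ρ1c1):

```python
import numpy as np

# Hypothetical two-layer model (densities in kg/m^3, velocities in m/s):
rho1, c1 = 1000.0, 1500.0        # water-like upper layer
rho2, c2 = 2000.0, 2500.0        # sediment-like lower layer
omega = 2 * np.pi * 30.0         # angular frequency at 30 Hz

k1, k2 = omega / c1, omega / c2  # wavenumber magnitude in each layer (assumed w/c)

def reflectivity(kx):
    """Eq. (1.2.3): angle-dependent reflectivity of a horizontal reflector."""
    kz1 = np.sqrt(k1**2 - kx**2 + 0j)   # eq. (1.2.4); complex beyond critical angle
    kz2 = np.sqrt(k2**2 - kx**2 + 0j)
    return (rho2 * kz1 - rho1 * kz2) / (rho2 * kz1 + rho1 * kz2)

# At normal incidence (kx = 0) this reduces to the impedance contrast:
r0 = reflectivity(0.0)
z1, z2 = rho1 * c1, rho2 * c2
assert np.isclose(r0.real, (z2 - z1) / (z2 + z1))
```

Beyond the critical angle (kx > k2 here, since the lower layer is faster), kz,2 becomes imaginary and the reflectivity attains unit magnitude, i.e., total reflection.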

Because lateral variations cannot easily be manipulated in the wavenumber-frequency domain, the aforementioned expressions should be formulated in the space-frequency domain. As a result, the multiplication should be replaced by a convolution in the space direction between the reflectivity operator in the space-frequency domain, R(x, z1, z1, ω), and the downgoing wave field expressed in the same domain, P̃+(x, z1, z0, ω). The convolution can be implemented as a matrix multiplication by forming a convolution matrix based on the reflectivity operator R(x, z1, z1, ω). Figure 1.6 schematically represents the formation of a convolution matrix from an operator.

Figure 1.6: Forming a convolution matrix.

As shown in this figure, in order to convolve the m-element operator Xj with the n-element operator Yi, the (m+n−1) × n convolution matrix is formed and multiplied with the column vector Yi.

In this way, each row of the reflectivity matrix, R(z1, z1), contains an operator in the space-frequency domain which corresponds to a certain grid point at depth z1. In the simple situation that all grid points at a certain depth have the same reflectivity properties (a homogeneous reflector), the reflectivity matrix has a Toeplitz structure.
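The construction of Figure 1.6 can be written down directly. The helper below (its name is ours, not from the thesis) builds the (m+n−1) × n convolution matrix and checks it against NumPy's convolve:

```python
import numpy as np

def convolution_matrix(x, n):
    """(m+n-1, n) matrix C such that C @ y == np.convolve(x, y) for len(y) == n."""
    m = len(x)
    C = np.zeros((m + n - 1, n))
    for j in range(n):
        C[j:j + m, j] = x     # each column is a shifted copy of the operator
    return C

x = np.array([1.0, -2.0, 0.5])       # 3-element operator (the row operator X_j)
y = np.array([4.0, 1.0, 3.0, 2.0])   # 4-element signal (the column Y_i)
C = convolution_matrix(x, len(y))    # shape (6, 4)
assert np.allclose(C @ y, np.convolve(x, y))
```

Because every column is the same operator shifted down by one sample, the matrix has constant diagonals, which is precisely the Toeplitz structure mentioned for a homogeneous reflector.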

1.3 Blending

In current acquisition designs, we allow for sufficient time intervals between the firing of sources. The aim is to make sure that all the reflected energy from the deepest layers in the subsurface arrives at the surface before we let the next source go off. In this way, the shot records we acquire have no interference from sources that were previously active in the area. But is this the most time-efficient way to perform a survey? What if we let sources fire in a blended way?


Definition

According to the Oxford Dictionary of English [2010]:

blend /blɛnd/ verb

mix (a substance) with another substance so that they combine together: blend the cornflour with a tablespoon of water | add the grated cheese and blend well

• (often as adjective blended) mix (different types of the same substance, such as tea, coffee, spirits, etc.) together so as to make a product of the desired quality: a blended whisky

• put or combine (abstract things) together: I blend basic information for the novice with some scientific gardening for the more experienced | (as noun blending) a blending of romanticism with a more detached modernism

• merge (a colour) with another so that one is not clearly distinguishable from the other: blend and smudge the darker colours under the bottom lashes

• (no object) form a harmonious combination: costumes, music, and lighting all blend together beautifully

• (blend in/into) be an unobtrusive or harmonious part of a greater whole by being similar in appearance or behaviour: she would have to employ a permanent bodyguard in the house, someone who would blend in

The word blending has been adopted to express a change in the way seismic acquisition takes place. The term was introduced by Berkhout [2008], who proposes to abandon the condition of non-overlapping shot records. Instead, a plea is made to move to densely sampled and wide-azimuth source distributions with relatively small time intervals between consecutive shots ('blended acquisition'). The small time intervals lead to interference between the responses of consecutive sources; hence, the acquired data are blended, see Figure 1.7.
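The interference that blending introduces can be sketched in a few lines: a blended record is the sum of individual shot records shifted by their firing-time delays. The single-trace records and the delay below are purely illustrative:

```python
import numpy as np

n_t, dt = 400, 0.004   # 400 samples at 4 ms

# Two hypothetical single-source shot records (one trace each, for simplicity):
rec1 = np.zeros(n_t); rec1[50] = 1.0        # reflection arriving at 0.2 s
rec2 = np.zeros(n_t); rec2[80] = 1.0        # reflection arriving at 0.32 s

# Blended acquisition: source 2 fires a short time after source 1, so its
# response overlaps with the record of source 1 (cross-talk).
delay = 25                                   # 0.1 s delay in samples
blended = rec1.copy()
blended[delay:] += rec2[:n_t - delay]
```

The blended trace contains both arrivals, at 0.2 s and at 0.32 s + 0.1 s; without further processing the second event appears as interference noise in the record of the first source.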

Blended surveys

The constraint of non-overlapping or unblended data leads to a severe waste of acquisition time. While long waiting times are encountered in onshore surveys, offshore, a seismic vessel may have to pass over the same region several times in a WAZ survey. The cost of seismic acquisition occupies the largest share of exploration budgets; hence, any method that reduces acquisition time would be extremely beneficial for the economics of a seismic survey. A more thorough description of the logistics of a blended survey can be found in appendix E.

Figure 1.7: Blending: each box represents a conventional shot record. A blended survey has the survey time of a sparse survey but the number of shots of a dense survey. Figure adapted from Berkhout [2008].

Blending can also be performed to increase the quality of the acquired data. Recent advances in 3D acquisition designs require good source coverage, hence, more time in the field. By reducing the required passes, blending can save time and make such surveys affordable. In this way, very sophisticated 3D surveys can be realized even at the first stages of exploration.

Apart from the quality/price improvement, the reduced acquisition times accomplished with blending can be desirable in cases where the actual duration of a survey is an issue. Such cases might include environmentally sensitive locations or locations with strict time regulations attached to the acquired license. The upcoming legislation for the protection of sea mammals in certain areas might also prove to be a boost towards the design of more blended surveys in the future.


The benefits of blended or simultaneous-source surveys, as they are alternatively called, come at some cost. Since we no longer wait for the response of one source to be recorded before we fire the next one, the responses of more than one source exist in every shot record. This type of interference noise, or cross-talk, can create severe artefacts during the processing of seismic data and may also mask weak reflections.

1.4 Objective

The objective of this thesis is to provide a method for the separation of blended data. The result of this method will contain the response of only one source in every shot record, hence, the subsequent processing steps will not suffer from interference noise. Specific focus is given to the marine case and, in this context, to the integration with a surface-related multiple elimination algorithm.

1.5 Literature review

There has recently been a good deal of attention paid to the concept of blended acquisition designs by both industry and academia. The idea of allowing more than one source to fire simultaneously has been around for a long time in land surveys; see, e.g., Silverman [1979] or Bagaini [2006] for an overview. However, new success stories of large production increases due to the blending of more sources in land surveys have been reported recently; see Howe et al. [2009] and Pecholcs et al. [2010]. In addition, blending has been brought into the spotlight in recent years by a number of publications that discuss both the advantages and the issues that arise in such surveys, e.g., Berkhout [2008] and Hampson et al. [2008].

While encouraging results have been obtained in land surveys, the concept of blending is not as widespread in the marine case. The idea of letting two sources fire simultaneously in marine surveys was first introduced by Beasley et al. [1998], while the use of random, quasi-random, or systematic time delays between firing sources was proposed by Vaage [2002]. Such an approach was used by Hampson et al. [2008] during their blended survey. The link between incoherent shooting and illumination properties of the subsurface was established by Berkhout [2008] and Berkhout et al. [2010]. This link is important because it directly connects the acquisition design with the information that we wish to extract from the recorded data.

A major decision to be taken regarding the processing of blended data is whether the data will be processed directly or first separated and then processed. In the first case, all the processing tools need to be redesigned in order to handle blended data. So far, such attempts aim at developing specialized least-squares migration schemes, e.g., see Dai et al. [2010] and Verschuur and Berkhout [2011]. An earlier article by Romero et al. [2000] is relevant to this work since it investigates the use of phase encoding in the simultaneous pre-stack migration of multiple shot records. In the second case, an algorithm is required that can deblend the data, i.e., separate the blended records into shot records that contain the response of only one source.

Deblending can be considered a denoising problem, treating the interference due to blending as noise. It has been reported by various authors (e.g., Doulgeris et al. [2010]) that by sorting the acquired blended data into a domain other than the common source domain, e.g., the common offset domain, the blending noise appears incoherent. Based on this property, Huo et al. [2009] use a vector median filter after resorting the data into common mid-point gathers. Kim et al. [2009] build a noise model from the data itself and then adaptively subtract the modelled noise from the acquired data; this algorithm is implemented in the common offset domain. Hassanien et al. [2012], based on reciprocity, numerically blend the data in order to obtain a new blended dataset. Comparing this with the recorded one, they distinguish between signal and interference noise. Then, they use a moving-average filter to suppress the interference noise.

Another approach is to formulate deblending as an inverse problem that estimates the unknown unblended data. Since this is an ill-posed problem, a regularisation term is required. Akerberg et al. [2008], Moore et al. [2008], and Moore [2010] use a sparsity constraint in the Radon domain in order to regularize the inversion. Similarly, Herrmann et al. [2009] and Lin and Herrmann [2009] use an inversion approach that constrains the separated data to be sparse in the curvelet domain. They link their sparse inversion algorithm to the growing field of compressive sensing. It is common practice in such an approach to use sophisticated source codes (e.g., sweeps, or random phase and/or amplitude encoding). Wapenaar et al. [2012] make the link with interferometry and propose a direct inversion that makes no explicit assumptions other than the inherent band limitation of seismic data. Their method requires the blending of adjacent sources and assumes that the responses to these sources will be similar.

Another set of inversion algorithms is based on the coherency of the seismic data. Abma et al. [2010] describe two methods inspired by the POCS algorithm, which is well known in the geophysical and signal-processing communities and whose basic theorem can be found in Cheney and Goldstein [1959]. Their implementation involves a filter and the application of a threshold. An extension to this method has been described by Beasley et al. [2012]. Their alternating-projections method introduces a second filter that focuses on retrieving the incoherent interference noise. The methods presented in the main body of this thesis are also based on the coherency of the seismic data but use a somewhat different inversion scheme. The differences between the coherency-based methods are highlighted in appendix C.

1.6 Thesis outline

This thesis consists of seven chapters that aim to cover the most fundamental aspects of the proposed method for the separation of blended data. The first chapter introduces the notion of blending, starting from the basics of seismic exploration. A brief literature review highlights the extensive interest in blending in recent years.

The second chapter is dedicated to the mathematical formulation of blending. A way to turn the abstract idea of blending into a matrix multiplication is described in this chapter. The beauty of the mathematics is revealed when a naive separation method is explained in a mathematical context, providing a well-known pseudo-solution to the blending problem.

Readers who would like to read only one chapter of this thesis should probably move directly to the third chapter, where the iterative separation method is explained in an intuitive way. Since this chapter describes the core of this study, by its end the reader should be able to understand why the method works. The fourth chapter is probably the most important but also the most challenging part of this thesis. Following the intuitive explanation of the third chapter, it sheds some light on the algebraic details of the algorithm. The outcome of this study will help the reader answer questions regarding the convergence of the algorithm and its fundamental limits.

The results of applying the proposed method to field datasets are presented in chapter five. This chapter illustrates that blending is a viable step forward, since it contains the results of applying the method to one of the first blended marine surveys.

The sixth chapter integrates the separation method with the ideas behind the surface-related multiple elimination (SRME) method. Thus, two processes are performed simultaneously and benefit from each other.

The final conclusions of the thesis are given in chapter seven.

This thesis tries to be as broad as possible in content; hence, more information on different aspects of blending can be found in the appendices. The first appendix contains all the mathematical proofs required for chapter four. The second appendix contains an algorithm for the computation of the leakage subspace, whose role is of the utmost importance when it comes to understanding the separation algorithm. In the third appendix I attempt to bring various separation methods into standard linear-algebra notation with the aim of comparing them. The fourth appendix describes an alternative way to perform the integrated separation and multiple removal in the form of a re-parametrized EPSI (Estimation of Primaries by Sparse Inversion) algorithm. The fifth appendix addresses blending from a business point of view and contains an estimate of the value that blending can create for a company.

2 Blending and pseudodeblending

Blended surveys have been proposed as a way to significantly increase the quality -cost ratio of seismic data. Implementing blending in the field is rather straightfor-ward in most cases. However, the challenge of separating the data into datasets for each of the contributing sources passes on to the processing side. In order to address this challenge a firm mathematical formulation is required. Such a formulation is provided in this chapter.

The formulation of blending can be utilized directly to produce a ‘pseudo’ solution. Although stemming from mathematics, this solution has a clear physical interpretation that will later be used for explaining the separation method presented in this thesis.

2.1 Formulating blending

Blending consists of two basic actions, namely, source encoding and subsequent summation, as shown in Figure 2.1. Every source activated during a blended experiment has a unique source code¹. Depending on the type of acquisition, source codes can range from simple time delays to very sophisticated phase and amplitude codes. Source codes, in the form of sweeps, are already widely used in vibroseis land surveys. When two or more sources are blended, all of the created wave fields are summed in the subsurface (superposition). The resulting response recorded at the surface is a blended shot record.

¹ Source codes are time-dependent signals which, when convolved with a seismic wavelet, produce the actual signal emitted by the seismic source.


Figure 2.1: The first step of blending is source encoding. The code applied to every source can range from simple time delays to very sophisticated phase and/or amplitude codes. The response to the sources is summed in the subsurface before it is recorded at the surface.

The blending matrix

The data matrix will help us to formulate blending in a mathematical way. In the previous section, I decomposed blending into source encoding and summation. These two actions now need to be expressed in mathematical terms.

In order to encode a source, one needs to convolve a band-limited impulse with a source code. Convolution of two signals in the time domain can be carried out as a multiplication in the frequency domain. This requires that both signals undergo a Fourier transform before being multiplied. Let us study an example. A time delay can be expressed as a delayed impulse δ(t − τ₁), where τ₁ is the delay we wish to introduce to source 1. Transformed to the frequency domain, it becomes a phase shift e^{−jωτ₁}. This way of applying a source code is compatible with the data matrix formulation, since in the latter the seismic data are already expressed in the frequency domain.

Summation of the wave fields created by sources that fire at small time intervals takes place in the subsurface. Under the assumption that the earth behaves linearly, as far as this summation is concerned, we can compute the resulting wave field by simply adding the wave fields created by the individual sources. According to the basic properties of the Fourier transform, the summation can be performed in either the time or the frequency domain.

\[
\begin{pmatrix}
P_{11}e^{-j\omega\tau_1}+P_{13}e^{-j\omega\tau_3} & P_{12}e^{-j\omega\tau_2}+P_{14}e^{-j\omega\tau_4}\\
P_{21}e^{-j\omega\tau_1}+P_{23}e^{-j\omega\tau_3} & P_{22}e^{-j\omega\tau_2}+P_{24}e^{-j\omega\tau_4}\\
P_{31}e^{-j\omega\tau_1}+P_{33}e^{-j\omega\tau_3} & P_{32}e^{-j\omega\tau_2}+P_{34}e^{-j\omega\tau_4}\\
P_{41}e^{-j\omega\tau_1}+P_{43}e^{-j\omega\tau_3} & P_{42}e^{-j\omega\tau_2}+P_{44}e^{-j\omega\tau_4}
\end{pmatrix}
=
\begin{pmatrix}
P_{11} & P_{12} & P_{13} & P_{14}\\
P_{21} & P_{22} & P_{23} & P_{24}\\
P_{31} & P_{32} & P_{33} & P_{34}\\
P_{41} & P_{42} & P_{43} & P_{44}
\end{pmatrix}
\begin{pmatrix}
e^{-j\omega\tau_1} & 0\\
0 & e^{-j\omega\tau_2}\\
e^{-j\omega\tau_3} & 0\\
0 & e^{-j\omega\tau_4}
\end{pmatrix}
\]

Figure 2.2: A toy example of blending. The unblended dataset contains four shot records. Every source is given a time delay τ_i and two sources fire in every blended shot record.

Blending thus amounts to a multiplication in the frequency domain and a summation. Both actions can be performed by a matrix multiplication in the frequency domain. Hence, the formulation presented in section 1.2 is an excellent candidate. For the remainder of this thesis, blending will be expressed as a matrix multiplication with a blending matrix Γ. Consequently, a blended dataset P′ can be obtained from an unblended dataset, i.e., one in which only the response of one source is present in every shot record, as follows:

P′ = PΓ.  (2.1.1)

The depth indicators have been omitted here for simplicity. Figure 2.2 illustrates the matrix multiplication of equation (2.1.1) for a toy example with four shot records. As we discussed, every column of Γ is related to a different blended shot record, and its elements, Γ_ij, are the source codes. In this example, the time delays of the sources are described by a phase term. More complex codes can be described in the same way; for instance, a linear sweep is a quadratic phase term e^{−jβ_iω²}. Generally, all types of codes can be expressed in the form A·e^{jφ}, where A and φ describe the amplitude and phase encoding, respectively.
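As a toy check of equation (2.1.1), the following sketch builds the 4 × 2 blending matrix of Figure 2.2 for a single frequency and verifies that each blended record is the sum of the encoded responses of its two sources. All numerical values (the frequency, the delays, and the random data matrix) are hypothetical, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 2 * np.pi * 10.0                       # one angular frequency (rad/s), hypothetical
tau = np.array([0.0, 0.1, 0.25, 0.4])          # firing delays tau_1..tau_4 (s), hypothetical

# Unblended data matrix at this frequency: rows = detectors, columns = sources.
P = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Blending matrix Gamma of Figure 2.2: sources 1 and 3 fire in blended record 1,
# sources 2 and 4 in blended record 2; the codes are pure time-delay phase terms.
Gamma = np.zeros((4, 2), dtype=complex)
Gamma[[0, 2], 0] = np.exp(-1j * omega * tau[[0, 2]])
Gamma[[1, 3], 1] = np.exp(-1j * omega * tau[[1, 3]])

P_blended = P @ Gamma                          # equation (2.1.1): P' = P Gamma

# Column 1 of P' is indeed the sum of the delayed responses of sources 1 and 3.
check = P[:, 0] * np.exp(-1j * omega * tau[0]) + P[:, 2] * np.exp(-1j * omega * tau[2])
print(np.allclose(P_blended[:, 0], check))     # True
```

In practice one such matrix multiplication is carried out per frequency component of the data.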

An example of blended data

The notions described in the above section are illustrated in Figure 2.3. In this example, I simulate a blended experiment based on unblended data. For this purpose, two unblended shot records are encoded with time delays and then summed to create a blended shot record. The challenge of separating blended data now becomes obvious in the rightmost panel of Figure 2.3.

2.2 Inverse problem

The linear system of equations (2.1.1) provides a way to produce a blended dataset from unblended data. In the context of optimization theory, the set of equations


Figure 2.3: An example of blending with two shot records and time delays. The stars denote the time delay given to each of the two sources.

(2.1.1) is the forward model of blending. The output of the forward model is a blended data matrix P′, which represents the measured data.

The goal of a separation algorithm is to retrieve the unblended data matrix P from the system of equations (2.1.1). This is an inverse problem, and I will treat deblending, i.e., the separation of blended data, as such. Figure 2.2 reveals that this is an underdetermined problem, since there are fewer equations than unknowns. As a consequence, a direct matrix inversion of the blending matrix Γ in equation (2.1.1) cannot be performed. Instead, one can cast this as a minimisation problem:

P̂ = argmin_P ‖P′ − PΓ‖₂.  (2.2.2)

Since this is an under-determined system of equations, there is still an infinite number of solutions to equation (2.2.2). Therefore, additional constraints need to be introduced in order to have a unique solution. The choice of constraints reflects the a priori assumptions about the solution. Obviously, if these assumptions are not realistic, the solution to this constrained inversion will be far from the true one.

2.3 The minimum-norm solution

A popular constraint is that of minimum energy. In this case, I am essentially seeking the solution to the minimisation problem (2.2.2) that has the smallest l₂-norm (minimum energy). This solution can be obtained by computing the generalized inverse or Moore–Penrose pseudoinverse of the blending matrix. Thus, the minimum-norm solution P∗ is

P∗ = P′ [Γ^H Γ]^{−1} Γ^H.  (2.3.3)

The blending matrix of the example presented in Figure 2.2 contains only phase terms, and every source is activated only once. In this specific case, the matrix Γ^H Γ is diagonal. If, in addition, the number of sources blended in every shot record is constant for all shot records, then [Γ^H Γ]^{−1} = (1/b) I, where b is the number of blended sources. It follows that equation (2.3.3) can then be simplified to

P∗ = (1/b) P′ Γ^H.  (2.3.4)

The minimum-norm solution now requires only the computation of the conjugate transpose of the blending matrix, Γ^H. This is a specific setting rather than the general case, yet it is realistic for blended marine surveys, where time delays are currently the only type of source encoding that can be used.
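A small numerical sketch can confirm that, for a phase-only blending matrix in which every source fires once, the general pseudoinverse solution (2.3.3) collapses to the simplified form (2.3.4). The delays and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
b = 2                                          # sources blended per shot record
omega, tau = 2 * np.pi * 10.0, np.array([0.0, 0.1, 0.25, 0.4])

# Phase-only blending matrix: each column holds b unit-modulus codes.
Gamma = np.zeros((4, 2), dtype=complex)
Gamma[[0, 2], 0] = np.exp(-1j * omega * tau[[0, 2]])
Gamma[[1, 3], 1] = np.exp(-1j * omega * tau[[1, 3]])

P = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
P_blended = P @ Gamma

# Gamma^H Gamma = b*I here, so the generalized inverse reduces to (1/b) Gamma^H.
P_mn = P_blended @ np.linalg.pinv(Gamma)       # equation (2.3.3)
P_simple = P_blended @ Gamma.conj().T / b      # equation (2.3.4)
print(np.allclose(P_mn, P_simple))             # True
```

The agreement holds because each column of Γ contains exactly b entries of unit modulus, so Γ^H Γ is b times the identity.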


Figure 2.4: The first step of pseudodeblending is to copy the acquired blended shot records in order to obtain a full data matrix. The pseudodeblended shot records are then obtained by cross-correlating every trace with the corresponding source code that was used in the field.


Pseudodeblending

Equation (2.3.4) proposes a simple way to obtain the minimum-norm solution to problem (2.2.2). If we neglect the scaling factor 1/b, equation (2.3.4) describes the notion of pseudodeblending as given by Berkhout [2008]. This link is very interesting, since pseudodeblending has a clear physical interpretation.

As illustrated in Figure 2.4, pseudodeblending consists of two steps: an expansion step and a decoding step. The expansion is performed by copying every recorded (blended) shot record, so that one copy per contributing source, b in total, is obtained. Then, every shot record is decoded with the code given during blending in the field. An example of this process is given in Figure 2.5. Starting with the blended shot record computed before, I now perform the two steps of pseudodeblending. Since two sources were blended in this shot record, I need to copy it once. The resulting shot records are then decoded, i.e., their timing is corrected with the time delays that were given during blending. Comparing the unblended shot records of Figure 2.3 with the pseudodeblended shot records of Figure 2.5, one can see that the signal appears contaminated by interference noise.
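The two steps can be mimicked in the time domain for a single trace with two blended sources. The delays, event times, and sample interval below are hypothetical, and integer-sample shifts are used for simplicity; decoding with a pure time-delay code is just a shift back by the delay.

```python
import numpy as np

dt, tau1, tau2 = 0.004, 0.1, 0.24              # sample interval and delays (s), hypothetical

def samp(t_sec):
    """Convert a time in seconds to the nearest sample index."""
    return int(round(t_sec / dt))

# Blended trace: source 1 has an event at 0.3 s, source 2 at 0.5 s, each
# recorded with its own firing delay added.
blended = np.zeros(200)
blended[samp(0.3) + samp(tau1)] = 1.0
blended[samp(0.5) + samp(tau2)] = 1.0

# Expansion: one copy per contributing source; decoding: undo each delay.
pseudo1 = np.roll(blended, -samp(tau1))        # copy decoded for source 1
pseudo2 = np.roll(blended, -samp(tau2))        # copy decoded for source 2

print(pseudo1[samp(0.3)] == 1.0)               # True: source-1 event back at 0.3 s
print(pseudo2[samp(0.5)] == 1.0)               # True: source-2 event back at 0.5 s
```

Each decoded copy also still contains the other source's event, shifted to a wrong residual time; this is the interference noise visible in Figure 2.5.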

The signal in Figure 2.5 appears at the correct timing, whereas the interference has a residual phase shift (time delay). However, both the signal and the interference noise have the same coherent character. This is no longer the case if I perform the same type of blending and pseudodeblending for the entire dataset and then sort

[Figure 2.5: panel labels ‘Blended shot record’, ‘Pseudo deblending’, ‘Pseudodeblended shot records’]


Figure 2.6: The pseudodeblended data of Figure 2.5 sorted in common receiver, common offset, and common mid-point gathers.

the data in a different domain, e.g., the common receiver domain. Figure 2.6 shows different types of gathers for the same pseudodeblended dataset. In these domains, every trace belongs to a different source; hence, the interference noise appears with a different residual phase shift for every trace. This causes the incoherent character of the interference noise in these types of gathers. The signal is at the correct timing for all traces, so it still appears coherent. This is a fundamental property of blended surveys, and it will be exploited during the design of the separation algorithm. It is worth mentioning that standard source encoding schemes, such as vibroseis sweeps, do not guarantee the spatial incoherency described in the above example. This effect is due to the use of time delays, a measure that can, however, be introduced into any type of acquisition. Therefore, the assumption is made hereafter that all blending schemes include at least pseudo-random time delays in the firing time of every source.

Scaling the blending matrix

Equation (2.3.4) reveals that the minimum-norm solution can, in some cases, be a scaled version of the pseudodeblended solution. A simple change in the formulation of blending can make these two solutions identical. If we scale the blending matrix, such as the one appearing in Figure 2.2, by 1/√b, then P∗ = P′Γ^H. This is merely a mathematical trick and does not alter in any way how we acquire the data in the field. A property of the new blending matrix is that it has orthonormal columns. The importance of this property for the stability of the separation algorithm will become clear in chapter 4. Scaled blending matrices will be assumed for the remainder of this thesis, unless otherwise stated.

3 Deblending with coherency constraints

Several algorithms have been proposed for the separation of blended data. Currently, the methods that treat deblending as an optimisation problem seem to be the most effective. The distinguishing factor between these optimisation methods is the constraint introduced in the formulation of the inverse problem. The constraint reflects our prior knowledge about the solution, in this case the separated data.

The wave equation suggests the existence of continuous wave fronts in the far field for regularly sampled seismic data. This fundamental property will be introduced as a constraint in our inverse problem statement. Then, an iteration that solves this problem will be presented and later extended to a generalized algorithm.

3.1 Coherency constraint

A fundamental property of blended experiments is the use of source encoding. As we saw in chapter 2, source encoding leads, in some cases, to the creation of incoherent wave fields. In these cases, pseudodeblending and sorting the data into a domain other than the common source domain, as in Figure 2.6, reveals that coherency becomes a distinguishing factor between energy that has been correctly assigned to the contributing source (coherent) and energy that has been falsely assigned (incoherent). For the remainder of this thesis, I will call the latter cross-talk or blending noise.


The coherency of the seismic signal is a property that stems directly from the wave equation. On the other hand, the spatially incoherent blending noise has been introduced by our blended acquisition design. One would like to utilize this property of blended data in order to assign energy to its contributing sources. Since I have described deblending as an inverse problem, coherency will be introduced as a constraint in our inverse problem definition:

P̂ = argmin_{P ∈ R(C)} ‖P′ − PΓ‖₂,  (3.1.1)

where C represents a matrix implementation of a coherency–pass filter and R(C) is the range of this matrix. Ideally, the desired solution to the deblending problem (unblended data) resides in the range R(C). The choice of constraints reflects the a priori assumptions about the solution. Obviously, if these assumptions are not realistic, e.g., using a very restricting C, the result of this constrained optimization method might be far from the desired one.

Another option for a constraint would be to generalize R(C) to any set of coherent signals B. In this case, an operator that is able to project onto B would be required. The use of the linear operator C will simplify the analysis of our algorithm and, without loss of generality, will help us draw valuable conclusions, as we will see in the next chapter. In addition, an example of a non-linear operator will be discussed.

3.2 Iteration

The proposed iteration for solving problem (3.1.1) is

P_{i+1} = P′Γ^H − P_i C(ΓΓ^H − I),  (3.2.2)

where P_i is the deblended estimate at iteration i. This is a linear iteration and, under certain conditions, it converges to the unique solution of problem (3.1.1). The conditions are stated in chapter 4 and the proof can be found in appendix A.

The physical interpretation of iteration (3.2.2) hinges on the fact that the blending noise, as found in pseudodeblended data, can be computed exactly if the unblended data P are known. However, the unblended data are not available; obviously, if they were, there would be no need for a deblending method. Suppose, though, that an estimate of P could be obtained from the pseudodeblended data, e.g., by applying a coherency–pass filter. The second term on the right-hand side of iteration (3.2.2), containing ΓΓ^H − I, would then transform this estimate into blending noise plus signal with reversed polarity. Subtracting this from the pseudodeblended data results in assigning more energy to the correct source while suppressing the blending noise. In this way, a better estimate is obtained for


Algorithm: Linear iteration for deblending

1. Form the blending matrix Γ according to the acquisition design.
2. Pseudodeblend by multiplying the acquired data P′ with Γ^H. This is the first estimate of the unblended data, P_0.
3. Apply the coherency–pass filter to the current estimate, P_i C.
4. Transform the filtered estimate into blending noise and signal with reversed polarity, P_i C(ΓΓ^H − I).
5. Subtract from the pseudodeblended data. This is the new estimate, P_{i+1}.
6. Check convergence. If converged, stop; otherwise go to step 3.

Table 3.1: Algorithm for solving problem (3.1.1) by implementing iteration (3.2.2).

the following iteration. The algorithm is summarised in Table 3.1. The convergence check, step 6 of the algorithm, is performed by computing the energy of the difference between two subsequent estimates. When this falls below a predefined level ε, the algorithm stops. The lowest possible ε is dictated by numerical precision. In real cases, though, in the presence of noise in the data, ε should be chosen considerably higher.
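The linear iteration can be sketched numerically. The setup below is a deliberately small synthetic example, not a field-scale implementation: all numbers are hypothetical, Γ is scaled so that its columns are orthonormal (anticipating chapter 4), and C is chosen as a projection onto a one-dimensional "coherent" subspace that, by construction, contains the true data, so convergence can be observed within a few iterations.

```python
import numpy as np

# Toy setup: four sources, two blended records, codes scaled by 1/sqrt(2)
# so that Gamma^H Gamma = I.
omega, tau = 2 * np.pi * 10.0, np.array([0.0, 0.1, 0.25, 0.4])
Gamma = np.zeros((4, 2), dtype=complex)
Gamma[[0, 2], 0] = np.exp(-1j * omega * tau[[0, 2]]) / np.sqrt(2)
Gamma[[1, 3], 1] = np.exp(-1j * omega * tau[[1, 3]]) / np.sqrt(2)

# Coherency-pass filter C: orthogonal projection onto a 1-D subspace.
v = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
C = np.outer(v, v.conj())                      # satisfies C^2 = C and C^H = C

# True unblended data are built to lie in the range of C (three receivers).
a = np.array([1.0 + 2.0j, -0.5 + 1.0j, 2.0 - 1.0j])
P_true = np.outer(a, v.conj())
P_blended = P_true @ Gamma                     # forward model (2.1.1)

P_est = P_blended @ Gamma.conj().T             # P_0: the pseudodeblended data
err0 = np.linalg.norm(P_est - P_true)
for _ in range(50):                            # iteration (3.2.2)
    P_est = P_blended @ Gamma.conj().T - P_est @ C @ (Gamma @ Gamma.conj().T - np.eye(4))
err = np.linalg.norm(P_est - P_true)
print(err < 1e-8 * err0)                       # True: the estimate converged to P_true
```

In this toy case the error contracts by a fixed factor per iteration, which is the behaviour analysed in chapter 4.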

An alternative non-linear iteration that has provided interesting results is

P_{i+1} = P′Γ^H − P̄_i(ΓΓ^H − I),  (3.2.3)

where P̄_i is the projection of the estimate P_i onto a set of coherent signals B. An example of such a non-linear process could be

P̄_i = F{T_i{F^{−1}{P_i C}}}.  (3.2.4)

In this implementation, the estimate of the unblended data P̄_i is obtained by applying a threshold in the time domain after the coherency–pass filter. F and F^{−1} denote the forward and inverse temporal Fourier transforms, respectively. The application of a threshold can be mathematically formulated as

T_i{x} = x  if |x| ≥ ν_i,
T_i{x} = 0  if |x| < ν_i,  (3.2.5)

where x and T_i{x} are the input and output, respectively, and ν_i is the threshold applied at iteration i.

In general, the threshold should decrease with the iterations in order to establish convergence. The motivation behind the use of a non-linear filter is to speed up the algorithm. The example of applying the threshold in the time domain is only one possible implementation of this non-linear filter. Essentially, any process capable of producing an estimate of the unblended data from pseudodeblended data can be used instead.
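The threshold operator of equation (3.2.5) is a simple elementwise operation; the sketch below shows one possible NumPy form, applied to a hypothetical trace.

```python
import numpy as np

def hard_threshold(x, nu):
    """Elementwise threshold T_i of equation (3.2.5): keep |x| >= nu, zero the rest."""
    return np.where(np.abs(x) >= nu, x, 0.0)

# Hypothetical trace: two strong coherent events among weak blending noise.
trace = np.array([0.05, -0.9, 0.1, 0.7, -0.02])
filtered = hard_threshold(trace, 0.5)          # keeps -0.9 and 0.7, zeroes the rest
print(filtered)
```

Lowering `nu` over the iterations lets progressively weaker coherent energy through, as the text above prescribes.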

A similar iteration is used by the constructive method of Abma et al. [2010]. Expressing their iteration in our notation yields

P_{i+1} = P′Γ^H + P_i − P̄_i ΓΓ^H,  (3.2.6)

where P̄_i is here obtained by applying an f-k filter followed by the application of a threshold in the frequency domain. In this iteration, the previous estimate P_i contributes to the new estimate P_{i+1} without the coherency–pass filter first being applied to it. This can potentially stabilise the algorithm but also slow down its convergence. This iteration, among others, is further discussed in the third appendix.

3.3 Coherency–pass filter

The coherency–pass filter has a central role in iteration (3.2.2). It essentially implements the coherency constraint we would like to impose on our inverse problem (3.1.1). Several types of filters can act as coherency–pass filters.

Blending noise, as we saw in the previous chapter, is not continuous in the lateral direction when we sort pseudodeblended data into common detector, common offset, or common midpoint gathers. Hence, transforming such a gather to the wavenumber-frequency domain reveals that the blending noise has a white spectrum in the wavenumber direction. On the other hand, the seismic signal resides in a triangle-shaped area. As an example, Figures 3.2a and 3.2b show the f-k spectra of the seismic signal and the blending noise in the common offset domain, respectively.

In the following paragraphs, I will describe three types of filters that are quite popular in the geophysical community. All of these filters, individually or in any combination, can implement the coherency–pass filter required by the separation algorithm.

f-k filter

The required source encoding performed in a blended survey results in interference noise that spans the full wavenumber bandwidth of every type of gather apart from the common source gather, see Figure 3.1b. This property can be exploited for suppressing interference noise in pseudodeblended gathers by rejecting the parts


+<-Figure 3.1: f-k spectrum of the seismic signal in the common offset domain (a) and the corresponding spectrum of blending noise in the same domain (b).

of the spectrum where no seismic signal is expected. In this way, the seismic signal remains unaffected. Transforming back to the space-time domain, the signal, being coherent, is now expected to have larger amplitudes than the blending noise. Intuitively, this type of filter can be coupled with the application of a threshold, which keeps only (part of) the coherent events of the gather, as shown by Mahdad et al. [2011]. Abma et al. [2010] propose to apply the threshold to the frequency-wavenumber coefficients.

Another option is to use this filter in combination with a median filter. The spatial incoherency of the blending noise makes it susceptible to suppression by short spatial median filters. Such filters can very efficiently remove, or greatly reduce the amplitude of, incoherent blending noise simply by comparing every time sample to its homologous samples in neighbouring traces. Naturally, such filters may also affect the amplitude of coherent signals; hence, it is wise to use them in conjunction with f-k filters, which can rectify, up to a certain extent, the amplitudes of coherent signals. The lengths of the median filters utilised in this thesis range between 3 and 7 points.

One of the challenges for deblending methods that utilize a coherency–pass filter for the separation is the existence of spatial aliasing in the acquired data. Severe spatial aliasing can become an impediment for this type of filter, as it causes the signal to become incoherent. In this case, more sophisticated filters that are able to better treat aliasing are required. Such filters will typically use additional constraints in order to overcome the sparse sampling of the data. One should not forget, though, that the motivation for using source blending is to improve the spatial source sampling in order to avoid source aliasing.

Figure 3.2: Data in the domain where the filtering takes place. The region defined by the dashed lines is the pass region and the arrows indicate the direction in which this region can grow. The f-k spectrum of an unblended common offset gather (a), the f-k spectrum of a pseudodeblended common offset gather (b), an unblended common offset gather in the τ-p domain (c), a pseudodeblended common offset gather in the τ-p domain (d), the k_s = 0 slice of the f-k_r-k_s spectrum of an unblended dataset (e), and the k_s = 0 slice of the f-k_r-k_s spectrum of a pseudodeblended dataset (f).


Figure 3.2a shows an unblended common offset gather of a marine dataset in the f-k domain. Notice that there is no aliasing; hence, the seismic bandwidth is clearly constrained within a triangular region. The same gather after blending and pseudodeblending appears in Figure 3.2b; the lack of aliasing enables the design of an effective filter that will only suppress energy that belongs to the interference noise. The selected design is in the shape of a triangle, parametrised only by the top angle of the triangle. Increasing this angle grows the pass region, as denoted by the arrows in Figure 3.2a. This type of design sets a lower bound on the apparent velocities that will be allowed in the data. A similar design is utilized by Wapenaar et al. [2012] as the point-spread function of their separation method. The point-spread function acts as a filter in the common source domain and lets the entire seismic bandwidth pass.
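A triangular signal-cone mask of this kind can be sketched as follows. The geometry (sample intervals, minimum apparent velocity) is hypothetical, and the mask is a plain 0/1 selection in the f-k domain, which makes the filter an orthogonal projection: applying it twice changes nothing.

```python
import numpy as np

def fk_cone_mask(nt, nx, dt, dx, v_min):
    """Binary f-k mask keeping the signal cone |k| <= |f| / v_min
    (v_min is the lowest apparent velocity allowed to pass, in m/s)."""
    f = np.fft.fftfreq(nt, dt)                  # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, dx)                  # spatial wavenumbers (1/m)
    F, K = np.meshgrid(f, k, indexing="ij")
    return (np.abs(K) * v_min <= np.abs(F)).astype(float)

def fk_filter(gather, dt, dx, v_min):
    """Apply the cone mask in the f-k domain and transform back."""
    spec = np.fft.fft2(gather)
    mask = fk_cone_mask(*gather.shape, dt, dx, v_min)
    return np.real(np.fft.ifft2(spec * mask))

gather = np.random.default_rng(0).standard_normal((64, 32))
once = fk_filter(gather, dt=0.004, dx=25.0, v_min=1500.0)
twice = fk_filter(once, dt=0.004, dx=25.0, v_min=1500.0)
print(np.allclose(once, twice))                 # True: idempotent, as a projection
```

A tapered version of the mask would reduce edge artefacts, but, as chapter 4 discusses, an untapered 0/1 mask is what makes the filter an orthogonal projection.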

τ-p filter

A filter in the linear Radon domain can also be used as a coherency–pass filter. As in the case of an f-k filter, this transform should be applied to a type of gather other than a common source gather. Figures 3.2c and 3.2d display a common offset gather in the τ-p domain. Choosing a certain range of p for the computation of the transformed data is essentially equivalent to the triangle-shaped filter used in the f-k domain. On top of that, this transform offers control over the time axis. This allows for windowing in the time direction; hence, it is possible to concentrate on areas characterized by a locally high signal-to-blending-noise ratio. This property has proved to be very beneficial, especially during the first iterations of the deblending process. A τ-p filter that is implemented in a least-squares sense can prove advantageous in certain cases. These cases include irregularly or sparsely sampled data. Also, the edge artefacts created by this type of filter are typically less severe than in the case of f-k filters.
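The forward part of such a filter is a slant stack. The sketch below is a deliberately naive version with hypothetical geometry, nearest-sample shifts, non-negative slopes only, and no least-squares weighting; it shows how a linear event stacks coherently at its own slope p.

```python
import numpy as np

def tau_p(gather, x, t, p_values):
    """Naive slant stack: out[tau, p] sums gather along the lines t = tau + p*x."""
    nt, dt = len(t), t[1] - t[0]
    out = np.zeros((nt, len(p_values)))
    for ip, p in enumerate(p_values):
        for ix, xx in enumerate(x):
            shift = int(round(p * xx / dt))     # nearest-sample time shift
            if 0 <= shift < nt:
                out[: nt - shift, ip] += gather[shift:, ix]
    return out

# Hypothetical gather: one linear event t = 0.2 + p0 * x across 8 traces.
dt, p0 = 0.004, 0.00032
t = np.arange(100) * dt
x = np.arange(8) * 25.0
gather = np.zeros((100, 8))
for ix, xx in enumerate(x):
    gather[int(round((0.2 + p0 * xx) / dt)), ix] = 1.0

radon = tau_p(gather, x, t, p_values=[0.0, p0])
print(radon[50, 1])   # 8.0: all 8 traces stack coherently at slope p0, tau = 0.2 s
```

Keeping only a band of p columns and transforming back (the adjoint or a least-squares inverse) would implement the pass-region selection described above.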

f-k_r-k_s filter

In this type of filter the data are not sorted into gathers; instead, parts of or the whole data matrix are treated simultaneously. A three-dimensional Fourier transform is used to compute the f-k_r-k_s spectrum of the dataset, where k_r and k_s represent the receiver and source wavenumbers, respectively. Similarly to the case of an f-k transformed common offset gather, the blending noise extends outside the signal cone. Figures 3.2e and 3.2f show the k_s = 0 slice of an f-k_r-k_s transformed data cube. This filter lets only certain source wavenumbers pass for every receiver wavenumber. This means that only certain angles of incidence can be allowed for a specific reflection angle and vice versa. Given that the interfering sources are often not spaced very closely together, this filter can be

Figure 3.3: An unblended common offset gather from a 2D marine seismic survey (a), the same common offset gather after pseudodeblending (b), the solution obtained by deblending with a very narrow-band filter (c), and the solution obtained by deblending with C = I (d). The amplitudes in figures 3.3b and 3.3d have been amplified 2 times.

a powerful tool for distinguishing between signal and blending noise, since the interfering source illuminates a point in the subsurface from different angles than the signal source. As in the case of the f-k filter, the application of a threshold in either the wavenumber-frequency domain or the space-time domain can follow, in order to make sure that (almost) no blending noise leaks into the output data estimate.

3.4 Extended algorithm

The inverse problem, as formulated in (3.1.1), raises a fundamental question: how should one design the coherency–pass filter? For instance, in the case of an f-k filter, such as the one shown in Figure 3.2a, one can think of two extreme cases. The first extreme case would be to design the filter C as a very narrow-band filter that captures the main contributions around k = 0. Undoubtedly, most of the


Algorithm: Extended algorithm for deblending

1. Form the blending matrix Γ according to the acquisition design.
2. Pseudodeblend. This is the first estimate of the unblended data, P_0.
3. Choose/update the coherency–pass filter, C_k.
4. Solve the subproblem:
   4a. Apply the coherency–pass filter to the current estimate.
   4b. Transform the filtered estimate into blending noise and signal with reversed polarity.
   4c. Subtract from the pseudodeblended data. This is the new estimate, P_{i+1}.
   4d. Check convergence. If converged, go to step 5; otherwise go to step 4a.
5. If C_k = I, stop; otherwise go to step 3.

Table 3.2: Extended algorithm for the separation of blended data. i and k are the running indices of the inner and outer iterations, respectively.

interference noise would be rejected already in the first iteration, leading to a good separation after a few iterations. The obtained solution, though, will be within the pass region of the filter according to the problem formulation (3.1.1), i.e., severely filtered. Figure 3.3 illustrates exactly this case. Figure 3.3a shows an unblended common offset gather. This dataset is blended with four sources contributing to every shot record. Pseudodeblending this blended dataset results in the introduction of blending noise, as seen in Figure 3.3b¹. If we deblend using iteration (3.2.2) and a narrow-band coherency filter, we obtain, as expected, a severely filtered solution such as the one shown in Figure 3.3c. In this case we have apparently chosen a very strict constraint that is obviously not a realistic assumption.

In the second extreme case, we try to overcome the band limitation by choosing a coherency–pass filter that lets all energy pass. This can be expressed as C = I. Now, we do not reject any energy that can potentially belong to the desired signal, but we do not reject the interference noise either; thus, we cannot expect any separation². This is illustrated in Figure 3.3d, which is essentially identical to the pseudodeblended result.

The two examples above describe a trade-off between separating the blended data and limiting the frequency-wavenumber bandwidth of the obtained solution. A more in-depth analysis of this trade-off will be given in chapter 4.

¹ Please note that this figure is scaled to match the amplitude of the unblended data for illustration purposes.


In order to overcome this trade-off, an outer loop is introduced into the algorithm. Table 3.2 describes this extended algorithm, which consists of multiple executions of the basic algorithm described in Table 3.1. The coherency–pass filter should start as a narrow-band filter and expand on every outer iteration until it reaches the identity matrix I in the last iteration. Essentially, I turn the inverse problem (3.1.1) into a subproblem that I solve multiple times. The solution obtained at the end of every outer loop is used as an initial guess for the next subproblem. The accuracy to which the subproblem is solved is a matter of design of the deblending process; e.g., running only one iteration for every subproblem would constitute a special case of this algorithm.
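The outer loop can be sketched schematically. The widening filter below passes a growing number of Fourier components, a hypothetical choice used only to illustrate the mechanics; the inner iteration (3.2.2) is elided.

```python
import numpy as np

n = 8
F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix

def coherency_projection(n_pass):
    """Projection C_k passing the n_pass lowest-frequency Fourier components."""
    m = np.zeros(n)
    freq_index = np.abs(np.fft.fftfreq(n) * n)  # 0, 1, 2, 3, 4, 3, 2, 1
    m[np.argsort(freq_index)[:n_pass]] = 1.0
    return F.conj().T @ np.diag(m).astype(complex) @ F

for n_pass in (2, 4, 8):                        # outer loop: widen the pass band
    C = coherency_projection(n_pass)
    # ... run the inner iteration (3.2.2) with this C_k until it converges ...

print(np.allclose(coherency_projection(8), np.eye(8)))   # True: final C_k is I
```

Each C_k is itself an orthogonal projection, and the last one is the identity, matching the stopping condition of step 5 in Table 3.2.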

4 Convergence analysis

Any iterative algorithm leads to the question of its convergence: will it converge and if so, to what? In this chapter, we will address these questions with basic tools from linear algebra.

The insight obtained in this way will help us move one step further by establishing the desired link between acquisition parameters and separation efficiency. Specific tools that perform this link are thereby introduced.

4.1 Convergence to the unique solution of the subproblem

I will study the iterative method in (3.2.2) and state the conditions for convergence to the unique solution of the constrained optimization problem (3.1.1). The proofs of the theorems are given in appendix A.

Problem (3.1.1) has a unique solution and iteration (3.2.2) converges to it if and only if the following assumptions hold.

Assumption 1: The columns of Γ are orthonormal (i.e., orthogonal and of unit length).

The orthogonality of the columns of Γ is directly linked to the acquisition design. A simple implementation of such a blending matrix would require every source position to be fired only once throughout the survey. Figure 4.1(a) illustrates a simple example of a blending matrix with orthonormal columns. The columns of this matrix are orthogonal, since the inner product of the two columns is 0, whereas the inner product of each column with itself is 1. The latter makes the columns of Γ normal, which is achieved by the scaling factor 1/√2. In general, the scaling factor should be 1/√b, with b the number of sources blended in every shot record, assuming the number of shots in every blending experiment to be equal. Please note that the required normalization is not related to the physical experiment. It is a matter of mathematical convenience, but nonetheless a necessary condition to ensure the stability and convergence of the algorithm. However, because of the scaling factor, the interpretation of the iteration (3.2.2) now differs from the one described in Mahdad et al. [2011]. Instead of estimating and subtracting the interference noise only, as Mahdad et al. [2011] propose, the term P_i C(I − ΓΓ^H) now boosts the signal while suppressing the interference further. This follows from the existence of non-zero elements on the diagonal of the matrix (I − ΓΓ^H) in the current implementation, as opposed to the zeros found in the implementation of Mahdad et al. [2011]. The formulation proposed and used throughout this thesis uses the scaling, as it stabilises the inversion.
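Assumption 1 is easy to verify numerically. The small example below follows the spirit of Figure 4.1(a): a hypothetical survey of four source positions grouped into two blended experiments with b = 2 sources each, every position fired exactly once, and the columns scaled by 1/√b.

```python
import numpy as np

b = 2                                   # sources blended per experiment
# Hypothetical blending matrix: 4 source positions, 2 blended experiments,
# disjoint source groups, scaled by 1/sqrt(b) so the columns have unit length.
Gamma = (1 / np.sqrt(b)) * np.array([[1.0, 0.0],
                                     [1.0, 0.0],
                                     [0.0, 1.0],
                                     [0.0, 1.0]])

# Assumption 1 requires Gamma^H Gamma = I: off-diagonal entries are the
# inner products between different columns (orthogonality), diagonal
# entries are the squared column lengths (normality).
gram = Gamma.conj().T @ Gamma
print(np.allclose(gram, np.eye(2)))     # True
```

Without the 1/√b scaling the Gram matrix would be bI instead of I, which is exactly why the normalization is needed for the stability argument, even though it has no physical counterpart in the experiment.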

Assumption 2: The coherency-pass filter C is an orthogonal projection. Recall that any arbitrary n × n matrix J is an orthogonal projection if and only if it satisfies J^2 = J and J^H = J. For example, C can be an f-k filter (without tapering, so the filtering in f-k only multiplies by 1's or 0's), see Figure 4.1(b). This assumption limits the types of filters that can be used, since the lack of tapering can lead to edge artefacts that may slow down the convergence of the algorithm. Furthermore, as already mentioned, C is here defined as a linear operation; thus, nonlinear operations like applying a threshold are not addressed here. However, these restrictions enable the study of the fundamental properties of this algorithm.
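The projection property of an untapered mask filter can be checked directly. In a 1-D analogue (the 2-D f-k case is identical in structure), such a filter is C = F^H D F with F the unitary DFT matrix and D a diagonal matrix of 1's and 0's; the example below is an illustration with arbitrarily chosen size and passband, not a filter from the thesis.

```python
import numpy as np

n = 16
# Unitary DFT matrix: FFT of the identity with orthonormal scaling.
F = np.fft.fft(np.eye(n), norm="ortho")
# Untapered passband: keep the 4 lowest wavenumbers, zero the rest.
D = np.diag((np.arange(n) < 4).astype(float))
C = F.conj().T @ D @ F

# Orthogonal projection: idempotent (C^2 = C, since D^2 = D) and
# Hermitian (C^H = C, since D is real diagonal and F is unitary).
print(np.allclose(C @ C, C), np.allclose(C.conj().T, C))
```

Any tapering of the passband (values strictly between 0 and 1 on the diagonal of D) breaks idempotency, D^2 ≠ D, which is why Assumption 2 rules it out.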

Assumption 3: There is no leakage subspace: N(Γ) ∩ R(C) = {0}¹.

This is the most restrictive assumption. It is the essential assumption to guarantee a unique solution to the problem (3.1.1). If there were energy in the intersection of the null space of Γ, N(Γ), and the range of C, R(C), one could add a vector from this intersection to any solution of (3.1.1) to obtain another solution. This is based on the fact that energy residing in this intersection satisfies the constraint of our inverse problem (3.1.1), thus, it can be part of our solution, but it disappears when the blending matrix is applied to it because it belongs to its null space. Consequently, the residue I try to minimise in the problem (3.1.1) remains the same if I add energy from this intersection to any solution.
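The non-uniqueness caused by a leakage subspace can be demonstrated with a toy example. Here C = I, so R(C) is the whole space and any non-trivial null-space direction of Γ is leakage; the blending geometry and data sizes are invented for illustration only.

```python
import numpy as np

# Two sources blended into one experiment; Gamma has a non-trivial
# (left) null space spanned by v, since v @ Gamma = 0.
Gamma = np.array([[1.0], [1.0]]) / np.sqrt(2)
v = np.array([[1.0, -1.0]])              # leakage direction: v @ Gamma = 0

rng = np.random.default_rng(1)
P = rng.standard_normal((8, 2))          # one candidate deblended dataset
P_bl = P @ Gamma                         # the observed blended data

# Add arbitrary energy along the leakage direction to obtain a second,
# different candidate that reproduces the blended data exactly.
P_alt = P + rng.standard_normal((8, 1)) @ v
print(np.allclose(P @ Gamma, P_alt @ Gamma))   # True: same residue
```

Both candidates give an identical blending residue, so the data alone cannot distinguish them; only when N(Γ) ∩ R(C) = {0} does the coherency constraint remove this ambiguity.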

If the above three assumptions hold, then the iteration (3.2.2) converges, for an

¹ {0} represents here the set of data matrices with zero entries.
