
Hydrocarbon Reservoir Parameter Estimation Using Production Data and Time-Lapse Seismic

DISSERTATION

for the degree of doctor

at Delft University of Technology,

by authority of the Rector Magnificus prof. ir. K.C.A.M. Luyben,

chairman of the Board for Doctorates,

to be defended in public

on Tuesday 25 May 2010 at 10:00

by

Justyna Katarzyna PRZYBYSZ-JARNUT

Master of Science in Applied Mathematics

born in Międzyrzecz, Poland

Composition of the doctoral committee:

Rector Magnificus, chairman
Prof. dr. ir. J.D. Jansen, Technische Universiteit Delft, promotor
Prof. dr. ir. A. Gisolf, Technische Universiteit Delft, promotor
Prof. dr. ir. R.J. Arts, Technische Universiteit Delft
Prof. dr. ir. A.W. Heemink, Technische Universiteit Delft
Prof. dr. D.S. Oliver, University of Oklahoma, USA
Prof. dr. J. Kleppe, Norges Teknisk-Naturvitenskapelige Universitet, Norway
Dr. R.G. Hanea, Technische Universiteit Delft, TNO

ISBN 978-90-8570-516-1

Copyright © 2010 by J.K. Przybysz-Jarnut

All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without the prior written permission of the author.

SUPPORT

The research for this thesis was financially supported by the Delphi Consortium.

Typesetting system: LaTeX.

Contents

1 Introduction
1.1 Energy demand
1.2 Recovery process
1.3 Field development and reservoir management
1.3.1 Data assimilation loop
1.3.2 Optimization loop
1.4 Different data sets
1.4.1 Production history data
1.4.2 Time-lapse seismic data
1.4.3 Prior knowledge
1.5 Research objectives and solution directions
1.5.1 Research objectives
1.5.2 Solution directions
1.6 Thesis outline

2 Literature review of state and parameter estimation
2.1 Introduction
2.2 The Bayesian framework
2.2.1 Bayes' theorem
2.2.2 The posterior probability
2.2.3 Kalman filter
2.2.4 Variational data assimilation
2.2.5 Alternative approaches
2.3 Parameterization
2.3.1 Zonation
2.3.2 Pilot point method
2.3.3 Karhunen-Loeve Expansion
2.3.4 Kernel Principal Component Analysis (KPCA)
2.3.5 Discrete cosine transform
2.3.6 Wavelets
2.3.7 Representer method

3 Time-lapse seismic
3.1 Introduction
3.2 Exploration seismic
3.3 Time-lapse seismic data
3.4 History matching with time-lapse seismic data

4 Forward modelling
4.1 Introduction
4.2 Flow equations
4.2.1 Two-phase model
4.3 State space representation of two-phase flow equations
4.3.1 Space discretization
4.3.2 Time discretization
4.4 Newton-Raphson iterative method
4.5 Well model
4.6 Density model
4.7 Augmented reservoir model
4.7.1 Sources of nonlinearity
4.7.2 Sources of uncertainty
4.7.3 Stochastic reservoir model

5 The representer method
5.1 Introduction
5.2 Literature review - Representer method
5.2.1 Application in reservoir engineering
5.3 Inverse modelling
5.3.1 Measurements
5.3.2 Minimum of least-squares objective function
5.3.3 Expansion into basis functions
5.3.4 Representer equations
5.3.5 Flow chart
5.3.6 Model checking
5.4 Computational issues
5.4.1 Measurement scaling
5.4.2 Parameter update

6 Synthetic example 1
6.1 Introduction
6.2 Reservoir model
6.2.1 Permeability field
6.3 Model parameters and model errors
6.4 Real solution and prior estimates
6.5 Observations
6.6 The representers
6.6.1 Representer fields for production and time-lapse seismic data
6.6.2 State representers
6.7 Estimation results
6.7.1 Estimation results with production data only
6.7.2 Estimation results with seismic data only
6.7.3 Estimation results with both data sets
6.8 Sensitivity study
6.9 Conclusions

7 Synthetic example 2
7.1 Introduction
7.2 Reservoir model
7.2.1 True permeability field and its prior estimates
7.3 Measurements
7.4 Estimation results - first model
7.4.1 Prior knowledge - Gaussian ensemble
7.4.2 Prior knowledge - channel ensemble
7.4.3 Conclusions
7.5 Estimation results - second model
7.5.1 Prior knowledge - Gaussian ensemble
7.5.2 Prior knowledge - channel ensemble
7.5.3 Prior knowledge - diagonal covariance matrix
7.5.4 Conclusions

8 Conclusions and recommendations
8.1 Introduction
8.2 Conclusions
8.3 Recommendations

A State-space representation of the two-phase flow model
A.1 State-space representation
A.2 Definitions of vectors and matrices

B Derivation of representer equations
B.1 Notation
B.2 The adjoint representer
B.3 The parameter representer
B.5 Forward solution $\tilde{x}_F^\eta$
B.6 The representer coefficients

Bibliography

Summary

Samenvatting

Curriculum vitae

1 Introduction

1.1 Energy demand

Global energy demand increases every year. According to the Annual Energy Outlook (AEO) 2009, energy consumption will rise significantly in the coming years, particularly in China, India, and other developing countries. At the same time, renewable energy is becoming more important: its production is less harmful to the environment and is starting to become economically competitive with energy produced from fossil fuels. Thus, as stated by the AEO [2009], the use of renewable fuels is expected to grow strongly over the coming years. Still, oil and natural gas will remain the main energy sources until at least 2050.

Oil companies find it more and more difficult each year to meet this huge demand for fossil fuels. First of all, many large fields are already at a mature stage. Other, recently discovered fields are often too small to be exploited efficiently. It is therefore essential to develop new technologies that reduce the cost of maintaining oil fields and increase the rate of recovery from existing fields.

1.2 Recovery process

In general, oil and gas are found in sandstones or limestones. They are trapped in microscopic pores (empty spaces between the sand grains) in the rock. Several properties of the porous rock are important for oil extraction. The first is porosity, i.e. the fraction of the rock that can be occupied by fluids. The second is permeability, which describes the ability of the rock to transmit fluids. If the pores are connected such that fluids can flow through these linked pore paths, the rock is called permeable. If hydrocarbons are present in the pores of a reservoir rock, they must be able to move out of them; unless hydrocarbons can move from pore to pore, they remain locked in place. A suitable reservoir rock must therefore be porous, permeable, and contain enough hydrocarbons to make it economically feasible for the operating company to drill and produce them. The properties of the reservoir rock have a strong influence on the ability of the fluids to flow through the reservoir and often determine the strategies used during oil recovery. The hydrocarbon recovery process can in general be divided into three phases (Ewing [1983]).

The first phase is called primary recovery. In this phase the pressure in the reservoir is often high enough to force oil or gas to flow through the porous medium and out of the well. Extraction of oil causes a decrease in pressure and subsequently a decrease in recovery rates. At some point in time, when recovery rates become too small, economic production can only be maintained by applying special techniques. Very often primary recovery leaves 70-80% or more of the hydrocarbons in the reservoir.

Secondary recovery consists of injecting gas or water into the reservoir in order to increase the pressure and push the remaining oil out of the pores. Usually injection occurs through wells located at some distance from the production wells. The injected fluid should then displace some of the oil and push it towards the production wells. The secondary recovery phase increases the recovery factor, but still 50% or more of the hydrocarbons remain in the reservoir.

In the tertiary recovery or enhanced oil recovery (EOR) phase, the fluid properties are altered to ease flow. Two strategies are mainly used. In thermal recovery, steam is injected into the reservoir to heat the oil and make it more fluid. In chemical recovery, several different chemicals such as polymers or surfactants are used to reduce the oil viscosity, increase the viscosity of the injected water, or lower the capillary pressure that stops oil droplets from flowing through a reservoir.

Many of these difficulties result from the spatial heterogeneity of the reservoir rock properties. Very often there exist preferential paths through which injected fluid moves toward the production well. Oil located outside such a path is not influenced by the injection of fluids, which causes production of the injected fluid instead of oil at an early stage. Knowing the spatial distribution of rock properties, one would be able to design the production strategy to postpone water breakthrough in the wells and maximize recovery. However, the spatial heterogeneity and the lack of direct measurements of rock properties, which are only known at well locations, introduce considerable uncertainty that needs to be addressed if reliable predictions of future reservoir performance are to be expected.

1.3 Field development and reservoir management

After the exploration phase, in which potential reservoirs are identified and exploration wells are drilled, initial geological models are created based on the knowledge obtained from seismic surveys and well data. Initial predictions of future reservoir performance are made, and if those predictions are economically profitable the reservoir enters the field development phase. When developing a field, the main target is to maximize an economic criterion, most often expressed in terms of oil and gas revenues. Choices are made about the number and locations of wells, the surface facilities that need to be built, and the required infrastructure. Based on all available information, a detailed geological model of the reservoir is created, of which an 'upscaled', simpler version is used for flow simulation. This numerical reservoir model should ideally mimic all the processes occurring in the reservoir itself. If the numerical model adequately described the real reservoir, it would be possible to predict the reservoir behaviour properly and plan optimal strategies to maximize the recovery from a given field. Unfortunately, a numerical reservoir model is only a crude approximation of the truth, for two main reasons. Firstly, not all the processes occurring in a real reservoir can be modelled in an appropriate way; very often simplifications are imposed on the model to make the problem easier to tackle. Secondly, there is usually a large uncertainty in the parameter values of the simulation model. Many rock properties that influence reservoir flow are poorly known, and there are also uncertainties in the fluid properties and the amount of hydrocarbons present in the reservoir. The uncertainties involve the reservoir structure, the initial fluid contacts, and the values of permeabilities, porosities, fault transmissibilities, etc. These reservoir-related parameters are assumed to be known in numerical simulations. However, neglecting the uncertainties leads to results produced by numerical reservoir models that contradict the data gathered from the real field. It is then difficult to make decisions based only on the output of a numerical model. Therefore, the measured data together with numerical simulations should be used in reservoir management to improve production rates and increase the recovery from a field.

The new concept of closed-loop model-based reservoir management (Jansen et al. [2008], Jansen et al. [2009]) seems to be a proper framework for reservoir monitoring and management. It involves the use of (uncertain) reservoir and production models (depicted by the multiple "system and sensor models" boxes in the middle of Figure 1.1) and combines them with available data from the real field, such as production history data or time-lapse seismic data, to continuously update the numerical models and reduce the uncertainty. The main idea is presented in Figure 1.1. The system at the top of the figure represents reality: the reservoir with wells and other facilities. This system is monitored with the help of various kinds of sensors, which give knowledge about the state of the reservoir (pressures, saturations) in the form of measured output. However, the information coming from these measurements is not perfect. Two sources of measurement errors can be distinguished. The first is due to errors made while acquiring measurements, because of the imperfections inherent in every measurement technique, or because of human errors. The second, called the representativeness error, takes into account the erroneous measurement prediction operator (sensor model). The input into the system can also be uncertain and subject to noise. When production begins and data becomes available, it is possible to update the numerical reservoir and sensor models that simulate the behaviour of the real reservoir. The gathered data is used to reduce the uncertainty of the parameters. The identification and updating is performed in a data assimilation loop, indicated in red in Figure 1.1. The updated reservoir model is then used in an optimization loop, indicated in blue, where a new optimized production strategy can be determined.

Figure 1.1: Closed-loop model-based reservoir management (after Jansen et al. [2009]).

1.3.1 Data assimilation loop

Data assimilation is mainly concerned with the reconstruction of unknown quantities from the available measurements in the presence of uncertainties (usually modelled as noise). It allows one to combine available observations with a given dynamic model: the information present in the measurements is combined with the information obtained from numerical simulations in order to produce more realistic results. Data assimilation is widely used in fields like meteorology (Bennett et al. [1996]), oceanography (Bennett [2002]) and groundwater flow (Valstar et al. [2004]) and, known as "history matching", is also widely used in reservoir simulation (Chavent et al. [1975], Gavalas et al. [1976], Gosselin et al. [2003]). There are two classes of methods in data assimilation: the sequential approach and the variational approach.

The sequential approach includes Kalman filtering (Kalman [1960]), a powerful tool for linear models in which not only the parameters are poorly known but uncertainties in the model are also present. The recently developed adaptation of Kalman filtering to nonlinear models, the Ensemble Kalman Filter (EnKF) (Evensen [1994], Nævdal et al. [2003], Evensen [2007]), is a Monte Carlo approach that allows for updating large-scale numerical models. One starts with an ensemble of prior models and updates them each time a new measurement becomes available. When all updated models (usually between 50 and 100 in reservoir engineering applications) are simulated forward in time, future reservoir performance can be predicted and uncertainties can be estimated.

In the variational approach, an objective function, defined as the misfit between measured data and forecasts from the model, is minimized by adjusting initial conditions and/or parameters (Talagrand and Courtier [1987], Courtier [1997]). Very often an additional data-independent term is added to the objective function to regularize the problem. This approach is similar to 'automatic history matching' techniques that have been developed to adapt the parameters of a numerical reservoir model using pressures and flow rates measured during the production process (Chen et al. [1974], Chavent et al. [1975], Li et al. [2003]). The search for parameter updates is usually performed with iterative Gauss-Newton schemes, for which gradient information is needed (Oliver et al. [2008]). For large-scale numerical models the gradient is calculated by solving a so-called 'adjoint model', which allows all derivatives to be calculated with only one additional simulation.

1.3.2 Optimization loop

In the optimization loop, one tries to identify the optimal exploitation strategy, both through optimization of production controls in a given well configuration (Brouwer and Jansen [2004], Sarma et al. [2005]) and through determining the optimal positions of new wells (Zandvliet et al. [2008a], Vlemmix et al. [2009]). Usually the net present value (NPV) is maximized.

Due to the heterogeneous character of the reservoir rock, the water or gas injected during the secondary recovery phase flows with different velocities in different parts of the reservoir. If there exists a preferential path through which injected fluid moves toward the production well, the oil located outside this path remains unflooded, and production of the injected fluid instead of oil often occurs soon after the start of injection ("early breakthrough"). To avoid early water breakthrough in the wells, one can try to optimize the production scenarios by controlling the injection and production in the existing wells. In the long term these proactive strategies should yield higher recovery factors than reactive control only (where no action is undertaken until significant changes are observed in the wells). Several authors have studied the optimization problem, both for a 'known' reservoir model (Sarma et al. [2005], Kraaijevanger et al. [2007]) and in the presence of uncertainties in the parameters influencing the flow (Zandvliet [2008], van Essen et al. [2009]).

When the data assimilation loop and the optimization loop are combined, they form a framework for model-based closed-loop reservoir management. Usually the initial reservoir models are updated with the help of data assimilation schemes, using available data from different sources and providing a better estimate of the system state and the parameters. The assimilation is then followed by an optimization in which the optimal production strategy or locations for new wells are identified for the updated reservoir model. Both loops are repeated sequentially as new data becomes available (Nævdal et al. [2006], Sarma et al. [2006], Chen and Oliver [2008]).

1.4 Different data sets

Data is available from different sources that could be used in the data assimilation loop to improve the characterization of the reservoir and the reliability of reservoir flow predictions. Generally, data can be divided into two classes: sparse data available only at well locations and dense data gathered everywhere in the reservoir. These data sets differ in accuracy and field coverage, but both contain valuable information about the reservoir and the parameters influencing the flow, and they should be used together to obtain a reservoir model that explains all data simultaneously.

1.4.1 Production history data

Usually production history data, obtained from wells in the form of wellhead or bottom-hole pressures and flow rates, is used in history matching algorithms to update the uncertain parameters. This type of data is typically acquired with an accuracy of 5-20%. However, because the number of model parameters to be estimated is very large, production history data has limited resolving power. It provides some information on the unknown properties in the neighbourhood of the wells, but not further away from them. As a result, there are many reservoir models that give rise to the same production history data but yield different predictions of the future performance of the reservoir.

1.4.2 Time-lapse seismic data

Due to developments in geophysics, especially in the field of seismics, it has become possible not only to determine the position of the reservoir, but also to track fluid movements in the reservoir itself. This additional information, in the form of time-lapse seismic data, can be utilized together with production data to narrow the solution space when minimizing the misfit between gathered measurements and their forecasts from the numerical model.

Time-lapse seismic is the process of carrying out a seismic survey before production begins and then repeating the survey over the producing reservoir. Seismic data is sensitive to static properties, e.g. lithology, pore volume and net/gross ratio, but also to dynamic (i.e. time-varying) properties like fluid saturation and pore pressure. From one single seismic survey one is not able to differentiate between features caused by static properties and those caused by dynamic properties. By comparing two seismic surveys acquired over the producing reservoir at different times, however, it is possible to extract information about the changes in the dynamic properties.

In this thesis, seismic data in the form of measured travel times and amplitudes is not used directly. Instead, an interpretation step is included in which the direct seismic measurements are inverted to produce variables that can be represented in a reservoir model or in a rock physics model. This process is depicted in Figure 1.2. The reservoir is monitored with different kinds of sensors, yielding noisy production data in the form of bottom-hole pressures and flow rates, and seismic data in the form of travel time information and amplitudes. These seismic attributes undergo an inversion process, in which they are translated into time-lapse changes in density, acoustic propagation velocity and shear modulus. The inverted seismic data is then used together with production data as input into a data assimilation scheme. Time-lapse seismic data is available everywhere in the reservoir. Although less accurate than production data, it contains information about the reservoir properties everywhere and can be used to infer parameter values away from wells.

1.4.3 Prior knowledge

History matching is an ill-posed problem, i.e. many different parameter sets may result in the same measurement predictions. To limit the solution space, regularization (prior knowledge) is often applied, and only models which are 'close' in some predefined sense to the initial model are searched for. Although in most of the literature prior knowledge is treated separately from measured data, conceptually, from the Bayesian point of view, it can be considered as an additional data set.

In petroleum engineering, due to the complexity and high dimensionality of the models, and due to the lack of direct measurements of the unknown parameters, the prior knowledge is usually described by an ensemble of models that all conform to the geological information. Most of the available data assimilation schemes utilize only two-point statistics. They take into account only the mean value and covariance matrix derived from the ensemble, indirectly assuming a Gaussian distribution for the unknown parameters and neglecting higher-order statistics, which are essential if one wants to take geological features into consideration. Therefore, even if proper prior knowledge is specified, poor estimation results can be obtained depending on the amount of data and the data types used. This issue is further investigated in this thesis.

Figure 1.2: Sensor and measurement models.

1.5 Research objectives and solution directions

The closed-loop reservoir management framework described previously contains a data assimilation loop whose main focus is on improving the characterization of the reservoir by reducing the uncertainty in the key reservoir parameters and state variable predictions. The data assimilation or 'history matching' problem is usually solved by incorporating direct measurements of some uncertain parameters and indirect measurements, whose relationship with the unknown parameters is given by some model. Unfortunately, there are rarely enough direct measurements available to obtain a sufficiently accurate description of uncertain parameters like permeabilities or porosities. These parameters are usually known near the well locations, but further away they can only be described by indirect measurements, like production measurements, seismic measurements, electromagnetic measurements, etc.

Historically, production data and seismic data were used separately, even though both sets of data described the same reservoir. This separation was dictated by a few reasons. The first was the different spatial scale of production data, which is available at wells, versus seismic data, which is available for the entire reservoir. Additionally, although the availability of production data is limited to only a few spatial locations, it supplies the production information in nearly continuous time and is used for short-term production optimisation.

Seismic surveys, on the other hand, are usually performed only a few times in the entire production life of the reservoir. They are the main source of information for building saturation maps and provide guidelines for drilling new wells in the field development phase.

Due to the different spatial and time scales, both data sets were, and often still are, used separately, resulting in updated reservoir models which differ significantly from each other. Each of those models is created to explain only a particular set of data, disregarding the additional knowledge coming from the other measurement set. Models updated in this way often contradict some of the observations obtained from the true reservoir. There are several examples in the literature where history-matched estimates yield good predictions of future reservoir performance while not conforming to the true field. With the combined use of production and seismic data, one can constrain the inversion in such a way that the final estimates resemble the true model to some degree.

1.5.1 Research objectives

The objectives of this research are as follows:

• To investigate the role of time-lapse seismic data for the estimation of key reservoir parameters within the data assimilation framework. This data is later combined with production data to obtain an estimate that honours all data simultaneously.

• To investigate the influence of the prior knowledge on the estimation results. Different correlation structures are imposed and their impact on the final estimates is assessed. Additionally, some attention is paid to cases in which wrong prior information is utilized during assimilation.

• To produce estimates that have a good history match to all available data and good predictive capabilities, but at the same time contain the features present in the true field.

1.5.2 Solution directions

To include the time-lapse seismic data in the history matching procedure, a particular data assimilation method was chosen, namely the representer method, described further on in this thesis. This method uses an optimal parameterization by expanding the parameter field into a finite set of basis functions called representers, as sketched below. The only unknowns are the expansion coefficients that need to be adjusted to match the available data. In this way the number of independent estimation parameters is reduced to the number of measurements used in the assimilation (as there is one representer defined per measurement) while still providing a solution to the full inverse problem. Moreover, every representer provides information on the influence in space and time of its measurement on the final solution, for a particular choice of prior knowledge. In this way the importance of the prior information for the final results can be analyzed and assessed.
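Schematically, and anticipating the notation of Chapter 5 (the symbols here are illustrative; $N_d$ denotes the number of measurements), the expansion takes the form

$$\boldsymbol{\theta} = \bar{\boldsymbol{\theta}} + \sum_{j=1}^{N_d} b_j\,\mathbf{r}_j\,,$$

where $\mathbf{r}_j$ is the parameter representer associated with measurement $j$ and the scalars $b_j$ are the expansion coefficients to be determined.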

However, because time-lapse seismic data gives full coverage of the reservoir, a 'smart' choice has to be made concerning the locations and the amount of seismic data used for inversion, and only the most informative part of the data is used within the data assimilation scheme.

1.6 Thesis outline

This thesis is organized as follows. Chapter 2 gives a general overview of parameter estimation methods published in the open literature, with the focus mostly on the different data assimilation schemes used and the different parameterizations involved. Chapter 3 deals with time-lapse seismic: some background information is given, and the seismic data, in the form used in this thesis, is described.

The flow equations and the numerical reservoir model used in this thesis are presented in Chapter 4.

Chapter 5 describes the representer method applied in this thesis for parameter estimation. The general framework is presented and a detailed derivation is given for the 2D model used in this research.

In Chapters 6 and 7 the estimation results obtained using the previously described methodology are given for two different synthetic examples. The sensitivity study results are presented and discussed.

Chapter 8 contains conclusions and some recommendations for possible future development.

2 Literature review of state and parameter estimation

2.1 Introduction

When modelling a real subsurface hydrocarbon reservoir, many model parameters are not known properly, while some of the uncertainties in the reservoir description are ignored. As a result, the predictions obtained from such a model are only approximate and often do not coincide with the measurements taken from the real field. However, the misfit between the measured and predicted data can be used in estimation algorithms to update the uncertain states and/or parameters and improve predictions of future reservoir performance.

Determination of the quantities of interest, given the set of data one has measured, is an inverse problem, which can be approached deterministically or probabilistically. In the deterministic case one usually focuses on minimizing a misfit function that measures how the calculated data deviate from the measured data. Because the acquired data is not sufficient, the minimization problem is non-unique and the estimates obtained from inversion are poorly constrained.

When reconstructing a given quantity, there is no way of assuring that the reconstruction is correct, so one would like to find a way of expressing this uncertainty. When inverse problems are considered in a probabilistic framework, the probability distribution, given all available data, is defined over the model space. This distribution represents all the knowledge one has about the quantity of interest. Using well-known Bayesian statistical inference, one can define the probability distributions for reservoir states and/or parameters, and by sampling from these distributions one can assess the uncertainty in future reservoir performance.

2.2 The Bayesian framework

In Bayesian theory (Blum and Rosenblatt [1972], Ulrych et al. [2001]) the unknown states and/or parameters are treated as realizations of random variables, described by multivariate probability distributions. After the prior probabilities for states and/or parameters have been assigned, the posterior conditional probabilities of the unknowns, given the new measured data, are calculated.

In the case of reservoir simulation, the stochastic character of the state variables is caused by uncertainty in the model parameters (permeabilities, porosities) and in the initial conditions, and by simplifications introduced in the reservoir model. These uncertainties can be reduced when new data becomes available.

2.2.1 Bayes’ theorem

Let x be a model vector (consisting of states and/or parameters) and d be a vector of observations.

Bayes’ theorem states:

$$p(\mathbf{x}|\mathbf{d}) = \frac{p(\mathbf{d}|\mathbf{x})\,p(\mathbf{x})}{p(\mathbf{d})}\,, \tag{2.2.1}$$

where:

• p(x) is the prior probability function that represents our a priori knowledge about the model x before any observation is made.

• p(d|x) is the forward probability function. It is a conditional probability of observations d given model x. This probability is often called the likelihood function as it describes how likely the observations are given a certain model. It is a measure of the degree of fit between data predicted from models and data actually observed.

• p(x|d) is called the posterior probability function and describes our knowledge of model x after making observations.

• p(d) is the probability of observations, it does not depend on model x, and is calculated as a normalization factor for the posterior distribution.
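To make the theorem concrete, the following minimal sketch (Python with NumPy assumed; all numbers are hypothetical) evaluates the posterior over a small discrete set of candidate models:

```python
import numpy as np

# Hypothetical example: three candidate permeability multipliers (the models x)
# and a single noisy rate observation d. All numbers are illustrative only.
models = np.array([0.5, 1.0, 2.0])          # candidate values of x
prior = np.array([0.25, 0.50, 0.25])        # p(x): a priori beliefs

d_obs = 1.8                                  # observed data d
sigma = 0.5                                  # measurement noise std. dev.

def h(x):
    """Forward model: predicted measurement for model x."""
    return 1.2 * x

# Gaussian likelihood p(d|x) for each candidate model
likelihood = np.exp(-0.5 * ((d_obs - h(models)) / sigma) ** 2)

# Bayes' theorem: posterior proportional to likelihood * prior
posterior = likelihood * prior
posterior /= posterior.sum()                 # p(d) acts as a normalization factor
print(posterior)                             # knowledge of x after observing d
```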

2.2.2 The posterior probability

The posterior probability given in equation (2.2.1) combines the prior knowledge about the uncertain model vector with the information contained in the measurement set. If there are multiple data sets available, for example d1 and d2, that are independent of each other, the likelihood function p(d|x) can be written as the product of the likelihood functions for each data set separately. The posterior probability is then proportional to:

$$p(\mathbf{x}|\mathbf{d}_1,\mathbf{d}_2) \propto p(\mathbf{d}_1|\mathbf{x})\,p(\mathbf{d}_2|\mathbf{x})\,p(\mathbf{x})\,. \tag{2.2.2}$$

This property is used in the sequential data assimilation schemes (Kalman filter) described later in this chapter.

Because of imperfect measuring techniques, the observed data is usually contaminated by noise. To model this uncertainty, additive noise is introduced into the data-model relationship:

$$\mathbf{d} = h(\mathbf{x}) + \boldsymbol{\nu}\,, \tag{2.2.3}$$

where h(x) is a possibly nonlinear function describing the relationship between observations and uncertain states and/or parameters, and where ν is the noise vector. For additive noise the likelihood function of the observations, given a certain model, can be expressed as:

$$p(\mathbf{d}|\mathbf{x}) = p_\nu\big(\mathbf{d} - h(\mathbf{x})\big)\,, \tag{2.2.4}$$

where $p_\nu$ denotes the probability density of the noise vector ν.

For simple noise and state/parameter models it is possible to derive an analytical form of the posterior probability density. In most cases, however, when the relationship between data and model states/parameters is nonlinear, one must use approximate methods for describing the posterior distribution.

In the case of a Gaussian prior model and Gaussian uncertainties in the observations, the posterior probability is:

$$p(\mathbf{x}|\mathbf{d}) \propto \exp\big(-J(\mathbf{x})\big)\,, \tag{2.2.5}$$

with

$$J(\mathbf{x}) = (\mathbf{x} - \bar{\mathbf{x}})^T \mathbf{P}_x^{-1} (\mathbf{x} - \bar{\mathbf{x}}) + \big(\mathbf{d} - h(\mathbf{x})\big)^T \mathbf{P}_\nu^{-1} \big(\mathbf{d} - h(\mathbf{x})\big)\,, \tag{2.2.6}$$

where $\mathbf{P}_x$ is the prior covariance matrix for the states/parameters, $\bar{\mathbf{x}}$ is the mean state/parameter vector, and $\mathbf{P}_\nu$ is the measurement noise covariance matrix.

The maximum of this posterior distribution, called the maximum a posteriori (MAP) estimate, can be found by minimizing the objective function in equation (2.2.6). The MAP estimate, however, is not meaningful for complicated multidimensional distributions, as they can be multi-modal with many secondary maxima. Still, it is used as the estimate of the uncertain states/parameters in variational data assimilation.
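As a minimal numerical sketch of the objective function (2.2.6) (illustrative dimensions and covariances; Python with NumPy is assumed):

```python
import numpy as np

def map_objective(x, x_bar, P_x, d, h, P_nu):
    """Objective J(x) from eq. (2.2.6); its minimizer is the MAP estimate."""
    r_prior = x - x_bar                     # deviation from the prior mean
    r_data = d - h(x)                       # data mismatch
    return (r_prior @ np.linalg.solve(P_x, r_prior)
            + r_data @ np.linalg.solve(P_nu, r_data))

# Illustrative two-parameter problem with a linear "forward model"
x_bar = np.array([1.0, 2.0])
P_x = np.diag([0.5, 0.5])
P_nu = np.diag([0.1])
d = np.array([3.5])
h = lambda x: np.array([x[0] + x[1]])       # hypothetical measurement operator

print(map_objective(np.array([1.2, 2.1]), x_bar, P_x, d, h, P_nu))
```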


2.2.3 Kalman filter

One of the methods that allows for updating the uncertain model, given all available data, is the Kalman filter (Kalman [1960]). The Kalman filter is a sequential data assimilation approach that can be derived on the basis of Bayes' rule. It relies on the assumption that the initial condition and the system and measurement noise processes are independent of each other; in that case the measurements can be assimilated sequentially. The Kalman filter has a predictor-corrector structure (Heemink et al. [2010]). First, based on all information available prior to measurement time $t_k$, a prediction of the unknowns at time $t_k$ is made. Based on this prediction, it is possible to forecast the measurement value at time $t_k$. When the measurement has become available, the difference between the measurement and its forecasted value is used to update the uncertain quantities. The Kalman filter is optimal, however, only for linear models with Gaussian distributions for the unknowns. With those conditions satisfied, the conditional probability is Gaussian and therefore completely characterized by its mean and covariance matrix; higher-order moments do not have to be taken into account. When nonlinear models are considered, the conditional probability is non-Gaussian and has to be described by an infinite number of moments, which is computationally not feasible.

Several modifications of the classical Kalman filter have been proposed in the literature for nonlinear dynamics, including the Extended Kalman filter and the Ensemble Kalman filter (EnKF). The Ensemble Kalman filter, introduced by Evensen [1994], is a Monte Carlo approach in which the probability distribution of the estimate is described by an ensemble of possible realizations. The ensemble members are updated as more data becomes available. If integrated forward in time from the last measurement time, the ensemble members yield future predictions, and the spread in those predictions gives an indication of the accuracy (uncertainty) of the predicted quantities. The initial model realizations are drawn randomly from an initial distribution (usually generated with stochastic simulations), and therefore non-Gaussian distributions can be utilized. However, only the first and second moments of the distributions are updated, which degrades the performance of the EnKF when posterior distributions are far from Gaussian (Aanonsen et al. [2009]).

The choice of the ensemble size (usually between 50 and 100 realizations in reservoir engineering applications) is essential for the performance of the algorithm. The computational effort associated with the algorithm is proportional to the ensemble size, so ideally the ensemble size should be as small as possible. However, if the ensemble size is too small, the algorithm can become inaccurate or unstable. The algorithm is completely independent of the model implementation and treats the reservoir simulator as a black box. The reservoir simulator is run a number of times for the different initial ensemble members to compute approximations of the mean and covariance, and therefore neither the tangent linear model nor the adjoint model needs to be derived. The implementation of the EnKF algorithm is relatively simple, as only the interface between the simulator and the filter code needs to be developed. Thus, it is used in the reservoir engineering context as an alternative to traditional history matching (Nævdal et al. [2003], Evensen [2007]). A comprehensive review of the applications of the EnKF to petroleum engineering and the updating of reservoir models is given in Aanonsen et al. [2009].
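A minimal sketch of a single EnKF analysis step is given below (Python/NumPy assumed; the stochastic, perturbed-observation variant with a linear observation operator is shown, and all dimensions are illustrative):

```python
import numpy as np

def enkf_analysis(X, d_obs, H, R, rng):
    """One EnKF update. X: (n_state, n_ens) ensemble, H: linear observation
    operator (n_obs, n_state), R: observation error covariance (n_obs, n_obs)."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)               # ensemble anomalies
    S = H @ A                                           # anomalies in data space
    C_dd = S @ S.T / (n_ens - 1) + R                    # innovation covariance
    K = (A @ S.T / (n_ens - 1)) @ np.linalg.inv(C_dd)   # Kalman gain
    # Perturb the observations so the updated ensemble keeps the right spread
    D = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, n_ens).T
    return X + K @ (D - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                # hypothetical 100 states, 50 members
H = np.zeros((2, 100)); H[0, 10] = H[1, 80] = 1.0   # observe two of the states
X_a = enkf_analysis(X, np.array([0.5, -0.3]), H, 0.1 * np.eye(2), rng)
```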

2.2.4 Variational data assimilation

Variational data assimilation (4D-Var) is an alternative to sequential assimilation methods (Kalman filtering). In this approach (Talagrand and Courtier [1987]) the model solution (states and/or parameters) over the assimilation period is adjusted to the available measurements, with propagation of the information contained in the measurement set both forward and backward in time. An objective function J is defined that measures the misfit between the observations and their predictions from the given model. Very often a regularization term is added that limits the search space to models close to the prior vector $\bar{\mathbf{x}}$:

$$J(\mathbf{x}) = \underbrace{\big(\mathbf{d} - h(\mathbf{x})\big)^T \mathbf{P}_\nu^{-1} \big(\mathbf{d} - h(\mathbf{x})\big)}_{\text{data mismatch term}} + \underbrace{(\mathbf{x} - \bar{\mathbf{x}})^T \mathbf{P}_x^{-1} (\mathbf{x} - \bar{\mathbf{x}})}_{\text{regularization term}}\,. \tag{2.2.7}$$

A particular solution is searched for that minimizes the objective function and obeys the constraints imposed by the dynamical model. The matrices $\mathbf{P}_\nu^{-1}$ and $\mathbf{P}_x^{-1}$ are positive definite and reflect one's belief in the observation model and the prior knowledge. This objective function does not have any probabilistic meaning (Jazwinski [1970]), but if the unknowns and measurement errors are considered to be Gaussian random variables whose mean and covariance matrices are known, and they are not cross-correlated, then the least-squares problem stated above is equivalent to finding the maximum a posteriori estimate.

Variational data assimilation is a constrained minimization problem, which is solved iteratively with a gradient-based optimization method. First, the problem is reformulated as an unconstrained minimization problem by adding the model equations and constraints to the objective function. The gradients are then obtained using a so-called adjoint method, which allows all the sensitivities to be calculated with only two simulations, one backward and one forward in time.

In petroleum engineering this is known as history matching, and it is widely used to adjust the unknown parameters of a numerical simulation model conditioned to well data (Reynolds et al. [1996], Li et al. [2003], Oliver et al. [2008]). In this case the objective function in equation (2.2.7) is expressed in terms of the unknown parameter vector θ:

$$J(\boldsymbol{\theta}) = \big(\mathbf{d} - h(\boldsymbol{\theta})\big)^T \mathbf{P}_\nu^{-1} \big(\mathbf{d} - h(\boldsymbol{\theta})\big) + (\boldsymbol{\theta} - \bar{\boldsymbol{\theta}})^T \mathbf{P}_\theta^{-1} (\boldsymbol{\theta} - \bar{\boldsymbol{\theta}})\,. \tag{2.2.8}$$

One of the possible algorithms for minimization of this objective function is the Gauss-Newton method, closely related to the representer method used in this thesis. The Gauss-Newton method calculates the first derivative, and approximates the second derivative, of all measurement predictions with respect to the parameters being estimated.

The parameter updates can be calculated from (Tarantola [1987]):

$$\boldsymbol{\theta}_{m+1} = \bar{\boldsymbol{\theta}} + \mathbf{P}_\theta \left(\frac{\partial h(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)^T \left[\frac{\partial h(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\, \mathbf{P}_\theta \left(\frac{\partial h(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)^T + \mathbf{P}_\nu\right]^{-1} \left[\mathbf{d} - h(\boldsymbol{\theta}_m) - \frac{\partial h(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\,(\bar{\boldsymbol{\theta}} - \boldsymbol{\theta}_m)\right]. \tag{2.2.9}$$

The columns of $\mathbf{P}_\theta \left(\partial h(\boldsymbol{\theta})/\partial \boldsymbol{\theta}\right)^T$ are equal to the parameter representers used in this thesis to parametrize the uncertain parameters, and the term $\left(\partial h(\boldsymbol{\theta})/\partial \boldsymbol{\theta}\right)(\bar{\boldsymbol{\theta}} - \boldsymbol{\theta}_m)$ acts similarly to the forward solution term introduced later on in this thesis (Valstar [2001], Douma [2009]). Compared to pure gradient-based methods, where only one adjoint simulation is performed to calculate the sensitivities, the number of adjoint simulations required for the Gauss-Newton method is equal to the number of measurements, as the Jacobian $\partial h(\boldsymbol{\theta})/\partial \boldsymbol{\theta}$ of the measurement predictions needs to be formed explicitly.

The minimization of the variational objective function yields only one particular solution to the problem and does not allow for uncertainty assessment. Kitanidis [1995] and Oliver [1996] introduced the so-called randomized maximum likelihood (RML) method, which allows for the creation of samples from the posterior distribution using a variational approach. This method has been demonstrated to sample properly if the relationship between model states/parameters and data is linear. First, a realization from the prior distribution is generated; it is not constrained and matches the observed data only by chance. Then, a realization of the observed measurements is created by adding random errors to the actual observations. An updated conditional realization is obtained by minimizing the misfit function given in equation (2.2.7), with the known vector $\bar{\mathbf{x}}$ equal to the prior realization. When posterior realizations are available, uncertainty estimates can be attached to the future reservoir performance predictions.
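As an illustration of the update (2.2.9), the following sketch (Python/NumPy assumed) performs one Gauss-Newton step on a hypothetical two-parameter problem; the Jacobian is built by finite differences purely as a stand-in for the adjoint simulations discussed above:

```python
import numpy as np

def gauss_newton_step(theta_m, theta_bar, P_theta, P_nu, d, h, eps=1e-6):
    """One iteration of eq. (2.2.9). The Jacobian dh/dtheta is approximated by
    finite differences; in practice each column would require one adjoint run."""
    n = len(theta_m)
    Hm = h(theta_m)
    J = np.empty((len(Hm), n))
    for j in range(n):                      # build the Jacobian column by column
        dtheta = np.zeros(n)
        dtheta[j] = eps
        J[:, j] = (h(theta_m + dtheta) - Hm) / eps
    G = J @ P_theta @ J.T + P_nu            # data-space covariance
    rhs = d - Hm - J @ (theta_bar - theta_m)
    # Columns of P_theta @ J.T are the parameter representers (one per measurement)
    return theta_bar + P_theta @ J.T @ np.linalg.solve(G, rhs)

h = lambda t: np.array([t[0] * t[1], t[0] + t[1] ** 2])  # toy forward model
theta_next = gauss_newton_step(np.array([1.0, 1.0]), np.array([1.0, 1.0]),
                               0.5 * np.eye(2), 0.01 * np.eye(2),
                               np.array([2.2, 3.1]), h)
```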

2.2.5 Alternative approaches

There are several other computer-assisted history matching methods that have been developed in the reservoir engineering community to match the observed and predicted data by updating unknown parameters. They are mentioned here for completeness, but not discussed in detail. The interested reader is referred to the mentioned references.

In Vasco and Datta-Gupta [1997] the streamline simulator, which is orders of magnitude faster than a traditional numerical simulator, is used for rapid inversion of multiphase production data, while in Vasco et al. [1999] streamline simulation is used to rapidly compute sensitivities of saturations along streamlines.

Simulated annealing and genetic algorithms (Sen et al. [1995], Schulze-Riegert et al. [2002]) are used to search for global solutions.

The gradual deformation method (Roggero and Hu [1998]) and the probability perturbation method (Caers [2003]) were developed to match data while honouring geostatistical constraints.

There are also several methods designed specifically to quantify the uncertainty: the neighbourhood algorithm (Erbaş and Christie [2007]) and Monte Carlo simulation (Bonet-Cunha et al. [1998], Barker et al. [2001]). However, these methods require a large number of reservoir simulations, which is computationally too demanding for real-life applications. Therefore 'proxy' models, which are supposed to mimic the behaviour of the real reservoir, were developed to perform the uncertainty assessments (Omre and Lødøen [2004]).

2.3 Parameterization

Usually history matching consists of perturbing parameters at many locations until a reasonable match between predicted and observed data is obtained. However, the number of parameters to be updated in a real reservoir model is very large, ranging from $10^4$ to $10^6$. Such an approach is not feasible if one takes into account the associated computational burden. Thus the number of independent estimation parameters needs to be reduced through parameterization. There are several ways of parameterizing the reservoir properties; a number of them are briefly discussed below.

2.3.1 Zonation

Zonation is one of the oldest parameterization techniques used for reducing the number of unknown parameters estimated in reservoir simulations (Jacquard and Jain [1965], Jahns [1966]). In this approach the reservoir is divided into a small number of subdomains (zones), in which the properties are assumed to be uniform. The production data is then matched by adjusting the properties only in those chosen subdomains. Although there are some guidelines on how to divide the reservoir into zones based on the best understanding of the reservoir, the number of zones and the assignment of the zone boundaries remain to a certain extent arbitrary.

2.3.2 Pilot point method

In de Marsily et al. [1984] and Certes and de Marsily [1991] the so-called pilot point method was developed to reduce the number of unknowns in the history matching scheme. First, given all available knowledge about the reservoir from wells, logs, geologic data etc., a reservoir model is created for simulation. The model is then parameterized by choosing the pilot point locations at which the parameters are to be adjusted. The initial reservoir model to be updated by history matching is generated through stochastic simulation, conditioned to the available parameter measurements at well locations. When production data becomes available, it is compared to simulated data by means of a least-squares objective function. The minimization is performed by perturbing parameters only at the selected pilot points. The changes made at pilot points are communicated to the neighbouring points either by kriging or by conditional simulation. This technique preserves the geological variogram inferred from the observed parameter values. Originally the locations of the pilot points were specified a priori. Although RamaRao et al. [1995] proposed placing pilot points in high-sensitivity zones, where their potential for reducing the objective function is highest, it is still uncertain how many pilot points one should use. Using the pilot point methodology one can generate multiple solutions, all honouring the observed data, and try to assess the uncertainty in future reservoir performance by building an empirical distribution.

2.3.3 Karhunen-Loeve Expansion

If the initial reservoir model is considered stochastic with known first two moments (mean and covariance), or if the uncertainty is described by a set of realizations, the Karhunen-Loeve (K-L) expansion (also known as linear principal component analysis (linear PCA) or proper orthogonal decomposition (POD)) can be used to parameterize the uncertain field (Gavalas et al. [1976], Oliver [1996], Reynolds et al. [1996]). Any random field can be represented as a series expansion of deterministic functions and corresponding uncorrelated random coefficients. The basis functions in the K-L expansion are calculated by an eigenvalue decomposition of the covariance matrix. Usually, only the eigenvectors corresponding to the largest eigenvalues are used in the expansion. As a consequence, the shortest correlation lengths are disregarded and only the most dominant patterns are used. However, only two-point statistics are preserved by this parameterization, and therefore it is only optimal for parameterizing multi-Gaussian random fields. If the covariance matrix was obtained from a set of realizations in which higher-order statistics are important, this parameterization can still be used, but the realizations obtained in this way will only preserve the covariance structure; the higher-order statistics will not be reproduced.
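A minimal sketch of the truncated K-L parameterization (Python/NumPy assumed; the exponential covariance model and grid size are hypothetical):

```python
import numpy as np

# Hypothetical 1D grid with an exponential covariance model
n = 100
xs = np.arange(n)
C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 10.0)   # prior covariance

# Eigendecomposition of the covariance; eigh returns ascending eigenvalues
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1]                          # sort descending
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

# Keep only the leading modes (largest eigenvalues, most dominant patterns)
k = 10
rng = np.random.default_rng(1)
xi = rng.standard_normal(k)                              # uncorrelated coefficients
field = eigvecs[:, :k] @ (np.sqrt(eigvals[:k]) * xi)     # truncated K-L realization
```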

2.3.4 Kernel Principal Component Analysis (KPCA)

Caers [2003] has shown that, if one aims for good predictive capabilities of the updated reservoir models, geologically realistic models should be used. Thus the additional prior knowledge that is used to regularize the inversion should incorporate geological constraints, which are captured by higher-order statistics. In theory, the kernel principal component analysis (KPCA) representation of the unknown field (Sarma et al. [2007]) enables preservation of arbitrarily high-order statistics of random fields and allows reproduction of complex geology. The approach is similar to linear PCA but is performed in a different space, called the feature space, which is nonlinearly related to the original parameter space (Sarma et al. [2008]). By using higher-order polynomial kernels, multipoint geostatistics can be preserved and channelized structures can be recovered, in contrast to linear PCA (the K-L expansion), where only two-point statistics are honoured. Similar to the K-L expansion, KPCA is a differentiable parameterization and therefore well suited for gradient-based history matching techniques.
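As an illustration only, the following sketch uses the KernelPCA implementation of scikit-learn (assumed available) with a polynomial kernel; the ensemble is hypothetical, and the inverse mapping shown is the library's approximate pre-image reconstruction rather than the exact procedure of Sarma et al.:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Hypothetical ensemble: 200 realizations of a field flattened to 1000 cells
rng = np.random.default_rng(2)
ensemble = rng.standard_normal((200, 1000))

# A polynomial kernel of degree > 1 retains some higher-order statistics,
# in contrast to linear PCA (degree 1)
kpca = KernelPCA(n_components=30, kernel="poly", degree=3,
                 fit_inverse_transform=True)
coeffs = kpca.fit_transform(ensemble)        # feature-space coordinates
recon = kpca.inverse_transform(coeffs)       # approximate pre-images back in
                                             # the original parameter space
```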

2.3.5 Discrete cosine transform

Jafarpour and McLaughlin [2007a] introduced a different parameterization for permeability estimation, the discrete cosine transform (DCT), which does not require specification of covariance matrices or other statistics. In this approach a permeability field is expanded into predefined basis functions that do not depend on the covariance matrix and do not need to be estimated from data. The basis functions in the DCT are real cosine functions. For a given field, the DCT coefficients are ordered from largest to smallest, and depending on the desired level of accuracy, only the significant modes (basis functions) are retained. In history matching, where the permeability field is unknown, the leading DCT coefficients cannot be identified in advance. Common practice is then to retain the basis function terms that appear most important for a given training set (the coefficients are averaged over the realizations in the training set, and then ordered) or, when prior information is not available, to let the modeller specify the basis functions. In both cases the unknown coefficients are estimated by history matching. Jafarpour and McLaughlin [2007b] combined the Ensemble Kalman filter with the discrete cosine transform for efficient history matching. The authors showed that this form of parameterization gives almost identical estimation results to those obtained from a more expensive approach that estimates the unknown states and parameters in every block of the computational grid.
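A minimal sketch of the DCT parameterization (SciPy assumed; keeping the largest-magnitude coefficients of a known field is used here purely for illustration):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
log_perm = rng.standard_normal((64, 64))      # hypothetical log-permeability field

coeffs = dctn(log_perm, norm="ortho")         # expansion into cosine basis functions

# Retain only the most significant modes, zeroing the rest
k = 200
threshold = np.sort(np.abs(coeffs).ravel())[-k]
coeffs_trunc = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

approx = idctn(coeffs_trunc, norm="ortho")    # low-dimensional approximation
```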

2.3.6 Wavelets

Sahni and Horne [2005] and Sahni and Horne [2006] parametrized the permeability field with wavelets. In their approach the logarithm of the permeability field is transformed into wavelet space, in which it is fully described by the set of wavelet coefficients. Haar wavelets are used for this transformation, as they allow parameter distributions to be represented at different resolutions using linear combinations of the parameter itself. In the case of Gaussian fields, knowing the mean, variance and variogram of the estimated field, one can compute the mean, variance and variogram for different sets of wavelet coefficients. This property is used to simulate equiprobable reservoir models, consistent with both production data and geostatistical information about the unknown field. In the algorithm the authors proposed, a history-matched model is first mapped into wavelet space with the Haar wavelet transform, and the wavelet coefficients most sensitive to production data are determined. Those wavelet coefficients are kept fixed, while the other coefficients are modified to incorporate prior geostatistical knowledge. As a result, multiple history-matched reservoir model realizations are generated, while the history matching procedure is performed only once.
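As an illustration, the following sketch (PyWavelets assumed; the field is hypothetical) performs the Haar decomposition and exact reconstruction that underlie this approach:

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
log_perm = rng.standard_normal((64, 64))          # hypothetical log-permeability

# Multi-level 2D Haar decomposition: the field is fully described
# by this set of wavelet coefficients
coeffs = pywt.wavedec2(log_perm, "haar", level=3)

# The coefficients could now be split into a data-sensitive subset (kept fixed)
# and a remainder modified to honour geostatistics; here we simply invert
reconstructed = pywt.waverec2(coeffs, "haar")
assert np.allclose(reconstructed, log_perm)
```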

2.3.7 Representer method

The representer method was introduced by Bennett [1992]. This method allows the number of unknown states/parameters to be reduced to the number of measurements used in the inversion process, by introducing a reparameterization in terms of representer functions. This parameterization was proven to be optimal for linear estimation problems (Reid [1996]), in the sense that, although the number of independent estimation parameters is reduced to the number of measurements used, it still provides a solution to the full inverse problem. This parameterization method has been used in a petroleum engineering context by several authors; a brief review can be found in Chapter 5.

3 Time-lapse seismic

3.1 Introduction

In the last few years there has been an increased interest in the use of time-lapse seismic data for reservoir characterization and management. Seismic data itself provides a structural image of the subsurface. The formations where oil and gas may be trapped can be identified, and potential flow barriers like faults can be mapped. From the way the structures are built up, the geologist might be able to infer the type of the rocks and the environment and time in which those rocks were formed. However, this process is very non-unique, as different combinations of rocks and environments can produce the same seismic response, and some additional information is needed to make the interpretation more unique.

With recent developments it has become possible to determine not only the structure of the subsurface but also to infer the properties of the rock formation or the fluid fill from seismic data, through inversion. Seismic data is sensitive to static properties, e.g. lithology, pore volume and net/gross ratio, but also to dynamic (i.e. time-varying) properties like fluid saturation, pore pressure and temperature. From one single seismic survey it is not possible to differentiate between features caused by static properties and those caused by dynamic properties. By comparing two seismic surveys acquired over a producing reservoir at different times, however, it is possible to extract information about the changes in the dynamic properties. This process is referred to as time-lapse seismic and provides additional knowledge that can be used to constrain the reservoir model and improve its prediction performance.

3.2 Exploration seismic

Seismic data is used throughout the entire life cycle of a reservoir. In the early exploration phase, usually before any well information is present, seismic data provides general information about the thickness of the sediments and the overall structure of the subsurface. In a later stage, when some wells have been drilled, a more detailed study is performed with the aim of defining and drilling valid traps. More seismic data is needed at this stage to unveil the complexity of the underlying formation. When planning the development of an oil field, detailed seismic mapping is used to position injection and production wells. In the production phase, when additional producers or injectors are introduced to optimize recovery, a detailed structural map, as well as information on lateral variations of reservoir quality that can be obtained from seismic data, are of high importance. Moreover, with repeated surveys over time (time-lapse seismic), additional information about movements of the hydrocarbons can be obtained and areas of bypassed oil can be detected.

Before seismic data can be fully utilized, it undergoes several processing stages: data acquisition and processing, seismic imaging, and reservoir characterization.

Data acquisition and processing

An acoustic or elastic wavefield is emitted by a seismic source (a vibrator, air gun or dynamite). The waves travel through the subsurface and are reflected at boundaries where a change in rock properties occurs. The reflected waves travel back up to the surface and are recorded by receivers (geophones). The travel time from the source to the reflector and back to the receivers carries information about the depth of the reflecting boundary, while the recorded amplitudes provide information about the change in rock properties across the interface. The recorded seismic data, however, is corrupted by noise, and usually only the strong reflectors are visible in the raw data. Therefore, special data processing steps need to be applied before a structural image can be created. In seismic data processing, the acquired data is manipulated and re-worked to suppress noise, to remove the imprint of near-surface conditions, to reduce the amount of data in a summation approach that increases the signal-to-noise ratio, and to estimate the subsurface velocities needed in the imaging phase. Ideally, after processing, the subsurface structures and reflection geometries become more apparent, which should lead to a better interpretation.
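
Two textbook relations make this concrete (a sketch under the standard simplifying assumptions of a horizontal reflector, a constant overburden velocity and normal incidence; they are not specific to this thesis). The two-way travel time t maps to reflector depth z through the propagation velocity v, and the normal-incidence reflection amplitude is governed by the contrast in acoustic impedance Z = ρv across the interface:

z = v t / 2 ,   R₀ = (Z₂ − Z₁) / (Z₂ + Z₁) ,

where the subscripts 1 and 2 denote the layers above and below the reflector.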

Seismic imaging

In the seismic imaging stage, the pre-processed seismic data are transformed into images of the subsurface. This process is similar to optical imaging with a camera, but applies to much larger objects, extending over many square kilometres horizontally and several kilometres in depth. A number of very sophisticated imaging algorithms (e.g. Kirchhoff migration, reverse-time migration, depth migration) can be applied, resulting in pictures of the subsurface in terms of reflection amplitudes.



These images are used by geologists for interpretation. If the geologists determine that a particular area is likely to contain hydrocarbons, they use the information from the seismic data to estimate how much oil or gas might be present in the reservoir. Those estimates are later used to determine whether or not it is economically viable to drill a well.

Reservoir characterization

Reservoir characterization is a process that integrates information from different disciplines (geology, geophysics, petrophysics and petroleum engineering) and provides a more complete description of the physical properties of a reservoir. At this stage usually an inversion is applied to translate the seismic data (amplitude variation with source/receiver offset (AVO) data) into elastic reservoir properties. Several authors have investigated this problem in the past, with different views on which properties can be estimated. Dębski and Tarantola [1995] argued that the elastic properties should be expressed in terms of seismic P-wave (compressional wave) impedance, Poisson's ratio and mass density, or in terms of P-wave impedance, S-wave (shear wave) impedance and mass density. Veire and Landrø [2001] and Khare and Rape [2007] discussed the joint inversion of PP- (compressional to compressional) and PS- (compressional to shear) seismic data for the estimation of P-wave velocity, S-wave velocity and density. There is disagreement between authors on the reliability of the density estimates obtained from seismic data. Some claim that density obtained from PP/PS data will be unreliable on real seismic data (Jing and Rape [2004]); others claim that if seismic data with wide incidence angles are available, reliable density estimates can be obtained. In Gisolf [2009] it is argued, however, that a nonlinear inversion approach in which the full nonlinear data model is honoured will yield stable broadband density predictions.
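
The link between AVO data and these elastic properties is often illustrated with the linearized Aki–Richards approximation of the PP reflection coefficient (quoted here as general background; the nonlinear approach of Gisolf [2009] mentioned above does not rely on this linearization):

R_pp(θ) ≈ ½(1 + tan²θ) ∆V_p/V_p − 4(V_s/V_p)² sin²θ ∆V_s/V_s + ½(1 − 4(V_s/V_p)² sin²θ) ∆ρ/ρ ,

where θ is the incidence angle. The angle-dependent weighting of the three contrast terms shows why a wide range of incidence angles is needed before the density contribution can be separated from the velocity contrasts.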

When rock physics is used, the elastic parameters obtained from seismic data can be converted into reservoir properties like water saturation or porosity (Vernik et al. [2002], Kabir et al. [2005], Li et al. [2005]). However, it is difficult to differentiate between the effects of pressure and those of fluid saturation in a single seismic data set. Time-lapse seismic analysis, on the other hand, offers the opportunity to discriminate between those effects.
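
A common forward step in such a rock-physics link is Gassmann fluid substitution, which also appears in the history matching studies reviewed in Section 3.4. The sketch below is a generic illustration in Python, not the petroelastic model used in this thesis; all numerical values are assumed, for illustration only.

    import numpy as np

    def gassmann_saturated_bulk_modulus(k_dry, k_min, k_fl, phi):
        """Saturated-rock bulk modulus via Gassmann fluid substitution.
        k_dry: dry-rock bulk modulus [Pa], k_min: mineral bulk modulus [Pa],
        k_fl: pore-fluid bulk modulus [Pa], phi: porosity [-]."""
        num = (1.0 - k_dry / k_min) ** 2
        den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
        return k_dry + num / den

    def p_wave_velocity(k_sat, mu, rho):
        """P-wave velocity from bulk modulus, shear modulus and bulk density."""
        return np.sqrt((k_sat + 4.0 * mu / 3.0) / rho)

    # Illustrative brine-filled sandstone (all values assumed)
    phi = 0.25
    k_dry, k_min, mu = 9e9, 36e9, 7e9                   # moduli in Pa
    k_brine, rho_brine, rho_grain = 2.5e9, 1050.0, 2650.0
    k_sat = gassmann_saturated_bulk_modulus(k_dry, k_min, k_brine, phi)
    rho_bulk = (1.0 - phi) * rho_grain + phi * rho_brine
    print(p_wave_velocity(k_sat, mu, rho_bulk))         # approx. 3200 m/s

Note that the shear modulus is unaffected by the pore fluid in Gassmann's model, so fluid changes leave the S-wave velocity almost unchanged while shifting the P-wave velocity; this asymmetry is part of what makes the joint use of PP and PS data attractive.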

3.3 Time-lapse seismic data

Time-lapse seismic is the process of repeatedly acquiring seismic surveys over the same area (Bacon et al. [2003]). The basic idea is that the changes in the reservoir caused by production introduce changes in the seismic response over time. With 3D seismic data, structural information about the reservoir is obtained, in terms of identified faults, layer thicknesses or lateral extent. The dynamic information, however, is limited, because of the difficulty of separating pressure, saturation and temperature effects in a single seismic data set. Therefore, by carrying out a baseline survey before production begins and then comparing it to surveys repeated over the lifetime of the reservoir, one can obtain valuable information about the changes in the reservoir state. Fluid movements through the reservoir can be monitored and areas of unswept oil can be identified. As pointed out by Koster et al. [2000], time-lapse seismic contributes significantly to improved well placement and production strategies.

The major problem in time-lapse seismic is repeatability (Jack [1997], Bacon et al. [2003], Oldenziel [2003]). If one wants to compare travel times, amplitudes or seismic velocities between a baseline and a repeat survey, repeatability needs to be assured, because only then can the observed differences be related to changes in the dynamic properties. However, there are several possible causes for differences in the seismic response which obscure the comparison. One of them is ambient noise, which varies from one survey to another. It often happens that some of the areas where the baseline survey was shot are no longer accessible because, for example, production facilities have been installed. Also, near-surface conditions may vary from one survey to another, and the acquisition geometry may have changed compared with the baseline survey. To alleviate these problems, specialized processing steps are applied to the seismic data that enhance the true changes over the reservoir interval.

Time-lapse seismic data can be included in data assimilation schemes in different ways. One can use travel times and amplitudes, or elastic parameters obtained from inversion of the recorded signals. A brief review of the use of time-lapse seismic within a data assimilation framework is given in Section 3.4.

Inverted seismic data can appear in the form of time-lapse acoustic impedance data, time-lapse changes in shear modulus, time-lapse changes in density or time-lapse changes in the acoustic propagation velocity. There is still an open debate among researchers concerning the reliability of these attributes as estimated from seismic data. Nevertheless, when elastic parameters are estimated from seismic data, a petroelastic model is needed to translate those parameters into the dynamic reservoir states: pressures and saturations. One can even go further and use the changes in the state variables, pressures and saturations, obtained through inversion of time-lapse amplitude-versus-offset (AVO) data, as proposed by Landrø [2001]. If the inversion is embedded in a stochastic environment and the Bayesian framework is employed, pressure and saturation changes estimated from AVO data can be obtained together with the associated uncertainties in those estimates (Veire et al. [2006]). In this thesis, however, a different approach was taken. Based on the research conducted within the Delphi Consortium [2009], aiming at nonlinear, full-waveform seismic inversion for compressibility, shear modulus and density, it was assumed that time-lapse changes in the density would be available.

The bulk density for a particular vintage (assuming that only two phases, oil and water, are present) is given by:

ρ = φ[S_o ρ_o + S_w ρ_w] + (1 − φ)ρ_s ,   (3.3.1)

or, equivalently, after substituting S_o = 1 − S_w,

ρ = φ[ρ_o + S_w (ρ_w − ρ_o)] + (1 − φ)ρ_s ,   (3.3.2)



where φ is the porosity, ρ_o is the oil density, ρ_w is the water density and ρ_s is the sand grain density. S_o denotes the oil saturation and S_w the water saturation.

The bulk density is a function of pressure (through the phase densities), saturation and porosity. For the pressure changes considered, it is reasonable to assume that the phase densities remain practically unchanged (Landrø [2001]). Assuming that the porosity distribution does not change over time, the time-lapse changes in the density are dominated by the changes in water saturation. The time-lapse changes in the density can then be predicted from the simulated reservoir saturations by:

∆ρ = ρ² − ρ¹ = φ ∆S_w (ρ_w − ρ_o) ,   (3.3.3)

where ∆S_w = S_w² − S_w¹ and the superscripts 1 and 2 denote the two surveys.

Assuming that it is possible to derive good-quality density changes from time-lapse seismic, and assuming that the fluid and rock compressibilities are relatively small, as is often the case in undersaturated reservoirs, one should be able to identify the location of the saturation front as it moves over time. This information is important for two reasons. First, the saturation information gives a direct picture of the remaining oil and can therefore be used, for example, to plan the position of infill wells. Secondly, the shape and position of the saturation front contain information about the underlying permeability and porosity fields (and other parameters influencing the flow pattern). This information is indirect and needs to be inferred from the available measurements through inversion.

Figure 3.1 shows possible synthetic time-lapse density changes, obtained here by differencing two density distributions from different time steps, for different underlying permeability fields. In principle the density changes are available over the entire reservoir. In practice, however, there are areas showing no change, meaning that the water saturation front has not yet reached those locations. The idea is to restrict the use of the interpreted seismic measurements to those along the saturation front only, as we believe them to be the most informative ones.
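
A minimal sketch of this selection step, in Python, is given below; the grid size, porosity, fluid densities and noise threshold are assumed values, for illustration only, and the code is not the implementation used in this thesis.

    import numpy as np

    def density_change(phi, sw1, sw2, rho_w=1000.0, rho_o=800.0):
        """Time-lapse bulk-density change of eq. (3.3.3):
        delta_rho = phi * delta_Sw * (rho_w - rho_o), in kg/m^3."""
        return phi * (sw2 - sw1) * (rho_w - rho_o)

    # Illustrative example on a small 2D grid (21 x 21 cells)
    phi = np.full((21, 21), 0.2)      # uniform porosity (assumed)
    sw1 = np.full((21, 21), 0.2)      # connate water saturation at the baseline survey
    sw2 = sw1.copy()
    sw2[:, :8] = 0.7                  # waterflooded region at the repeat survey
    drho = density_change(phi, sw1, sw2)

    # Keep only cells where the change exceeds an assumed noise floor of 1 kg/m^3;
    # the edge of this region delineates the saturation front, and the selected
    # cells act as the interpreted seismic 'measurements'.
    front = np.abs(drho) > 1.0
    print(drho.max(), front.sum())    # peak change [kg/m^3], number of selected cells

With the values above the peak change is φ ∆S_w (ρ_w − ρ_o) = 0.2 · 0.5 · 200 = 20 kg/m³, which also indicates the order of magnitude of the signal that the seismic density inversion must be able to resolve.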

In the synthetic examples used in this thesis the saturation front is clearly visible. In reality, however, the location of the saturation front as inferred from time-lapse seismic is not always very sharp and special data analysis techniques may have to be applied to the data to enhance the visibility of the front.

3.4 History matching with time-lapse seismic data

There are several examples in the literature of the use of time-lapse seismic data combined with production data for reservoir characterization. Different forms of time-lapse seismic data have been assumed, and some of those choices are discussed here. In Landa and Horne [1997] the time-lapse changes in water saturation were assumed to be available. From the experiments they performed, the authors concluded that 4D seismic information does not assist much in reservoir characterization when considered alone. However, when combined with more traditional data such as production data, seismic data significantly improves the prediction of future reservoir performance.



Figure 3.1: The density changes obtained from the reservoir simulator for three different permeability fields. The first column shows the respective permeability fields. The second column shows the density changes at the saturation front ‘as seen by’ seismic. The third column shows the locations of possible seismic measurements.


Huang et al. [1997] and Dadashpour et al. [2007] used the time-lapse changes in the amplitudes. They minimized an objective function consisting of two terms: one which penalizes the misfit between observed and simulated production data, and another which penalizes the misfit between observed and simulated changes in the seismic amplitudes. The optimization was performed by adjusting the uncertain reservoir parameters, being porosity and permeability. In both cases a petroelastic model was employed, which relates reservoir properties such as pore space, pore fluid, fluid saturation, pressures and rock composition to seismic elastic parameters like P-wave and S-wave velocities, acoustic impedance, Poisson's ratio, or seismic amplitudes. Arenas et al. [2001] used the P-wave velocity in the history matching procedure. The predicted seismic data were obtained with the use of the Gassmann fluid replacement model (Gassmann [1951]). A large improvement in the estimate of the permeability field was achieved when production data and seismic data were combined, and the main heterogeneities in the true permeability field were recovered.

Several authors (Gosselin et al. [2003], Yamamoto et al. [2004], Dong and Oliver [2005]) used seismic data in the form of P-wave impedance data for permeability estimation. Guérillot and Pianelo [2000] used seismic and production data to estimate the impedance and permeability simultaneously. Skjervheim et al. [2005] and Leeuwenburgh et al. [2008] used the EnKF together with seismic observations to estimate unknown parameter fields in synthetic examples. They assumed that pressure, saturation, acoustic impedance or two-way travel times are available. They found a significant improvement in the permeability estimation compared to cases where only production data were used. However, this improvement deteriorated when higher seismic measurement errors were assumed. Trani et al. [2009] used vertically averaged seismic data in the form of time-lapse differences in pore pressure and saturation, and applied covariance localization techniques in the EnKF to fully utilize the important information carried by frequent 4D seismic surveys and to improve the quality of the history match and of the production forecast.
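
The covariance localization mentioned above can be summarized in a single update equation (a sketch in standard EnKF notation, assumed here rather than taken from Trani et al. [2009]):

xᵃ = xᶠ + (τ ∘ Cᶠ) Hᵀ [H (τ ∘ Cᶠ) Hᵀ + C_d]⁻¹ (d − H xᶠ) ,

where xᶠ and xᵃ are the forecast and analysis states, Cᶠ the ensemble covariance, H the measurement operator, C_d the measurement-error covariance, and τ a distance-based tapering matrix applied through the Schur (element-wise) product ∘. The taper suppresses the spurious long-range correlations that a finite ensemble produces, which becomes particularly important when many seismic data points are assimilated simultaneously.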

