• Nie Znaleziono Wyników

Linear multigrid methods for numerical reservoir simulation

N/A
N/A
Protected

Academic year: 2021

Share "Linear multigrid methods for numerical reservoir simulation"

Copied!
170
0
0

Pełen tekst

(1)
(2)

Td d<

u

' '*V

Linear Multigrid Methods

for

(3)

Linear Multigrid Methods

for

Numerical Reservoir Simulation

Proefschrift

ter verkrijging van de graad van doctor

aan de Technische Universiteit Delft.^.——-H^ op gezag van de Rector Magnificujsj^ ° ^

prof. dr. J.M. Dirken, /$/ "k? in het openbaar t e verdedigen P r o

en overstaan van eeiTcommissie < kollege van Dekanen d a a r t o e aangev op dinsdag 1 december 1987 te 16.0(Xuur

door

Robert Kettler

geboren te 's-Gravenhage, wiskundig ingenieur

TR diss

1591

(4)

Dit proefschrift is goedgekeurd door de promotor

prof. dr. ir. P. Wesseling

(5)

Stellingen

behorende bij het proefschrift

"Linear Multigrid Methods for Numerical Reservoir Simulation"

van

R. Kettler

1. De 'smoothing' factor van puntsgewijze Gauss-Seidel iteratie toegepast op modelprobleem (2.38e) uit dit proefschrift is \\fö indien e [ 0, en 0 indien

e = 0 (literatuur: [Ket82bj). Het verschil in de factor heeft te maken met het

wel significant zijn van Fouriercomponenten die buiten de convectierichting vallen in het geval van e [ 0, en het niet significant zijn van deze componenten in het geval van e — 0.

2. Uit de 'Assumptions' 1.5, 1.6 en 1.8 van dit proefschrift en de veronder­ stelling dat de functiewaarde van de constante functie niet wijzigt onder een roostertransitie-operator, volgt dat:

(a) het differentiemolecule-^[—1 2 —1] alleen invariant is onder grofrooster-correctie indien de restrictie-operator 'full weighting' of de prolongatie-operator lineaire interpolatie (en RQ = ^) is

(b) het difFerentiemolecule |[—1 1 0] alleen invariant is onder grofrooster-correctie (met RQ = ■§) in het geval van 'downstream' prolongatie of 'downstream' restrictie

(zie Hoofdstuk 3 van dit proefschrift voor terminologie). Derhalve bestaat er, onder de gemaakte veronderstellingen, geen vergelijkingsonafhankelijk tran-sitie-operatorenpaar dat zowel de moleculen van de centraal gediscretiseerde diffusievergelijking als die van de 'upwind' gediscretiseerde convectievergelij-king invariant houdt onder grofrooster-correctie.

3. De vergelijkingsafhankelijke transitie-operatoren zoals beschreven in Hoofd­ stuk 3 van dit proefschrift laten na toepassing van de Galerkin-approximatie-formule weliswaar de moleculen ^ [ — 1 2 — 1] en j;[ —1 1 0] invariant op de grove roosters, maar laten het gemengde convectie-diffusie discretisatiemo-lecule a\[— 1 1 0] + /3-jp\— 1 2 — l] (a,/? > 0) niet invariant. Invariantie voor het laatste geval wordt wel verkregen door een van de volgende transitie-operatorenparen toe te passen: (P'(C),R'(D)) of (P'(D),R'{C)), waarbij C en D respectievelijk het 'upwind' gediscretiseerde convectiedeel en het cen­ traal gediscretiseerde diffusiedeel van A voorstellen. (De elementen Cl;- en

Dij kunnen als volgt uit Aij berekend worden: C,j = 'J~ J'~2 ''~ (»' ^ j)\

(6)

4. Publikaties waarin de prestaties van numerieke iteratieve technieken wor­ den uitgemeten in hoeveelheden vectoroperaties zullen snel aan actualiteit inboeten.

5. Het snel en robuust kunnen oplossen van driedimensionale anisotrope dif­ fusievergelijkingen met een relatief zwakke diffusie in één (willekeurige) rich­ ting vormt een uitdaging voor de ontwerper van een lineaire multirooster-methode die voor praktische toepassing bedoeld is.

6. Het gebruik maken in reservoirsimulatoren van rechthoekige roosters waarbij de roosterlijnen in lengte kunnen variëren is weinig zinvol, vanwege de inhe­ rente ingewikkeldheid van de programmatuur en de voortdurende toename van de rekencapaciteit van computers.

7. Een 'user interface management' systeem dient de taak van de ontwerper van een (liefst gebruiksvriendelijke) 'user interface' te verlichten. De ge-, bruiksvriendelijkheid van bestaande 'user interface management' systemen laat echter meestal te wensen over.

8. Vectorisatie van de huidige circuitsimulatieprogramma's vergt een grote in­ spanning. Alleen al vanwege het feit dat er op dit moment relatief goedkope en efficiënte zogenaamde 'very large instruction word' machines op de markt verschijnen, is het twijfelachtig of deze inspanning de moeite waard is. 9. Nieuwe computerarchitecturen dwingen de numericus om algoritmen die als

inefficiënt bekend staan steeds opnieuw te herwaarderen met betrekking tot hun efficiëntie.

10. Het artikel "De meerwaarde van het doctoraat" van E. Sanders (Intermediair, jaargang 23, nummer 36, 1987) vergroot zowel de twijfel ten aanzien van het nut van promoveren voor de promovendus, als de overtuiging dat een diepgaand onderzoek naar het wel en wee van de (ex-)promovendus geschikt is voor een dissertatie.

11. Het is extraordinair elitair en exorbitant pedant om usantiële expressies zonder evocatieve necessiteit excessief te compliceren.

12. Invoering van lengteklassen bij sporten als volleybal en basketbal stelt de wat kleinere sporter in staat deze sporten 'op niveau' te kunnen beoefenen. 13. Het gebruik van vliegers in plaats van zeilen geeft de zich op wind voortbe­

wegende watersporter de mogelijkheid om zich ook in een bosrijke omgeving aangenaam en snel voort te bewegen.

14. Zowel voor wielrennen als voor zwemmen is plaatsing van de mond op het achterhoofd en de ogen boven op het hoofd handig.

(7)

Nevertheless, even today's situation is still unsatisfactory in several respects.

K. Stüben, U. Trottenberg [ST82]

(8)

Preface

The research for this thesis was carried out while the author was associated with the Numerical Group of the Department of Mathematics and Informatics, Delft University of Technology, Delft.

The project was initiated in co-operation with and sponsored by the General Re­ search Department (Mathematics Section) and the Recovery Processes Department (Reservoir Simulation Section) of the 'Koninklijke/Shell Exploratie en Produktie Laboratorium' (KSEPL), Rijswijk, The Netherlands. The author, as a member of the Mathematics Section, was able to use all KSEPL facilities (notably, the com­ puting facilities).

The author would like to thank Koos Meijerink and Neil Carmichael for their contributions to the work, Felix Jacobs and Neil Carmichael for proof-reading, Brian Felstone and Joanne Jenkins for editing, Agnes van der Linden for typing (and many other things!), Philips/Corporate ISA/CAD Centre for supplying typing facilities, KSEPL for financing the printing, and, last but not least, all the author's friends for their encouragement and support.

(9)
(10)

Contents

Preface ' VII

1 Introduction 1 1.1 General Introduction 1

1.1.1 Reservoir Simulation 1 1.1.2 Sparse Linear System Solvers; Multigrid Methods 1

1.1.3 Background to the Thesis 3

1.2 Terminology 3 1.2.1 Stationary Iteration 3 1.2.2 Smoothing . . . 4 1.2.3 Coarse-grid Correction 5 1.2.4 Two-grid Iteration . . 5 1.2.5 Multigrid Iteration 6 1.3 Starting Point and Outline 6

1.3.1 Context 7 1.3.2 Basic Choices 8 1.3.3 Further Design Considerations 10

1.3.4 Model Problems 11 1.3.5 Further Outline of the Thesis 12

2 S m o o t h i n g 13 2.1 Introduction and Notation 13

2.2 Basic Iterative Methods; Local Mode Analysis 14 2.2.1 Incomplete Factorization Schemes 14 2.2.2 Local Mode Analysis of Incomplete Factorization Schemes . . 17

2.2.3 Pattern Schemes 19 2.2.4 Local Mode Analysis of Pattern Schemes 19

2.3 Smoothing Behaviour of Basic Iterative Methods 20

2.3.1 One-dimensional Case 20 2.3.2 Two-dimensional Case 21

2.4 Concluding Remarks 32 IX

(11)

X CONTENTS

3 Coarse-grid C o r r e c t i o n 33 3.1 Introduction and Notation 33 3.2 One-dimensional Case 35

3.2.1 Galerkin Approximation 35 3.2.2 Conventional Level Transition Operators 36

3.2.3 Equation-dependent Level Transition Operators 36

3.2.4 Remaining Degrees of Freedom 38

3.3 Two-dimensional Case 39 3.3.1 Conventional Level Transition Operators 39

3.3.2 Equation-dependent Level Transition Operators 41

3.4 Concluding Remarks 46 4 M u l t i g r i d & C o n j u g a t e G r a d i e n t s 47

5 N u m e r i c a l R e s u l t s 49 5.1 Introduction 49

5.1.1 Notation 49 5.1.2 Iteration Work Estimates 49

5.1.3 Preliminary Work Estimates 51

5.2 Experiments 51 5.2.1 Introduction 51 5.2.2 Simple Diffusion Problems with Discontinuities 52

5.2.3 Stone's and Kershaw's Problems 55 5.2.4 Convection-diffusion Problems 62

5.2.5 Additional Results 67 5.3 Concluding Remarks 68 6 M e t h o d s E x t e n d e d t o T h r e e Dimensions a n d S y s t e m s 69

6.1 General Introduction 69 6.2 Methods Extended to Three Dimensions 70

6.2.1 Introductory Remarks 70

6.2.2 Smoothing 71 6.2.3 Coarse-grid Correction 76

6.2.4 Final Remarks 79 6.3 Methods Extended to Systems 80

6.4 Concluding Remarks 81 7 I m p l e m e n t a t i o n a n d A d d i t i o n a l N u m e r i c a l Results 83

7.1 Introduction 83 7.2 Implementation into a Linear Solution Package 83

7.2.1 Unmodified Package 83 7.2.2 Modified Package 85 7.3 Numerical Results 88 7.4 Concluding Remarks 91

(12)

CONTENTS XI

8 F i n a l Discussion 93

A p p e n d i c e s 95 A Reservoir S i m u l a t i o n E q u a t i o n s 97

A.l General Introduction 97 A.2 Fundamental Equations 98 A.3 Discrete Equations 102 B M u l t i g r i d M e t h o d s : a Simple E x a m p l e , 105 B.l Introduction 105 B.2 Smoothing 106 B.3 Coarse-grid Correction 108 B.4 Two-grid Iteration 109 C N e s t e d D i s c r e t i z a t i o n 111 D Algebraic M u l t i g r i d 113 D.l Introduction 113 D.2 Algebraic Aspects 113 D.3 Approximated Direct Multigrid Solver 115

E Local M o d e S m o o t h i n g Analysis: Brief I n t r o d u c t i o n 117

F I n c o m p l e t e Block F a c t o r i z a t i o n 119 F.l Introduction 119 F.2 Two-dimensional Case 120 F.3 Three-dimensional Case 122 G C o n t o u r P l o t s of R e d u c t i o n Factors of Fourier C o m p o n e n t s 125 List of F i g u r e s 143 List of Tables 144 B i b l i o g r a p h y 145 S u m m a r y 153 S a m e n v a t t i n g 155 A b o u t t h e A u t h o r 157

(13)
(14)

Chapter 1

Introduction

1.1 General Introduction

1.1.1 Reservoir Simulation

The adequate and efficient simulation of flow processes has always been of major importance to practitioners in the field of production of oil/gas reservoirs. With the development of large-scale computers, physical laboratory models gave way to mathematical ones, which are essentially more flexible. The latter models consist mainly of partial differential equations, which are solved by means of numerical methods for discretization and linearization. A computing-intensive part of a simu­ lation is the solution of sparse linear algebraic systems. This thesis investigates the applicability of a recently developed numerical technique, the multigrid method, to these large sparse linear systems that occur in contemporary reservoir simulation. A modern reservoir simulator must be able to cope with a variety of oil/gas flow problems. Simulators therefore consist of huge program packages, which are very expensive to develop. The main physical process inside a reservoir under production is the flow of initial reservoir and injection fluids through the porous rock formation. The governing equations are described in Appendix A, as well as their discretizations on rectangular grids, and the resulting sparse systems of linear algebraic equations. We recall that the solution of these systems is usually a computing-intensive part of a reservoir simulation, which itself may take hours of computing time on a present-day vector computer.

1.1.2 Sparse Linear System Solvers; Multigrid Methods

Sparse linear systems that result from finite difference or finite element approxi­ mations of partial differential equations can be solved either by direct or iterative methods. Direct methods (based on Gaussian elimination) are generally more ex­ pensive than iterative ones in terms of computing time and storage for large and significant simulations, i.e. if the number of gridpoints N is large. During the past decades numerical analysis made significant progress in the area of solving large

(15)

2 CHAPTER 1. INTRODUCTION

sparse linear systems that result from discretizations on rectangular grids. The present iterative solution methods are orders of magnitude faster than the conven­ tional direct ones, and they need considerably less storage (on the other hand, of course, iterative methods may suffer from divergence problems).

A class of iterative methods which currently plays an important role in reser­ voir simulation is formed by the CG (conjugate gradient) and related methods ((HS52), [Fle76], [Man77], [Vin76], [Axe80] [Son84])7 These solvers are often highly effective when used in combination with a suitable preconditioning technique. Well-known methods in this area are ICCG [MV77] and MICCG [Gus78], i.e. (modified) incomplete Choleski & conjugate gradients.

A relatively new class of iterative methods is that of MG (multigrid) methods, which essentially make use of a sequence of systems on increasingly coarse (or fine) grids. Though MG was previously introduced in [Fed62], [Fed64] and [Bak66], its actual efficiency and potential were first recognized by Brandt in [Bra73], [Bra76] and [Bra77j. Meanwhile, MG had been further developed in [Ast71], [Nic75], [Nic77j and [Hac76]. Although MG showed promise as a very suitable iteration scheme in adaptive (local) grid refinement, its main initial successes were in the field of the solution of systems of difference equations. Especially since [Bra77], [Nic77] and [Hac76], there has been a spectacular increase in practical and theoretical MG research (and literature, see [Bra]), of which a substantial amount treated MG mainly as a sparse linear system solver.

The strong point about MG is that it has proved to be asymptotically optimal (in several cases; see [Hac85] for a summary), that is, MG needs 0(N) computational work (and storage), whereas other well-known iterative methods such as ICCG need

0(Nn);r; > 1 computational work, for a fixed accuracy. Figure 1.1 qualitatively shows MG versus ICCG in terms of computational work W as a function of N. For large N, MG will always be faster, but the point of interest is the intersection point TV*. In practice, it has been verified that a well-designed MG method behaves basically as in Figure 1.1; for simple 2D elliptic problems N* appears to be roughly 1000 (see for example, [WS80]).

(16)

1.2. TERMINOLOGY 3

1.1.3 Background to the Thesis

Computational grids containing a thousand points are not unusual in present-day numerical reservoir simulation, and there is a tendency to much larger simulations, encouraged by increased computational capacity. Hence, it was felt in 1980, at the start of the investigations described here, that MG might be a valuable tool for numerical reservoir simulation. At that time, however, MG had only been effectively applied to uniform discretizations of simple 2D elliptic problems with rectangular and homogeneous domains, which are seldom encountered in realistic reservoir simulation (see Appendix A).

It was therefore decided to start a study to modify the conventional MG method in order to make it an efficient and reliable linear solver for a more general set of reservoir simulation systems.1 The study was jointly supported by Delft University

of Technology and KSEPL.

This thesis summarizes the results of the study. The emphasis is on the two-dimensional, single-variable (IMPES, see Appendix A) case. In addition, extensions of the methods to the three-dimensional and multivariable ('systems') cases are de­ scribed. Most of the results presented in this thesis have been previously published in [KM81], [Ket82b], [HKWZ83] and [KW86]), but some unpublished results have been included.

1.2 Terminology

The following terminology is used when discussing linear system solvers.

1.2.1 Stationary Iteration

Let the (non-singular) linear system to be solved be denoted as:2

Lhuh = fh {uh,fh€Uh; Lh:Uh->Uh), (1.1) where U^ is the set of real functions on the computational grid 0/, C Md; h denotes a

discretization parameter; d denotes the number of dimensions. Stationary iteration (i.e. a special case of defect correction; see for example, [Ste78j) to approximate the solution of (1.1) is given by

unh+1=u"h + Kh(fh-Lhunh), n€JN, (1.2) where

K^ = Lh = Lh + Fh (1.3)

'There were also practical reasons for restricting MG to the role of a linear solver since this would make it easier to interface with existing simulators.

(17)

4 CHAPTER 1. INTRODUCTION

is a (non-singular) linear approximation of L*. For the defects (or residuals) d% =

fh — Lftujj and the errors e£ = u/, — u£, we have

Lhenh = dnh, (1.4)

and we can write:

dnh+1 = Dhdnh; Dh = Ih- LhKh = FhKh, (1.5)

enh+l = Ehenh; Eh = Ih- KhLh = KhFh, (1.6) where lh denotes the identity operator. The spectral radius p*{Eh) (= p*{Dh))

is often used as a measure of convergence of the iterative process; clearly, {u£} converges to uh if p*(Eh) < 1. Eh will be called the iteration matrix of the iteration process under consideration.

Henceforth, Lh is an approximation of Lh such that the right-hand side of (1.2) is relatively easy to compute (note that the special case using Lh = Lh represents direct solution). The following subsections present some examples of stationary iter­ ation, namely, basic iteration/smoothing, coarse-grid correction, two-grid iteration, and multigrid iteration.

1.2.2 Smoothing

This subsection gives a few examples of basic iteration/smoothing. The corre­ sponding iteration matrix will be called Sh- The approximations are usually based on incomplete factorizations, see also Subsection 2.2.1.

For example, by splitting the matrix Lh into its strictly lower triangular, diag­ onal, and strictly upper triangular parts as follows:

Lh = LI + L°h + L+, (1.7)

one may choose

Lh=-L0h; Sh = Ih-v(L0h)~1Lh (1.8)

v

or

Lh = -L°h + Lh; Sh = Ih-v(Lh + vLh)-lLh (1.9)

v

(v€lR\{Q}), thus obtaining the familiar J;v ('damped'Jacobi) and GS;u ('damped'

Gauss-Seidel, or SOR: successive over-relaxation) methods, respectively. These 'basic' iterative methods (of computational complexity 0(N'1);T) > 1 if p*{Sh) < 1,

see (Var62j) can be very efficient 'smoothing' methods in the multigrid context, that is, they may reduce the error components of relatively short wavelength efficiently (see for example, [Bra77] and Appendix B).

(18)

1.2. TERMINOLOGY 5

1.2.3 Coarse-grid Correction

Let us introduce a 'coarse' grid QH; H>h, its set of gridfunctions UH, a restriction (averaging) operator Rnh '■ Uh —* UH, a prolongation (interpolation) operator PhH :

UH —+ Uh, and LH, an 0 # approximation of Lh, and select:

Kh = PhHLff Rfih- (110)

The result is a 'coarse-grid correction' method. The corresponding iteration matrix will be called Ch, and is given by:

Ch = Ih~ PhHLH RffhLh- (111)

Iteration (1.2) involves the solution of

LHuH = !H {/H = RHhdl), (1.12) which is an QH approximation of (1.4). Although this coarse-grid correction method

with iteration matrix Ch usually does not supply the desired approximate solution to ( l . l ) , it may very well serve to annihilate the smooth error components (see for example, [Bra77] and Appendix B).

1.2.4 Two-grid Iteration

This subsection deals with a combination of smoothing and coarse-grid correction, known as two-grid iteration.

The following quasi-ALGOL procedure TQ describes one iteration of a general two-grid method with rx pre-smoothings (before coarse-grid correction) and r2

post-smoothings (after coarse-grid correction):

procedure TQ{v,h); begin for i := l ( l ) r i do üh :~ üh + K^(fk - Lhüh)\ (pre-smoothing) JH ■= RHh(fh - Lhüh); j UH := L~^fn\ > (coarse-grid correction) üh ■■= üh + PhHUH; J for i := l ( l ) r2 do üh := üh + K%(fh - Lhüh); (post-smoothing) end;

Here, üh denotes the approximation of U/,, and the superscript S refers to smoothing. The iteration matrix of the two-grid method is given by

(19)

6 CHAPTER 1. INTRODUCTION

1.2.5 Multigrid Iteration

Instead of solving (1.12) directly, the above two-level iteration is repeated in order to solve this equation approximately (by introducing a coarser grid than Ü#). By continuing this process on increasingly coarse grids a recursive multigrid algorithm is obtained. Let us use the level index I (/ = 1(1) A, from fine to coarse) as a subscript for grids, operators and functions (with e.g. Qj = Hfci). Multigrid iteration is

described by the following procedure H.Q: p r o c e d u r e >(<?(/,£/, ƒ!);

if / = A t h e n ttj := LJ1 fi else (direct solution on level A) begin

for t := l(l)ri do ü| := «j + K^(fi — Ljtii); (pre-smoothing)

fi+i '■— Ri+i,i{fi — Liüt); 1

teiT=°i(l)edo>45(/ + l,fi,

+

i,/,

+

i); (

COarse

"

grid c

°"

e c t i o n

)

«/ := «I + fl,j+i«j+ii J

for i := 1(1)T2 do üj := üj + K?(fi — Ljüj); (post-smoothing) end;

which makes use of the customary zero initial iterands. In the algorithm, f denotes the number of recursive M§ applications; £, TX and r2 may actually depend on /;

£ = 1,2 supply the so-called V(TI,TZ) cycle and W(ri,r2) cycle, respectively. The multigrid iteration matrix can be found recursively from:

A^A-l — ^A-l(^A-l ~ Ph-l,\L^ Rh,K-lL\-.i)Sj?_x,

Mi = S?(It - P,,,+1(/,+1 - Mf+1)L;+\Rl+uL,)Sr, I = A - 2 ( - l ) l ; (1.14)

Eh = M „

see for example, [ST82]. For special cases it has been proved that the method is of computational complexity 0 (N) for a fixed accuracy. The theory is summarized and unified in [Hac85].

The basic multigrid concepts are illustrated in greater detail with reference to a simple ID Poisson example in Appendix B. General introductions to multigrid concepts are given in [Bra82b), [Hac83], [Hac85], [Hem8l], [ST82] and [Wes82b[, where especially [Bra82b] and [Hac85] give a broader treatment of MG, for example, as a solution method for non-linear partial differential equations.

1.3 Starting Point and Outline

This section describes the context, basic choices, design considerations and model problems for the multigrid method to be developed. It also outlines the remainder of this thesis.

(20)

1.3. STARTING POINT AND OUTLINE 7

1.3.1 Context

We recall that our aim is to develop an efficient and reliable multigrid solution method for a large set of linear systems that occur in reservoir simulation. Let us first state the usual 'linear solver' constraints:

A s s u m p t i o n 1.1 The solver is autonomous ('black box').

(This means that the user does not need to and cannot adjust the solution method to a particular problem; e.g. there are no parameters to be set.)

A s s u m p t i o n 1.2 The only input required by the solver is the (algebraic) matrix

problem (i.e. Lh and ƒ/, of equation (1.1)), and a tolerance.

Assumptions 1.1 and 1.2 imply that the user perceives the solver just like any other standard linear algebra subroutine. In addition, we state:

A s s u m p t i o n 1.3 The matrix problem arises in present-day numerical reservoir

simulation.

This, apparently obvious, statement is meant to explicitly restrict the application area of the solver, that is, the multigrid method does not have to be able to solve general matrix problems, e.g. problems of which the matrix has an irregular sparsity structure. In particular, Assumption 1.3 implies that equation (1.1) can be assumed to be a discretization on a rectangular grid of a (continuous) linear differential equation

Lu = ƒ (u, ƒ EU; L:U ->£/), (1.15)

with U an appropriate space of real functions. Equations (1.1) and (1.15) are thus expected to reflect 'reservoir' features, such as strong anisotropies and inhomo-geneities, mixed convection-diffusion, and a general geometry (see also Appendix A). Because of Assumption 1.3 together with (1.15) the multigrid method to be developed is called 'linear', see for example, [Hac85].

From the preceding section it follows that one has to select the following items for a multigrid linear solver: n;+ 1, if,5, iZf+1(, P( ( + 1, Lj+1(/ = l(l)A — 1) and the MG

schedule (TI,T2, £)• Conventional choices described in the literature are:

• uniform coarsening by a factor of 2 (i.e. ili+i C fii;/i'+1 = 2/i'),

• Gauss-Seidel or damped-Jacobi basic iteration, • linear interpolation (for Pi,i+i and R?+ii)i

• Li+i is obtained by straightforward (finite difference) discretization of L. In its 2D form, this method was already suggested in [Bra77] and since then often applied sucessfully, so that the method may be called 'classical'. Indeed, analyses such as those given in Appendix B, and numerical experiments, show that the method is suitable for various convection-diffusion equations with smoothly varying

(21)

8 CHAPTER 1. INTRODUCTION

coefficients (see for example, [Bra77], [Bra82a], [FW81], [Hac78], [ST82], [WS80], [Wes82aj and [Wes82b]). However, this classical method degrades in performance for other problems, notably for those with features such as strong anisotropies, strong inhomogeneities, dominant convection and a general geometry, which occur in reservoir simulation (see [ABDP81], [KM81] and the following chapters). Hence, the method must be modified (or redesigned) to enlarge its range of applicability. (Perhaps superfluously, we remark that the design of a suitable multigrid method is not trivial because of the large number of degrees of freedom.)

Let us start with the following general notion: the objective of coarsening (i.e., 'approximating finer grid systems on a coarser grid') strongly resembles that of discretization (i.e., 'approximating continuous problems on grids'). This implies, among other things, that if both (1.15) and the discretization procedure had been available, they might also have been utilized for coarsening (which leads to the con­ cept of 'nested discretizations', see Appendix C, or [Bra82b], Section 11: 'Coars­ ening Guided by Discretization'). However, according to Assumption 1.2, neither (1.15) nor the discretization procedure is available, so that this road is blocked. We must therefore follow another trail.

1.3.2 Basic Choices

Let us assume for simplicity:

A s s u m p t i o n 1.4 The solver is not adaptive.

In other words, the solution method does not adapt itself to the grid or to the solution as it gradually becomes available. Although this assumption may turn out to be too restrictive in practice, it diminishes the number of degrees of freedom. Notably, under Assumption 1.4 there is no need to apply different procedures on dif­ ferent levels (as the solver does not have any criteria to enable it to choose between them). We can therefore restrict ourselves to the choice of fi#,K%,Rjfh,PhH>Lu and (TX,T-I, £) in order to obtain a robust method.3

The coarse grid UH is selected as follows:

A s s u m p t i o n 1.5 fi^ is obtained by standard coarsening from Clh, »-e.

fin C nh; H = 2/i. (1.16)

It follows that:

n, = { ( z

1

, . . .

>

z

( f

) e n u r | z y = t

y

2

|

-

1

fc

>

i> e z, j = i(i)d}, i =

I ( I ) A ,

(1.17)

with 0 the physical domain and T the physical boundary. See Figure 1.2 for the two-dimensional case.

3Henceforth, the subscripts h and H are frequently omitted, while A = Lh and B = Ly/ are

(22)

1.3. STARTING POINT AND OUTLINE 9

Figure 1.2: T h e grids fij, and fi#

NB We wish t o m a i n t a i n the rectangular s t r u c t u r e on coarser levels. In general, this is achieved for all coarsenings with ^ G M\1R~; h = (hj); H = (Hy);

j — \(\)d. B u t -^- $. IN is too complex, -^ ^ ^ (semi-coarsening; suitable, for

e x a m p l e , for anisotropic problems) antagonizes Assumption 1.4, and 1 ^ 2 p r o d u c e s a relatively expensive m e t h o d (see [Bra82bJ; the total n u m b e r of coarse p o i n t s is approximately N/((^)d — 1))).

F u r t h e r m o r e we assume:

A s s u m p t i o n 1.6 LH is the so-called 'Galerkin approximation', i.e.

LH = RHhLhPhH. (1.18)

NB T h i s formula represents a suitable 'discretization' procedure which fits well with A s s u m p t i o n s 1.1 and 1.2, see also Appendix C. However it involves a d d i t i o n a l preliminary work before the actual iteration.

A s s u m p t i o n 1.7 RHK is the transpose (or adjoint in a certain inner product) of

PhH, >-e.

RHH = PÏH, (1.19) at least for symmetric problems.

(This is t h e n a t u r a l assumption in finite element methods.)

A s s u m p t i o n 1.8 RH^ and PhH have stencils that do not exceed the (usual) 3d-point

stencil of

Lh-(This is to preserve the sparsity of LH, as c o m p u t e d by (1.18).)

A s s u m p t i o n s 1.5 to 1.8 occur t h r o u g h o u t t h e literature on 'geometric' linear multigrid solvers. In this respect, we mention t h e p o r t a b l e geometric multigrid

(23)

10 CHAPTER 1. INTRODUCTION

codes BOXMG (cf. [Den82], [Den83], [Den86]) and MGD (cf. [Wes82a], [HKWZ83], (HWZ84), [HZ85]).

Some of these assumptions (notably, (1.18) and (1.19)) are also made in AMG (algebraic multigrid, cf. [BMR84], [Stu83], [RS85]), see Appendix D. This method is a general sparse matrix technique in the sense that it does not require a certain (regular) sparsity pattern. The method deliberately does not take advantage of geometric and physical properties (it omits Assumption 1.3). For our applications, however, the price to be paid for this generality is the loss of the regular sparsity pattern on the coarser levels. Moreover, until now AMG has been not as efficient as the geometric multigrid methods for solving our type of problem (see [RS85]). We have not pursued this trail.

1.3.3 Further Design Considerations

In order to obtain a robust and efficient multigrid linear system solver, it remains to select suitable K%, P^H (and RHH) and (ri,r2, £).

The Fourier analysis given in Appendix B is obviously too restrictive in its application (it cannot handle the special reservoir features). Hence, we are led by more intuitive notions (cf. [Bra82b]).

First, the roles of smoothing and coarse-grid correction are distinguished: • Smoothing should reduce the components with a (relatively) short wavelength

(a local process).

• Coarse-grid correction should reduce the components with a (relatively) long wavelength (a global process).

Smoothing can then be analysed further by means of local Fourier (mode) anal­ ysis, cf. [Bra77]; see also Appendix E. This type of analysis assumes 'frozen' (con­ stant) coefficients and excludes the boundary conditions. In this way the smoothing properties of basic iterative methods are compared in Chapters 2 and 6.

Coarse-grid correction is more difficult to analyse using Fourier methods (due to its global character). In Chapters 3 and 6 prolongations and restrictions are selected such that, under Assumption 1.6, 'adequate' approximations are obtained, i.e. approximations which preserve diagonal dominance and consistency to a high degree.

In this way, the analyses of smoothing (K%) and coarse-grid correction {P^H and

RHH) are separated. Practical experience allows us to choose the schedule (ri,r2, £).

The choices are evaluated further using numerical experiments with the complete multigrid algorithm.

(24)

1.3. STARTING POINT AND OUTLINE 11

1.3.4 Model Problems

In the analyses and comparisons we make use of special occurrences of the following general linear second-order elliptic partial differential equation:

- V . D ( I ) V U ( I ) + ( « ( I ) . V ) U ( I ) + C ( I ) U ( I ) = / ( i ) , xGÜC]Rd, (1.20) where D = (Dij){i,j — l(l)d) is symmetric positive definite, v. = (u,-)(» = \(l)d), c is non-negative, Q is bounded and d € {1,2,3}. Equation (1-20) may have the following properties:

• a general geometry (e.g. non-rectangular shape) of the domain Q; • strong inhomogeneities, i.e. strong discontinuities in D;

• strong anisotropies (for d > 1), so that the eigenvalues of D differ strongly in

magnitude.

Equation (1.20) is completed by appropriate Dirichlet, Neumann or mixed condi­ tions along the boundary T.

The discretizations on the rectangular grid fi/, may vary, but in any case we assume:

• Discontinuities in D occur only at grid lines; boundaries are situated along portions of grid lines.

• The difference stencil is contained in the usual 3d-point stencil.

• The discretization of the even-order derivatives is consistent of at least order 2; the discretization of the odd-order derivatives is consistent of at least order 1. Furthermore, we try to satisfy:

• V«: AtjKO (i±j); 5 > y > 0, (1.21)

j

where the Aij (i,j € {1, ...,./V}) denote the elements of A = Lh- Condition (1.21) guarantees that A is semi-diagonally dominant. Together with non-singularity and irreducibility, condition (1-21) makes A an M-matrix for which the convergence of some basic iterative methods can be proved (see also [Var62]). It turns out that one cannot always satisfy condition (1.21) for a fixed grid fih, see Subsection 2.3.2.

It is assumed that the model problems will be representative of a large set of reservoir simulation systems, notably IMPES discretizations (see Appendix A).

(25)

12 CHAPTER 1. INTRODUCTION

1.3.5 Further Outline of the Thesis

The thesis proceeds as follows. Smoothing and coarse-grid correction (notably, the level transition operators) are selected for 2D problems in Chapters 2 and 3, respectively. The reliability of the multigrid solver is further improved by its combination with CG-type methods in Chapter 4. Chapter 5 selects the multigrid schedule and presents 2D numerical results. The multigrid methods are extended to 3D and systems in Chapter 6. Chapter 7 describes the actual implementation (for reservoir simulation) of the methods, and some additional numerical results. Chapter 8 contains a final discussion.

(26)

C h a p t e r 2

Smoothing

2.1 Introduction and Notation

In this chapter, we search for robust and efficient smoothing methods for a large class of (single-variable) 2D problems. Section 2.2 contains a description of basic iterative methods and their local mode analysis. The smoothing behaviour of some of these methods is discussed in Section 2.3. Concluding remarks are given in Section 2.4.

Let us introduce the convenient notation of difference stars (molecules/stencils). Therefore, the system (1.1), which is denoted here by

Ay = b, (2.1)

is rewritten as

52 A„{x)y[x + voh) = b(x), xeüh, (2.2)

ueWA

where W C 2Zd is the stencil (of possible non-zeros) of A, Av(x) is the (matrix) connection between the points x and x + i/oh, and uoh = (fi/ii,..., udhj).

The difference star (or molecule) corresponding to A, denoted by A*, is the array

\AV\\ v € WA, for example: A* A0,l A-lfl ^ 0 , 0 -^1,0 AQ-I (2.3)

for d = 2 and WA = { ( 0 , - 1 ) , ( - 1 , 0 ) , (0,0), (1,0), (0,1)}, the usual five-point sten­ cil. It follows that a product (YX)* = Y* * X* (X and Y difference operators) can be calculated straightforwardly as follows:

[YX)u(x) ~52YA^)^-x{x + \oh), (2.4) A

where only A € WY and v — A 6 Wx contribute. 13

(27)

14 CHAPTER 2. SMOOTHING

Henceforth, we shall often use the well-known three, five, seven and nine-point stencils depicted in Figure 2.1. The stencils will be referred to as WJ, where j represents the number of points in the stencil.

x2 *l W\ - * - < ► - * IK * H ^ *

-Wi

.1.

wl

• : point of stencil

drawn lines: axes dashed lines: sides

of 2 x 2 square

Figure 2.1: ID and 2D stencils

2.2 Basic Iterative M e t h o d s ; Local M o d e Analy­

sis

The following subsections discuss incomplete factorization schemes, local mode anal­ ysis of incomplete factorization schemes, pattern schemes, and local mode analysis of pattern schemes, respectively.

2.2.1 Incomplete Factorization Schemes

Let the stationary iteration formula (1.2) be denoted by

yn+l=yn + A-1{b-Ayn), n £ JV. We consider the set of methods for which

A = {L + D)D'\D + Ü)=A + F,

(2.5)

(2.6)

where L, D and 0 have prescribed disjoint stencils W , W and W , respectively. Formula (2.6) represents an incomplete 'LDU' factorization of A. The formula enables a convenient classification of various basic iterative methods, including the well-known J and GS schemes (see Subsection 1.2.2).

Let us introduce

T = L + D + Ü, (2.7)

so that

W? = WL\JW®uWU. (2.8)

It is usually assumed that T is constructed such that

(28)

2.2. BASIC ITERATIVE METHODS; LOCAL MODE ANALYSIS 15

where is an operator which sets all elements outside W^ to zero (the other elements are not affected). Equation (2.9) implies that the factorization is exact on

W . Let us expand equation (2.6) as follows:

A = L+D + Ü + LD~lÜ = f + LD~lÜ = A + F. (2.10) It turns out that the elements of L, D and D can be successively computed by:

T„(x) := i 4 „ ( i ) - ^ IA( i ) ( D "1)M( i + Ao/i)&1,_A_M(H-Aofe + /io/i), v 6 WT'. (2.11)

Special cases of incomplete factorization schemes are: • J - (point) Jacobi,

• LJt - line Jacobi, with lines in the 1,-direction,

• GSFt' - (point) Gauss-Seidel forwards in the z.-direction, • GSBi' - (point) Gauss-Seidel backwards in the x.-direction, • L G S F J - Hne Gauss-Seidel forwards, with lines in the xt-direction,

• LGSBt - line Gauss-Seidel backwards, with lines in the i,-direction, • ILU(.,.) - incomplete LU factorization; cf. [MV81],

• IBLU/ILLU - incomplete block/line LU factorization; see Appendix F.

For several schemes the stencils W , W , W and WF are depicted in Table 2.1, for WA - W\, W*, IKg, respectively.

In the methods mentioned above, the factorization (2.6) is relatively easy to perform. Notably, for Jacobi and Gauss-Seidel methods one has:

T = $T{A). (2.12)

For I(B)LU methods one must calculate (2.11) in the point (or line) numbering order (from - too to + in Table 2.1). For IBLU one has:

L = %L{A), Ü = $Ü(A); (2.13)

see also Appendix F. We remark that the basic iterative schemes with damping parameter v (see Section 1.2.2) arise from (2.6) with A and F replaced by

Au=A + l—^tD{A), Fv =F-^-$D{A), (2.14) respectively, so that $V[F) ^ 0 for v ^ 1.

(29)

16 CHAPTER 2. SMOOTHING J LJ1 LJ2 GSF1 GSB1 LGSF1 ILU(1,1) ILU(1,2) ILU(1,3) ILU(2,4) IBLU

v\

■k ■k -k k ■k • • o • • • o o o • o • o * o • - o • • o -• • o o o • + - o + — • • + + - o + • * - o + * • • • + + + + • — o + + » • •

+

« • • • o o o * * * *

-w]

■k * k -k -k ■k -k • • • o • • • • • o o o • • • o • o * o • • • - o « • o -• -• • • o o o • + - o + — • • + + - o + • • - o + » • • • + + + + • — o + + » • •

++

« • • • o o o » » » »

-wl

k k * * k k k k k • • • • o » • • • • • • o o o • • • • o » • o * • o » • • • — o » • o — • • • • • • o o o • + + - o + • • -o-j-» • • • • — o + + » • • •

+++

• « • • • o o o * * * »

-Table 2.1: Stencils WL (denoted by - ) , WD (denoted by o), WU (denoted by +) and WF (denoted by •) of various 2D factorization schemes

(30)

2.2. BASIC ITERATIVE METHODS; LOCAL MODE ANALYSIS 17 The iteration formula (2.5) can be denoted as follows:

d" :=b-Ayn; 6yn := A^d"; yn+1 := yn + 6yn, n e IN, (2.15) where, according to (2.6), the second step actually embodies:

zn := ( I + D)~1dn- 6yn:=(I + D~1Ü)-1zn. (2.16) Consequently, one smoothing iteration basically consists of a residual calculation

and the solution of two (block-)triangular systems. However, for special cases a great deal of computational work can be saved, for example, by applying:

yn+1:=A-1(Fyn + b), (2.17) if F is sparser than A. Convergence of incomplete factorization methods has been

proved in several cases, notably if condition (1.21) is satisfied (see for example, [Var62], [You7l], [MV77], [Vor82a], [Mei83]).

2.2.2 Local Mode Analysis of Incomplete Factorization Sche­

mes

Local Fourier analysis proceeds assuming constant coefficients, see Appendix E. Assuming constant Av, one computes T„ = Lv+Dv + Ü„ from the following constant coefficient version of (2.11):

TV:=AV- £ Lx{D~l)sÜ„, v e WT. (2.18)

For Jacobi and Gauss-Seidel methods one directly obtains Tv — Av, but for I(B)LU methods one has to perform several (sometimes even more than 1000) iterations of (2.18) to obtain the stable real solution T„ (assuming that condition (1-21) is satisfied). As an initial iterand one can select Tv — 0.

Let us recall from Section 1.2 that the iteration matrix S is given by:

S = I - A'1 A = A_ 1F . (2.19)

It follows (cf. Appendices E and B) that the reduction factors ph,u = |5(w)|, w G $h can be calculated from:

Ph,w — | 1 - - j 1 - I-« 1, A[w) - — -x x x , (Z.Oi)

A(u) A(u) (L[u) + D{u)){D{u) + U{u))

with

(31)

18 CHAPTER 2. SMOOTHING according to ( E . l l ) . The smoothing factors can then be computed from

Ph - sup ph,w;

u,e*t

sup ph.

h (2.22)

As an example, let us consider J;u iteration applied to the ID Poisson equation of Appendix B, discretized on an infinite grid. It follows that WA = W\, A* = £ [ - 1 2 - 1], V1 = WÜ = 0, WD = {0}, D" = [ ^ ] , A(w) - &(1 - c o s^ ) >

D = ;k-, so that:

Ph.ui = | 1 - V + V COS w / l | , (2.23)

(see equation (B.9), where w = T ( 7 T ) | — 7r). Figure 2.2 depicts ph u for J ; | and

J;l. Note the resemblance of this figure with Figure B.1 (for J ; | ) . With fttf C 0/,;

H = 2/i as usual, one obtains $£ = ( - ^ , ^ 1 and $^ = ( - f , - ^ ] U ( ^ , £ ] as

depicted in Figure 2.2, so that p = ph — ~ for J;^ and p = pn = 1 for J;l.

t

P/l,W

r

• • . ^ - f ^ n

• J*^"^* y^ . 1 • X*^ • * ^ ^ ' ' 0 — " = - f

r~-r^

• ^ " N ^ •

* r \

• ^

.*

S ss ^ *

J;l

• ^ s > • • ^ N ^# 1 ^ " * ~ — —ir h — 7T 2h

n

*2

2/i

n

Figure 2.2: Smoothing behaviour of J;* and J;l for the infinitely discretized ID Poisson equation

If one application of (2.5) consists of two basic iteration steps (with superscripts 1 and 2, respectively), one obtains:

5 — 5 5 ; PhlU — Ph.wPh.w (2.24) but in general ph ^ p\p\ and p ^ plp2, unless 52 = 51. Examples of such 'com­

pound' methods are:

• GSFiBj - GSFt' followed by GSBj, • LGSFi'Bj - LGSFi followed by LGSBj, • LGSFi'Bj'FA; - LGSFtBj' followed by LGSFfc.

(32)

2.2. BASIC ITERATIVE METHODS; LOCAL MODE ANALYSIS 19

2.2.3 P a t t e r n Schemes

Another example of basic iteration is that of pattern iteration. Pattern schemes have been used as smoothers throughout the multigrid literature (see for example, [ST82]).

The grid fi/, is split into S disjoint subgrids fi£;x — 1(1)-- One iteration consists of 5 partial steps. Each step consists of the application of an incomplete factorization scheme (e.g., J or GS) to the system on the fl£ subgrid. One thus obtains:

n

h

= |Jn*; s = ns*, (

2

-

25

)

X X

where Sx denotes the xth partial iteration step. Examples are checkerboard and zebra iteration. Their subgrids are depicted in Figure 2.3.

x • x • x x x x x x x • x • x • x • x • • • • • ' • x • x • x x • x • x x x x x x x • x • x • x • x • • • • • • x • x • x x • x • x x x x x x x • x • x checkerboard pattern zebra-Xi pattern zebra-12 pattern

Figure 2.3: Checkerboard and zebra patterns (• 6 ft\, x G fi£) The following abbreviations are used for pattern iterative methods: • CJ - checkerboard pattern with partial J iteration

(also called RB, i.e. red-black iteration), • Zt'LJj - zebra-i,- pattern with partial LJj iteration, • HZ = Z1LJ1 - horizontal zebra iteration,

• VZ = Z2LJ2 - vertical zebra iteration, • AZ = HZ.VZ - alternating zebra iteration.

2.2.4 Local Mode Analysis of P a t t e r n Schemes

In local mode analysis of pattern schemes, constant coefficients and infinite grids are again assumed. However, because of the partial steps, several components are coupled (as in Appendix B for the coarse-grid correction), so that 5fc(w);w € $/,

are 5 x 5 matrices; the smoothing factors ph and p are then obtained from a straightforward generalization of (E.13). It follows that the order of the partial steps does not influence ph and p. On the other hand, if one applies r pattern iterations before the coarse-grid correction, this causes

(33)

20 CHAPTER 2. SMOOTHING

unlike non-pattern iteration. Large values of r appear to have an adverse effect on the smoothing efficiency of pattern methods, see [ST82]. For more details of pattern schemes and their local mode analysis we refer to [ST82].

2.3 Smoothing Behaviour of Basic Iterative Meth­

ods

In this section we examine the smoothing behaviour of several of the basic iterative schemes introduced in Section 2.2 for the model problem (1.20). For the sake of local mode analysis constant coefficients are assumed, so that equation (1-20) yields:

- V.DVu{x) + (v.V)u(x) +cu(x) = f{x), xeRd. (2.27) Aspects of variation of the coefficients and (general) geometries are thus neglected,

but these are local properties which, it is assumed, do not strongly influence the smoothing behaviour (cf. [Bra77], [Bra82b]). The following two subsections discuss the ID and 2D cases, respectively.

2.3.1 One-dimensional Case

In accordance with the assumptions made in Subsection 1.3.4 (especially (1.21)), one can select for discretization:

dx* h*

[1 u

dx \ i[o - l i] i f „ < o '

1 l

°

1 0 1 , [

*-

a

>

i.e., central differences for j ^ and I, and upwind differences for j ^ . In any case, one obtains the following molecule:

A* = [-a, a + b + d, - 6 ] ; a , 6 , d > 0 , (2.29)

thus satisfying condition (1.21). For J;v iteration one finds:

'*-

= !1

~

v + v[

^TTd

)c +

'

w{

^TbTd

)s

\>

(2

'

30)

with c = cosuh and s = sinw/i. It follows that Ph,u(rf = 0) > Ph,u(d > 0), so that:

Ph,u, < | l - v + t>c + i v ( — ^ ) s | . (2.31) a + b

The 'worst' cases are a = 0 and b = 0, so that:

(34)

2.3. SMOOTHING BEHAVIOUR OF BASIC ITERATIVE METHODS 21

For v = j , the right-hand side of this inequality equals J\ + \c, so that p < | \ / 2 . Consequently, J ; j is a smoother (though rather inefficient) for (2.29).

Another basic iterative method is GSF for which one finds:

>> b , ,

Phw = i ; ; - r < r 1 •-.■ (2-33) \a + o+ a — ac + ois| \a + o — ac + ais\

It follows that phiU = Oi(b = OAa + d>0 (GSF is then exact), and Ph,u = 1 if a = rf = O A 6 > 0 . (We see that the incomplete factorization should preferably include the predominant connections.) Moreover, one obtains p = g\/5 for a = b > 0 A d = 0 (Poisson). It can easily be verified that GSFB is a robust smoother for (2.29), with p < | \ / 5 , which is more efficient than two applications of J;^.

Of course, the above ID example is only of academic interest, since solution of tri-diagonal systems can straightforwardly be applied. It is, however, illustrative of the more general (2D) case, as will become clear in the following subsection.

2.3.2 T w o - d i m e n s i o n a l Case

In this subsection we discuss discretization and smoothing of the general 2D elliptic partial differential equation (1.20).

D i s c r e t i z a t i o n

Let us demonstrate that condition (1.21), i.e. the condition of semi-diagonal domi­ nance, cannot always be satisfied in the general 2D case under the given conditions of sparsity and consistency. As an example, let us consider the following diffusion equation:

V. c2 + es2 (1 - e)cs

(1 - e)cs ec2 + s2

Vu(i) = f(x),

x e

(2.34)

where c = cosö, s = sinö, O < 0 < 7 r , 0 < E < 1. Equation (2.34) is actually the anisotropic diffusion equation —V. j I Vu = ƒ rotated under an angle 6. Its general 0{h2) W% discretization star is given by:

h* I » - ' ) " h2 1 2 -1 - -1 1 - 2 1 -1 1

+

h=

+ £

- 1 2 - 1 1 - 2 -2 4 1 - 2

+

1 - 2 1 (2.35) f GiR,

where the first three molecules are linear finite element discretizations of ■£-?, dxl and dx\dx2 , respectively, and the last molecule represents an 0(h2) discretization of *4a &

(35)

22 CHAPTER 2. SMOOTHING

Let us try to satisfy condition (1.21) for the above discretization. Notice that

Av — A-v for all v € IKg. In order to satisfy ^4-i,i < 0, one must select f < 0. Furthermore, assuming for simplicity 0 < 5 < c (i.e., 0 < 0 < ^ ) , one can derive from (2.35) that Aifi > A0,i, and that ylii0 > 0 if e < T ~ ' | ■ Hence, condition

(1.21) can only be satisfied for all e if 9 = 0(modj). Let us select the following discretizations of the various derivatives:

a2 ± dx2. h* dxidx? ' ' 1 - 2 1 : 1 2/l2 1 2h"2" . J - 1 1 - 1 1 a2 ~ l - 1 1 2 - 1 - 1 1 - 2 1 1 - 1 1 - 2 1 if Du > 0; if D12 < 0, d dx\ - 1 1 0 0 - 1 1 if Vx > 0; if Vi < 0, (2.36) dX2 0 1 if Vo > 0: if V3 < 0,

NB The stars for the mixed derivative are linear finite element discretizations in right-angled triangles with hypotenuses closest to the predominant direction of anisotropy.

(36)

2.3. SMOOTHING BEHAVIOUR OF BASIC ITERATIVE METHODS 23

It turns out that condition (1.21) is only satisfied for the anisotropic diffusion equation (2.34) if

see Figure 2.4 (see also [Hem83]).

/Jle^) c j t j h i fc 5>0 V c[c+s) > f(g+c i — (- c+s) <) ' fU+c '1 £!£+£.. [c~s) ' c(s — c) rf) if cs < 0, (2.37) 3 - 2 % / 2 r i :

Figure 2.4: Domain where the discretized anisotropic diffusion equation is semi-diagonal dominant

From now on we restrict ourselves, for simplicity, to the cases for which condition (1.21) is satisfied. Some comments on the general anisotropic diffusion equation are included at the end of this subsection, see Remark 2.2.

(37)

24 CHAPTER 2. SMOOTHING

Model Equations

It is difficult to demonstrate robust smoothing behaviour of basic iterative methods in the general 2D case. We therefore restrict ourselves to a (rather extensive) subset of cases for which condition (1.21) is satisfied, namely:

Equation - A u = ƒ - « . i l - £".22 = ƒ - e u . n - U.22 = ƒ ' - e A u + u.i + u.2 = ƒ —eAu + u.i = ƒ

—eAu + u.i — u,2 = ƒ

—eAu — u.2 = ƒ

—eAu — u i — u.2 = ƒ

—eAu — u.i = ƒ

—eAu — u.i + u.2 = ƒ

- e A u 4- u.2 = ƒ Molecule -e 2 + 2e - e - 1 - e - 1 4 e + 2 - e , -e - 1 —e - e - 1 4e + 1 - e —e • ' - e - 1 - e - 1 4e + 2 - e —e - e - 1 —e 4e + 1 - e —e - e - 1 -e 4e + 2 - e - 1 —e - e 4e + 1 - e - 1 —e - e 4e + 2 - e - 1 - e - 1 —e 4e + 1 —e - e - 1 Symbol - 1 - 1 4 - 1 - 1 —e 1 2 +2e - 1 —e / \ / \ [b|

Ml

[f] IJ] |k| (2.38)

where Au = u ,a + ui22, w,„ = f^f, «,, = ^ and e e {1,0.1,0.01,0.001}. Equation

(38)

2.3. SMOOTHING BEHAVIOUR OF BASIC ITERATIVE METHODS 25

equations (for e < 1); the equations (2.38d-k) are convection-diffusion equations. The symbols indicate anisotropy or convection directions. Note that we have as­ sumed the discretization parameter h = 1 for simplicity. Henceforth, the subscript

h is omitted from the reduction factors phu and the smoothing factors ph.

Smoothing Factors

For the model problems described above, smoothing factors p have been calculated for various basic iterative methods, under the customary assumption of standard coarsening. Contour plots of iso-lines of pw, w £ (—7r, 7r]2 are used as additional

illustrations.

For example, let us consider J, LJ1 and GSF iteration when applied to (2.38a); the contour plots are given in Figure 2.5. The figure for J is the 2D projection of the figure on the cover. The iso-lines with pu = 0 have been omitted, but these can easily be constructed. The long and short wavelength regions for standard coarsening are depicted in Figure 2.6. It follows that p = 1 for J and LJ1, and p = 0.5 for GSF. In general, these factors have to be calculated by means of maximum finding algorithms.

J, LJ1, GSF,

Figure 2.5: Reduction factors of Fourier components for J, LJ1 and GSF In this way, several of the iterative methods described in Section 2.2 will be compared with respect to their smoothing properties. Methods with parameters (i.e., u / 1 ) are not included. Notably, J;v is omitted since the GS schemes are, in general, better smoothers (see [Bra82bj).

(39)

CHAPTER 2. SMOOTHING

<H

.„l____J

• short wavelength region (ui € $J|) • long wavelength region (w G <&£)

Figure 2.6: Long and short wavelength regions

Appendix G, and Tables 2.2, 2.3 and 2.4, contain catalogues of contour plots (of pj), and smoothing factors p, respectively. The basic iterative methods selected are:

• GSF, GSFB, LGSF1, LGSF1F2 and LGSF1B1F2B2,

• ILU(1,1), ILU(1,2), ILU(1,3), ILU(2,4) and ILU(7,9) (see [MV81]; the set is assumed to be representative),

• IBLU, • HZ and AZ.

In Appendix G we have included contour plots of pu for GSF, GSFB, LGSF1, LGSF1F2, LGSF1B1F2B2, ILU(1,1), ILU(1,2) and IBLU for the set of model prob­ lems, with e 6 {0.1,0.01}. The contours for the remaining ILU methods bear close resemblance to those for ILU(1,2), but enclose a smaller area (i.e., they show slightly smaller reduction factors). Contour plots for pattern methods might have been given for reduction factors of coupled modes, but this is not very instructive and has therefore been omitted. In Appendix G, the iso-lines pw — 0.1(0.2)0.9 and the points pw — 1.0 are labelled, if possible; if no labels are provided, the ex­ terior iso-line pu = 0.1. Perhaps superfluously, we mention that the figures also carry information irrespective of the multigrid concept (e.g., on the behaviour of the methods by themselves or as a preconditioning, cf. [Ket82b]).

Tables 2.2, 2.3 and 2.4 contain the smoothing factors p (for standard coarsen­ ing) of all the above-mentioned methods applied to the model problems, together with the limit factors limejo/>, which have been computed either theoretically or

experimentally (cf. [Ket82b]). However, in order to compare actual efficiency, one must also take into account aspects of computational work. We therefore introduce smoothing efficiency factors in the following subsection.

(40)

2.3. SMOOTHING BEHAVIOUR OF BASIC ITERATIVE METHODS 27 GSF iWS = g GSFB {WS = 18 LGSFl iWs = 9 HZ iWs = 9 LGSF1F2 {WS = 18 AZ {WS = 18 LGSF1B1F2B2 <WS - 36 ILU(1,1) <WS = 14 ILU(1,2) •W5 = 18 ILU(1,3) ' Ws = 26 ILU(3,4) *WS = 3 7 1LU(7,9) >WS = 77 IBLU {WS = 25 lim e 1 0 1 [00]

M

I 5 - 5 [26] 1 [10] 1 5 - 5 [52] 1 1 [52] 1 [00] 1 [00] 1 [00] 1 [00] 1 [00] 1 [36] e O.C = 101 C .998 .996 .447 .125 .446 .125 .199 .916 .843 .776 . .718 .506 JQ9 Z ).01 980 961 447 125 438 122 192 768 607 483 398 193 176 s = 0.1 .835 .697 .447 .125 .373 .102 .139 .477 .273 .165 .139 .048 .133 • .500 [30] .250 [301 .447 [26] .250 [151 .149 [22] .048 [14] .022 [22] .204 [20] .126 [20] .055 [21] .029 [24] .005 [33] nco. [20] e = 0.1 .835 .697 .833 .826 .373 .102 .139 .477 .165 .108 .025 .003 .108 e = e = 0.01 0.001 .980 .998 .961 .996 .980 .998 .980 .998 .438 .446 .122 .125 .192 .199 .768 .916 .171 .172 .149 .164 .031 .033 .004 .004 14Q i c j lim e 1 0 1 [00] 1 [oo| 1

M

[00J 1 5 - 5 [52] 1 [20] 1 [52] 1

M

1 3+2\/2 [24] __ 1 3+2V2 [34] 0.34 [25] .004 [33[ 1 3+2V2 [33]

Table 2.2: Smoothing factors and smoothing efficiencies (between []) for discretized 2D (anisotropic) diffusion operators

(41)

28

\ t

/ i

1 0.1 0.01 0.001 i 0.1 0.01 0.001 limc|„ W .260 .273 .216 .368 .382 .065 .437 .459 .008 .446 .470 .001 5-1 0 |52] (55] 101 G S F B .273 .273 .382 .382 .469 .459 .470 .470 Av/2

k*

| 5 5 | |55] .216 .273 .260 .065 .382 .368 .008 .459 .437 .001 .470 .446 0 5 3 [0] [55] | 5 2 | L G S F 1 F 2 .153 .226 .250 .057 .413 .402 .007 .489 .442 .001 .499 .447 0 6-5t | 0 | (60) [52] .150 .226 .157 .413 .160 .489 .160 .499 .161 i [23| |60) .069 .150 .153 .005 .157 .057 .000 .160 .007 .000 .160 .001 0 .161 0 |0] [23) !°] L G S F 1 .282 .454 .636 .068 .488 .915 .008 .499 .9£ .OC 0 1 .500 .999 0 1 2 1 lo] [30] looi .333 .632 .333 .914 .333 .990 .333 .999 l 3 1 [19] M .282 .454 .636 .068 .488 .915 .008 .499 .990 .001 .500 .999 0 2 1 [0] [30] M AZ .064 .056 .064 .159 .149 .159 .191 .234 .191 .195 .248 .195 .196 i .196 25 30 25 .056 .056 .149 .149 .234 .234 .248 .248 1 i I |30] |30] .064 .056 .064 .159 .149 .159 .191 .234 .191 .195 .248 .195 196 196 | 2 5 | I*»] |25] HZ .243 .154 .243 .559 .225 .559 .711 .247 .711 .730 .250 :730 .733 l .733 [67] [15] |671 .360 .360 .735 .735 .962 .962 .996 .996 1 1 H M .243 .154 .243 .559 .225 .559 .711 .247 .711 .730 . 2 5 0 .730 . 7 3 3 i .733 [67] [15] [67| L G S F 1 B 1 F 2 B 2 .017 .033 .017 .002 .056 .002 .000 .065 .000 .000 .067 .000 0 l 0 [0] [31] [°] .033 .033 .056 .056 .065 .066 .067 .067 i_ i ir. |31] |31] .017 .033 .017 .002 .056 .002 .000 .065 .000 .000 .067 .000 0 l 15 0 |0] [31]

l°l

Table 2.3: Smoothing factors and smoothing efficiencies for discretized 2D convec­ tion-diffusion operators CHAPTER 2. SMOOTHING € = 1 € — 0.1 € = 0.01 e = 0 . 0 0 1 Hm^ji, [»] G S F .567 .464 .333 .606 .464 .667 .606 .567 .847 .453 .083 .875 .453 .917 .875 .847 .981 .468 .010 .985 .468 .990 .985 .981 .998 .471 .001 .998 .471 .999 .998 . 9 9 8 1 J V 2 0 I iV2 I I 1 [oo| [28] [0] 1°°] [28] [oo] [oo] [oo]

(42)

2.3. SMOOTHING BEHAVIOUR OF BASIC ITERATIVE METHODS 29 1 e = 0.1 e = 0.01 e = 0.001 lim,l (, [») c = 1 0.1 e = 0.01 e = 0.001 l i mtj , i \o\ ILU(i.l) .228 .218 .172 .218 .218 .172 .218 .228 .404 .389 .048 .389 .389 .048 .389 .404 .443 .487 .006 .487 .487 .006 .487 .443 .447 .499 .001 .499 499 .001 .499 .447 s-i i o 1 X 0 i 6 - i [40] (47) |0] (47) |47] [0| [471 («1 ILÜ(2,4j^ .036 .015 .023 .053 .053 .023 .015 .036 .047 .000 .005 .272 .272 .005 .000 .047 .048 .000 .001 .467 .467 .001 .000 .048 .048 .000 .000 .497 .497 .000 .000 .048 .048 0 0 1 1 2 2 0 0 .048 [28] | 0 | [0[ [123] [123] |0l |0l [28] 1LU(1,2) .146 .123 .093 .112 .112 .093 .123 .146 .224 .042 .018 .369 .369 .018 .042 .224 .250 .005 .002 .484 .484 .002 .005 .250 .253 .001 .000 .498 .498 .000 .001 .253 .254 0 0 X X 1 2 0 0 .254 [30] [OJ [0] [60] [60] |0| 10] [30] ILU(7,9X .006 .001 .003 .019 .019 .003 .001 .006 .003 .000 .000 .159 .159 .000 .000 .003 .002 .000 .000 .433 .433 .000 .000 .002 .002 .000 .000 .493 .493 .000 .000 .002 .002 0 0 i i 2 2 0 0 .002 [27] [0| [0| [256] (256] [0| (O) [27] ILU(1,3) .065 .045 .043 .081 .081 .043 .045 .065 .094 .006 .010 .309 .309 .010 .006 .094 .101 .000 .001 .475 .475 .001 .000 .101 .102 .000 .000 .497 .497 .000 .000 .102 .102 0 0 X X 2 2 0 0 .102 [26] (0] [0] [86] [86] (0[ [0] [26] IBLU .049 .038 .049 .080 .080 .049 .038 .049 .015 .006 .015 .162 .162 .016 .006 .015 .002 .000 .002 .195 .195 .002 .000 .002 .000 .000 .000 .200 .200 .000 .000 .000 0 0 0 i l 5 5 0 0 0 [0] |0] [0] [36] [36] [O] |0| [01

Table 2.4: Smoothing factors and smoothing efficiencies for discretized 2D convec­ tion-diffusion operators (continued)

(43)

30 CHAPTER 2. SMOOTHING

S m o o t h i n g Efficiency F a c t o r s

We estimate the computational work 'Ws, i.e. the number of floating-point op­ erations + , — , * , / per gridpoint necessary for one smoothing iteration (2.5). For simplicity all special savings (e.g., by scaling or by assuming constant coeffi­ cients) are neglected, except for the savings that are made possible by storage of tri-diagonal factorizations and, in certain cases, the application of (2.17) (if F is sparser than A).

For the schemes with incomplete factorization (2.6) we arrive at:

W5 = min(2 * \WA\ + WL + WV + 1, 2 * IB^I + WL + Wv), (2.39)

with and

WL = s\gn{\WL\){2 * \WL\ + 2 * \WD\ - 1) (2.40)

Wv = s i g n ( | r ^ | ) ( 2 * I B ^ I + 2 * \VD\ - 1). (2.41) The two terms in (2.39) inside the brackets arise from (2.15) and (2.17); WL (Wv)

denotes the number of floating-point operations per gridpoint necessary for the solution of the lower (upper) triangular system; the sign function indicates that computations do not have to be performed if W = 0 (W = 0). For simplicity, it is assumed that the compound methods with S — S2S1 have a computational work 4WS - W5' + iWSi, and that the pattern methods are as expensive as their non-pattern variants (although special savings can be achieved, see for example, [ST82]).

A smoothing efficiency is defined as follows: iWs

a=-—^, (2.42) \ogp

i.e., the approximate number of floating-point operations per gridpoint required to achieve (asymptotically) a reduction of the short wavelength error by a factor of 0.1. For the cases (2.38a), and (2.38b-k) with e [ 0, smoothing efficiencies have been included in Tables 2.2, 2.3 and 2.4.

(44)

2.3. SMOOTHING BEHAVIOUR OF BASIC ITERATIVE METHODS 31

C o m p a r i s o n a n d D i s c u s s i o n

Detailed analysis of A p p e n d i x G a n d Tables 2.2, 2.3 a n d 2.4 is omitted and left to the r e a d e r . We restrict ourselves t o the following general observations:

O b s e r v a t i o n 2 . 1 For all modes (frequencies) w we have pu = p _u, and further­ more po,o = 1; see also (2.20) using (2.21), and (1.21) with Y,v Au = 0.

O b s e r v a t i o n 2 . 2 On the average, modes of long wavelength are more difficult to

reduce by basic iteration than modes of short wavelength, and modes in weakly cou­ pled directions (i.e., perpendicular to the anisotropy or convection direction) are more difficult to reduce than those in strongly coupled directions (see, for example, GSF and LGSF1B1F2B2).

O b s e r v a t i o n 2 . 3 Incomplete factorizations are adequate if they include the pre­

dominant connections, e.g. GSF becomes exact for (2.S8d); z J. 0. An extended scheme has lower reduction factors, but is not necessarily more efficient (see, for example, the ILU methods).

O b s e r v a t i o n 2 . 4 Coarsening is most effective if it is applied in the strongly coupled

directions only (see also Observation 2.2). Hence, uniform coarsening is adequate for the Poisson equation, but, for instance, it is less adequate for the anisotropic dif­ fusion equation (unless a line method is used, see also [Bra82bJ: 'the block-relaxation rule').

O b s e r v a t i o n 2 . 5 With standard coarsening, all methods considered are good smoo­

thers for Poisson's equation. Line (or block) methods are good smoothers for aniso­ tropic equations as long as the preferred directions match. ILU(1,2) is more ro­ bust than ILU(1,1) and more efficient than ILU(1,3), ILU(2,4) and ILU(7,9). (Moreover, the last three schemes require more memory storage than ILU(1,2).) Pattern schemes behave much like non-pattern schemes; they are slightly more effi­ cient for diffusion equations and somewhat less efficient for convection-dominated equations. Robust smoothers for the present set of test problems are LGSF1F2, LGSF1B1F2B2, IBLU and AZ (as well as several other alternating methods such as alternating ILU(l,2)).

F u r t h e r m o r e , we present t h e following r e m a r k s :

R e m a r k 2 . 1 As in ID, GSF is not a robust smoother in 2D, but now, in contrast

with the ID case, GSFB is also not a robust smoother (due to anisotropies, al­ though it is suitable for convection-dominated problems). Meanwhile, LGSFl does not take over in 2D the role of GSF in ID, since it cannot sufficiently reduce all modes with Wi ^ 0. Hence, LGSFIBI also fails, notably for equation (2.38c). But LGSF1B1F2B2 is a robust smoother, at least for the usual five-point discretization star satisfying (1.21). This is shown as follows. Let A* be given by:

Cytaty

Powiązane dokumenty

Jeśli na w ystaw ie «Zyjnia», zorganizowanej kilka lat tem u przez krakowską galerią Teatru Stu, zatrzymywano się przed planszą przedstawiającą szlak kuchennej

[r]

Key words: Navier–Stokes equations, turbulence, large eddy simulation, nonlinear Galerkin method, suitable

Such equations have many applications, for example, in the field of numerical control, model reduction and for the computation of second moments (variance) in systems modeled

The discretized solution equations (22 and 23) were used for the result in Figure 3. The result shows the pressure across the formation with time for various permeability values.

However, for the TMAL and the TOP‑ SIS method with the Euclidean distances, the best results were obtained for the combination of weights C2 (0.5; 0.25; 0.25) – this means that

Generally speaking, from S. Miłkowski’s work emerges a conception of the far-reaching reconstruction of property relations in all sectors of economy, consisting in elimination of the

Thus, in order to integrate the species equations in time in a stable manner, time integration methods have to be found that can handle stiff systems.. Besides the stability issue