
(1)

September 1987

ON RANDOM NUMBERS

FOR THE RANDOM-CHOICE METHOD

by

D. S. Anderson and J. J. Gottlieb


UTIAS Technical Note No. 258

CN ISSN 0082-5263

(2)

ON RANDOM NUMBERS
FOR THE RANDOM-CHOICE METHOD

by

D. S. Anderson and J. J. Gottlieb

Submitted August 1987

© Institute for Aerospace Studies 1987

September 1987

UTIAS Technical Note No. 258

(3)

Acknowledgements

Special thanks are extended to W. T. Taferner and C. P. T. Groth for their helpful advice. We also acknowledge Bernhard Anet, who stimulated this work by asking penetrating questions regarding the quasi-random numbers used in the RCM. Additional thanks are due to Tony Roberts for computational assistance.

Contract funding from the Defence Research Establishment Valcartier and grants from the Natural Sciences and Engineering Research Council, both of Canada, are acknowledged with many thanks.

(4)

Abstract

The random-choice method (RCM) solves Euler's equations in integral form by means of a single-point Monte Carlo estimate of the Riemann solution for each cell. From Monte Carlo theory, sampling with quasi-random versus random numbers results in lower absolute integration errors or a better convergence rate for the RCM. The sequence due to Van der Corput has often been used, and discrepancy theory gives an upper bound of h|ln(h)| for the convergence rate for smooth functions, where h is the node spacing; this has been thought to deteriorate to h^{1/2}|ln(h)| because of discontinuities in the RCM solution. In this study, numerical evaluations of absolute integration errors demonstrate that the convergence rate is at least h|ln(h)| and even better for RCM solutions. A new quasi-random number sequence is also given; it has a lower discrepancy and provides the RCM with an improved convergence rate of O[h], which we believe makes the RCM first-order accurate. Furthermore, this new sequence is generalized to a two-dimensional sequence for a two-dimensional RCM. However, this sequence does not perform better than Halton's generalization of Van der Corput's sequence with dual prime numbers, in spite of its better performance (first-order accuracy) on integrating smooth two-dimensional functions. The theory and analysis behind these developments are presented in detail.

(5)

Table of Contents

Title Page  i
Acknowledgements  ii
Abstract  iii
Table of Contents  iv
1.0 INTRODUCTION  1
2.0 GODUNOV'S FIRST-ORDER FINITE-DIFFERENCE SCHEME  2
3.0 RANDOM SAMPLING IN THE RANDOM-CHOICE METHOD  5
4.0 MONTE CARLO INTEGRATION AND ERROR ANALYSIS  6
5.0 QUASI-RANDOM NUMBER SEQUENCES  10
5.1 Van der Corput's 1-D Sequence  10
5.2 Halton's 1-D Sequence  12
5.3 New Quasi-Random 1-D Sequence  13
5.4 Van der Corput's 2-D Sequence  15
5.5 Halton's 2-D Sequence  16
5.6 New 2-D Sequence  16
6.0 NUMERICAL RESULTS FOR 1-D INTEGRATION ERROR  18
7.0 NUMERICAL RESULTS FOR 2-D INTEGRATION ERROR  24
8.0 APPLICATION OF 1-D QUASI-RANDOM SEQUENCES TO THE RCM  26
9.0 FINAL DISCUSSION AND CONCLUSIONS  27
10.0 REFERENCES  29

(6)

1.0 INTRODUCTION

Nonstationary compressible flow problems in gasdynamics are generally modelled by a set of nonlinear mixed hyperbolic and elliptic partial differential equations. Because of the nonlinear nature of these equations, transition fronts like shocks and contact surfaces may be present in the solution. When detailed or microscopic information of these transitions is required, it is necessary to include the effects of viscosity and heat conduction in the model and solve the full or an approximate set of Navier-Stokes equations. However, in many cases the analyst is mainly concerned with the macroscopic properties of the solution and does not need any detailed knowledge of the flow inside these short transitions. In such instances it is significantly easier to model the problem with a simpler set of hyperbolic equations, the inviscid Euler equations, for which these transitions become discontinuities.

In preparation for numerical solution the Euler equations may be recast into a finite-difference, finite-element, or finite-volume form. However, because Euler's equations permit shock and contact discontinuities in the solution, these methods must contain a means of handling such discontinuities. Some common approaches include the introduction of artificial and/or numerical viscosity, or the inclusion of a special discontinuity capturing scheme. Viscosity and truncation errors which are inherent in most of these approaches cause dissipation and diffusion which 'smears out' shock and contact discontinuities or may also create numerical oscillations near them.

In recent years an alternative numerical scheme known as the Random-Choice Method (RCM) has been developed. It handles discontinuities routinely without utilizing any artificial or numerical viscosity, and also without any

shock capturing techniques. The RCM is basically a finite-volume technique which is explicit and employs the random sampling of the solutions of Riemann problems for cells on one time level to advance the solution to the next time level. Glimm [1] originated this method as a theoretical tool to help obtain existence proofs for all time for weak solutions of hyperbolic systems of conservation laws. The RCM was first made into a practical tool for solving gasdynamic problems by Chorin [2,3] and later improved by Sod [4], Colella [5], and others. The RCM is highly successful for solving unsteady perfect gas flows with one space dimension, but this success has not yet been carried over to the case of two and three space dimensions, although a number of researchers have worked on this problem [2,5,6].

The purpose of this paper is firstly to discuss the random sampling procedure and its implications with respect to the RCM. Since finite-volume techniques essentially involve flux integrations around cell boundaries, it will be seen that the random sampling procedure in the RCM is actually a series of Monte Carlo estimates of these integrals. A summary of Monte Carlo estimation theory will be presented, in which it will be shown that the convergence process can generally be accelerated (or the discrepancy lowered) by employing quasi-random sequences instead of purely random or pseudo-random numbers. Different quasi-random sequences on a unit line (one-dimensional) are reviewed, and a new sequence is presented, in order to illustrate which has the lowest discrepancy or fastest convergence rate. The new one-dimensional quasi-random sequence has the best performance, being better than Van der Corput's sequence.

The second goal of this paper is to extend the ideas behind the new one-dimensional sequence to the two-dimensional case of a unit square, which would have application in a two-dimensional RCM. This new two-dimensional sequence

(7)

is compared with other sequences which are currently in use, by using numerical techniques. The numerical results illustrate that the new sequence leads to faster integration convergence rates for smooth functions with bounded first derivatives, but its performance on functions with discontinuities is poor.

From a practical standpoint, the application of the RCM in gasdynamics is reasonably straightforward. Unfortunately, the same is not necessarily true of the theoretical considerations related to the method, especially those pertaining to the random sampling procedure. The relevant literature which is currently available on this subject is firmly entrenched in number theory and tends to be relatively abstract. On the other hand, future innovative ideas concerning the RCM may well stem from the engineering science community. With these considerations in mind, it is hoped that the present paper will serve as a useful outline and guide for the engineering analyst.

2.0 GODUNOV'S FIRST-ORDER FINITE-DIFFERENCE SCHEME

A convenient place to start the discussion of random sampling in the RCM is with the well-known hyperbolic equations of gasdynamics for one-dimensional unsteady flows. Following the solution procedure of Godunov [7-9], these equations may be first rewritten in general contour integral form and eventually reduced to his special deterministic flux differencing form. This approach is useful in establishing the fundamental integral which is evaluated by an averaging procedure in Godunov's method and by the technique of Monte Carlo estimates or random sampling in the RCM.

For one-dimensional unsteady inviscid flows the continuity, momentum and energy equations in strong conservation form may be given in matrix notation as

∂U/∂t + ∂F(U)/∂x = 0,  (2.1)

with the state matrix U and flux matrix F defined by

U = [ρ, ρu, ρ(e + u²/2)]ᵀ,   F(U) = [ρu, p + ρu², ρu(e + u²/2) + pu]ᵀ,  (2.2)

where u, ρ, p and e denote the flow velocity, density, pressure, and internal energy per unit mass, and x and t are the Eulerian coordinates of distance and time. This equation may be re-expressed in the form of a double integral, that is ∬(∂N/∂x − ∂M/∂y) dx dy, and if N and M are continuous and differentiable it can be reduced to a contour integral of the form ∮(M dx + N dy) by using Green's theorem in the plane. However, the generalized solutions of Euler's equations allow discontinuities in state variables. In this case the standard Green's theorem does not apply directly, but, by combining this theorem with a special analysis pertaining to the Euler equations, identical results can be obtained [7,8]. Therefore, the contour integral form of the conservation equations becomes

∮ [U dx − F(U) dt] = 0.  (2.3)

This integral may be evaluated around an arbitrary contour in the (x,t)-plane,

(8)

and it will admit piecewise continuous functions as solutions.

Now consider a domain in the (x,t)-plane. Suppose that at some initial time level t = t_0 the solution U is piecewise constant and may be represented on the spatial intervals (x_m, x_{m+1}), where m equals ..., -2, -1, 0, 1, 2, ..., x_m = mh, and h is the interval width, by a series of step changes such that the solution within each interval is constant. This is illustrated in Fig. 1. If these intervals are called cells, the constant solutions within each cell can be denoted at time level t_0 by U_{m+1/2}, and at the next time level t_0+τ by Ū_{m+1/2}. In this manner the solution domain is discretized and a solution is sought at time level t_0+τ.

The discontinuities at cell boundaries imposed by the assumed initial conditions will produce a series of V-shaped wave patterns which emanate from the cell boundaries as illustrated in Fig. 1. These patterns are self-similar (i.e., depend on x/t) and do not cross the cell, if the time interval τ is sufficiently small. (The Courant-Friedrichs-Lewy criterion would be suitable to ensure that this time interval restriction is met.) The solution at time t_0+τ can be determined exactly from the initial distribution, by solving the flow associated with the wave patterns at the cell boundaries.

The problem of resolving the flow solution within each V-shaped pattern and thus determining the solution U at time t_0+τ is called a Riemann problem [7-9]. The solution of such problems involves determining the types of waves in the V-shaped pattern and their strengths, given the initial constant data on either side of the cell boundaries [3,7-10]. This solution consists of two oppositely moving waves and an intervening contact surface, which delineate four constant flow regions, as illustrated in Fig. 2. Each left and right wave W_l and W_r is either a shock or rarefaction wave, depending on the initial conditions for the Riemann problem. For polytropic gases a unique solution to the Riemann problem always exists and this solution can be determined from a set of nonlinear algebraic equations by an iterative procedure.

It should be emphasized again that once the Riemann problems between adjacent cells have been solved at the initial time level t_0 the exact solution at time t_0+τ can be completely specified everywhere in the spatial domain, provided that τ is small such that the adjacent wave patterns do not cross the cell, and if the initial distribution is exact. Based on this information Godunov devised a unique finite-difference algorithm which uses the data from the Riemann problems to solve the flow field at t_0+τ, for a given discretized initial solution distribution, and then reduces this solution at the new time level to another discretized distribution. The repeated use of this algorithm will permit the Euler equations to be integrated in time and gives approximate numerical solutions.

Godunov developed his first-order finite-difference scheme by starting with the contour integral form of the conservation laws and by considering an isolated cell in the solution domain. By expressing Eq. 2.3 as an integral around the cell with a left spatial boundary x_m, right spatial boundary x_{m+1}, and temporal boundaries t_0 and t_0+τ, he obtained

∫_{x_m}^{x_{m+1}} U(x,t_0) dx − ∫_{t_0}^{t_0+τ} F(x_{m+1},t) dt − ∫_{x_m}^{x_{m+1}} U(x,t_0+τ) dx + ∫_{t_0}^{t_0+τ} F(x_m,t) dt = 0.  (2.4)

(9)

At time t_0 the solution U(x,t_0) is constant between the boundaries at x_m and x_{m+1}, and therefore the first integral of Eq. 2.4 can be evaluated easily. From the Riemann problem solution it is known that U is constant along any ray drawn in the (x,t)-plane through the center of the wave pattern. Hence, U(x_m,t) and U(x_{m+1},t) are constant along their cell boundaries x_m and x_{m+1} and are independent of time (provided the waves do not cross the cell in one time step τ). They are also known from the Riemann problem solution. This implies that F(x_m,t) and F(x_{m+1},t) are also constant along the corresponding boundaries and are known from the Riemann solution. If these fluxes at boundaries x_m and x_{m+1} are denoted by F_m and F_{m+1}, Eq. 2.4 becomes

Ū_{m+1/2} = (1/h) ∫_{x_m}^{x_{m+1}} U(x,t_0+τ) dx = U_{m+1/2} − (τ/h)(F_{m+1} − F_m),  (2.5)

where Ū_{m+1/2} is clearly the average of the solution for the cell at time t_0+τ. By representing the solution within each cell by this integrated average a discretized solution distribution is then recovered. Note that Eq. 2.5 could be derived directly on the basis of control-volume theory; however, Godunov's approach is complementary and provides some additional insight into the meaning of Ū_{m+1/2}.

By doing this integration process around each cell in a similar manner, Euler's equations can then be integrated over the entire solution domain. Therefore, Eq. 2.5 is Godunov's explicit finite-difference formulation which determines the solution at the next time level in terms of the known solution

from the previous time level.

It is worthwhile discussing a few key features of this difference technique. Firstly, Eq. 2.5 for the integrated average provides a simple method of evaluating the integral without actually performing the integration over the top of each cell. Secondly, it should be noted that the difference equations satisfy the contour integral form of the conservation laws exactly between time levels, and it is only the use of the integrated average values to represent the initial solution in the cell at the next time level that introduces the numerical error: the integral is exact but the average is not. Thirdly, this averaging procedure makes the Godunov scheme formally first-order accurate, that is of O[h], if the solution U is continuous and differentiable, as should be obvious from Eq. 2.5 or from a Taylor series expansion. Furthermore, for generalized noncontinuous solutions to the Euler equations Godunov has demonstrated that approximate solutions which are constructed from this finite-difference scheme satisfy the criterion that must be met by weak or generalized solutions of the differential equations as h goes to zero [11]. Finally, it should be noted that, as with all first-order difference schemes, Godunov's technique is dissipative and therefore discontinuities are smeared. This is a problem which plagues many numerical methods that are developed to solve Euler's equations.

In a broad sense the integral expression of Eq. 2.5, for the solution U at time t_0+τ in terms of the solution at the previous time step and the fluxes at cell boundaries, could be considered as the starting point for all explicit finite-volume or finite-difference numerical methods for solving Euler's equations. Such explicit methods for solving Euler's equations all involve the solution of this integral, and the different approaches in which the various terms in Eq. 2.5 are resolved numerically will result in the varied solution techniques.

(10)

3.0 RANDOM SAMPLING IN THE RANDOM-CHOICE METHOD

The RCM is closely related to the first-order Godunov finite-difference method. It also solves a series of Riemann problems over a similar discretized spatial domain to advance the solution to Euler's equations in time. However, in this alternative method the integrated averages of the solutions within the cells at time t_0+τ derived from the Riemann problems are not used to represent the discretized solution at the next time level. Instead, the RCM employs a sampling technique whereby the solution of the Riemann problem for each cell at time t_0+τ is sampled randomly, and a single state is thereby chosen to represent the solution at this time level. (In the RCM the time step τ is about one-half that of Godunov's method, because the wave patterns must not overlap, which is more restrictive than not crossing the cell.) The RCM algorithm can be expressed as follows. Given the piecewise continuous solution U(x,t_0+τ) for x_m ≤ x ≤ x_{m+1}, which can be computed from the solution of the Riemann problem for that cell, a discretized solution at the time t_0+τ is assigned by using

U_{m+1/2} = U(x_m + θh, t_0+τ),  (3.1)

where the random number θ directs the sampling process in the cell. This number lies in the range 0 < θ < 1.

Note that this is equivalent to a single-point estimation of the average of the Riemann solution at time t_0+τ. To show this, let the average value be

Ū_{m+1/2} = ∫_{x_m}^{x_{m+1}} U(x,t_0+τ) g(x) dx,  (3.2)

where the function g(x) = 1/h is a uniform probability density distribution. Now, let x̄ = (x − x_m)/h to obtain

Ū_{m+1/2} = ∫_0^1 U(x_m + x̄h, t_0+τ) dx̄,  (3.3)

where 0 ≤ x̄ ≤ 1. A Monte Carlo estimate of Eq. 3.3 yields (see chapter 4)

Ū_{m+1/2} ≈ (1/N) Σ_{i=1}^{N} U(x_m + x̄_i h, t_0+τ),  (3.4)

or by taking N = 1 and sampling at x̄_1 = θ we get U_{m+1/2} ≈ U(x_m + θh, t_0+τ).
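The relationship between the average of Eq. 3.2 and the single-point sample of Eq. 3.1 can be sketched numerically. The piecewise-constant profile below is a hypothetical stand-in for a Riemann solution within one cell (the states 1.0 and 0.2 and the jump location 0.3 are invented for illustration): a many-sample Monte Carlo estimate recovers the Godunov-style cell average, while the RCM's single sample (N = 1) returns one of the constant states exactly.

```python
import random

def mc_average(profile, samples, seed=1):
    """Monte Carlo estimate of Eq. 3.3: average the profile over the
    normalized cell coordinate 0 <= x_bar <= 1 with uniform sampling."""
    rng = random.Random(seed)
    return sum(profile(rng.random()) for _ in range(samples)) / samples

# Hypothetical one-cell Riemann-type profile: left state 1.0, right state 0.2
profile = lambda x_bar: 1.0 if x_bar < 0.3 else 0.2

exact_average = 0.3 * 1.0 + 0.7 * 0.2        # 0.44, from Eq. 3.2
many_samples = mc_average(profile, 200000)   # Eq. 3.4 with large N
single_sample = profile(0.37)                # Eq. 3.1 with theta = 0.37
```

The single sample (here the state 0.2) is not the average, but it is an exact admissible state of the Riemann solution, which is precisely why the RCM preserves discontinuities without smearing.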

Glimm [1] originally proposed this solution sampling procedure, and he suggested that if θ was a random number then the sampling technique would constitute a Monte Carlo estimation of the integral found in the deterministic Godunov scheme (Eq. 2.5). In other words, the RCM would essentially perform a Monte Carlo integration of Euler's equations over the solution domain. The primary advantage of the random sampling procedure in the RCM, as compared to the averaging process of Godunov's scheme, is that each chosen state conserves mass, momentum and energy exactly without introducing any artificial dissipation or diffusion. Therefore, shock and contact-surface discontinuities in the solution are preserved, which may be important in many applications involving numerous waves which interact. Finally, the random choice method also happens to be unconditionally stable, even when the Courant-Friedrichs-Lewy

(11)

criterion is violated. However, this violation does result in a degeneration of the solution accuracy.

It should be apparent from Eqs. 3.1 to 3.4 that the sequence of random numbers in the sampling process will affect the numerical solution. Chorin [2] recognized that the quality of the numerical solution could be greatly improved if the same value of the random number was used in each cell on one time level. This will ensure that the domains of dependence for the discrete points in the sampled solution do not overlap if the Courant-Friedrichs-Lewy criterion is maintained. However, the question remains as to what type of sequence should be used to obtain optimum solution convergence at successive time levels. This question may be answered by investigating the performance of various number sequences when they are employed in Monte Carlo integration processes.

4.0 MONTE CARLO INTEGRATION AND ERROR ANALYSIS

Integration by the Monte Carlo method is an approximate procedure, and it normally has an associated error to classify the accuracy. This is also true of other numerical methods of quadrature. The error in Monte Carlo (MC) integration depends on which sequence or type of sequence of random numbers is used in the integration. Furthermore, there are different procedures for estimating the integration error, and some of these are easy to apply but give only an approximate upper bound on the error. Different methods of determining the error of MC integration with random number sequences are discussed herein, so that the error from various sequences can be given in the following chapter.

Suppose that we wish to evaluate a multidimensional integral of the form

I = ∫_D f(P) dP,  (4.1)

where the integration domain D is the d-dimensional unit cube and P is a point in D. A statistical or simple Monte Carlo estimation of the integral I can be obtained by employing random numbers from a uniform probability density distribution in the sampling procedure

J_N = (1/N) Σ_{i=1}^{N} f(P_i),  (4.2)

where P_i = (R_{s1}, ..., R_{sd})_i and {R_si : 0 ≤ R_si < 1}. The word 'simple' indicates that purely random numbers {R_si} are to be used in the sampling process. In practice, however, we use pseudo-random (computer generated) numbers for a simple MC integration procedure. In order for the MC estimation to converge to the proper solution as N goes to infinity, we require that the sampling points P_i be equidistributed on domain D. The quantity

(12)

ε(f;E) = I − J_N  (4.3)

may be defined, where E refers to the type of 'integration net' or the number sequence that is used. Further, |ε(f;E)| will be referred to as the absolute integration error, or more simply as the 'error' in the MC process. It is well known that for a purely random net E_R = {R_si} the error is of O[N^{-1/2}] [12]. Note that this error will not strictly apply to pseudo-random numbers, due to periodicity of the numbers, but in practice |ε(f;E)| is generally assumed to be of O[N^{-1/2}], because many pseudo-random numbers are sufficiently random to have this property.

Past studies of multidimensional quadrature have revealed that convergence rates better than O[N^{-1/2}] are possible, depending on the choice of the number sequence (and hence the integration net E). Essentially, a reduction of the integration error can be achieved through selection of a net which is more equidistributed than a purely random net, although this will depend upon the properties of the integrand and the dimension d of the integration domain [13]. Number sequences which generate a net of this type are now referred to as quasi-random sequences or numbers [12,13], and the estimation processes are called quasi-random Monte Carlo techniques or quasi-Monte Carlo methods.
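This effect is easy to observe numerically. The sketch below compares pseudo-random sampling with Van der Corput's quasi-random sequence (anticipating Section 5.1) on the smooth integrand f(x) = 3x², whose exact integral over (0,1) is 1; the specific integrand, sample size and seed are our choices for illustration.

```python
import random

def van_der_corput(n):
    """Radix-2 Van der Corput number: reverse the binary digits of n
    about the decimal point (see Section 5.1)."""
    x, denom = 0.0, 1.0
    while n:
        denom *= 2.0
        x += (n & 1) / denom
        n >>= 1
    return x

f = lambda x: 3.0 * x * x      # integral of f over (0,1) is exactly 1
N = 4096

rng = random.Random(0)
err_pseudo = abs(sum(f(rng.random()) for _ in range(N)) / N - 1.0)
err_quasi = abs(sum(f(van_der_corput(n)) for n in range(1, N + 1)) / N - 1.0)
```

In runs of this kind the quasi-random error is at worst of order ln(N)/N (below 0.004 here, by the Koksma-type bound discussed later in this chapter), whereas the pseudo-random error only decays like N^{-1/2}.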

The quality of a quasi-random sequence is customarily judged by means of what is known as the discrepancy of the sequence. In order to define discrepancy in a precise manner, let us return to equation 4.3. Sobol' [14] gives an exact expression for the upper error bound in the case of d = 2 and for any type of integration net E as

|ε(f;E)| ≤ L_1 ∫_0^1 |S(x,1)/N − x| dx + L_2 ∫_0^1 |S(1,y)/N − y| dy + L_12 ∫_0^1 ∫_0^1 |S(x,y)/N − xy| dx dy.  (4.4)

In this equation, S(x,y) is the number of points of the net E which lie inside a rectangle with sides of length x and y, as depicted in Fig. 3. The terms L_1, L_2 and L_12 denote upper bounds on the first and mixed partial derivatives of f(x,y) such that

|∂f/∂x| ≤ L_1,   |∂f/∂y| ≤ L_2,   |∂²f/∂x∂y| ≤ L_12.  (4.5)

The uniformity of the net E helps reduce the error, and this can be seen from Eq. 4.4 and is illustrated in Fig. 3. One might note, however, that highly regular oscillations in the integrand can cause large errors when coupled with a quasi-random net. The one-dimensional analogue of Eq. 4.4 is given by

|ε(f;E)| ≤ L ∫_0^1 |S(x)/N − x| dx,  (4.6)

where

L = max_{0≤x≤1} |df/dx|.  (4.7)

(13)

Note that S(x) now represents the number of points x_n of the sequence such that x_n < x.

We will define for convenience the latter part of Eq. 4.6 by

D̄_N = ∫_0^1 |S(x)/N − x| dx  (4.8)

for future use. In addition, the discrepancy of the one-dimensional sequence used to create E can now be defined as [15]

D_N = sup_{0≤x≤1} |S(x)/N − x|  (4.9)

and, for the two-dimensional case,

D_N = sup_{0≤x≤1, 0≤y≤1} |S(x,y)/N − xy|,  (4.10)

where the supremum is taken over all sampling points within the unit square. Equation 4.9 corresponds to the discrepancy of the first N terms of the one-dimensional sequence, and it is a measure of how uniformly the net E is distributed over the interval 0 ≤ x ≤ 1. In terms of the discrepancy D_N, a theorem due to Koksma [16] gives

|ε(f;E)| ≤ |V(f)|·D_N,  (4.11)

where

V(f) = ∫_0^1 (df/dx) dx = f(1) − f(0)  (4.12)

is the variation of f(x) over the domain D. Since

|V(f)| ≤ L  (4.13)

and

D̄_N = ∫_0^1 |S(x)/N − x| dx ≤ sup_{0≤x≤1} |S(x)/N − x| = D_N,  (4.14)

one can see that Eqs. 4.6 and 4.8 yield

|ε(f;E)| ≤ L·D̄_N.  (4.15)

Hence, |ε(f;E)| ≤ L·D_N gives an extreme upper bound on the error incurred in the MC integration process as a function of L and the discrepancy D_N.

It should be emphasized that the error given by Eq. 4.15 is actually a bound on the upper bound given by Eq. 4.11. In the application of quasi-random sequences to quadrature problems one is actually concerned with the convergence order of |ε(f;E)|. A true a priori estimate of |ε(f;E)| requires knowledge of the convergence properties of the estimator J_N (Eq. 4.2), which may be of a different order than those of D̄_N or of the discrepancy D_N. The quantity J_N will depend on the quality of the net being used, as well as the integrand. The convergence of J_N toward I gives the actual error convergence rate for a particular integration problem and number sequence; it is normally integrand dependent and its evaluation may prove intractable for all but the simplest of nets. On the other hand, the discrepancy D_N is most often used in the literature because it is theoretically accessible and independent of the integrand (or problem). Since |V(f)|·D_N represents an extreme upper bound for |ε(f;E)|, it may yield somewhat weaker convergence estimates than those encountered in practice. However, this is the price paid for using D_N when the convergence behaviour of J_N is unavailable.

Although there exist several types of discrepancies which have been defined in the past [15], we will restrict ourselves to the definition given by Eq. 4.9. Other discrepancies can be deduced from D_N through simple relations. For the one-dimensional case, Eq. 4.9 can be restated as follows [15]:

D_N = max_{1≤n≤N} max[ |x_n − n/N|, |x_n − (n−1)/N| ]  (4.16)

or

D_N = 1/(2N) + max_{1≤n≤N} |x_n − (2n−1)/(2N)|,  (4.17)

where the points x_n are arranged in increasing order.
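Equation 4.17 translates directly into a short computation. The sketch below (Python, for illustration) sorts the first N points of a sequence and applies Eq. 4.17; for a perfectly centered net the deviation term vanishes and the lower bound (2N)^{-1} is attained.

```python
def discrepancy(points):
    """D_N from Eq. 4.17: 1/(2N) plus the largest deviation of the sorted
    points x_(n) from the interval midpoints (2n - 1)/(2N)."""
    xs = sorted(points)
    N = len(xs)
    deviation = max(abs(x - (2 * n - 1) / (2.0 * N))
                    for n, x in enumerate(xs, start=1))
    return 1.0 / (2 * N) + deviation
```

For example, the four midpoints 1/8, 3/8, 5/8, 7/8 give D_4 = 1/8 exactly, while the single point 0.5 gives D_1 = 1/2.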

An inspection of Eq. 4.17 reveals that D_N has a lower bound of (2N)^{-1}, and the second term represents the absolute deviation of the sequence point x_n from the midpoint of the segment [(n−1)/N, n/N]. This deviation is, essentially, what determines the performance or quality of a quasi-random sequence with respect to the discrepancy. It is worth pointing out here that the sequence corresponding to D_N = (2N)^{-1} is given by

1/2N, 3/2N, ..., (2n−1)/2N, ..., (2N−1)/2N.  (4.18)

This sequence generates a net which is simply the midpoint or trapezoidal rule. By substituting the above sequence into Eq. 4.8 one obtains

D̄_N = 2 ∫_0^{1/2N} x dx + Σ_{n=1}^{N−1} ∫_{(2n−1)/2N}^{(2n+1)/2N} |S(x)/N − x| dx = 1/(4N²) + Σ_{n=1}^{N−1} ∫_{−1/2N}^{1/2N} |x| dx,  (4.19)

which can be reduced to D̄_N = (4N)^{-1}. Therefore, the order of convergence is the same in both cases, but the constants are different. As previously discussed, a more accurate estimate of |ε(f;E)| would require knowledge of the integrand in order that J_N may be evaluated. For example, for the particular function f(x) = 3x² − 2x and the previous trapezoidal net, one may obtain [17]

J_N = (1/N) Σ_{n=0}^{N−1} [3((2n+1)/2N)² − 2((2n+1)/2N)] = −1/(4N²),  (4.20)

such that

|ε(f;E)| = |I − J_N| = |0 − (−1/(4N²))| = 1/(4N²).  (4.21)
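The closed-form result of Eqs. 4.20-4.21 can be checked numerically; the short sketch below (illustrative Python) evaluates J_N on the midpoint net for f(x) = 3x² − 2x and confirms that the absolute error is exactly 1/(4N²) for several values of N.

```python
def midpoint_estimate(f, N):
    """J_N of Eq. 4.2 evaluated on the midpoint net of Eq. 4.18,
    i.e. the points (2n + 1)/(2N) for n = 0, ..., N - 1."""
    return sum(f((2 * n + 1) / (2.0 * N)) for n in range(N)) / N

f = lambda x: 3.0 * x * x - 2.0 * x    # exact integral I = 0 over (0,1)

for N in (4, 16, 64):
    error = abs(0.0 - midpoint_estimate(f, N))
    # Eq. 4.21: the error equals 1/(4 N^2) for this quadratic integrand
    assert abs(error - 1.0 / (4 * N * N)) < 1e-13
```

This O[N^{-2}] behaviour is far better than the O[N^{-1}] suggested by the discrepancy bound, illustrating how weak the bound can be for a specific, well-behaved integrand.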

(15)

be expected from numerical analysis [18]. Therefore. in this type of problem the upper bounds

D

and DN are rather weak as convergence estimators for the absolute error 1&(f.E)I. When more complicated nets are used (or the integrand properties are not adequately known). we will not have the luxury of prior knowledge concerning JN' and then the upper bounds

L·D

and IV(f)I·DN will become invaluable.

5.0 QUASI-RANDOM NUMBER SEQUENCES

The more commonly used one- and two-dimensional quasi-random sequences are considered in this chapter, and new ones are also introduced. The presentation begins with one of the best one-dimensional (1-D) sequences originally proposed by Van der Corput [19], and then the generalization of this sequence by Halton [13] is given, followed by the new one-dimensional sequence. The presentation continues with two-dimensional (2-D) sequences due to Van der Corput and Halton, and the chapter concludes with the introduction of a new 2-D sequence.

5.1 Van der Corput's 1-D Sequence

Any integer n can be written in binary notation as

n = n_0 + n_1·2 + n_2·2² + ... + n_m·2^m,  where each n_i = 0 or 1.  (5.1)

If the digits n_i are simply reversed and a decimal point inserted at the beginning, a unique fraction that lies between 0 and 1 can be constructed as

Φ_2(n) = n_0·2^{-1} + n_1·2^{-2} + ... + n_m·2^{-m-1}.  (5.2)

This is the Van der Corput sequence in one dimension (1-D), and the first sixteen numbers are illustrated in Table 1 (top of the next page).

The Van der Corput sequence may be visualized more easily by means of the recursion relation

Φ_2(n) = Φ_2(n/2)/2        for n even,
Φ_2(n) = Φ_2(n−1) + 1/2    for n odd.   (5.3)

The first fifteen numbers of this sequence are shown graphically on a line in Fig. 4a. On completion of a 'level' corresponding to n equal to 7 the sequence is perfectly equidistributed on the interval (0,1). This behaviour is typical of the Van der Corput sequence, and perfect equidistribution is achieved when n equals 1, 3, 7, 15, etc., or when n = 2^{k+1} − 1, where k = 0, 1, 2, 3, ... can be referred to as the kth level. Note also that even terms of the sequence lie in the interval (0,1/2), and odd terms always lie in the interval [1/2,1) because they are the previous even number plus 1/2. This helps illustrate that the sequence flip-flops or alternates over the domain. This flip-flop actually continues to all subintervals. Consider the subinterval (0,1/4). The first number that falls in this interval will be in the left-hand half between 0 and

(16)

Table 1: Van der Corput's 1-D Quasi-Random Numbers

n (decimal)   n (binary)   Φ_2(n) (binary)   Φ_2(n) (decimal)
1             1            0.1               0.5
2             10           0.01              0.25
3             11           0.11              0.75
4             100          0.001             0.125
5             101          0.101             0.625
6             110          0.011             0.375
7             111          0.111             0.875
8             1000         0.0001            0.0625
9             1001         0.1001            0.5625
10            1010         0.0101            0.3125
11            1011         0.1101            0.8125
12            1100         0.0011            0.1875
13            1101         0.1011            0.6875
14            1110         0.0111            0.4375
15            1111         0.1111            0.9375
16            10000        0.00001           0.03125

1/8, and the next one that falls in this interval will lie in the other half between 1/8 and 1/4. This flip-flop behaviour is what produces the approximate equidistribution of the number sequence throughout the interval (0,1), whenever the sequence is terminated after N numbers. Note that this flip-flop action and the 'left-hand rule' can be employed to produce the entire Van der Corput number sequence. We have, in fact, devised such a method based on a graphical or physical technique and reproduced Van der Corput's sequence as an interesting exercise.

One upper bound on the discrepancy D_N defined by Eq. 4.9 for the Van der Corput sequence is

D_N ≤ 4 ln(N)/(N ln(2)) = O[ln(N)/N],  (5.4)

as given elsewhere [13,14]. Note that a better upper discrepancy bound has been obtained by Tijdeman and reported by Bejian and Faure [20]. This result

N·D_N ≤ ln(N)/(3 ln(2)) + 1  (5.5)

has the same convergence order for very large N as Eq. 5.4. The upper discrepancy bound of O[N^{-1} ln(N)] for the Van der Corput sequence is considerably better than O[N^{-1/2}] for a net based on purely random numbers [12]. However, it is still not as good as the upper discrepancy bound (2N)^{-1} for the trapezoidal net. As mentioned previously, the best possible discrepancy for any dimension is O[N^{-1}].

Van der Corput's number sequence can be produced in the computer by using the FORTRAN statements given in table 2. Two versions are presented for this sequence. Both algorithms return the random number RAND for a specified input number N. The main difference is that the subroutine CORPUT1 is based on the graphical or physical approach described earlier, whereas CORPUT2 makes use of the binary expansion method. However, the CPU time (based on generating the first 20,000 numbers) for CORPUT1 is about 2.6 times smaller than that of CORPUT2.

Table 2: FORTRAN Subroutines for Generating Van der Corput's 1-D Numbers

      SUBROUTINE CORPUT1 (N,RAND)
      L=N
      RAND=0.0
   10 NSUM=1
   20 IF (L.LT.NSUM) GOTO 30
      NSUM=2*NSUM
      GOTO 20
   30 RAND=RAND+1.0/NSUM
      L=L-NSUM/2
      IF (L.GT.0) GOTO 10
      RETURN
      END

      SUBROUTINE CORPUT2 (N,RAND)
      M=0
      K=1
      J=N
      RAND=0.0
   10 M=M+1
      K=2*K
      MD=MOD(J,K)
      IF (MD.EQ.0) GOTO 10
      RAND=RAND+1.0/K
      J=J-MD
      IF (J.NE.0) GOTO 10
      RETURN
      END

5.2 Halton's 1-D Sequence

Van der Corput's quasi-random number sequence was generalized by Halton [13] for any radix R, where R = 2, 3, 4, ....., etc. For this generalized case let n be given by

    n = n0 + n1·R + n2·R^2 + ..... + nm·R^m,    (5.6)

and then define

    ΦR(n) = n0/R + n1/R^2 + ..... + nm/R^(m+1)    (5.7)

to be the unique fraction on the interval (0,1). The corresponding discrepancy bound for the general case is given by

    DN ≤ [(3R-2)/ln(R)]·ln(N)/N = O[ln(N)/N].    (5.8)

Note that the constant (3R-2)/ln(R) in this equation depends on the radix, and it increases for larger values of R. This serves to increase the discrepancy bound for a given sequence of numbers of length N. Therefore, the Van der Corput sequence (with a radix of 2) is probably the most efficient sequence of this family, even though they all have the same order of discrepancy bound.

Halton's generalized sequence for any radix may be obtained by using the FORTRAN statements in table 3. The next quasi-random number is obtained from the previous one, and a convenient value of zero for RAND will start the procedure. Halton's short algorithm will generate random numbers more efficiently by a factor of ten than CORPUT1 for the Van der Corput sequence, because it builds directly on the previous random number. For this reason it is recommended for sequentially generating random numbers for the case of all radices.


Table 3: FORTRAN Subroutine for Producing Halton's 1-D Numbers

      SUBROUTINE HALTON (RADIX,RAND)
      X=1.0-RAND
      Y=1.0/RADIX
   10 IF (X.GT.Y) GOTO 20
      Y=Y/RADIX
      GOTO 10
   20 RAND=RAND+(RADIX+1.0)*Y-1.0
      RETURN
      END
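The logic of table 3 carries over directly to other languages. The sketch below is a Python transliteration of the HALTON subroutine, included only for illustration; like the FORTRAN original, it relies on exact floating-point comparisons, which is safe for radix 2 but can suffer from round-off for other radices:

```python
def halton_next(prev, radix):
    """Return the Halton-sequence element that follows `prev`
    for the given radix (start the recursion with prev = 0.0)."""
    x = 1.0 - prev
    y = 1.0 / radix
    while x <= y:          # find the digit position to increment
        y /= radix
    return prev + (radix + 1.0) * y - 1.0
```

Starting from 0.0 with radix 2, this reproduces the Van der Corput values 0.5, 0.25, 0.75, 0.125, ..... of table 1.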

5.3 New Quasi-Random 1-D Sequence

Although it might at first appear that the Van der Corput sequence is the best possible for sequential sampling, this thought was set aside and a better quasi-random one-dimensional sequence was sought. Based on experience gained in reproducing Van der Corput's sequence on the basis of physical principles rather than number theory, we were able to construct a number sequence which has a lower discrepancy DN and absolute error |ε(f;E)|. This new quasi-random sequence will now be described.

The philosophy behind this new sequence is to image or 'mirror' each new point of the net about the points of the previous uniformly completed level. To illustrate this principle, the first fifteen numbers of the sequence are shown in Fig. 4b. The sequence is split into levels denoted by k = 0, 1, 2, ....., etc. The first point divides the segment [0,1] into equal parts by sampling at x = 1/2 (k = 0). This completes the first level. For the level corresponding to k = 1, there are 2^k sampling points, one at x = 1/2^2 and one at 3/2^2. Note that any odd term in the sequence is merely a reflection of the previous even term. The next level (k = 2) involves completing images about the points 1/4, 3/4 and 1/2. This type of algorithm is repeatedly applied to obtain a sequence of numbers.

A recursion relation which can be used to generate the new sequence is

    Q(n) = { (1/2)·Q(n/2)   for n even,
           { 1 - Q(n-1)     for n odd,        (5.9)

where the initial number Q(1) = 1/2. Some additional interesting properties of the new sequence, given as

    (1/N) Σ(j=1..N) Q(j) = 1/2                        for N odd,     (5.10)

    (1/N) Σ(j=1..N) Q(j) = 1/2 - [1 - 2·Q(N)]/(2N)    for N even,    (5.11)

were also discovered.
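Both the recursion of Eq. 5.9 and the mean-value property of Eq. 5.10 are easy to check numerically. The short Python sketch below is a modern illustration, not one of the report's FORTRAN listings, and it generates the sequence directly from the recursion:

```python
def Q(n):
    """nth term of the new quasi-random sequence (Eq. 5.9)."""
    if n == 1:
        return 0.5
    if n % 2 == 0:
        return Q(n // 2) / 2.0      # even terms: halve an earlier term
    return 1.0 - Q(n - 1)           # odd terms: mirror the previous term
```

The values agree with table 5, and the running mean of the first N terms equals exactly 1/2 whenever N is odd, as stated in Eq. 5.10.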


Two FORTRAN listings are given in table 4 for generating numbers from this new sequence (N > 1). The subroutine NEWSEQ1 produces the random number RAND corresponding to a specified value of N, whereas NEWSEQ2 gives the numbers sequentially by building on the previous number and level. Hence, NEWSEQ2 will produce sequential numbers more quickly. The first sixty numbers of the new sequence are given in table 5 for reference.

Table 4: FORTRAN Subroutines for Producing New Quasi-Random 1-D Numbers

      SUBROUTINE NEWSEQ1 (N,RAND)
      NLEV=2
      NSUM=1
   10 NSUM=NSUM+NLEV
      IF (N.LE.NSUM) GOTO 20
      NLEV=NLEV*2
      GOTO 10
   20 JLEV=N-NSUM+NLEV
      CENTER=1.0/NLEV
      RAND=CENTER/2.0
   30 NLEV=NLEV/2
      IF (JLEV.GT.NLEV) THEN
         RAND=2.0*CENTER-RAND
         JLEV=JLEV-NLEV
      END IF
      CENTER=CENTER*2.0
      IF (NLEV.GT.1) GOTO 30
      RETURN
      END

      SUBROUTINE NEWSEQ2 (N,KLEV,NSUM,RAND)
C     DATA KLEV/1/, NSUM/0/  (in main prog.)
      NLEV=KLEV
      IF (N.LE.NSUM) GOTO 10
      NLEV=NLEV*2
      NSUM=NSUM+NLEV
      KLEV=NLEV
   10 JLEV=N-NSUM+NLEV
      CENTER=1.0/NLEV
      RAND=CENTER/2.0
   20 NLEV=NLEV/2
      IF (JLEV.GT.NLEV) THEN
         RAND=2.0*CENTER-RAND
         JLEV=JLEV-NLEV
      END IF
      CENTER=CENTER*2.0
      IF (NLEV.NE.1) GOTO 20
      RETURN
      END
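For readers working outside FORTRAN, the NEWSEQ1 logic ports essentially line-for-line to a modern language. The following Python sketch is such a port (valid, like the original, for N > 1), offered for illustration only; it reproduces the entries of table 5:

```python
def newseq1(n):
    """Python port of subroutine NEWSEQ1: nth term of the new
    1-D quasi-random sequence (valid for n > 1)."""
    nlev, nsum = 2, 1
    while True:                  # find the level containing term n
        nsum += nlev
        if n <= nsum:
            break
        nlev *= 2
    jlev = n - nsum + nlev       # position of term n within its level
    center = 1.0 / nlev
    rand = center / 2.0
    while True:                  # descend the levels, mirroring as needed
        nlev //= 2
        if jlev > nlev:
            rand = 2.0 * center - rand
            jlev -= nlev
        center *= 2.0
        if nlev <= 1:
            break
    return rand
```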

Table 5: Quasi-Random Numbers from the New 1-D Sequence

     n    Q(n)         n    Q(n)         n    Q(n)
     1   0.5          21   0.78125      41   0.890625
     2   0.25         22   0.28125      42   0.390625
     3   0.75         23   0.71875      43   0.609375
     4   0.125        24   0.09375      44   0.140625
     5   0.875        25   0.90625      45   0.859375
     6   0.375        26   0.40625      46   0.359375
     7   0.625        27   0.59375      47   0.640625
     8   0.0625       28   0.15625      48   0.046875
     9   0.9375       29   0.84375      49   0.953125
    10   0.4375       30   0.34375      50   0.453125
    11   0.5625       31   0.65625      51   0.546875
    12   0.1875       32   0.015625     52   0.203125
    13   0.8125       33   0.984375     53   0.796875
    14   0.3125       34   0.484375     54   0.296875
    15   0.6875       35   0.515625     55   0.703125
    16   0.03125      36   0.234375     56   0.078125
    17   0.96875      37   0.765625     57   0.921875
    18   0.46875      38   0.265625     58   0.421875
    19   0.53125      39   0.734375     59   0.578125
    20   0.21875      40   0.109375     60   0.171875


The discrepancy bound DN for the new quasi-random sequence could not be determined analytically by employing Eq. 4.9 or 4.17, whereas DN for the Van der Corput sequence was given earlier by Eqs. 5.4 and 5.5. For this reason, DN was evaluated numerically for the first one thousand terms, by using Eq. 4.17, and this was done for both the new and Van der Corput sequences. The results are shown in Fig. 5 (a and b), where N·DN/ln(N) is plotted as a function of N. The fluctuation of the discrepancy with N is typical, whereas any upper bound is generally smooth. Note that N·DN/ln(N) in Fig. 5a for the Van der Corput sequence is amply bounded by the constant 4/ln(2) = 5.77 given in Eq. 5.4.

Tijdeman's estimate,

    N·DN/ln(N) ≤ 1/[3·ln(2)] + 1/ln(N),    (5.12)

more closely bounds the numerical results for the Van der Corput sequence. The results for N·DN/ln(N) for the new sequence in Fig. 5b have the same trend as those for Van der Corput's sequence, but the values are smaller or equal for each N. This illustrates that the new sequence has a discrepancy that is less than or equal to that for the Van der Corput sequence, and equality occurs only upon completion of each level of numbers. The new sequence has a better equidistribution of numbers or net between the completion of various levels. Although not presently accessible for the new quasi-random sequence, the quantity D̄ would perhaps be a better estimator for the integration error. The direct calculation of D̄ for the first several terms of the new and Van der Corput sequences indicated that it is also smaller for the new sequence. This is an encouraging trend that partly motivated the development of the current sequence. In spite of the generality of discrepancy theory in being independent of the integrand or problem, the performance of this theory in setting accurate upper bounds for integration error is often inadequate when compared with convergence rates obtained in practice.
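Eq. 4.17 itself is not reproduced in this section; for a sorted one-dimensional point set, however, the star discrepancy has a standard closed form that serves the same numerical purpose. The Python sketch below uses that standard formula (an assumed stand-in for Eq. 4.17, not a quotation of it):

```python
def star_discrepancy(points):
    """Star discrepancy D*_N of a finite 1-D point set in [0,1),
    via the standard closed form for sorted samples."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n)
               for i, x in enumerate(xs))
```

Applied to growing prefixes of a sequence, this yields N·DN/ln(N) curves of the kind plotted in Fig. 5.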

5.4 Van der Corput's 2-D Sequence

For the case of two dimensions, Van der Corput suggested using the sequence

    ( n/N, Φ2(n) ),    (5.13)

where n = 1,2,3,.....,N. In this expression, Φ2(n) is the one-dimensional Van der Corput sequence, and the terms n/N are simply a uniformly increasing sequence equivalent to the rectangular rule.

When this sequence is used, the last value N must be decided upon in advance, in order to determine the step size n/N for the first co-ordinate of each pair for the net. However, many problems require that the sampling procedure continue until a certain tolerance is achieved, rather than sampling a predetermined number of times. This type of sampling to an initially indeterminate N is known as sequential sampling. Hence, it is clear that the Van der Corput two-dimensional sequence is not suitable for sequential sampling applications.

A discrepancy bound for Van der Corput's 2-D sequence is given by [13]

    DN = O[ln(N)/N].    (5.14)


One might have expected that this is of the same order as the one-dimensional sequence, because the discrepancy order will be no better than the slowest convergence rate of the two co-ordinate sequences. In this case the Φ2(n) co-ordinate determines the overall discrepancy convergence rate for the two-dimensional Van der Corput sequence. It may be noted that the two-dimensional Van der Corput sequence has been generalized to higher dimensions by Hammersley [12], but these results will not be discussed in this paper.

5.5 Halton's 2-D Sequence

Certain problems may require the use of sequences which are suitable for sequential sampling, as noted in section 5.4. For this reason Halton [13] has suggested the sequence

    ( ΦR1(n), ΦR2(n) )    (5.15)

for use in two-dimensional problems. Here, ΦR1(n) and ΦR2(n) are two separate one-dimensional Halton sequences, where R1 and R2 are usually chosen to be the first two prime numbers (2 and 3), although other prime number combinations (3 and 5) could be used as well.

The discrepancy of the two-dimensional Halton sequence has the bound

    DN ≤ C1·C2·[ln(N)]^2/N = O[ln^2(N)/N],    (5.16)

where

    Ci = (3Ri - 2)/ln(Ri)    (5.17)

is a radix-dependent coefficient. The most commonly employed set of radices (R1,R2) is (2,3), because this set gives the minimum values of C1 and C2, as mentioned above.

The two-dimensional sequence of Halton appears to be the best available in the literature. Although other sequences may perform slightly better, they are either not suitable for sequential sampling (e.g., Van der Corput's 2-D sequence), or they are very cumbersome to generate with a computer [see 21].
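A common way to generate such pairs without carrying state is to evaluate the radical inverse of n in each radix directly. The Python sketch below does this for the radices (2,3); it is an illustration in a modern language, not one of the report's FORTRAN listings:

```python
def radical_inverse(n, radix):
    """Reflect the base-`radix` digits of n about the radix point."""
    x, f = 0.0, 1.0 / radix
    while n:
        x += (n % radix) * f
        n //= radix
        f /= radix
    return x

def halton_pair(n):
    """nth point of the 2-D Halton sequence with radices (2, 3)."""
    return radical_inverse(n, 2), radical_inverse(n, 3)
```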

5.6 New 2-D Sequence

A new one-dimensional quasi-random sequence was introduced in section 5.3. The central philosophy behind this 1-D sequence was to use an imaging process in order to gain a highly symmetrical set of equidistributed numbers with a low discrepancy and absolute integration error. This principle may be extended to two dimensions for the case of the unit square, where the imaging now takes place within rectangular subdomains of the square. It should be pointed out that the resulting 2-D quasi-random sequence [Ω1(n),Ω2(n)] does not stem directly from the new 1-D sequence, but rather that the general sequence construction techniques follow the same basic approach in both cases.

A subroutine for generating the new two-dimensional sequence of numbers (n = 1,2,3, ••••• ) is given in table 6. In order to use this subroutine, the


Table 6: FORTRAN Subroutines for Producing New Quasi-Random 2-D Numbers

      SUBROUTINE NEW2D (OMEG1,OMEG2,N)
      COMMON/No1/NLEV,JLEV,NSUM
      COMMON/No2/ILEV,CENTLEV
      IF (N.EQ.1) THEN
         OMEG1=0.5
         OMEG2=0.5
         GO TO 30
      END IF
      IF (N-NSUM.GT.0) THEN
         ILEV=ILEV*4
         CENTLEV=CENTLEV*0.5
         NSUM=NSUM+ILEV
      END IF
      JLEV=N-NSUM+ILEV
      NLEV=ILEV
      CENT=CENTLEV
      XSIZE=CENT
      YSIZE=CENT
   10 IF (CENT.GE.0.5) GO TO 20
      CALL RECT (XSIZE,YSIZE,CENT,N)
      GO TO 10
   20 OMEG1=POS1(XSIZE)
      OMEG2=POS2(YSIZE)
   30 RETURN
      END

      SUBROUTINE RECT (XSIZE,YSIZE,CENT,N)
      COMMON/No1/NLEV,JLEV,NSUM
      COMMON/No2/ILEV,CENTLEV
      RAT=FLOAT(JLEV)/NLEV
      NLEV=NLEV/4
      CENT=CENT*2.0
      IF (RAT.GT.0.50) GO TO 10
      IF (RAT.GT.0.25) THEN
         XSIZE=CENT-XSIZE
         YSIZE=CENT-YSIZE
         JLEV=JLEV-NLEV
      ELSE
         XSIZE=CENT+XSIZE
         YSIZE=CENT+YSIZE
      END IF
      GO TO 20
   10 IF (RAT.GT.0.75) THEN
         XSIZE=CENT-XSIZE
         YSIZE=CENT+YSIZE
         JLEV=JLEV-3*NLEV
      ELSE
         XSIZE=CENT+XSIZE
         YSIZE=CENT-YSIZE
         JLEV=JLEV-2*NLEV
      END IF
   20 RETURN
      END

      FUNCTION POS1 (XSIZE)
      COMMON/No1/NLEV,JLEV,NSUM
      COMMON/No2/ILEV,CENTLEV
      RAT=FLOAT(JLEV)/NLEV
      IF (RAT.GT.0.50) GO TO 10
      IF (RAT.GT.0.25) THEN
         POS1=0.50+XSIZE/2.0
      ELSE
         POS1=0.50-XSIZE/2.0
      END IF
      GO TO 20
   10 IF (RAT.GT.0.75) THEN
         POS1=0.50+XSIZE/2.0
      ELSE
         POS1=0.50-XSIZE/2.0
      END IF
   20 RETURN
      END

      FUNCTION POS2 (YSIZE)
      COMMON/No1/NLEV,JLEV,NSUM
      COMMON/No2/ILEV,CENTLEV
      RAT=FLOAT(JLEV)/NLEV
      IF (RAT.GT.0.50) GO TO 10
      IF (RAT.GT.0.25) THEN
         POS2=0.50+YSIZE/2.0
      ELSE
         POS2=0.50-YSIZE/2.0
      END IF
      GO TO 20
   10 IF (RAT.GT.0.75) THEN
         POS2=0.50-YSIZE/2.0
      ELSE
         POS2=0.50+YSIZE/2.0
      END IF
   20 RETURN
      END


common blocks shown in table 6 must also be included in the main computer program. Furthermore, certain variables must be initialized in the main program as indicated in the following brackets (ILEV=1, NSUM=1, CENTLEV=1.0, OMEG1=0.0, OMEG2=0.0). This program generates the paired co-ordinates sequentially, by building on the previous random numbers and other level information. The two co-ordinates OMEG1 and OMEG2 are returned corresponding to the value of n (capital N in the FORTRAN code listing). By building on the previous information, the co-ordinate pairs are generated efficiently. In spite of the length of the code listing, the co-ordinate pairs are generated only about 1% slower than the corresponding numbers produced by Halton's 2-D algorithm.

The first 94 co-ordinate pairs of numbers from the new 2-D sequence are presented for reference in table 7 (on the next page).

6.0 NUMERICAL RESULTS FOR 1-D INTEGRATION ERROR

Since we are primarily interested in quasi-random sequences in the context of Monte Carlo integrations, the next natural step is to test both the new and Van der Corput 1-D sequences on several test problems to obtain additional results for the absolute integration error. The goal is to gain more insight into the convergence properties of these new sequences for various test problems, even though the results will be somewhat problem dependent.

Case A:  I = ∫0^1 2x dx    (6.1)

Case B:  I = (π/2) ∫0^1 cos(πx/2) dx    (6.2)

Case C:  I = ∫0^1 f(x) dx    for f(x) = { 3/2  if 0 ≤ x ≤ 1/3,
                                        { 3/4  if 1/3 < x ≤ 1.    (6.3)

Case D:  I = (16/35) ∫0^1 f(x) dx    for f(x) = { 2 + 32x/3 - 128x^2/9  if 0 ≤ x ≤ 3/8,
                                                { 13/5 - 8x/5           if 3/8 < x ≤ 1.    (6.4)

Case E:  I = (180/117) ∫0^1 f(x) dx    for f(x) = { 3/5   if 0 ≤ x ≤ 1/9,
                                                  { 7/10  if 1/9 < x ≤ 7/18,
                                                  { 4/5   if 7/18 < x ≤ 2/3,
                                                  { 1/2   if 2/3 < x ≤ 1.    (6.5)


Table 7: Quasi-Random Numbers from the New 2-D Sequence

     n     Ω1       Ω2        n     Ω1       Ω2
     1  0.50000  0.50000     48  0.18750  0.68750
     2  0.25000  0.25000     49  0.81250  0.31250
     3  0.75000  0.75000     50  0.31250  0.18750
     4  0.25000  0.75000     51  0.68750  0.81250
     5  0.75000  0.25000     52  0.31250  0.81250
     6  0.12500  0.12500     53  0.68750  0.18750
     7  0.87500  0.87500     54  0.06250  0.18750
     8  0.12500  0.87500     55  0.93750  0.81250
     9  0.87500  0.12500     56  0.06250  0.81250
    10  0.37500  0.37500     57  0.93750  0.18750
    11  0.62500  0.62500     58  0.43750  0.31250
    12  0.37500  0.62500     59  0.56250  0.68750
    13  0.62500  0.37500     60  0.43750  0.68750
    14  0.12500  0.37500     61  0.56250  0.31250
    15  0.87500  0.62500     62  0.06250  0.31250
    16  0.12500  0.62500     63  0.93750  0.68750
    17  0.87500  0.37500     64  0.06250  0.68750
    18  0.37500  0.12500     65  0.93750  0.31250
    19  0.62500  0.87500     66  0.43750  0.18750
    20  0.37500  0.87500     67  0.56250  0.81250
    21  0.62500  0.12500     68  0.43750  0.81250
    22  0.06250  0.06250     69  0.56250  0.18750
    23  0.93750  0.93750     70  0.18750  0.06250
    24  0.06250  0.93750     71  0.81250  0.93750
    25  0.93750  0.06250     72  0.18750  0.93750
    26  0.43750  0.43750     73  0.81250  0.06250
    27  0.56250  0.56250     74  0.31250  0.43750
    28  0.43750  0.56250     75  0.68750  0.56250
    29  0.56250  0.43750     76  0.31250  0.56250
    30  0.06250  0.43750     77  0.68750  0.43750
    31  0.93750  0.56250     78  0.18750  0.43750
    32  0.06250  0.56250     79  0.81250  0.56250
    33  0.93750  0.43750     80  0.18750  0.56250
    34  0.43750  0.06250     81  0.81250  0.43750
    35  0.56250  0.93750     82  0.31250  0.06250
    36  0.43750  0.93750     83  0.68750  0.93750
    37  0.56250  0.06250     84  0.31250  0.93750
    38  0.18750  0.18750     85  0.68750  0.06250
    39  0.81250  0.81250     86  0.03125  0.03125
    40  0.18750  0.81250     87  0.96875  0.96875
    41  0.81250  0.18750     88  0.03125  0.96875
    42  0.31250  0.31250     89  0.96875  0.03125
    43  0.68750  0.68750     90  0.46875  0.46875
    44  0.31250  0.68750     91  0.53125  0.53125
    45  0.68750  0.31250     92  0.46875  0.53125
    46  0.18750  0.31250     93  0.53125  0.46875
    47  0.81250  0.68750     94  0.03125  0.46875


Case F:  I = ∫0^1 f(x) dx    for f(x) = { 3/5  if 0 ≤ x ≤ 1/5,
                                        { 1    if 1/5 < x ≤ 3/5,
                                        { 8/5  if 3/5 < x ≤ 4/5,
                                        { 4/5  if 4/5 < x ≤ 1.    (6.6)

Case G:  I = (144/101) ∫0^1 f(x) dx    for f(x) = { 3/4            if 0 ≤ x ≤ 1/9,
                                                  { 17/20 - 9x/10  if 1/9 < x ≤ 3/18,
                                                  { 7/10           if 3/18 < x ≤ 7/18,
                                                  { 4/5            if 7/18 < x ≤ 2/3,
                                                  { 3/5            if 2/3 < x ≤ 1.    (6.7)

These seven test problems include linear, smoothly curved, and discontinuous integrands, as depicted in Fig. 6. The integral I equals unity in all cases. Furthermore, the integrands for cases A and B, having continuous derivatives up to second order, belong to a class of functions called C2 for brevity. The

integrands for cases E, F and G have multiple discontinuities with various jump amplitudes.

Consider problem A first (linear ramp). The motivation behind choosing a linear ramp stems from the expression for |ε(f;E)| given by Eq. 4.6. The derivative of the integrand is a constant, and therefore the inequality goes over to an equality. For this case with L = 2 one may obtain

    |ε(fA;E)| = 2·D̄N    (6.8)

as an expression for the error from Eq. 4.6. A numerical solution of problem A will yield |ε(fA;E)| as a function of N. Because of the property of Eq. 5.10 of the new sequence, the linear ramp is integrated exactly for odd numbers N (i.e., the estimated integral equals 1 for N odd). This means that for the new sequence we only have error data for N even. Nevertheless, the information for N even leads to useful insight into the behaviour of the integration error.
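This exact-integration property is easy to confirm numerically. The Python sketch below is an illustration, not part of the report's FORTRAN; it rebuilds the new sequence from Eq. 5.9 and evaluates the single-point Monte Carlo estimate of problem A:

```python
def Q(n):
    """nth term of the new quasi-random sequence (Eq. 5.9)."""
    if n == 1:
        return 0.5
    return Q(n // 2) / 2.0 if n % 2 == 0 else 1.0 - Q(n - 1)

def estimate_case_a(n_samples):
    """Quasi-Monte Carlo estimate of I = integral of 2x over [0,1]."""
    return sum(2.0 * Q(j) for j in range(1, n_samples + 1)) / n_samples
```

For N = 7 the estimate is exactly 1, while for N = 8 the scaled error N·|ε| equals Q(N/2 + 1) = Q(5), in agreement with Eq. 6.10.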

The absolute integration error |ε(fA;E)| versus N from 1 to 1000 is given in Fig. 7 for the Van der Corput and new sequences. The error fluctuates with N in a manner that reflects the structure of both the integrand and the sequence. This is more apparent in the case of the new sequence, which has a strikingly more ordered structure. The comparison also illustrates that the results for the new sequence have a lower error for all N.

The oscillations that cause the darkened area of the integration error for the new sequence and their amplitudes can be elucidated by plotting the error multiplied by N. The results for

    C = |ε(fA;E)|·N    (6.9)

are shown in Fig. 8 (a and b). The quantity Cvdc grows slowly with N, and Cnew has a banded structure which is bounded by a constant envelope. These trends for both Cvdc and Cnew continue for larger N (checked to N equal to 10,000). Furthermore, these results illustrate that the integration error for the new sequence is of O[1/N], and the constant is less than or equal to unity for this problem. (Note that the vertical scales for Cvdc and Cnew are different.) This convergence order is clearly better than that for the Van der Corput sequence.

The numerical results for Cnew can be shown to follow the relationships

    Cnew = { Q(N/2 - 1)   for N even and N/2 odd,
           { Q(N/2 + 1)   for N even and N/2 even,    (6.10)

where Q(J) is the Jth term of the new sequence. In the spirit of Eq. 4.6 and using Eqs. 6.8 and 6.9 we define

    Dexp = |ε(fA;Enew)|/2 = Cnew/(2N)    (6.11)

to be a numerically determined value of D̄. By using Eq. 6.10 this gives

    Dexp = { Q(N/2 - 1)/(2N)   for N even and N/2 odd,
           { Q(N/2 + 1)/(2N)   for N even and N/2 even,    (6.12)

in terms of the sequence terms Q(J). Eq. 6.12 gives

Dexp as a function of N for N even. Numerically one finds the trend that |ε(fA;Enew)| = 2·Dexp for N odd is in general less than the error corresponding to the previous (N even) term. Intuitively, this is reasonable given the structure of the sequence, since the odd terms in N complete a symmetry about the point x = 1/2 which did not exist at the previous step in the integration. This means that Eq. 6.12, although exact for N even, will still yield a good estimate of Dexp for all N by tracking the oscillatory behaviour of |ε(fA;Enew)| versus N. In fact, this oscillatory trend in both the error and Cnew data is a predominant feature of

all the integrations on smooth functions which were investigated. This is not too surprising, because the oscillations for the case of a linear ramp were found to arise directly from similar properties of the sequence. By using Eq. 4.11 a numerical 'discrepancy' may also be defined by means of

    Dnum = |ε(f;E)|/V(f),    (6.13)

where once again V(f) is the variation of f on the domain D. Since V(fA) = 2 and Cnew ≤ 1, Eq. 6.11 gives

    Dexp ≤ 1/(2N),    (6.14)

and therefore

    |ε(f;E)| ≤ V(f)/(2N)    (6.15)

is postulated as an approximate error bound for a function satisfying the


previously mentioned smoothness requirements. By using Eqs. 4.6 and 6.12,

    |ε(f;E)| ≤ { L·Q(N/2 - 1)/(2N)   for N even and N/2 odd,
               { L·Q(N/2 + 1)/(2N)   for N even and N/2 even,    (6.16)

is also postulated as a general error bound for the same type of integrand.

Now consider the results for problem B (smoothly varying curve), which are displayed in Figs. 9 and 10. These results for the Van der Corput and new sequences are similar to those for the ramp, and the error is about the same magnitude. In addition, the behaviour of Cnew = |ε(fB;Enew)|·N in Fig. 10b is very similar to the results in Fig. 8b, and Eq. 6.16 gives

    |ε(fB;E)| ≤ { L·Q(N/2 - 1)/(2N)   for N even and N/2 odd,
                { L·Q(N/2 + 1)/(2N)   for N even and N/2 even,    (6.17)

with L = (π/2)^2. This expression tracks the oscillatory behaviour of the integration error shown in Fig. 9b very well. Furthermore, by using Eq. 6.15 with V(f) = π/2, we get

    Cnew = |ε(fB;E)|·N ≤ π/4,    (6.18)

or

    Cnew ≤ 0.785.    (6.19)

This is in agreement with the results in Fig. 10b. In fact, for functions that satisfy reasonable smoothness requirements, the general error estimate given by Eq. 6.15 is useful for all the integration problems tested to date. It should be stressed that this is not a theoretical result, but rather one which follows from the data for the linear ramp. The error graph corresponding to the Van der Corput sequence and shown in Fig. 9a shows the presence of local minima in the form of spikes at N of 15, 31, 63, 127, 255, ....., 2^k - 1. These correspond to completed levels in the Van der Corput sequence (and in the new sequence), and therefore converge as O[1/N] (rectangle rule). This can easily be verified from the principles outlined in chapter 4 (i.e., Eq. 4.6), or it can be observed from Fig. 10a. Note that the values of the absolute error |ε(fB;Evdc)| at these minima trace out a curve which is approximately equal to the mean convergence of the new sequence. Whereas these values are minima for the Van der Corput sequence, the new sequence oscillates about this mean. In this case it attains an average error which is essentially equivalent to the rectangle rule.
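The envelope Cnew ≤ 0.785 for problem B can be spot-checked directly. The Python sketch below is an illustration under the same definitions, not original to the report; it evaluates the quasi-Monte Carlo estimate of case B with the new sequence:

```python
import math

def Q(n):
    """nth term of the new quasi-random sequence (Eq. 5.9)."""
    if n == 1:
        return 0.5
    return Q(n // 2) / 2.0 if n % 2 == 0 else 1.0 - Q(n - 1)

def c_new_case_b(n_samples):
    """Scaled error N*|eps| for I = (pi/2) * integral of cos(pi*x/2)."""
    f = lambda x: 0.5 * math.pi * math.cos(0.5 * math.pi * x)
    est = sum(f(Q(j)) for j in range(1, n_samples + 1)) / n_samples
    return n_samples * abs(est - 1.0)
```

For the even sample sizes N = 2, 4, 6 and 8 the scaled errors all fall below π/4 ≈ 0.785, as Eq. 6.19 predicts.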

The numerical results for absolute error and absolute error times N for problems A and B for the ramp and smoothly varying functions are typical of most C2 integrands (smooth functions with bounded first derivatives).

Let us now turn to problems C and D which involve the integration of functions with one discontinuity. The results |ε(fC;Evdc)|, |ε(fC;Enew)|, Cvdc and Cnew for cases C and D are presented versus N in Figs. 11 to 14. The trends in these numerical results are fairly similar but there are basic differences. In the case of the new sequence, the convergence order of the error remains of O[1/N], but the constant Cnew is slightly larger than in the case without a discontinuity. This behaviour is typical of the new sequence for problems with and without discontinuities. On the other hand, in the case of Van der Corput's sequence, the convergence order improves somewhat for cases C and D. In case D the convergence order for Van der Corput's sequence actually approaches O[1/N]. However, in this particular case the integration error is still larger than that of the new sequence.

The discontinuity in case D has been made relatively large. When the magnitude of this discontinuity is decreased, so that the function becomes smoother, the performance of the new sequence improves while the performance of the Van der Corput sequence deteriorates. Even larger discontinuities will lead to more-or-less equivalent performance of the two sequences. Furthermore, the location of the discontinuity will have an effect on the performance of both sequences. The performance of each sequence will most often be adversely affected differently by the location of the discontinuity, because of the particular ordering of the sequence numbers.

For case C with a piecewise constant integrand the trend is similar. However, as the discontinuity disappears both sequences must produce the exact solution, and their relative performance is now mostly a function of the location of the discontinuity.

Note that the simplicity of the piecewise constant integrand of case C shown in Figs. 11 and 12 permits the highly ordered structure for both the Van der Corput and new sequences to surface, whereas the more complex integrand of problem D obscures such structure in Figs. 13 and 14.

Now consider the case of integrands with multiple discontinuities. The results for integrating the functions of cases E, F and G are displayed in Figs. 15 to 20. In spite of the number of discontinuities in each integrand, the main trends are similar to the case of a single discontinuity, and the locations of the discontinuities continue to play an important role. The new sequence usually performs better than Van der Corput's sequence, especially if the magnitudes of the discontinuities are small. However, the combination of large discontinuities at certain locations (see case F) can cause the performance of the two sequences to become very similar. In fact, the Van der Corput sequence can perform slightly better than the new sequence under these conditions. Note that the magnitude and location of the discontinuities for case G are intermediate to cases E and F, and the performance of the new sequence is better than the sequence of Van der Corput.

In concluding this discussion, one might note that the graphs for Cvdc or |ε(f;E)|·N were plotted in order to illustrate that the Van der Corput sequence does not generally converge with O[1/N] for smooth integrands. Even though numerically determined convergence rates for |ε(f;E)| were not a primary objective of this study, we would like to point out that, for problems with smooth integrands (e.g., cases A and B),

    |ε(f;Evdc)| ~ O[ [ln(N)]^(3/4)/N ],    (6.20)

in contrast to the discrepancy bound of O[ln(N)/N] given by Eq. 5.4. This is due in part to the functional dependence of N·DN/ln(N) on N given by Eq. 5.12, and also from the fact that Eq. 5.4 is only an upper bound on the discrepancy.


7.0 NUMERICAL RESULTS FOR 2-D INTEGRATION ERROR

In this chapter we are mainly interested in 2-D quasi-random sequences in the context of Monte Carlo integrations. The approach is to test both the new 2-D sequence and Halton's 2-D sequence (R1=2, R2=3) on several test problems, as was done in the last chapter for 1-D sequences, to obtain useful results for the absolute integration error. The primary goal is to gain more insight into the convergence properties of these sequences, even though the results will be somewhat problem dependent.

From a number of integration problems investigated in this study, the results of eight typical ones are presented herein. They are summarized below.

Case A:  I = ∫0^1 ∫0^1 (x + y) dxdy    (7.1)

Case B:  I = (π/2)^2 ∫0^1 ∫0^1 sin(πx) sin(πy) dxdy    (7.2)

Case C:  I = (3π/4)^2 ∫0^1 ∫0^1 sin^3(πx) sin^3(πy) dxdy    (7.3)

Case D:  I = (6/5)^2 ∫0^1 ∫0^1 (x+1)x (y+1)y dxdy    (7.4)

Case E:  I = 3.815 ∫0^1 ∫0^1 C sin(πx) sin(πy) dxdy    for C = { 1    if x > 5/9 and y > 5/9,
                                                               { 1/4  otherwise.    (7.5)

Case F:  I = 8.409 ∫0^1 ∫0^1 C sin(πx) sin(πy) dxdy    for C = { 1    if x > 5/9 and y > 5/9,
                                                               { 1/2  otherwise.    (7.6)

Case G:  I = (32/29) ∫0^1 ∫0^1 C dxdy    for C = { 1/2  if x > 3/4 and y > 1/4,
                                                 { 1    otherwise.    (7.7)

Case H:  I = 1.336 ∫0^1 ∫0^1 C dxdy    for C = { 1    if (x - 1/2)^2 + (y - 1/2)^2 > 0.16,
                                               { 1/2  otherwise.    (7.8)
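As a closing illustration, case H can be estimated with the 2-D Halton net of section 5.5. The Python sketch below is a modern illustration, not one of the report's listings; it uses the standard radix-2/radix-3 radical inverses, and with 2000 points the estimate lands close to the exact value of unity:

```python
def radical_inverse(n, radix):
    """Reflect the base-`radix` digits of n about the radix point."""
    x, f = 0.0, 1.0 / radix
    while n:
        x += (n % radix) * f
        n //= radix
        f /= radix
    return x

def c_case_h(x, y):
    """Piecewise-constant integrand coefficient of case H."""
    return 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 > 0.16 else 0.5

n_pts = 2000
est = 1.336 * sum(c_case_h(radical_inverse(k, 2), radical_inverse(k, 3))
                  for k in range(1, n_pts + 1)) / n_pts
```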
