EVALUATION OF ADAPTIVE MONTE CARLO METHODS FOR RELIABILITY ANALYSIS

Jendo S., Kolanek K.

Institute of Fundamental Technological Research, Polish Academy of Sciences, Swietokrzyska 21 Str., 00-049 Warsaw, Poland

Abstract: The cross-entropy method is a new approach to estimating rare event probabilities by Monte Carlo simulation. An application of this method to structural reliability analysis is presented. The efficiency of the approach is tested on benchmark problems typical for reliability analysis.

1. Introduction

Evaluation of the probability of failure is an essential problem in structural reliability analysis. The probability of failure is defined as the integral of the probability density function over the region of the random variable space in which failure occurs (Melchers 1999). Due to the usually high number of random variables in real-life applications, numerical integration is inefficient for this problem. In practice, first or second order approximation methods (FORM/SORM, Melchers 1999) are often used to evaluate the probability of failure. However, the applicability of these methods is limited to problems satisfying certain conditions. An alternative is Monte Carlo integration. Since a failure event is usually rare, it is common to apply importance sampling in order to facilitate the calculations. Some well-developed algorithms for structural reliability are available; however, all of them have certain limitations. Thus, the authors found it interesting to investigate the efficiency of the cross-entropy approach applied to structural reliability analysis problems.

The cross-entropy method is a recently proposed approach for the simulation of rare events (Rubinstein 1997). Its application to structural reliability analysis is very simple and requires only a straightforward formulation of the problem. Basically, the cross-entropy method is a form of importance sampling. It is interesting because it is based on a very elegant and efficient approach to the selection of the sampling distribution by adaptive simulation, without the need for any additional optimisation algorithm.


The results of the numerical experiments presented in the paper show that the cross-entropy method is a reasonable approach to solving structural reliability problems.

2. Structural reliability analysis by importance sampling

The time-invariant structural reliability problem is usually defined as follows (Melchers 1999). Uncertain structural parameters are represented by a real-valued random vector X = (X_1, X_2, \ldots, X_n) with joint probability density function f(x). Structural performance with respect to the random parameters is reflected by a limit state function g(x). The limit state function is defined to take negative values for parameters for which failure occurs. Thus, the limit state function defines a subset of the random variable space called the failure domain, F = \{ x : g(x) \le 0 \}. Finally, the probability of failure is defined as

P_F = \int_F f(x) \, dx.  (1)

The integral (1) can be estimated by means of Monte Carlo integration. Since for engineering structures a small probability of failure is desired, the crude Monte Carlo method is inefficient for such problems. Therefore, the application of variance reduction techniques, like importance sampling, is usually attempted. The formula for importance sampling in the evaluation of P_F is based on (1) rewritten as follows:

P_F = \int_F f(x) \, dx = \int I\big( g(x) \le 0 \big) \frac{f(x)}{h(x)} \, h(x) \, dx = E_h\!\left[ I\big( g(X) \le 0 \big) \frac{f(X)}{h(X)} \right],  (2)

where h(x) is the importance sampling density, I(\cdot) is the indicator function of the failure domain, and E_h denotes the expectation with respect to the density h(x). Having m independent sample points X^{(k)}, k = 1, \ldots, m, from the distribution h(x), the expectation in (2) can be estimated from:

\hat{P}_F = \frac{1}{m} \sum_{k=1}^{m} I\big( g(X^{(k)}) \le 0 \big) \frac{f(X^{(k)})}{h(X^{(k)})}.  (3)

The optimal density function h(x) that minimizes the variance of this estimator has the following form:

h(x) = \begin{cases} f(x)/P_F, & \text{if } g(x) \le 0, \\ 0, & \text{otherwise}. \end{cases}  (4)

However, this formula is rather theoretical, since the generation of independent random variables from h(x) requires knowledge of the very term of interest, P_F. In practice, the distribution from which samples are produced is usually chosen to resemble the distribution with density h(x).
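As a minimal numerical sketch of estimator (3), consider an illustrative one-dimensional problem (our own construction, not one from the paper): g(x) = c - x with X ~ N(0,1), for which the exact failure probability is \Phi(-c); the sampling density h is a unit normal shifted to the limit state boundary:

```python
import math
import random

random.seed(1)

def log_norm_pdf(x, mu=0.0, sigma=1.0):
    """Log-density of the normal distribution N(mu, sigma^2)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

c = 3.0        # limit state g(x) = c - x; failure for x >= c
m = 50_000     # number of importance samples

# Sampling density h: unit normal centred on the limit state boundary.
samples = [random.gauss(c, 1.0) for _ in range(m)]

# Estimator (3): mean of I(g(x) <= 0) * f(x)/h(x) over samples from h.
p_hat = sum(
    math.exp(log_norm_pdf(x) - log_norm_pdf(x, mu=c))
    for x in samples
    if c - x <= 0.0
) / m

p_exact = 0.5 * math.erfc(c / math.sqrt(2.0))   # Phi(-c), the exact answer
print(p_hat, p_exact)
```

With the same budget, crude Monte Carlo would observe only about m·\Phi(-3) ≈ 68 failures, giving a far noisier estimate.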

3. Adaptive importance sampling

In real-life applications, the choice of the sampling distribution is usually reduced to a parametric family of distributions for which it is easy to generate independent random samples. Moreover, a common practice is to employ a family \{ f(x, v), v \in V \} (v is a vector of parameters) that includes the distribution of the random vector X for which the reliability problem is defined. The probability density function of X will be denoted by f(x, u). Obviously, the parameters v should be selected to facilitate the estimation of the probability of failure by formula (3).

An example of the considered strategy is sampling from the multivariate normal distribution if the reliability problem is defined in a standard normal space. This approach includes the most popular implementation of importance sampling in structural reliability, when the sample is generated from the standard normal distribution shifted to the design point.

In general, the parameters of the sampling distribution can be obtained by minimizing the variance of the importance sampling estimator. This problem can be formulated as follows:

\min_{v \in V} \mathrm{Var}_{f(x,v)}\!\left[ I_F(X) \frac{f(X, u)}{f(X, v)} \right],  (5)

or alternatively by:

\min_{v \in V} E_{f(x,v)}\!\left[ I_F(X) \frac{f^2(X, u)}{f^2(X, v)} \right].  (6)

The above optimisation problem can be solved using the following estimate of the expectation:

\min_{v \in V} \frac{1}{n} \sum_{i=1}^{n} I_F(X^{(i)}) \, \frac{f(X^{(i)}, u)}{f(X^{(i)}, v)} \, \frac{f(X^{(i)}, u)}{f(X^{(i)}, v_1)},  (7)


where f(x, v_1) is the probability density of the random sample X^{(i)}, i = 1, \ldots, n.

A simple adaptive algorithm for the estimation of the failure probability based on formula (7) can be formulated as follows:

1. Take f(x, v_1) = f(x, u). Generate the sample X^{(1)}, \ldots, X^{(N)} with density f(x, v_1) and solve the optimisation problem (7). Denote the solution by \hat{v} and assume \hat{v} as the estimate of the optimal parameter vector v.

2. Estimate the probability of failure with (3), taking h(x) = f(x, \hat{v}).

In order to obtain a more accurate estimate of v, take v_1 = \hat{v} and repeat the first step of the algorithm.
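The two steps above can be sketched for a hypothetical one-dimensional problem; here the optimisation (7) is solved by a crude grid search over the mean of a unit-variance normal family, which is a simplification for illustration, not the authors' actual implementation:

```python
import math
import random

random.seed(2)

def log_npdf(x, mu):
    """Log-density of N(mu, 1)."""
    return -0.5 * (x - mu) ** 2 - 0.5 * math.log(2.0 * math.pi)

def fails(x, c=2.0):
    return c - x <= 0.0            # limit state g(x) = c - x

# Step 1: sample from the initial density f(x, v1) = f(x, u) = N(0, 1)
# and minimise the variance criterion (7) over a grid of candidate means v.
n = 5_000
v1 = 0.0
xs = [random.gauss(v1, 1.0) for _ in range(n)]

def criterion(v):
    # (1/n) * sum I(g<=0) * [f(x,u)/f(x,v)] * [f(x,u)/f(x,v1)], samples from f(., v1)
    return sum(
        math.exp(2.0 * log_npdf(x, 0.0) - log_npdf(x, v) - log_npdf(x, v1))
        for x in xs if fails(x)
    ) / n

grid = [i * 0.1 for i in range(0, 41)]     # candidate means 0.0 .. 4.0
v_hat = min(grid, key=criterion)

# Step 2: importance sampling with h(x) = f(x, v_hat), estimator (3).
m = 20_000
ys = [random.gauss(v_hat, 1.0) for _ in range(m)]
p_hat = sum(math.exp(log_npdf(y, 0.0) - log_npdf(y, v_hat))
            for y in ys if fails(y)) / m

p_exact = 0.5 * math.erfc(2.0 / math.sqrt(2.0))   # Phi(-2)
print(v_hat, p_hat, p_exact)
```

In a realistic multi-dimensional setting the grid search would be replaced by a proper optimisation algorithm, which is precisely the overhead the cross-entropy method of the next section avoids.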

4. The cross-entropy method

An alternative method for the selection of the importance sampling distribution parameters can be formulated using the cross-entropy (Rubinstein 1997). The cross-entropy, known also as the Kullback-Leibler distance, of two probability distributions with densities f(x) and g(x) is defined as:

D(f, g) = \int f(x) \ln \frac{f(x)}{g(x)} \, dx.  (8)

It should be mentioned that the cross-entropy is not a distance in the formal sense since, for instance, it does not satisfy the symmetry requirement D(f, g) = D(g, f). The cross-entropy of the distribution h given by (4) and the distribution f(x, v) can be expressed as follows:

D\big( h(x), f(x, v) \big) = \int P_F^{-1} I_F(x) f(x, u) \ln\big[ P_F^{-1} I_F(x) f(x, u) \big] \, dx - E_{f(x,u)}\big[ P_F^{-1} I_F(X) \ln f(X, v) \big].  (9)

Because the distributions h and f(x, v) should be similar, it seems reasonable to require the cross-entropy of the distributions h and f(x, v) to be minimal. Thus, the optimal parameter v according to the cross-entropy criterion is the solution of the problem:

\min_{v \in V} D\big( h(x), f(x, v) \big),  (10)

or alternatively (Homem-de-Mello & Rubinstein 2002):

\max_{v \in V} D^*(v) = \max_{v \in V} E_{f(x,u)}\big[ I_F(X) \ln f(X, v) \big].  (11)

The solution of the problem (11) can be approximated by means of importance sampling:

\hat{v}_n = \arg\max_{v \in V} \frac{1}{n} \sum_{i=1}^{n} I_F(X^{(i)}) \, \frac{f(X^{(i)}, u)}{f(X^{(i)}, v_1)} \, \ln f(X^{(i)}, v).  (12)

The problems (11) and (6) aim at the same goal: finding parameters of the importance sampling distribution that are optimal in some sense. Because the variance of the estimate is minimized in (6), considering the problem (11) may seem pointless. However, the cross-entropy method is based on a much nicer optimisation problem, which in some cases can even be solved analytically (De Boer et al. 2005). Moreover, in (Homem-de-Mello & Rubinstein 2002) it is proven that the solutions of both problems become equivalent as the probability of failure goes to zero. Thus, the application of the cross-entropy method is justified provided that the savings from solving the easier optimisation problem compensate for the use of a sub-optimal sampling density.

5. Basic cross-entropy algorithm

Because for typical problems the function D^* in (11) is convex and differentiable with respect to v, the solution of (12) can be found by solving the following system of equations:

\frac{1}{n} \sum_{i=1}^{n} I_F(X^{(i)}) \, \frac{f(X^{(i)}, u)}{f(X^{(i)}, v_1)} \, \nabla_v \ln f(X^{(i)}, v) = 0.  (13)

The above system of equations takes a very simple form for independent random variables. For instance, consider a set of independent normal variables with the joint probability density function given by:

f(x, \mu, \sigma) = \prod_{i=1}^{n} \frac{1}{\sigma_i \sqrt{2\pi}} \exp\left( -\frac{(x_i - \mu_i)^2}{2\sigma_i^2} \right),  (14)


where \mu_1, \ldots, \mu_n are the mean values and \sigma_1, \ldots, \sigma_n are the standard deviations of the components. The gradient of the logarithm of the probability density has elements of the following form:

\frac{\partial \ln f(x, \mu, \sigma)}{\partial \mu_i} = \frac{x_i - \mu_i}{\sigma_i^2}, \quad i = 1, \ldots, n,  (15)

\frac{\partial \ln f(x, \mu, \sigma)}{\partial \sigma_i} = \frac{(x_i - \mu_i)^2 - \sigma_i^2}{\sigma_i^3}, \quad i = 1, \ldots, n.  (16)

Thus, the following set of equations for the optimal parameters of (14) can be obtained by substituting formulas (15) and (16) into (13):

\hat{\mu}_i = \frac{\sum_{j=1}^{n} I_F(X^{(j)}) \, \dfrac{f(X^{(j)}, u)}{f(X^{(j)}, \mu^{(1)}, \sigma^{(1)})} \, X_i^{(j)}}{\sum_{j=1}^{n} I_F(X^{(j)}) \, \dfrac{f(X^{(j)}, u)}{f(X^{(j)}, \mu^{(1)}, \sigma^{(1)})}},  (17)

\hat{\sigma}_i^2 = \frac{\sum_{j=1}^{n} I_F(X^{(j)}) \, \dfrac{f(X^{(j)}, u)}{f(X^{(j)}, \mu^{(1)}, \sigma^{(1)})} \, \big( X_i^{(j)} - \hat{\mu}_i \big)^2}{\sum_{j=1}^{n} I_F(X^{(j)}) \, \dfrac{f(X^{(j)}, u)}{f(X^{(j)}, \mu^{(1)}, \sigma^{(1)})}},  (18)

where \mu^{(1)}, \sigma^{(1)} are the parameters of the sampling density f(x, v_1).

With the set of equations (13), the algorithm proposed for the minimum variance criterion can easily be adapted for the estimation of the probability of failure using the cross-entropy optimal parameters.

It should be mentioned that the results (17) and (18) are equivalent to the well-known approach employing a sampling distribution with the same first two moments as the distribution (4) (Bucher 1988).
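For the independent normal family, the updates (17)-(18) are simply weighted sample moments of the failure points. A hypothetical one-dimensional illustration: for failure defined as x >= 3 under f(x, u) = N(0, 1), the optimal density (4) is the normal truncated at 3, whose mean is \varphi(3)/\Phi(-3) ≈ 3.28, and the weighted moments recover it:

```python
import math
import random

random.seed(3)

def log_npdf(x, mu, sigma=1.0):
    """Log-density of N(mu, sigma^2)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2.0 * math.pi)

a = 3.0                     # failure domain: x >= a, under f(x, u) = N(0, 1)
mu1, sigma1 = 2.0, 1.0      # initial sampling parameters v1 (illustrative guess)

n = 200_000
xs = [random.gauss(mu1, sigma1) for _ in range(n)]

# Weights I_F(x) * f(x, u) / f(x, v1) appearing in (17)-(18).
wx = [(math.exp(log_npdf(x, 0.0) - log_npdf(x, mu1, sigma1)), x)
      for x in xs if x >= a]

w_sum = sum(w for w, _ in wx)
mu_hat = sum(w * x for w, x in wx) / w_sum                     # update (17)
var_hat = sum(w * (x - mu_hat) ** 2 for w, x in wx) / w_sum    # update (18)
sigma_hat = math.sqrt(var_hat)

print(mu_hat, sigma_hat)    # approximately the truncated-normal moments, ~3.28 and ~0.27
```

No optimisation routine is needed: the closed-form weighted moments are exactly the appeal of the cross-entropy updates for the normal family.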

6. Adaptive algorithm

The basic algorithm cannot be applied effectively to problems with a very small probability of failure; in (De Boer et al. 2005) it is recommended that P_F \ge 10^{-5}. Otherwise, it is necessary to employ the adaptive algorithm. Obviously, whether or not to use the adaptive algorithm depends on the effectiveness of the estimation of the distribution parameters by importance sampling with initially guessed parameters. Thus, in some cases it is reasonable to employ the adaptive cross-entropy algorithm even for a higher probability of failure.

The following adaptive algorithm was proposed in (Homem-de-Mello & Rubinstein 2002); here it is presented with the notation used in structural reliability problems. The parameters that need to be specified for this algorithm are \rho \in (0, 1), \alpha > 1, \varepsilon > 0, and N, the number of simulations performed in each iteration. The algorithm proceeds as follows:

1. Initialization. Set \hat{v}_0 = u. For a sample X^{(1)}, \ldots, X^{(N)} from the distribution with probability density f(x, \hat{v}_0), evaluate the \rho_0-quantile of the random variable Y_i = g(X^{(i)}) and denote it by \hat{\gamma}_0:

P\big( g(X^{(i)}) \le \hat{\gamma}_0 \big) = \rho_0, \quad i = 1, \ldots, N.  (19)

Set t = 1.

2. Use the same sample X^{(1)}, \ldots, X^{(N)} to solve the optimisation problem:

\hat{v}_t = \arg\max_{v \in V} \frac{1}{N} \sum_{i=1}^{N} I\big( g(X^{(i)}) \le \hat{\gamma}_{t-1} \big) \, \frac{f(X^{(i)}, u)}{f(X^{(i)}, \hat{v}_{t-1})} \, \ln f(X^{(i)}, v),  (20)

where I( g(X^{(i)}) \le \hat{\gamma}_{t-1} ) is the indicator function of the set for which g(X^{(i)}) \le \hat{\gamma}_{t-1}.

3. Generate a new sample X^{(1)}, \ldots, X^{(N)} from the probability density function f(x, \hat{v}_t).

4. For the current sample X^{(1)}, \ldots, X^{(N)}, evaluate the \rho_t-quantile of the random variable Y_i = g(X^{(i)}) and denote it by \hat{\gamma}_t.

5. If \hat{\gamma}_t \le 0, set \hat{\gamma}_t = 0 and find the estimate of the parameter v, denoted by \hat{v}_T, by solving the following problem:

\hat{v}_T = \arg\max_{v \in V} \frac{1}{N} \sum_{i=1}^{N} I_F(X^{(i)}) \, \frac{f(X^{(i)}, u)}{f(X^{(i)}, \hat{v}_t)} \, \ln f(X^{(i)}, v).  (21)

Go to step 7.

6. If \hat{\gamma}_t > 0, check whether there exists \rho such that \gamma, the \rho-quantile of the random sample g(X^{(1)}), g(X^{(2)}), \ldots, satisfies \gamma \le \max(0, \hat{\gamma}_{t-1} - \varepsilon):
- if such a \rho exists and \rho \ge \rho_t, then set t = t + 1 and repeat the iterations from step 2;
- if such a \rho exists and \rho < \rho_t, then set \rho_t = \rho and return to step 4;
- otherwise (for example, when no such \rho exists), increase the number of simulations as N = \alpha N and return to step 3.

7. Estimate the probability of failure by importance sampling (3) using the sampling density function f(x, \hat{v}_T).

In (Homem-de-Mello & Rubinstein 2002) it was proven that the algorithm converges in a finite number of iterations to the solution of the problem (11) with probability 1.
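The level-controlled iteration can be sketched for a hypothetical one-dimensional problem with a unit-variance normal family whose mean is the only adapted parameter; the logic of steps 5-6 is simplified here (constant \rho, no sample-size increase), so this is an illustration of the idea rather than the full algorithm:

```python
import math
import random

random.seed(4)

def log_npdf(x, mu):
    """Log-density of N(mu, 1)."""
    return -0.5 * (x - mu) ** 2 - 0.5 * math.log(2.0 * math.pi)

beta = 4.0
def g(x):                      # limit state: failure for g(x) <= 0
    return beta - x

rho, N = 0.1, 2_000
mu_t = 0.0                     # v_hat_0 = u (mean of the original density N(0, 1))

for _ in range(50):            # safety cap on the number of iterations
    xs = [random.gauss(mu_t, 1.0) for _ in range(N)]
    ys = sorted(g(x) for x in xs)
    gamma_t = max(ys[int(rho * N)], 0.0)      # rho-quantile of g, clipped at 0 (step 5)
    # CE update (20) for the normal mean: weighted average of the elite samples.
    wx = [(math.exp(log_npdf(x, 0.0) - log_npdf(x, mu_t)), x)
          for x in xs if g(x) <= gamma_t]
    mu_t = sum(w * x for w, x in wx) / sum(w for w, _ in wx)
    if gamma_t == 0.0:         # failure level reached: stop adapting
        break

# Step 7: importance sampling (3) with the final density f(x, v_hat_T).
m = 50_000
zs = [random.gauss(mu_t, 1.0) for _ in range(m)]
p_hat = sum(math.exp(log_npdf(z, 0.0) - log_npdf(z, mu_t))
            for z in zs if g(z) <= 0.0) / m

p_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-4), about 3.2e-5
print(mu_t, p_hat, p_exact)
```

The intermediate levels \hat{\gamma}_t let the sampling mean walk towards the failure domain in a few iterations, even though a direct hit on a 3·10^{-5} event from N(0, 1) would be hopeless with 2000 samples.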


It is worth noticing that the algorithm estimates v by a sequence of approximations controlled by the sequence of parameters \rho_t. In practice, the algorithm is initiated with \rho_0 \in [0.1, 0.5], and the values \rho_t can be kept constant during the run of the algorithm.

7. Numerical examples

The efficiency of the algorithm presented in the preceding section was tested on the benchmark problems used in (Engelund and Rackwitz 1993) to evaluate the performance of various importance sampling algorithms. The limit state functions used in the benchmark problems were selected in order to evaluate the algorithms with respect to difficulties characteristic of reliability analysis.

The following examples were computed in the standard normal space. The sampling distributions were chosen from the multivariate normal family with the identity covariance matrix. The mean values of the components were adjusted according to (17). Compared to the algorithm presented above, a slightly modified computational scheme was used. The parameter \rho was kept constant during the execution of the algorithm. In each iteration the simulations were performed until the distribution parameters were estimated with the required accuracy \alpha. For each problem some initial runs of the algorithm were performed in order to select a combination of \rho and \alpha allowing the probability of failure to be estimated with reasonable computational effort. However, the aim of the initial runs was the selection of a reasonable set of parameters, not their optimisation. Then, each problem was solved several hundred times, and the mean number of samples required for the estimation of the probability of failure with a 0.1 coefficient of variation was calculated. Some of the numbers presented in the following may seem to indicate that the considered algorithm is inefficient. However, it should be remembered that the given number of simulations shows the total effort required for solving the problem, including the estimation of the sampling distribution parameters.

Example 1. High number of variables and probability levels

The aim of this example is to evaluate the performance of the algorithm for various levels of probability and for different numbers of random variables. The limit state function used in this example is the n-dimensional hypersurface:

g(U) = \beta - \frac{1}{\sqrt{n}} \sum_{i=1}^{n} U_i,  (22)


where U_i, i = 1, \ldots, n, are standard normal random variables and \beta is the reliability index.

The numerical experiments were performed for the following values of the parameters: \beta = 1.0, 5.0, 10.0 and n = 2, 10, 50. The results of the experiments are presented in Table 1. As can be seen, the required numerical effort increases with the number of random variables and with the value of the reliability index. The lack of convergence reported for \beta = 10.0 and n = 10, 50 means that a satisfactory accuracy of the estimate was not obtained within the allowed number of simulations.
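Because the limit state of Example 1 is linear in the standard normal variables, the exact failure probability is \Phi(-\beta) regardless of n, which makes the benchmark easy to verify; a crude Monte Carlo sketch, assuming the normalized form g(U) = \beta - n^{-1/2} \sum U_i:

```python
import math
import random
from statistics import NormalDist

random.seed(6)

def g(u, beta):
    """Linear limit state of Example 1 (assumed normalized form); failure for g <= 0."""
    return beta - sum(u) / math.sqrt(len(u))

beta, n, m = 1.0, 10, 200_000
hits = sum(1 for _ in range(m)
           if g([random.gauss(0.0, 1.0) for _ in range(n)], beta) <= 0.0)
p_mc = hits / m
p_exact = NormalDist().cdf(-beta)     # Phi(-1), about 0.159

print(p_mc, p_exact)
```

Crude Monte Carlo is only practical here because \beta = 1.0 was chosen; for \beta = 5.0 or 10.0 in the table, the adaptive importance sampling scheme is indispensable.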

Table 1. Algorithm parameters and total number of simulations, Example 1

                   β = 1.0    β = 5.0     β = 10.0
n = 2    ρ         0.3        0.3         0.3
         α         0.1        0.1         0.1
         N total   235.25     871.25      1856.75
n = 10   ρ         0.5        0.5         0.45
         α         0.5        0.5         0.05
         N total   360.01     1374.75     3180.95 (16% not converged)
n = 50   ρ         0.5        0.5         0.45
         α         0.1        0.1         0.05
         N total   881.75     2968.0      8568.55 (38% not converged)

Example 2. Nonlinearity of the limit state function and probability levels

In this example, the limit state function is given by the following formula:

g(X) = C - \sum_{i=1}^{n} X_i,  (23)

where the random variables X_i, i = 1, \ldots, n, are independent and exponentially distributed with the parameter \lambda. After transformation to the standard normal space, the considered limit state function becomes highly nonlinear:

g(U) = C + \frac{1}{\lambda} \sum_{i=1}^{n} \ln\big( 1 - \Phi(U_i) \big),  (24)

where U_i, i = 1, \ldots, n, are standard normal random variables, \Phi is the standard normal distribution function, and \lambda is the parameter of the exponential distribution.

The problem was computed for \lambda = 1 and n = 20 and for the different values of C presented in Table 2. The average numbers of simulations needed for the estimation of the failure probability with the required accuracy are also given in Table 2. As can be seen, the numerical effort increases rapidly with a dropping value of C. This is due to the fact that when C decreases, \beta grows, which increases the number of iterations, and the important region of the failure domain shrinks, which makes the estimation of the distribution parameters more difficult. Taking this into account, the presented results are quite satisfactory.

1 and n20 and different values of C , which are presented in Table 2. The average number of simulations needed for esti-mation of failure probability with required accuracy are given in Table 2 as well. As it can be seen the numerical effort increases rapidly with dropping value of C . It is due to the fact that when C is decreasing, then  grows making the number of it-erations to increase, and the important region of the failure domain shrinks causing estimation of the distribution parameters more difficult. Taking this into account the presented results are quite satisfying.

Table 2. Algorithm parameters and total number of simulations, Example 2

C         16.175    11.077    8.951     7.435     6.277      5.343
Pf        0.20      10^-2     10^-3     10^-4     10^-5      10^-6
β         0.841     2.328     3.093     3.722     4.268      4.765
ρ         0.475     0.3       0.3       0.3       0.2        0.2
α         0.313     0.024     0.015     0.01      0.005      0.001
N total   493.25    667.875   3014.5    5512.25   8505.05    15022.02
                                                  (1% not    (1.5% not
                                                  converged) converged)
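The Pf and β rows of Table 2 are linked by \beta = -\Phi^{-1}(P_f), which offers a quick consistency check of the table (agreement to within about 0.01):

```python
from statistics import NormalDist

nd = NormalDist()

# (Pf, beta) pairs taken from Table 2.
rows = [(0.20, 0.841), (1e-2, 2.328), (1e-3, 3.093),
        (1e-4, 3.722), (1e-5, 4.268), (1e-6, 4.765)]

for pf, beta in rows:
    # Reliability index implied by the failure probability.
    implied = -nd.inv_cdf(pf)
    print(f"Pf={pf:g}  beta(table)={beta}  beta(implied)={implied:.3f}")
```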

8. Conclusion

In the paper, the application of the cross-entropy method to the estimation of structural failure probability was outlined. The performance of the algorithm was tested on problems with difficulties typical for reliability analysis. The obtained results show that, if the numerical effort is considered, the cross-entropy method is not an alternative to importance sampling methods using the design point (compare with the results in (Engelund & Rackwitz 1993)). However, for problems where the design point cannot be found easily, this method seems to be a reasonable approach. The numerical effort is high but still acceptable, and it seems that it can be rewarded by the ease of implementation of the method for a specific problem.


Acknowledgments

The support of the Polish State Committee for Scientific Research grants 4 T07A 002 26 and 5 T07A 018 24 is kindly acknowledged.

References

1. Bucher, C. G.: Adaptive sampling - an iterative fast Monte Carlo procedure. Structural Safety, Vol. 5, pp. 119-126, 1988.

2. De Boer, P.-T., Kroese, D. P., Mannor, S. and Rubinstein, R. Y.: A tutorial on the cross-entropy method. Annals of Operations Research, Vol. 134, pp. 19-67, 2005.

3. Engelund, S. and Rackwitz, R.: A benchmark study on importance sampling techniques in structural reliability. Structural Safety, Vol. 12, pp. 255-276, 1993.

4. Homem-de-Mello, T. and Rubinstein, R. Y.: Rare event estimation for static models via cross-entropy and importance sampling. Submitted for publication, 2002.

5. Melchers, R. E.: Structural Reliability Analysis and Prediction. Wiley, 1999.

6. Rubinstein, R. Y.: Optimization of computer simulation models with rare events. European Journal of Operational Research, Vol. 99, pp. 89-112, 1997.
