
A nonintrusive reduced order modelling approach using Proper Orthogonal Decomposition and locally adaptive sparse grids

Alsayyari, Fahad; Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen

DOI: 10.1016/j.jcp.2019.108912
Publication date: 2019
Document Version: Final published version
Published in: Journal of Computational Physics

Citation (APA): Alsayyari, F., Perkó, Z., Lathouwers, D., & Kloosterman, J. L. (2019). A nonintrusive reduced order modelling approach using Proper Orthogonal Decomposition and locally adaptive sparse grids. Journal of Computational Physics, 399, 108912. https://doi.org/10.1016/j.jcp.2019.108912


Journal of Computational Physics
www.elsevier.com/locate/jcp

A nonintrusive reduced order modelling approach using Proper Orthogonal Decomposition and locally adaptive sparse grids

Fahad Alsayyari*, Zoltán Perkó, Danny Lathouwers, Jan Leen Kloosterman

Delft University of Technology, Faculty of Applied Sciences, Department of Radiation Science and Technology, Mekelweg 15, Delft, 2629 JB, the Netherlands

* Corresponding author. E-mail address: f.s.s.alsayyari@tudelft.nl (F. Alsayyari).

Article info

Article history: Received 4 April 2019; Received in revised form 30 July 2019; Accepted 23 August 2019; Available online 28 August 2019.

Keywords: Proper Orthogonal Decomposition; Reduced order modelling; Adaptive sparse grids; Locally adaptive

Abstract

Large-scale complex systems require high fidelity models to capture the dynamics of the system accurately. The complexity of these models, however, renders their use expensive for applications relying on repeated evaluations, such as control, optimization, and uncertainty quantification. Proper Orthogonal Decomposition (POD) is a powerful Reduced Order Modelling (ROM) technique developed to reduce the computational burden of high fidelity models. In cases where the model is inaccessible, POD can be used in a nonintrusive manner. The accuracy and efficiency of the nonintrusive reduced model are highly dependent on the sampling scheme, especially for high dimensional problems. To that end, we study integrating locally adaptive sparse grids with the POD method to develop a novel nonintrusive POD-based reduced order model. In our proposed approach, the locally adaptive sparse grid is used to adaptively control the sampling scheme for the POD snapshots, and the hierarchical interpolant is used as a surrogate model for the POD coefficients. An approach to efficiently update the surpluses of the sparse grids with each POD snapshots update is also introduced. The robustness and efficiency of the locally adaptive algorithm are increased by introducing a greediness parameter and a strategy to validate the reduced model after convergence. The proposed validation algorithm can also enrich the reduced model around regions of detected discrepancies. Three numerical test cases are presented to demonstrate the potential of the proposed POD-Adaptive algorithm: the first is a nuclear reactor point kinetics problem, the second is a general diffusion problem, and the last is a variation of the analytical Morris function. The results show that the developed algorithm reduces the number of model evaluations compared to the classical sparse grid approach. The built reduced models captured the dynamics of the reference systems within the desired tolerances. The non-intrusiveness and simplicity of the method provide great potential for a wide range of practical large-scale applications.

© 2019 Elsevier Inc. All rights reserved.

1. Introduction

Complex systems are described by interactions of multi-physics phenomena that occur at various scales. Capturing such inter-disciplinary interactions requires building complex coupled models. Nuclear reactors, for example, are described by interactions of radiation transport, heat transfer, and fluid dynamics. High fidelity simulation tools are often used to provide a solution for such coupled problems. Nevertheless, even with the increasing power of today's supercomputers, high fidelity models require a tremendous amount of computational time and memory allocation. For applications such as design optimization, uncertainty quantification, and control, where many repeated model evaluations are needed, such models are too expensive.

Reduced Order Modelling (ROM) is an effective technique to reduce the dimensionality of large-scale complex systems. It replaces the high fidelity model with a low-dimensional efficient model preserving only the prominent dynamics of the system. The reduced model can then be used to provide fast solutions with a controlled level of accuracy. Different reduced order modelling techniques have been proposed in the literature. Benner et al. (2015) [1] presented a comprehensive survey on ROM with numerous examples of their applications. Proper Orthogonal Decomposition (POD) is the favoured method for nonlinear systems [2]. POD was first introduced as a statistical technique to extract dominant characteristics from a set of data. As a reduced order method, it was later developed by Lumley (1967) [3] to model coherent structures in turbulent flows.

The POD method is based on sampling the high fidelity model at several points in parameter space to construct a so-called snapshot matrix. Then, a reduced basis is created through Singular Value Decomposition (SVD). The original high fidelity model can then be projected onto the created reduced basis space by means of a Galerkin projection. The snapshot matrix and the model are built during the offline phase, which is executed only once. Afterwards, during the online phase, the reduced model can be run inexpensively at any desired parameter point. POD-Galerkin has been studied extensively in many fields, such as in fluid dynamics to model compressible flows [4–6] and incompressible flows [7–9]. In reactor physics applications, POD was applied to the eigenvalue problem of the diffusion equation [10], the time-dependent diffusion equation [11], and to model the coolant pool of the Lead-cooled Fast Reactor [12]. A variant of this approach called the Reduced Basis method [13–15] was developed by adding an a posteriori error estimator and a greedy sampling scheme to model parametrized partial differential equations.

Projection-based POD methods are code intrusive, which is a major limitation. For legacy codes where access to the governing equations is unattainable, this approach cannot be applied. In such cases, a slightly different nonintrusive POD technique can be employed. The idea is to benefit from the orthogonality of the subspace basis to generate the Galerkin expansion coefficients at the sampled points. Then, a surrogate model can be constructed to compute the solution at any required non-sampled point. Different surrogate models have been suggested to compute the expansion coefficients. For lower dimensional problems, direct interpolation or splines can be used [16]. Problems with higher dimensionality require more advanced techniques. The use of Radial Basis Functions (RBF) is one of the common methods for such problems [2,17]. Hesthaven and Ubbiali (2018) [18] used neural networks to build a surrogate model for the coefficients. Gaussian regression can also be used to build the surrogate model [19].

For the ROM to produce an accurate representation of the original model, the snapshots need to capture the entire dynamics of the original model within the desired range. Consequently, the choice of the sampling scheme directly affects the accuracy of the POD method. Moreover, in the nonintrusive approach, the sampling points should be dense enough for the surrogate model to reproduce a reliable solution at non-sampled locations. Thus, the sampling strategy becomes even more relevant for nonintrusive methods. In addition, problems parametrized on high dimensional spaces are prone to the curse of dimensionality, that is, the exponential increase of the computational time with the increase in the number of dimensions. In these cases, the efficient selection of the sampling points is crucial for any practical application. Latin Hypercube Sampling (LHS) can be an efficient way to address this issue [20]. However, the lack of adaptivity is a limitation in nonlinear cases. Guenot et al. (2013) [21] proposed an extension of LHS that improves the initial snapshot matrix by adaptively selecting new points based on the "influence" of the new point on the snapshot matrix.

A different sampling method based on sparse grids can also be applied. Sparse grids were first introduced by Smolyak (1963) [22] and, ever since, have been used to cope with high dimensional multivariate integration and interpolation problems. The method builds a hierarchical grid that preserves the properties of the unidimensional rule by a specific combination of the tensorized product [23]. In the context of reduced order models, Peherstorfer (2013) [24] suggested the use of sparse grids as a machine learning tool for reduced order models, which was tested on heat transfer problems. In addition, Xiao et al. (2015) [25] presented a method of propagating the expansion coefficients through time with the use of a sparse grid interpolant, which was demonstrated on the Navier-Stokes equations. Elman et al. (2011) [26] suggested the use of the Reduced Basis method to further reduce the computational burden of stochastic collocation methods based on sparse grids.

Sparse grids were developed under the assumption that the function to be approximated is sufficiently smooth. In this case, the algorithm provides an optimal selection of subspaces contributing to the interpolant function. However, in many applications, the smoothness of the function is unknown and cannot be established a priori. To that end, adaptive strategies can be employed to modify the classical sparse grids algorithm. In cases where the multivariate function is only sensitive to certain dimensions, the anisotropic sparse grid approach is suitable. In this strategy, which is also called (global) adaptive sparse grids, the grid is constructed by placing more points along certain dimensions that have higher importance. The importance of each dimension is identified during the construction stage by testing and comparing all dimensions [27]. Chen and Quarteroni (2015) [28] combined the anisotropic adaptive sparse grid with the reduced basis method for error estimation.

However, the dimension adaptive approach falls short of identifying regions with steep gradients or discontinuities. For these cases, an alternative local adaptive strategy can be more effective. In fact, one of the earliest works on sparse grids by Zenger (1990) [29] suggested the use of local adaptivity for non-smooth functions. The objective of this strategy is to identify certain regions of higher importance and only refine the grid within these regions by benefiting from the hierarchical structure of the grids [30]. Griebel (1998) [31] showed how locally adaptive sparse grids can be used to adaptively discretize partial differential equations. In this implementation, a Dirichlet boundary condition is assumed, and the unidimensional rule is chosen to not place any points along the boundaries. Pflueger (2010) [32] extended this idea by modifying the basis functions in a way that extrapolates towards the boundaries. The author used this algorithm for classification problems. This approach is suitable if the value at the boundaries is not important and only an estimate is required. Ma and Zabaras (2009) [33] used a unidimensional rule that places points at the boundaries for an adaptive collocation method. The authors then used the algorithm with an Anchored-ANOVA approach to model stochastic processes [34]. Applications of the locally adaptive sparse grids can also be seen in tracking function discontinuities in high dimensional spaces [35], high dimensional integrations [36], and economic modelling [37].

In this paper, our goal is to exploit the hierarchical nature of the adaptive sparse grids in order to efficiently build a POD-based reduced order model in a nonintrusive manner. We present an approach that utilizes the local adaptivity in order to identify regions of high importance for the POD development. In our nonintrusive approach, no assumption is made on the value of the model at the boundaries of the parameter domain. Therefore, we follow the work by Ma and Zabaras (2009) [33] in defining the unidimensional rule for the adaptive sparse grids. We also introduce an approach to iteratively update the surpluses of the sparse grids as the POD modes develop. We suggest a criterion for the refinement strategy based on the physical space rather than the surpluses of the sparse grids. We also extend the locally adaptive algorithm by introducing a parameter that controls the greediness of the algorithm in generating the snapshots. Additionally, a strategy to validate and update the reduced model is proposed, which increases the robustness of the algorithm. Although the nonintrusive approach is considered in this paper, the proposed algorithm can equally be combined with a Galerkin-POD approach.

The remainder of this paper is organized as follows: in Section 2 the problem is formulated and the POD method is introduced. Section 3 presents the sparse grids as an interpolation technique by first introducing the classical sparse grids and subsequently the locally adaptive version. In this section, the refinement strategy and the proposed validation algorithm are also presented. The combined POD-Adaptive algorithm is presented in Section 4 along with the method of updating the surpluses. Three applications that test the proposed algorithm numerically are presented in Section 5. The first is a neutron point kinetics problem in 5 dimensions presented in two cases: one is strongly nonlinear and the second is weakly nonlinear. The second application is a general diffusion problem in 18 dimensions, and the last application is an analytical function of 20 dimensions. Finally, our conclusions are discussed in Section 6.

2. Proper Orthogonal Decomposition

Physical phenomena are modelled by capturing the dynamics of the system in a set of governing equations. These equations can then be solved numerically by some discretization technique. The solution of the discretized model results in the state (or field) vector describing the state of the system, which in turn is a function of some design parameters that characterise features of the system (geometry, materials, etc.). The discretized model can be written in the form

$R(\mathbf{y}(\mathbf{x}), \mathbf{x}) = \mathbf{0}$,    (1)

where $\mathbf{y} \in \mathbb{R}^n$ is a vector with $n$ state variables, and $\mathbf{x} \in \mathbb{R}^d$ is a vector of the design parameters with dimension $d$. For high fidelity models, the dimension of the state vector ($n$) is usually large, which renders solving the model computationally expensive. ROM aims at recasting the high fidelity model into a simpler model with dimension $r < n$. The model can then be solved with reduced computational effort. The POD method approximates the state vector in terms of basis vectors (in a discrete analogy to Fourier expansion) as

$\mathbf{y}(\mathbf{x}) \approx \sum_{h=1}^{r} c_h(\mathbf{x}) \, \mathbf{u}_h$,    (2)

where $c_h$ is the amplitude of the basis vector $\mathbf{u}_h$. The POD basis vectors (also called POD modes) are data-driven, that is, they are built based on data collected from the model to describe the state vector. The amplitude $c_h$ is a function of the design parameter $\mathbf{x}$.

The POD method is based on sampling the model at different design parameter values. Each state solution is a snapshot of the model at a certain parameter value. The snapshots are collected in the snapshot matrix

$M = [\mathbf{y}(\mathbf{x}_1), \mathbf{y}(\mathbf{x}_2), \ldots, \mathbf{y}(\mathbf{x}_p)] \in \mathbb{R}^{n \times p}$,    (3)

where $p$ is the number of sampling points. The goal of the POD method is to find the optimal basis vectors in some subspace $\mathcal{V}$ of dimension $r \ll n$ that minimize the error of the approximation in the $L^2$ norm. Once the basis vectors are known, the amplitude $c_h(\mathbf{x})$ can be computed either intrusively, by projecting the governing equations, or non-intrusively using regression methods. Our approach is non-intrusive, and therefore the model can be considered as a black box, mapping a given input to the desired output. We can write a functional minimizing the approximation error in the $L^2$ norm as follows [38]:

$\min_{\mathbf{u}_h} E = \sum_{j=1}^{p} \left\| \mathbf{y}(\mathbf{x}_j) - \sum_{h=1}^{r} c_h(\mathbf{x}_j) \, \mathbf{u}_h \right\|_{L^2}$.    (4)

The basis vectors are chosen such that they are orthonormal (i.e., $\langle \mathbf{u}_g, \mathbf{u}_h \rangle = \delta_{gh}$). The POD basis solves the minimization problem in Equation (4). It can be obtained with the Singular Value Decomposition (SVD) as the left singular vectors of the snapshot matrix [2]. Using the orthogonality of the modes, the value of the amplitude $c_h(\mathbf{x})$ at parameter value $\mathbf{x}_j$ can be computed as

$c_h(\mathbf{x}_j) = \langle \mathbf{u}_h, \mathbf{y}(\mathbf{x}_j) \rangle$.    (5)

If the number of non-zero singular values is $g$, it can be shown that the rank of the snapshot matrix is also $g$. The POD basis vectors are formed by the first $r$ left singular vectors (where $r \leq g$). If $r$ is chosen to be strictly less than $g$, a POD truncation error can be quantified using the singular values ($\sigma$) as follows:

$e_r = \frac{\sum_{k=r+1}^{n} \sigma_k^2}{\sum_{k=1}^{n} \sigma_k^2}$.    (6)

To set a criterion for selecting the number of POD modes ($r$), a cut-off threshold ($\gamma_{tr}$) can be defined such that

$e_r < \gamma_{tr}, \quad r \in [1, \ldots, n]$.    (7)

Note that the truncation error only quantifies the error in representing the state solutions included in the snapshot matrix. However, with sufficient sampling points, it can be used as an indicator for the error in representing (new) solutions not included in the snapshot matrix.
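The construction in Equations (3)–(7) is straightforward to express in code. The following fragment is our own minimal sketch, not part of the original paper: it stacks the snapshots, takes the SVD, and truncates by the energy criterion; all function and variable names are ours.

```python
import numpy as np

def pod_basis(snapshots, gamma_tr=1e-6):
    M = np.column_stack(snapshots)                   # snapshot matrix, Eq. (3)
    U, s, _ = np.linalg.svd(M, full_matrices=False)  # left singular vectors = POD modes
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)      # cumulative energy = 1 - e_r
    r = int(np.searchsorted(energy, 1.0 - gamma_tr)) + 1  # smallest r meeting Eq. (7)
    return U[:, :r], s

# Amplitudes at a sampled point follow from orthonormality, Eq. (5):
# c_h(x_j) = <u_h, y(x_j)>, i.e. c = U_r.T @ y_j.
```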

3. Sparse grids for interpolation

In order to generate the snapshot matrix, we need to explore the parameter space by sampling the model at discrete points. Choosing an appropriate number of sampling points is a key challenge for any sampling strategy. Covering the entire range of dynamics of the unknown function is imperative for a successful construction of the ROM. In addition, extending the sampling strategy to high dimensional problems is another challenge that must be addressed. Different sampling schemes have been studied to determine an optimal set of sampling points for increased space coverage (e.g., LHS). However, such methods select the sampling points a priori without any insight into the function being sampled. This can lead to overlooking some localized nonlinearities or discontinuities. To reduce such risk, uniform sampling with small intervals can be used. However, this strategy is prohibitively expensive.

Sparse grids can be very effective for problems of high dimensionality. The construction of the sparse grids is a hierarchical approach that successively builds the new grid based on the previous grid selection. Such a construction is suitable for adaptive strategies, since it can be used to build an algorithm that terminates whenever a desired accuracy is reached. We will first introduce the classical sparse grid method, then present our approach for implementing adaptivity.

3.1. Classical sparse grids

Sparse grids are constructed based on selecting a set of nodes separately for each dimension. These nodes are generated in a hierarchical manner by levels, where each level is assigned an integer index $i$. The unidimensional nodes are then tensorized to form the final sparse grids. We first consider one dimension, then generalize to the multidimensional case.

Many choices are possible for selecting the unidimensional nodes. While the nodes can be disjoint (non-nested) as in [26] and [39], nested nodes are more convenient and efficient, since function evaluations in this case are not repeated with increasing sparse grid level. Therefore, we choose the nodes in a nested manner, that is $X^i \subseteq X^{i+1}$, where $X^i$ is the set of nodes at the index level $i$. Since nodes are nested, we can also define the difference set as $X_\Delta^{i+1} = X^{i+1} \setminus X^i$, where $X_\Delta^{i+1}$ is a set that contains only the newly added nodes at level $i + 1$. An overview of different possible sparse grid choices can be found in [40]. We generate the nodes in the range $[0, 1]$, which is then scaled according to the physical range of the parameters in $X^i$. In order to avoid confusion, we reserve the use of the term "node" for the unidimensional point, while a multidimensional vector of coordinates formed by nodes along each dimension is given the term "point".

Ultimately, we aim to combine sparse grids with the POD method in order to build a nonintrusive ROM. This imposes an additional constraint on the selection of the unidimensional nodes, because the nodes need to be separated sufficiently in parameter space to produce enriched POD modes covering the complete range of dynamics of the model. Nevertheless, such a selection of nodes might not be ideal for interpolation. In many studies, Chebyshev nodes were found to perform better than uniform sampling [40]. However, Chebyshev nodes produce more nodes very close to each other at the boundaries and fewer nodes in the central region, increasing the risk of overlooking some dynamics in the inner region. Therefore, in order to achieve maximum separation of nodes over the entire parameter domain, we choose the equidistant rule to generate the nodes.

A Smolyak interpolant is built for the amplitudes $c(\mathbf{x})$ (from Equation (5), where the index $h$ is dropped for notational convenience) by considering an operator $U^i(c)(x)$ that approximates the function $c(x)$ by an expansion as follows:

$c(x) \approx U^i(c)(x) = \sum_{x_j^i \in X^i} c(x_j^i) \, a_{x_j^i}^i(x)$,    (8)

where $i$ is the level index, $X^i$ is the set of nodes $x_j^i$ at level $i$, $a_{x_j^i}^i(x)$ are basis functions, and $c(x_j^i)$ is the function value evaluated at the support node $x_j^i$. In principle, different choices for the basis functions are possible. However, due to our selection of equidistant nodes, any choice of (global) polynomial basis function is likely to yield a poor approximation because of Runge's phenomenon [23]. Additionally, polynomial functions are not suitable for local adaptive strategies, since their support covers the entire domain. Piecewise multilinear functions, on the other hand, are flexible because they have local support and thus can be used to refine specific regions of the domain. These basis functions, which are also called hat functions because of their shape, satisfy $a_{x_j^i}^i(x) \in C([0,1])$, $a_{x_j^i}^i(x_j^i) = 1$, and $a_{x_j^i}^i(y_j^i) = 0$ for all $y_j^i \in X^i$ with $x_j^i \neq y_j^i$. For equidistant type nodes, we can define the basis functions as follows [40]:

$a_{x_1^1}^1(x) = 1$ for $i = 1$, and for $i > 1$:

$a_{x_j^i}^i(x) = \begin{cases} 1 - (m_i - 1) \, |x - x_j^i|, & \text{if } |x - x_j^i| < \frac{1}{m_i - 1}, \\ 0, & \text{otherwise}, \end{cases}$    (9)

where $m_i$ and the equidistant nodes $x_j^i$ are defined as follows:

$m_i = \begin{cases} 1 & \text{if } i = 1, \\ 2^{i-1} + 1 & \text{if } i > 1, \end{cases}$    (10)

$x_j^i = \begin{cases} 0.5 & \text{for } j = 1, \text{ if } m_i = 1, \\ \frac{j - 1}{m_i - 1} & \text{for } j = 1, 2, \ldots, m_i, \text{ if } m_i > 1. \end{cases}$    (11)

$m_i$ represents the number of nodes at level $i$ (cardinality of $X^i$).
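The unidimensional rule of Equations (9)–(11) can be sketched in a few lines; the fragment below is our illustration (names assumed), generating the nested equidistant nodes and evaluating the hat basis.

```python
def m(i):                                # number of nodes at level i, Eq. (10)
    return 1 if i == 1 else 2 ** (i - 1) + 1

def nodes(i):                            # equidistant nodes, Eq. (11)
    if m(i) == 1:
        return [0.5]
    return [(j - 1) / (m(i) - 1) for j in range(1, m(i) + 1)]

def hat(x, xj, i):                       # hat basis function, Eq. (9)
    if i == 1:
        return 1.0                       # constant basis at the root level
    return max(0.0, 1.0 - (m(i) - 1) * abs(x - xj))

# The nodes are nested: X^2 = {0, 0.5, 1} contains X^1 = {0.5}.
print(nodes(1), nodes(2), nodes(3))
```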

Before generalizing to the multivariate case, we first define the difference formula

$\Delta^i(c)(x) = U^i(c)(x) - U^{i-1}(c)(x)$,    (12)

with $U^0 = 0$. As a consequence of selecting nested nodes, the interpolant at level $i$ can always recreate the interpolant at level $i - 1$ (i.e., $U^{i-1}(c)(x) = U^i(U^{i-1}(c))(x)$). Therefore, Equation (12) can be rewritten in terms of the basis functions in Equation (8) as

$\Delta^i(c)(x) = \sum_{x_j^i \in X^i} a_{x_j^i}^i(x) \, c(x_j^i) - \sum_{x_j^i \in X^i} a_{x_j^i}^i(x) \, U^{i-1}(c)(x_j^i)$    (13)

$\phantom{\Delta^i(c)(x)} = \sum_{x_j^i \in X^i} a_{x_j^i}^i(x) \left( c(x_j^i) - U^{i-1}(c)(x_j^i) \right)$.    (14)

Since the interpolant is completely represented at level $i - 1$, $\left( c(x_j^i) - U^{i-1}(c)(x_j^i) \right) = 0$ for all $x_j^i \in X^{i-1}$. Thus,

$\Delta^i(c)(x) = \sum_{x_j^i \in X_\Delta^i} a_{x_j^i}^i(x) \left( c(x_j^i) - U^{i-1}(c)(x_j^i) \right)$.    (15)

This means that the interpolant needs to be evaluated only at the newly added nodes at each level increase. Thus, we can redefine $x_j^i$ as the $j$th element in the set $X_\Delta^i$. Knowing that the number of newly added nodes (cardinality of $X_\Delta^i$) is $m_i^\Delta = m_i - m_{i-1}$, the difference formula can be written as

$\Delta^i(c)(x) = \sum_{j=1}^{m_i^\Delta} a_{x_j^i}^i(x) \left( c(x_j^i) - U^{i-1}(c)(x_j^i) \right)$.    (16)

Hence, the sum only runs over the newly added elements that are stored in $X_\Delta^i$. The contributions of the previous level need not be considered.

Fig. 1. Sparse grid points for d=2 and different values of l.

Table 1
Number of sparse grid points by level and dimension.

Level (l) | d=2 | d=3  | d=4  | d=5  | d=6
0         | 1   | 1    | 1    | 1    | 1
1         | 5   | 7    | 9    | 11   | 13
2         | 13  | 25   | 41   | 61   | 85
3         | 29  | 69   | 137  | 241  | 389
4         | 65  | 177  | 401  | 801  | 1457
5         | 145 | 441  | 1105 | 2433 | 4865
6         | 321 | 1073 | 2929 | 6993 | 15121

The Smolyak algorithm can be applied to combine the unidimensional nodes into sparse grids by satisfying the following condition:

$d \leq |\mathbf{i}| \leq l + d$,    (17)

where $d$ is the number of dimensions, $|\mathbf{i}| = i_1 + i_2 + \ldots + i_d$ with $i_n$ being the index level along dimension $n$, and $l$ defines the level of the sparse grids. Therefore, the points of the sparse grid at level $l$ are formed by the set

$B_{l,d} = \bigcup_{d \leq |\mathbf{i}| \leq l+d} \left( X^{i_1} \otimes X^{i_2} \otimes \ldots \otimes X^{i_d} \right)$.    (18)

Fig. 1 shows the development of the sparse grids from level $l = 0$ to $l = 4$ for a two dimensional space ($d = 2$). Table 1 lists the number of sparse grid points generated per level and dimension.
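The counts in Table 1 can be reproduced by summing the sizes of the difference sets over all admissible multi-indices, since the difference sets partition the grid and thus avoid double counting. The sketch below is ours, reusing m() from the previous fragment.

```python
from itertools import product

def m_delta(i):                                    # newly added nodes at level i
    return m(i) - (m(i - 1) if i > 1 else 0)

def n_points(l, d):
    total = 0
    for idx in product(range(1, l + 2), repeat=d): # i_k <= l + 1 when the others are 1
        if sum(idx) <= l + d:                      # Smolyak condition, Eq. (17)
            term = 1
            for i in idx:
                term *= m_delta(i)
            total += term
    return total

print([n_points(l, 2) for l in range(7)])   # 1, 5, 13, 29, 65, 145, 321 (Table 1, d = 2)
```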

The unidimensional formulation in Equation (8) can be extended to the multivariate case using the tensor product operation

$\left( U^{i_1}(c)(x_1) \otimes \cdots \otimes U^{i_d}(c)(x_d) \right) = \sum_{j_1=1}^{m_{i_1}} \cdots \sum_{j_d=1}^{m_{i_d}} c(x_{j_1}^{i_1}, \ldots, x_{j_d}^{i_d}) \cdot \left( a_{x_{j_1}^{i_1}}^{i_1}(x_1) \otimes \cdots \otimes a_{x_{j_d}^{i_d}}^{i_d}(x_d) \right)$.    (19)

This shows that building the interpolant needs $\prod_{k=1}^{d} m_{i_k}$ function evaluations, which increases exponentially with the dimension. The Smolyak combination technique can be used to reduce the number of function evaluations. The idea is based on the fact that not all points contribute equally to the interpolant; some have a minimal contribution, which can be neglected. Therefore, a hierarchically structured algorithm can be built that includes points iteratively until a desired accuracy is reached.

The Smolyak combination technique forms the multivariate interpolant from the univariate difference formula (Equation (16)) as follows:

$A_{l,d}(c)(\mathbf{x}) = \sum_{|\mathbf{i}| \leq l+d} \Delta^{i_1}(c)(x_1) \otimes \cdots \otimes \Delta^{i_d}(c)(x_d)$,    (20)

where $l$ and $|\mathbf{i}|$ are defined as in Equation (17). This equation can be split into two parts,

$A_{l,d}(c)(\mathbf{x}) = \underbrace{\sum_{|\mathbf{i}| < l+d} \left( \Delta^{i_1}(c)(x_1) \otimes \cdots \otimes \Delta^{i_d}(c)(x_d) \right)}_{A_{l-1,d}(c)(\mathbf{x})} + \underbrace{\sum_{|\mathbf{i}| = l+d} \left( \Delta^{i_1}(c)(x_1) \otimes \cdots \otimes \Delta^{i_d}(c)(x_d) \right)}_{\Delta A_{l,d}(c)(\mathbf{x})}$.    (21)

The first term ($A_{l-1,d}(c)(\mathbf{x})$) represents the interpolant value at the previous interpolation level, and the second term ($\Delta A_{l,d}(c)(\mathbf{x})$) is the interpolation contribution from the newly added points. Since the points at each level are defined as a subset of the next level ($X^i \subseteq X^{i+1}$), the second term can be written in terms of the basis functions as

$\Delta A_{l,d}(c)(\mathbf{x}) = \sum_{|\mathbf{i}| = l+d} \sum_{\mathbf{j}} \left( a_{x_{j_1}^{i_1}}^{i_1}(x_1) \otimes \cdots \otimes a_{x_{j_d}^{i_d}}^{i_d}(x_d) \right) \cdot \underbrace{\left( c(x_{j_1}^{i_1}, \ldots, x_{j_d}^{i_d}) - A_{l-1,d}(c)(x_{j_1}^{i_1}, \ldots, x_{j_d}^{i_d}) \right)}_{w_{\mathbf{j}}^{\mathbf{i}}}$,    (22)

Fig. 2. The progression of the interpolant $A_{l,d}$ for a one dimensional function $f(x) = x^2 \sin(2\pi x)^2$ from the first level $l = 0$ to level $l = 2$. The weights $w_j^i$ and the basis functions $a_{x_j}^i$ are also shown. The generated nodes are $X_\Delta^1 = \{0.5\}$ at level $l = 0$, $X_\Delta^2 = \{0, 1\}$ at level $l = 1$, and $X_\Delta^3 = \{0.25, 0.75\}$ at level $l = 2$.

Fig. 3. Tree structure for the nodes in $X^i$, where the depth is assigned the level index $i$. Nodes are added at each level to half the distances between the nodes in the previous level.

where $\mathbf{j}$ is a multi-index $(j_1, \ldots, j_d)$, $j_k = 1, \ldots, m_{i_k}^\Delta$, $k = 1, \ldots, d$, and $m_{i_k}^\Delta$ is the number of newly added nodes along dimension $k$. Note that any point $\mathbf{x}_{\mathbf{j}}^{\mathbf{i}} = (x_{j_1}^{i_1}, \ldots, x_{j_d}^{i_d})$ at level $l$ is included in the set $B_{l,d}$ (from Equation (18)). The term denoted by $w_{\mathbf{j}}^{\mathbf{i}}$ is called the hierarchical surplus [40], which is the difference between the true function value at the newly added point and the corresponding approximation of the interpolant at the previous level. Therefore, these coefficients are simply a correction of the interpolant at level $l - 1$ to the actual values. Fig. 2 illustrates the progress of the interpolant for a simple one-dimensional function.

3.2. Locally adaptive sparse grids

The local adaptive method can be illustrated by showing the unidimensional nodes $X^i$ in a tree-like structure. Fig. 3 shows such a tree, where the depth of the tree has been assigned the level index $i$. The root of the tree has a single node, $X^1 = \{0.5\}$. It is evident that nodes are added at each level to half the distances between the nodes in the previous levels. Therefore, each node has an ancestry as shown in the tree structure (Fig. 3). Each node has one parent and two children, except at index level $i = 2$, where each node has only one child. This ancestry dependence can be extended to multidimensional points by relating each point to a set of neighbouring points called forward points. Specifically, a forward point to $\mathbf{x}$ is a point on the grid that shares all nodes with $\mathbf{x}$ except along one dimension, where the forward point node is a child of the node of $\mathbf{x}$. To that end, we define a forward neighbourhood operator $\Gamma(\mathcal{S})$ that operates on a set of points $\mathcal{S} = \{\mathbf{x}_q \mid q = 1, \ldots, n\}$ and returns all forward points for all points in $\mathcal{S}$ as follows:

$\Gamma(\mathcal{S}) = \{ (y_1, \ldots, y_d) \mid \exists \, i, q : b(y_i) = x_{q,i} \wedge y_j = x_{q,j} \; \forall j \neq i, \; q \in [1, \ldots, n], \; j, i \in [1, \ldots, d] \}$,    (23)

where $b(x)$ is a function that returns the parent of a node $x$ from the tree. We also define a backward point for $\mathbf{x}$ as a point with a parent node along one of the dimensions of $\mathbf{x}$. A backward neighbourhood operator $\Gamma^{-1}(\mathcal{S})$ that operates on the set $\mathcal{S}$ and returns the set of all backward points can be defined as

$\Gamma^{-1}(\mathcal{S}) = \{ (y_1, \ldots, y_d) \mid \exists \, i, q : b(x_{q,i}) = y_i \wedge y_j = x_{q,j} \; \forall j \neq i, \; q \in [1, \ldots, n], \; j, i \in [1, \ldots, d] \}$.    (24)

Each point on the grid is surrounded by $2d$ forward points and $d$ backward points. However, because of the exception at tree level 2, where nodes generate only one child, points that contain a node from level 2 have less than $2d$ forward points. Additionally, points that contain the root node 0.5 have less than $d$ backward points, because the parent function is not defined at the root. A forward point can be generated from different points; thus, it is important to keep track of the generated forward points to avoid duplication of points. By applying the backward neighbourhood operator successively, we can define the set of ancestor points $\Lambda(\mathcal{S})$ for all points $\mathbf{x}$ in $\mathcal{S}$:

$\Lambda(\mathcal{S}) = \bigcup_{q=1}^{L} \left( \Gamma^{-1} \right)^q (\mathcal{S})$,    (25)

where $\left( \Gamma^{-1} \right)^L (\mathcal{S}) = (0.5, \ldots, 0.5)$. The set of ancestors for a point $\mathbf{x}$ represents all points with basis functions that contribute to the construction of the interpolant at point $\mathbf{x}$. Fig. 4 shows an example point $\mathbf{x} = (0.25, 0.75)$ with its forward points. The figure also shows a point on the boundary containing a node from level 2, $\mathbf{x} = (0.75, 0)$, which has less than $2d$ forward points.

Fig. 4. A 2-dimensional example showing the four forward points of the point $\mathbf{x} = (0.25, 0.75)$ and the three forward points of the point $\mathbf{x} = (0.75, 0)$.
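The forward neighbourhood of Equation (23) reduces to simple arithmetic on the tree of Fig. 3. The sketch below is ours, with an assumed (level, position) encoding of nodes; it reproduces the Fig. 4 example.

```python
def children(level, x):
    """Children of a 1-D node in the tree of Fig. 3."""
    if level == 1:
        return [(2, 0.0), (2, 1.0)]            # root 0.5 has children {0, 1}
    step = 1.0 / 2 ** level                    # half the spacing of the next level
    if level == 2:
        return [(3, x + step if x == 0.0 else x - step)]  # one child at level 2
    return [(level + 1, x - step), (level + 1, x + step)]

def forward_points(point):
    """All forward points of a d-dimensional point, Eq. (23)."""
    out = []
    for dim, (lev, x) in enumerate(point):
        for child in children(lev, x):
            out.append(tuple(child if k == dim else node
                             for k, node in enumerate(point)))
    return out

# Fig. 4 example: x = (0.25, 0.75) has four forward points.
x = ((3, 0.25), (3, 0.75))
print(forward_points(x))
```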

3.2.1. Locally adaptive algorithm

The basic idea of the locally adaptive algorithm is to set a criterion for selecting important points, then refining the grid iteratively by adding only the forward points of the selected important points.

Let $\mathcal{Z}^{k-1}$ be the set of important points, and $\mathcal{I}^{k-1}$ be the set of inactive points that were considered unimportant at iteration $k - 1$. During the next iteration ($k$), the algorithm generates a testing set $\mathcal{T}^k$ from the forward points of $\mathcal{Z}^{k-1}$ and from $\mathcal{I}^{k-1}$ as

$\mathcal{T}^k = \Gamma(\mathcal{Z}^{k-1}) \cup \mathcal{I}^{k-1}$,    (26)

and identifies a subset $\mathcal{Z}^k \subseteq \mathcal{T}^k$ that is considered important, which is then added to the grid $\mathcal{X}^k$ (i.e., $\mathcal{X}^k = \mathcal{Z}^k \cup \mathcal{X}^{k-1}$). This process is repeated until some global criterion is met.

In order to identify the important points $\mathcal{Z}^k$, we need to define a local error measure ($\epsilon_j^k$). A point $\mathbf{x}_j$ in the testing set $\mathcal{T}^k$ is considered important and is admitted in $\mathcal{Z}^k$ if it has an error ($\epsilon_j^k$) above a certain threshold ($\gamma_{int}$):

$\mathcal{Z}^k = \{ \mathbf{x}_j \in \mathcal{T}^k \mid \epsilon_j^k > \gamma_{int} \}$.    (27)

Points that do not meet this criterion are stored in the inactive set

$\mathcal{I}^k = \{ \mathcal{T}^k \setminus \mathcal{Z}^k \} \cup \mathcal{I}^{k-1}$.    (28)

At each iteration, we need to evaluate the interpolant at the testing points $\mathcal{T}^k$ in order to compute $\epsilon_j^k$. However, the interpolant in Equation (21) is written in terms of the (global) level $l$, which is not relevant in the adaptive scheme because points are added based on their location and ancestry. Therefore, we can rewrite the interpolant in Equations (21) and (22) for any point $\mathbf{x} = (x_1, \ldots, x_d)$ in terms of iteration $k$ as

$A_{k,d}(c)(\mathbf{x}) = A_{k-1,d}(c)(\mathbf{x}) + \Delta A_{k,d}(c)(\mathbf{x})$,    (29)

$\Delta A_{k,d}(c)(\mathbf{x}) = \sum_{n=1}^{m_k^\Delta} w_n^k \, \Phi_n(\mathbf{x})$,    (30)

where $m_k^\Delta = \mathrm{card}(\mathcal{Z}^k)$, and $\Phi_n$ is the $d$-dimensional basis function for the point $\mathbf{x}_n \in \mathcal{Z}^k$,

$\Phi_n(\mathbf{x}) = \prod_{p=1}^{d} a_{x_{n,p}^{i_p}}^{i_p}(x_p)$,    (31)

where $\mathbf{x}_n$ has support nodes $(x_{n,1}^{i_1}, \ldots, x_{n,d}^{i_d})$, and $i_p$ is the level (tree depth) index for the support node $x_{n,p}^{i_p}$. The surplus corresponding to $\mathbf{x}_n$ is defined as

$w_n^k = c(\mathbf{x}_n) - A_{k-1,d}(c)(\mathbf{x}_n)$.    (32)

Once the important points are identified, they are added to the set $\mathcal{X}^k$ and their corresponding surpluses are stored in the set $\mathcal{W}^k$. The hierarchical surplus as defined in Equation (32) is a natural candidate for the local error measure $\epsilon_j^k$. These surpluses are defined locally (for each point) and represent the deviation from the true value. This criterion was applied in [33,35]. However, Griebel (1998) [31] showed that taking the absolute value of the surpluses is too sharp an indicator and can lead to a non-terminating algorithm in some cases. The author suggested weighing the surpluses with the integral of the corresponding basis functions in order to give more importance to points with basis functions that have wider support. This criterion was used in [36,34,32]. In our implementation, we combine the adaptivity with the POD to model a physical field. Therefore, we choose a local error measure based on the physical space rather than the surpluses, which are defined in parameter space. The local error measure in our approach is presented in Section 4, after presenting the method of integrating the POD with the adaptive sparse grids.

Fig. 5. A comparison showing the difference in sampling the function $f(x) = \frac{1}{|0.8 - x^2| + 0.1}$ between the adaptive and classical sparse grids algorithms. The first 65 sampled points from both algorithms are marked, which shows the adaptive algorithm selecting more points around the peak and fewer at the smooth region.

To highlight the difference between the classical and the locally adaptive sparse grids, Fig. 5 shows an analytical function that was sampled using both approaches. The first 65 sampled points from both algorithms are marked on the figure. The classical sparse grids algorithm resulted in a uniform sampling regardless of the function's behaviour. The adaptive algorithm, on the other hand, was more efficient, spending more points around the steep gradient and fewer points in the smooth region.
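Combining Equations (26)–(28) gives the refinement loop sketched below. This is our schematic only: the error measure is left abstract (the paper defines it in Section 4), the stopping rule is a stand-in for the global criterion, and forward_points is reused from the earlier fragment.

```python
def adaptive_loop(error_measure, gamma_int, root, max_iter=50):
    grid = {root}                                # X^k: accepted points
    important = {root}                           # Z^k
    inactive = set()                             # I^k
    for _ in range(max_iter):
        testing = set()                          # T^k, Eq. (26)
        for p in important:
            testing.update(forward_points(p))
        testing = (testing | inactive) - grid    # avoid duplicating grid points
        if not testing:
            break
        important = {p for p in testing
                     if error_measure(p) > gamma_int}   # Eq. (27)
        inactive = testing - important                  # Eq. (28)
        grid |= important
        if not important:
            break                                # stand-in for the global criterion
    return grid

# e.g. root = ((1, 0.5), (1, 0.5)) for a 2-dimensional parameter space
```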

3.2.2. Including ancestor points in the adaptivity

A refinement criterion based entirely on the local error measure can lead to premature termination of the algorithm. This is observed when the true function value intersects (or closely intersects) the interpolant at the forward points. For example, the adaptive algorithm based only on a local error measure never converged when tested on the Rosenbrock function in 2D, defined as $f(x, y) = 100(y - x^2)^2 + (1 - x)^2$ with $x, y \in (0, 1)$. The reason is that the value of this function at the root point $(0.5, 0.5)$ and at one of its forward points $(0.5, 0)$ are equal. Therefore, the surplus computed at the point $(0.5, 0)$ is zero, which falsely implies that the interpolant is accurate around this point. As a consequence, the algorithm stops refining around the point $(0.5, 0)$ and assumes a constant interpolation between $(0.5, 0.5)$ and $(0.5, 0)$, which is incorrect. The missing behaviour around $(0.5, 0)$ cannot be recovered at subsequent iterations. This is significant because the basis function at $(0.5, 0)$ has support that spans half the domain.

This issue can be mitigated by redefining the selection of the important points to include the ancestors. Including missing ancestors has been discussed as part of building an accurate hierarchical interpolant (see [32]). Bungartz et al. (2003) [36] used the missing ancestors as support points for the d-polynomial basis functions, where they contribute to the calculations of the surpluses of their descendants. In our approach, the goal is to use the adaptivity to explore the parameter space. Therefore, the ancestors are not only included for building the interpolant but also to search in the forward points of these ancestors for any possible missing behaviour. In this case, the local error indicator $\epsilon_j^k$ is still a measure for the importance of the region around the point, but the algorithm prioritizes the search in the vicinity of ancestors before moving to the forward points. This is important because the basis functions for ancestors have wider support compared to the descendants. However, in order to keep the number of evaluations reduced, not all ancestors are included in the important set.

The important set is redefined to take into account the ancestors as follows:

$\mathcal{C}^k = \{ \mathbf{x}_j \in \mathcal{T}^k \mid \epsilon_j^k > \gamma_{int} \}$,
$\mathcal{Z}_a^k = \{ \mathbf{x}_j \in \mathcal{C}^k \mid \Lambda(\mathbf{x}_j) \subseteq \mathcal{X}^{k-1} \}$,
$\mathcal{Z}_b^k = \{ \mathbf{y}_i \in \Lambda(\mathbf{x}_j) \mid \mathbf{x}_j \in \mathcal{C}^k, \; \Lambda(\mathbf{x}_j) \cap \mathcal{C}^k = \emptyset \;\wedge\; \mathbf{y}_i \notin \mathcal{X}^{k-1} \;\wedge\; \Lambda(\mathbf{y}_i) \subseteq \mathcal{X}^{k-1} \}$,
$\mathcal{Z}^k = \mathcal{Z}_a^k \cup \mathcal{Z}_b^k$.    (33)

In this definition, we first identify a set of candidate points $\mathcal{C}^k$ containing all points with an error above the defined threshold $\gamma_{int}$. Then, the important points set $\mathcal{Z}^k$ is formed by two parts. The first, $\mathcal{Z}_a^k$, are points within the candidate points $\mathcal{C}^k$ that have all their ancestors already included in the sparse grid $\mathcal{X}^{k-1}$ from previous iterations. The second, $\mathcal{Z}_b^k$, are the missing ancestors of candidate points with partial ancestry in $\mathcal{X}^{k-1}$. However, candidate points that have any ancestor point also as a candidate will not be considered, because the error at these points is likely to be high due to the error at that ancestor. Such points will be added to the inactive set to be tested again in the next iteration, after including the ancestor point first. Applying this strategy to the aforementioned Rosenbrock function resulted in the algorithm converging with 967 points to a relative error of 1%.

Such a definition of the set of important points (Equation (33)) enhances the quality of exploring the parameter space, because for every important point identified in iteration $k$, all $2d$ forward points will be tested in the next iteration (Equation (26)). However, for high dimensional problems where the model is linear (or almost linear) along certain dimensions, refining the grid in all dimensions unnecessarily increases the number of model evaluations. For such cases, we can control the number of model evaluations by introducing a parameter $\mu$ that tunes the greediness of the sampling scheme. This parameter reduces the number of points in the testing set $\mathcal{T}^{k+1}$ as follows:

$\mathcal{T}^{k+1} = \left\{ \mathbf{x}_j \in \Gamma(\mathcal{Z}^k) \;\middle|\; \frac{\mathrm{card}\left( \Gamma^{-1}(\mathbf{x}_j) \cap \mathcal{X}^k \right)}{\mathrm{card}\left( \Gamma^{-1}(\mathbf{x}_j) \right)} \geq 1 - \mu \right\} \cup \mathcal{I}^k$,    (34)

where the operator $\mathrm{card}(\mathcal{Y})$ returns the cardinality of a set $\mathcal{Y}$, and $\mu$ is the greediness parameter, which takes a value in $[0, 1]$. For every forward point in $\Gamma(\mathcal{Z}^k)$, the fraction of its backward points that are included in the grid $\mathcal{X}^k$ is required to be greater than or equal to $1 - \mu$ in order for this point to be tested. Note that each point in a $d$-dimensional grid has up to $d$ backward points. The algorithm is greedy for $\mu = 1$ because all forward points will be admitted and tested. For $\mu = 0$, a point will only be tested if all of its backward points were important, which directs the algorithm to avoid searching regions with no important points. The concept of only considering points whose backward points are important is inspired by the (anisotropic) dimension adaptive sparse grids, where important grid indices are identified based on the importance of the indices of their backward neighbours (from previous iterations) [27]. By extending this concept to the local adaptivity, we have better control of the number of model evaluations, at the expense of exploring the parameter space less thoroughly.

3.2.3. Validation

Including the ancestors can only mitigate the premature termination issue but not resolve it completely. This can be observed, for example, in the one-dimensional sine function $\sin(2\pi x)$, which has the same value at the root point $\{0.5\}$ and at its forward points $\{0, 1\}$. Both forward points, in this case, will have zero surpluses ($w_1^2 = w_2^2 = 0$). Therefore, the algorithm will still terminate without any further refinement even with the ancestors rule included. This issue can also arise in multidimensional functions. We can try to define a different error criterion to circumvent this case. However, as stated by Griebel [31], for any given error criterion, we can always find a function that will cause the algorithm to terminate prematurely. Therefore, we propose to include a validation step after convergence that will test the model at randomly generated points. If any of the random points results in an error greater than the tolerance, the algorithm enriches the model with more points around that point. This is achieved by considering all points from the inactive set with basis functions contributing to the interpolant at the random validation points:

$\mathcal{Q} = \left\{ \mathbf{x}_n \in \mathcal{I}^k \;\middle|\; |x_{r,p} - x_{n,p}^{i_p}| < \frac{1}{m_{i_p} - 1} \vee i_p = 1, \; p = 1, \ldots, d \right\}$,    (35)

where $\mathbf{x}_r = (x_{r,1}, \ldots, x_{r,d})$ is the tested random point, $x_{n,p}^{i_p}$ is the support node of the point $\mathbf{x}_n$ along dimension $p$ with tree depth $i_p$, and $m_{i_p}$ is defined as in Equation (10). The points in $\mathcal{Q}$ are then considered candidate points (i.e., $\mathcal{C}^k = \mathcal{Q}$) and the adaptive algorithm is resumed. The basic principle here is that if the random point has an error above the tolerance, it implies that the algorithm incorrectly categorized one of the contributing basis functions as not important (added it to the inactive set) and failed to build an accurate interpolant in that region. Therefore, reconsidering these points as candidate points will enrich the model around this region.

4. POD-Adaptive algorithm

The adaptive algorithm is used to guide the sampling scheme for the POD method. At each iteration, the high fidelity model $\mathbf{y}(\mathbf{x})$ is sampled at the new grid points $\mathcal{T}^k$. Then, based on the predefined error measure, an important subset $\mathcal{Z}^k$ is identified and added to $\mathcal{X}^k$. The snapshots corresponding to the points in $\mathcal{Z}^k$ are then added to the snapshots set $\mathcal{F}^k = \{ \mathbf{y}(\mathbf{x}_j) \mid \mathbf{x}_j \in \mathcal{X}^k, \; j = 1, \ldots, m_k \}$. We then perform an SVD on the snapshots and obtain new POD modes $\{ \mathbf{u}_h \mid h = 1, \ldots, r_k \}$.

Each iteration will result in a new set of POD modes $\mathbf{u}_h$ and, consequently, a new set of amplitudes $c_h(\mathbf{x})$. Moreover, the number of POD modes might increase from one iteration to the next because of the addition of new snapshots. As a consequence, the number of functions to be interpolated (the amplitudes of the POD modes) also increases. In this section, we propose a scheme that is able to keep track of the changes in the amplitudes at minimal computational cost.

At iteration $k$, the high fidelity model is approximated as

$\mathbf{y}(\mathbf{x}) \approx \sum_{h=1}^{r_k} c_h(\mathbf{x}) \, \mathbf{u}_h$.    (36)

Using the orthogonality of the POD modes, we can define the amplitudes at any point $\mathbf{x}_i$ as

$c_h(\mathbf{x}_i) = \langle \mathbf{y}(\mathbf{x}_i), \mathbf{u}_h \rangle$.    (37)

We aim to approximate $c_h(\mathbf{x})$ with the interpolant $A_{k,d}(c_h)(\mathbf{x})$ (Equation (30)) and eventually build a ROM $\mathbf{y}_a(\mathbf{x})$ approximating $\mathbf{y}(\mathbf{x})$ such that

$\mathbf{y}_a(\mathbf{x}) = \sum_{h=1}^{r_k} A_{k,d}(c_h)(\mathbf{x}) \, \mathbf{u}_h$.    (38)

The interpolant $A_{k,d}(c_h)(\mathbf{x})$ depends on the grid points $\mathcal{X}^k$ and the surpluses $\mathcal{W}_h^k$. For every amplitude $c_h$, a specific interpolant is built with a corresponding set of surpluses $\mathcal{W}_h^k$.

At iteration $k + 1$, the adaptive algorithm selects a new set of grid points $\mathcal{Z}^{k+1}$. Then, the set of snapshots is updated, $\mathcal{F}^{k+1} = \{ \mathbf{y}(\mathbf{x}_j) \mid \mathbf{x}_j \in \mathcal{X}^{k+1}, \; j = 1, \ldots, m_{k+1} \}$, where $\mathcal{X}^k \subset \mathcal{X}^{k+1}$ and $m_{k+1} > m_k$. As a consequence, we obtain a new set of POD modes $\{ \hat{\mathbf{u}}_h \mid h = 1, \ldots, r_{k+1} \}$ and corresponding amplitudes $\{ \hat{c}_h(\mathbf{x}) \mid h = 1, \ldots, r_{k+1} \}$. The POD model at iteration $k + 1$ can be written as

$\mathbf{y}_a(\mathbf{x}) = \sum_{g=1}^{r_{k+1}} \hat{c}_g(\mathbf{x}) \, \hat{\mathbf{u}}_g$.    (39)

In principle, $\hat{c}_h(\mathbf{x})$ is a new function of $\mathbf{x}$ and is not related to $c_h(\mathbf{x})$, because the POD modes are different (i.e., $\mathbf{u}_h \neq \hat{\mathbf{u}}_h$). Therefore, in order to construct the ROM as in Equation (38), a new interpolant for $\hat{c}_h$ ($A_{k+1,d}(\hat{c}_h)(\mathbf{x})$) needs to be rebuilt hierarchically starting from the first iteration. From Equation (29), $A_{k+1,d}(\hat{c}_h)(\mathbf{x})$ is formed by two parts:

$A_{k+1,d}(\hat{c}_h)(\mathbf{x}) = A_{k,d}(\hat{c}_h)(\mathbf{x}) + \Delta A_{k+1,d}(\hat{c}_h)(\mathbf{x})$.    (40)

The first term $A_{k,d}(\hat{c}_h)(\mathbf{x})$ is the interpolant from the previous iterations, which needs to be computed first for all $\mathbf{x}_j \in \mathcal{X}^k$ in order to obtain the unknown surpluses $\hat{\mathcal{W}}_h^k$. Recomputing the entire interpolant for all previous points at each iteration is inefficient and counter-productive to the hierarchical structure of the Smolyak algorithm. However, since the grid points are selected in a nested manner, we can find an efficient way to update the surpluses from the previous iterations without having to recompute the interpolant hierarchically. Once these surpluses are updated, the surpluses for the new points in $\mathcal{Z}^{k+1}$ can be computed for the second term $\Delta A_{k+1,d}(\hat{c}_h)(\mathbf{x})$ as in Equation (32).

To update the surpluses $\hat{\mathcal{W}}_h^k$, we first notice that the amplitudes from the two consecutive iterations $k$ and $k + 1$ are not equal, that is,

$\hat{c}_h(\mathbf{x}) \neq c_h(\mathbf{x})$.    (41)

In physical space, however, assuming negligible SVD truncation error, both POD models from Equation (36) and Equation (39) are defined to reproduce the exact snapshots at the points in $\mathcal{X}^k$, because the points are nested ($\mathcal{X}^k \subset \mathcal{X}^{k+1}$). Therefore,

$\sum_{g=1}^{r_{k+1}} \hat{c}_g(\mathbf{x}_j) \, \hat{\mathbf{u}}_g = \sum_{h=1}^{r_k} c_h(\mathbf{x}_j) \, \mathbf{u}_h \quad \forall \mathbf{x}_j \in \mathcal{X}^k$.    (42)

We can use the orthogonality property and project the equation onto $\hat{\mathbf{u}}_g$ to obtain the amplitudes

$\hat{c}_g(\mathbf{x}_j) = \sum_{h=1}^{r_k} c_h(\mathbf{x}_j) \, \langle \mathbf{u}_h, \hat{\mathbf{u}}_g \rangle, \quad \mathbf{x}_j \in \mathcal{X}^k, \; g = 1, \ldots, r_{k+1}$.    (43)

Since the interpolant $A_{k,d}(\hat{c}_g)(\mathbf{x})$ is a function of the surpluses rather than of the function $\hat{c}_g$, it is more convenient to find a relation between $\hat{w}_j^k$ and $w_j^k$. We first note that the set $\mathcal{X}^k$ is formed by the union of the important points from all previous iterations, that is,

$\mathcal{X}^k = \bigcup_{l=1}^{k} \mathcal{Z}^l$.    (44)

The definition of the surpluses in Equation (32) can be written for the amplitude $c_h(\mathbf{x}_j)$ at any iteration $l$ as

$w_{j,h}^l = c_h(\mathbf{x}_j) - A_{l-1,d}(c_h)(\mathbf{x}_j), \quad \mathbf{x}_j \in \mathcal{Z}^l$.    (45)

Substituting Equation (45) in Equation (43),

$\hat{w}_{j,g}^l + A_{l-1,d}(\hat{c}_g)(\mathbf{x}_j) = \sum_{h=1}^{r_k} \left( w_{j,h}^l + A_{l-1,d}(c_h)(\mathbf{x}_j) \right) \langle \mathbf{u}_h, \hat{\mathbf{u}}_g \rangle, \quad \mathbf{x}_j \in \mathcal{Z}^l, \; l = 1, \ldots, k, \; g = 1, \ldots, r_{k+1}$.    (46)

We can further reduce Equation (46) by noticing that $A_{0,d}(\hat{c}_g)(\mathbf{x}) = A_{0,d}(c_h)(\mathbf{x}) = 0$. Therefore, for $l = 1$,

$\hat{w}_{j,g}^1 = \sum_{h=1}^{r_k} w_{j,h}^1 \, \langle \mathbf{u}_h, \hat{\mathbf{u}}_g \rangle, \quad \mathbf{x}_j \in \mathcal{Z}^1, \; g = 1, \ldots, r_{k+1}$.    (47)

For $l = 2$,

$\hat{w}_{j,g}^2 + A_{1,d}(\hat{c}_g)(\mathbf{x}_j) = \sum_{h=1}^{r_k} \left( w_{j,h}^2 + A_{1,d}(c_h)(\mathbf{x}_j) \right) \langle \mathbf{u}_h, \hat{\mathbf{u}}_g \rangle, \quad \mathbf{x}_j \in \mathcal{Z}^2, \; g = 1, \ldots, r_{k+1}$.    (48)

Note that both interpolants $A_{k,d}(\hat{c}_g)(\mathbf{x})$ and $A_{k,d}(c_h)(\mathbf{x})$ share the same support nodes. From the definition of the interpolant in Equation (29) and Equation (30), it follows that

$A_{1,d}(c)(\mathbf{x}_j) = \Delta A_{1,d} = \sum_{n=1}^{m_1^\Delta} w_n^1 \, \Phi_n(\mathbf{x}_j)$.    (49)

Thus, Equation (48) becomes

$\hat{w}_{j,g}^2 + \sum_{n=1}^{m_1^\Delta} \hat{w}_{n,g}^1 \, \Phi_n(\mathbf{x}_j) = \sum_{h=1}^{r_k} \left( w_{j,h}^2 + \sum_{n=1}^{m_1^\Delta} w_{n,h}^1 \, \Phi_n(\mathbf{x}_j) \right) \langle \mathbf{u}_h, \hat{\mathbf{u}}_g \rangle, \quad \mathbf{x}_j \in \mathcal{Z}^2, \; g = 1, \ldots, r_{k+1}$.    (50)

Using Equation (47) to replace $\hat{w}_{n,g}^1$ in Equation (50), we get

$\hat{w}_{j,g}^2 + \sum_{n=1}^{m_1^\Delta} \sum_{h=1}^{r_k} w_{n,h}^1 \, \langle \mathbf{u}_h, \hat{\mathbf{u}}_g \rangle \, \Phi_n(\mathbf{x}_j) = \sum_{h=1}^{r_k} \left( w_{j,h}^2 + \sum_{n=1}^{m_1^\Delta} w_{n,h}^1 \, \Phi_n(\mathbf{x}_j) \right) \langle \mathbf{u}_h, \hat{\mathbf{u}}_g \rangle, \quad \mathbf{x}_j \in \mathcal{Z}^2, \; g = 1, \ldots, r_{k+1}$,    (51)

which simplifies to

$\hat{w}_{j,g}^2 = \sum_{h=1}^{r_k} w_{j,h}^2 \, \langle \mathbf{u}_h, \hat{\mathbf{u}}_g \rangle, \quad \mathbf{x}_j \in \mathcal{Z}^2, \; g = 1, \ldots, r_{k+1}$.    (52)
